How Do Self-Driving Cars Work and What Problems Remain?


    Are you ready for your car to become a self-driving chauffeur? 

    Progress in the field of self-driving cars has been enormous over the last decade. Waymo and Uber, both top contenders in the race for an autonomous driving future, weren’t even incorporated before 2009. Between 2015 and 2019, Tesla’s autopilot achieved more than 3.3 billion miles of total use.

    Even with all this progress, accidents and deaths from self-driving cars still pose a real threat. In this article, we cover the ins and outs of the autonomous vehicle industry, the technology driving the progress and what problems threaten public safety as the technology gets more common.

    What is a self-driving car? 

    A self-driving car, also known as an autonomous vehicle, is a connected car that relies on a combination of hardware, software and machine learning to navigate various weather, obstacles and road conditions using real-time sensory data. 

    People commonly associate self-driving cars with artificial intelligence, but many cars today have achieved multiple levels of autonomy without artificial intelligence. Features such as brake assist, lane assist and adaptive cruise control, for example, can be considered autonomous driving to some degree. 

Self-driving cars do not need cutting-edge artificial intelligence to achieve basic autonomy, though the level of autonomy a car can reach depends on the sophistication of the deep learning models used to control it. In total, there are six defined levels of autonomy (0 through 5) for a self-driving car.

The 6 levels of autonomous vehicles

These levels were outlined by SAE International in 2014 to give the industry a common point of reference. Each level is defined by how much of the driving is automated and how much human involvement is required.

    Level 0

At this level, there is no automation. Humans control every aspect of driving: acceleration, shifting gears, steering and navigation. An example of a Level 0 vehicle is the Ford Model T, because it has no features that reduce the car's reliance on humans, such as cruise control or even automatic windows.

In Level 0, the human is responsible for executing maneuvers, monitoring the environment and providing fallback performance in the event of a problem with the car, such as a flat tire or loss of brakes.

    Level 1

    The first step towards self-driving cars is basic driver assistance. Your car may fall within this spectrum of self-driving if it has lane assist, brake assist or cruise control.

    A feature as small as side-mirror indicator lights to alert the driver when a car is in the next lane can be considered Level 1 driver assistance. Other common driver assistance features include a vibrating steering wheel when an unsignaled lane departure occurs and a self-parallel parking feature.

    In Level 1, there are some aspects of automation in the execution of driving functions such as steering, accelerating and decelerating.

    Level 2

The next rung on the self-driving ladder, Level 2 autonomy, is a big step up from Level 1.

    In Level 2, the automated system takes control of the functional aspects of driving such as steering, acceleration and deceleration. The human driver, however, is still responsible for monitoring the driving environment.

    Examples of cars currently in Level 2 autonomy include Tesla’s vehicles with autopilot enabled and Nissan’s ProPilot assist.

    Level 3

    Level 3 autonomy is when self-driving cars cross the chasm into monitoring the driving environment conditionally. The conditional caveat is that a human driver is still the fallback redundancy when dynamic driving is required. 

    If a car with Level 3 autonomy cannot adequately navigate an obstacle in the road or dangerous weather conditions, it will require the human driver to intervene. 

    Uber’s self-driving car is an example of Level 3 because while the car controls most of the navigation, the human is still needed for edge-case scenarios the system has not been trained on. 

    Level 4

This is currently the highest level attained by the autonomous vehicle industry. Level 4 is defined as high automation: the self-driving system is responsible for all execution, monitoring and fallback, but it is not 100 percent effective in all driving modes.

    This means that the car will not understand how to perform in extremely rare scenarios that the models have not been trained to recognize.

Waymo, the autonomous vehicle company that grew out of Google's self-driving car project, is currently at Level 4 autonomy. Its cars are testing self-driving ridesharing in major U.S. cities without human drivers. There are still rare cases where the self-driving car ends up in a situation that extends beyond the model's understanding and ability to avoid an accident.

    Level 5

    Level 5 is the goal of self-driving characterized by full automation. 

Full automation means a human never has to intervene and the car can adequately handle every road (or off-road), weather, obstacle or other condition it faces. A world of Level 5 would work best as a network of only other Level 5 autonomous vehicles; if human error is involved, the system is vulnerable to failure.

Because training machine learning models is essential to handling Level 5 driverless scenarios, some believe whoever has the most data has the most autonomy. George Hotz, the founder of the self-driving startup Comma.ai, believes Tesla will be the first to reach Level 5 autonomy based entirely on the amount of data it collects.


      Technology inside self-driving vehicles

While the body of a self-driving car isn't a reinvention, companies creating self-driving technologies have had to reinvent the way the car interfaces with the world around it. A combination of hardware, software and machine learning is needed to provide the abilities and redundancy of a self-driving car at Level 3 and above.

[Animated depiction and descriptions of self-driving car hardware]


Radar

Radar, or Radio Detection and Ranging, is what self-driving cars use to supplement higher resolution sensors when visibility is low, such as in a storm or at night.

      Radar works by continuously emitting radio waves that reflect back to the source to provide information on the distance, direction and speed of objects. Although Radar is accurate in all visibility conditions and is relatively inexpensive, it does not have the most detailed information about the objects being detected. 
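Radar ranging comes down to timing a radio pulse's round trip, and relative speed comes from the Doppler shift of the reflected wave. Here is a minimal, illustrative sketch of that arithmetic (the 77 GHz carrier in the comments is a typical automotive radar frequency, not a figure from this article):

```python
# Toy radar math: range from round-trip time, speed from Doppler shift.
# Illustrative only; real radar signal processing is far more involved.
C = 299_792_458.0  # speed of light in m/s

def radar_range(round_trip_s: float) -> float:
    """Distance to a target: the pulse travels out and back, so divide by 2."""
    return C * round_trip_s / 2

def doppler_speed(f_emitted_hz: float, f_received_hz: float) -> float:
    """Relative speed of a target (positive = approaching) from the
    frequency shift of the reflected wave, e.g. around a 77 GHz carrier."""
    return C * (f_received_hz - f_emitted_hz) / (2 * f_emitted_hz)

# A pulse that returns after 400 nanoseconds hit something about 60 m away.
print(round(radar_range(400e-9), 1))  # 60.0
```

The division by two is the one detail people forget: the measured time covers the trip to the object and back.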


LiDAR

LiDAR, or Light Detection and Ranging, is what self-driving cars use to model their surroundings and provide highly accurate geographical data in a 3D map.

      Compared to Radar, LiDAR has much higher resolution. This is because LiDAR sensors emit lasers — instead of radio waves — to detect, track and map the car’s surroundings with data being transmitted at the speed of light, literally. 

      Unfortunately, laser beams do not perform as accurately in weather conditions such as snow, fog, smoke or smog. 

      Even a small object like a child’s ball rolling into the street can be recognized by LiDAR sensors. LiDAR tracks the ball’s position, speed and direction, which allows the car to yield or stop if the object presents a danger to passengers or pedestrians. 
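Each LiDAR return is essentially a distance plus the angles the laser was fired at; converting those returns to Cartesian coordinates is what builds the 3D map. A simplified sketch, assuming idealized spherical measurements:

```python
import math

def lidar_point(range_m: float, azimuth_deg: float, elevation_deg: float):
    """Convert one LiDAR return (distance + firing angles) to x, y, z."""
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)
    y = range_m * math.cos(el) * math.sin(az)
    z = range_m * math.sin(el)
    return x, y, z

# One horizontal sweep of returns becomes a point cloud the planner can
# query, e.g. "is anything within 2 m of the car's path?"
cloud = [lidar_point(r, az, 0.0) for az, r in [(0, 5.0), (90, 7.5), (180, 5.0)]]
```

A real sensor fires millions of such pulses per second, which is why the resulting point cloud is detailed enough to pick out something as small as a rolling ball.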

      Cameras and computer vision

      Cameras used in self-driving cars have the highest resolution of any sensor. The data processed by cameras and computer vision software can help identify edge-case scenarios and detailed information of the car’s surroundings. 

All Tesla vehicles with autopilot capabilities, for example, have eight external-facing cameras that help Tesla understand the world around its cars and train models for future scenarios.

Unfortunately, cameras don't work as well when visibility is low, such as in a storm, fog or smog. Thankfully, self-driving cars are built with redundant systems to fall back on when one or more systems aren't functioning properly.

      Complementary sensors 

Self-driving cars today also have hardware to enable GPS tracking, ultrasonic sensors for object detection and an IMU (inertial measurement unit) to measure the car's acceleration and rotation.

      An often overlooked but important sensor for self-driving cars is a microphone to process audio information. This becomes vitally important when detecting the need to yield to an emergency vehicle or detecting a nearby accident that could be hazardous to the car. 


Computing hardware

For self-driving software to interface with the hardware components in real time, processing all sensor data efficiently, it needs a computer with the processing power to handle that amount of data.

The computer chips in a standard computer or smartphone are known as central processing units (CPUs). But when you consider how much computational power a self-driving car needs, a CPU does not have anywhere near the bandwidth to handle the required number of operations, measured in GOPS, or giga (billion) operations per second.

Graphics processing units (GPUs) have become the de facto chip for many self-driving car companies. But even GPUs are not the ideal solution when you consider how much data needs to be processed by autonomous vehicles.

Neural network accelerators (NNAs), introduced in Tesla's FSD chip in 2019, have far superior computing power for processing real-time data from the various cameras and sensors within a self-driving car.

According to Tesla, here is how these chips compare in frames per second when each frame requires 35 GOPS (35 billion operations):

      • CPU: 1.5
      • GPU: 17
      • NNA: 2100

      As you can see, Tesla’s NNAs are a breakthrough technology in self-driving car computation. 
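Taking those figures at face value, frame rate is just chip throughput divided by the work each frame requires. The throughput numbers below are back-calculated from the fps figures under a 35-GOPS-per-frame assumption, so treat them as illustrative rather than official chip specs:

```python
# fps = chip throughput (GOPS) / work per frame (GOPS).
WORK_PER_FRAME_GOPS = 35  # assumed per-frame workload

def frames_per_second(throughput_gops: float) -> float:
    return throughput_gops / WORK_PER_FRAME_GOPS

# Throughputs implied by the article's fps figures (illustrative only).
for chip, gops in [("CPU", 52.5), ("GPU", 595), ("NNA", 73_500)]:
    print(f"{chip}: {frames_per_second(gops):g} fps")
# CPU: 1.5 fps, GPU: 17 fps, NNA: 2100 fps
```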


      Software technology of self-driving cars

      When self-driving cars reach Level 5 autonomy, they will almost certainly use a combination of three distinct components: hardware, data and neural network algorithms. 

We’ve already touched on the hardware component, which is currently the most mature of the three. The algorithms and data components have a long way to go before we reach Level 5 autonomy.

      Neural network algorithms 

A neural network is an algorithm built from layers of weighted matrices and designed to recognize patterns without being explicitly programmed to do so. Neural networks are trained on labeled data to become adept at analyzing dynamic situations and acting on their decisions.

      Some of the algorithms that have been built using neural networks and used in self-driving cars are:

[Depictions and descriptions of the software driving autonomous vehicles]

      Neural networks must be trained with data about the task they are expected to perform. When Google trains image recognition neural networks, for example, Google must train the model with millions upon millions of labeled images.
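The same idea at toy scale: below, a single perceptron (the simplest neural building block) learns a hypothetical brake/don't-brake rule from a handful of labeled examples. Real driving models are deep networks with millions of parameters, so this is purely illustrative:

```python
# Train a one-neuron "network" on labeled (distance, closing_speed)
# examples. Label 1 = brake, 0 = keep driving. Features normalized to 0..1.

def train(examples, epochs=50, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred          # classic perceptron update rule
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

labeled_data = [((0.1, 0.9), 1), ((0.2, 0.8), 1),   # close and fast: brake
                ((0.9, 0.1), 0), ((0.8, 0.2), 0)]   # far and slow: don't
w, b = train(labeled_data)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

The point of the example is the workflow, not the model: the rule is never written by hand; it emerges from the labeled examples.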


Data

Data is one of the most important components for fully autonomous vehicles (Level 5) to become a reality.

      Large amounts of data are the raw materials for deep learning models to become finished products, in this case, fully autonomous vehicles. 

      Tesla currently has the largest source of data with more than 400,000 vehicles on the road transmitting data from Tesla’s fleet of sensors. By January 2019, Tesla had 1 billion miles of autopilot usage data. Compare this to Waymo, which only passed 10 million autonomous miles by October 2018.

According to RAND, for an autonomous vehicle to demonstrate a higher level of reliability than human drivers, the autonomous technology would need to be in full control for 275 million miles before it could be proven safer than humans with 95 percent confidence.
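The 275 million figure can be reproduced with the standard zero-failure statistical argument, assuming roughly 1.09 fatalities per 100 million human-driven miles (the US average the RAND analysis used):

```python
import math

human_fatality_rate = 1.09 / 100_000_000  # deaths per mile (assumed US average)
confidence = 0.95

# If the true fatality rate were as bad as the human rate, the chance of
# driving n miles with zero deaths is exp(-rate * n). Requiring that chance
# to fall below 5% gives the mileage needed for 95% confidence.
miles_needed = -math.log(1 - confidence) / human_fatality_rate

print(f"{miles_needed / 1e6:.0f} million miles")  # 275 million miles
```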

      Points of failure for self-driving vehicles 

      In engineering, a single point of failure is one that will cause the entire system to stop working if it fails. One of the key tenets of engineering is redundancy, or a secondary system that acts as a failsafe in case one stops working. This is why airplanes have more than one engine. If one fails, the plane can still fly.

Because self-driving cars use cameras, Radar, LiDAR and other sensors to understand their surroundings, the likelihood of a single point of failure leaving the car inoperable is extremely low.

When Tesla designed its FSD (full self-driving) chip, it put in two independent and identical computers, not only for redundancy in case one fails, but also so the two can cross-check each other's decisions.
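A sketch of that cross-check idea with a hypothetical interface (this is not Tesla's actual FSD code): each computer proposes a plan, the car acts only when they agree, and any disagreement triggers a safe fallback:

```python
SAFE_FALLBACK = "slow_and_hold_lane"  # hypothetical safe default maneuver

def arbitrate(plan_a: str, plan_b: str) -> str:
    """Accept a plan only when both redundant computers agree on it."""
    return plan_a if plan_a == plan_b else SAFE_FALLBACK

print(arbitrate("change_lane_left", "change_lane_left"))  # change_lane_left
print(arbitrate("change_lane_left", "go_straight"))       # slow_and_hold_lane
```

Agreement between two independently computed results is a much stronger signal than either result alone, which is the same reasoning behind dual engines on aircraft.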

      Even with all this redundancy, the main point of failure for self-driving cars is in the software. 

      Deep learning models are trained using real-world driving and simulations, but even after billions of miles of experience, there are still rare edge cases these learning models won’t understand how to handle. 

      These edge cases are a major point of failure for self-driving cars since deep learning models do not equate to intelligence. Some of the looming problems threatening the future of self-driving cars are:

      1. Predicting agent behavior: It’s currently difficult to entirely understand the semantics of a scene, the behavior of other agents on the road and appearance cues such as blinkers and brake lights. It is even harder to predict human error such as when a person signals a left turn but turns right.
      2. Understanding perception complexity: Self-driving vehicles fail when objects are blocked from view such as during snowstorms, objects viewed in a reflection, fast-moving objects around a blind spot and other long-tail scenarios.
      3. Cybersecurity threats: Software is written by humans, and humans write code with vulnerabilities. Although few people understand neural networks well enough to exploit these vulnerabilities, it can and will be done.
      4. Continuous development and deployment: One problem facing self-driving vehicles is the process of re-validating changes to the software. If and when the code base changes, does this require testing for another 275 million miles to validate performance?
[Animated depictions of the problems yet to be solved for self-driving cars]

      Real-world examples of self-driving system failure

On March 18, 2018, Uber’s self-driving car killed a pedestrian who was crossing the street illegally. The failure was likely in the machine learning model’s ability to make a decision based on its sensory detection of the pedestrian.

      There was also a failure on behalf of the fallback system: the human. The Uber safety driver behind the wheel failed to take action to prevent the accident.

Only five days later, on March 23, 2018, a Tesla with Level 2 autonomy hit a median divider head-on, killing the driver.

      Tesla confirmed autopilot mode was engaged and that the system failed because the lane divider lines were not clearly defined. 

      Is Tesla the only self-driving car in the market?

Tesla is probably the best-known self-driving car brand because its cars are on the road in many places. However, Tesla isn’t the only company making self-driving cars. In fact, many major auto brands are researching, developing and manufacturing semi-autonomous and self-driving cars.

For example, Toyota, Subaru, Volkswagen, Volvo and Mercedes-Benz all sell car models with autonomous features. These vehicles are outfitted with sensors and cameras that help the driver avoid obstacles, stay in their lane and automatically apply the brakes in possible collision situations.

      Additionally, Tesla has a few competitors that are working on next-generation self-driving technology. Waymo, Renault, and GM are all racing to develop self-driving cars and put them on the roads in major cities.

      Who insures self-driving cars?

      If you own a car capable of autonomous driving, like a Tesla, you can purchase insurance coverage from any auto insurance provider. When it comes to insurance, cars with self-driving features aren’t treated any differently than standard cars.

      No car today is capable of being driven without a human driver behind the wheel. Even if you’re using the autopilot feature for most of your trip, you still have to turn on the car, program the autopilot and take the wheel if the autopilot falters.

      When you purchase car insurance for a self-driving car, you’ll need to have the minimum amount of insurance required by your state. That usually includes liability insurance and uninsured/underinsured motorist coverage. Because most cars on the road are not semi-autonomous or self-driving, you should also consider getting collision and comprehensive insurance.

      What is the average cost of insurance for self-driving cars?

      Because self-driving cars are insured the same way standard cars are, it’s difficult to determine the average cost of insurance for an autonomous car. However, cars that have self-driving technology tend to be newer models with an expensive price tag. Those two factors alone mean you’ll probably pay an expensive premium.

      To give you an idea of what insurance might cost for a self-driving car, let’s look at the average price of insurance for a Tesla. According to CNBC, the average insurance premium in California for a Tesla Model 3 is $1,913 per year. For higher-end Tesla models, like the $75,000 Model S, the average price of insurance is $2,963 per year.

      What are the benefits of a self-driving car?

More people are becoming interested in the prospect of self-driving cars, and for good reason. One of the biggest benefits of a self-driving car is that it has the potential to be much safer than a regular car.

      Autonomous cars have built-in computers, sensors and cameras that track other cars, pedestrians and objects and have the power to avoid accidents and react to situations much faster than a human could.

      The hands-off nature of self-driving cars also means that people with disabilities and seniors could independently go from one place to another without needing to fully operate the car.

      Self-driving cars are also better for the environment. Because most self-driving vehicles are electric, there will be fewer carbon emissions created by cars. As more cars become self-driving, that could have a profound impact on air quality. It also means that you’ll save money by charging your car, rather than filling up at the gas station.

      The future of self-driving cars

      Despite the definite problems outlined above, self-driving car companies are moving forward and improving every day. 

      Considering an estimated 93 percent of car accidents are caused by human error, the opportunity for self-driving cars to remove a major threat in the daily lives of billions of humans is too great to pass up. There will be many debates over the efficacy of self-driving cars as well as regulatory hurdles before we see Level 5 autonomy deployed globally.

[Animated infographic about self-driving cars]


        Elizabeth Rivelli

        Contributing Writer

Elizabeth is a contributor to The Simple Dollar, where she reviews insurance providers and policies. She has more than three years of experience writing for top online insurance and finance publications, including Bankrate.