When you hear the phrase “self-driving car,” your mind might be flooded with images from sci-fi: sleek, high-tech vehicles speeding down the highway with an empty driver’s seat and a passenger peacefully reading in the back. Automobile manufacturers have been chasing that vision since the 1920s, and nowadays, self-driving features are becoming increasingly mundane. At the same time, real-life self-driving cars are much harder to recognize than their imaginary Hollywood counterparts.
Currently, the NHTSA uses a 0-5 scale to describe how autonomous a self-driving vehicle is. Zero is the lowest level, describing a vehicle with no real driving automation — at most, basic assistance features that require full driver oversight, such as anti-lock brakes and standard cruise control. Five is the highest level, representing a fully autonomous car that can drive entirely by itself, without oversight, in any place or conditions, and may not even offer the option for a human driver to take over.
These classification levels are important when it comes to liability. Although Level 4 and Level 5 vehicles aren’t yet available for purchase by consumers, once they are, liability for a crash should be straightforward. These vehicles will receive no human input, so fault will rest entirely with the manufacturer if they fail; holding a human accountable for the crash of a Level 4 or 5 autonomous vehicle would be akin to holding a passenger in a taxi accountable if the driver crashes the cab. When it comes to Level 2 and 3 autonomous vehicles, however, which do exist and are commonly driven today, the line is blurrier. Because they require some input from a driver, it is much less clear whether that input or the car itself caused a crash. A person can make mistakes, but so can sensors and cameras. This is where questions of liability come in.
Liability and Autonomous Vehicles
There are a number of parties who can be liable when a partially self-driving car gets into an accident. In fact, most of the time, a crash results from a combination of factors, not just the car or just the person driving it. Here are some factors that may be responsible for the crash of a self-driving vehicle:
- The operator. Sometimes the person supervising the self-driving car is at fault, as in the 2018 test drive of an autonomous Uber that resulted in the death of a pedestrian. In that instance, the test driver was watching an episode of The Voice on her phone and so was unable to stop the car in time. If the driver is openly distracted or driving recklessly, they can be liable for injuring someone with a self-driving car.
- Mechanical malfunction. The sensors of a self-driving car, while advanced, are far from infallible. Just as a regular vehicle’s brakes can fail or its tires can deflate, a self-driving car’s cameras and software can sometimes fail to detect obstacles, which can lead to a crash. In 2018, an Apple engineer was killed when his Tesla, operating in Autopilot mode, failed to detect a concrete barrier on the highway — a malfunction the car had reportedly exhibited multiple times before the accident.
- Technical limitations. Self-driving capability is advancing only gradually. Sometimes a car crashes not because its systems failed, but because they never had the capability to handle the situation in the first place. Some Teslas, for example, have crashed into parked emergency vehicles — not because of a malfunction, but because their sensors struggle to detect stationary objects. This was the case in a 2019 Indiana crash between a Tesla in Autopilot mode and a parked fire truck.
These sources of liability overlap considerably, and it’s easy to see how identifying the true culprit of a crash could get confusing. Was the operator at fault even though the vehicle warned them of an obstacle too late to avoid it? Was the manufacturer at fault even though its customer put too much trust in a car that wasn’t fully self-driving? There’s a vast gray area. As a result, experts in the field suggest that liability could be divided among the parties in cases where a single cause of a crash isn’t clear.
Is False Advertising to Blame?
Another culprit for crashes involving autonomous vehicles could lie much further back than the moments before a wreck occurs. One major problem may be the advertising itself, as some manufacturers market their cars online as having “autopilot” features or being totally self-driving. When people buy these cars, they understandably believe the vehicle is capable of driving entirely by itself, without human guidance. This can lead drivers to feel safe taking their hands off the wheel even when they aren’t, which naturally leads to accidents. The NTSB has referred to this phenomenon of misplaced trust in technology as “automation complacency,” and some companies, such as Tesla, have been investigated to determine whether their advertising contributes to it. It is possible that, in the future, companies could be found liable for self-driving car accidents if they led consumers to believe their vehicles are more autonomous than they really are.
Safety and Self-Driving Cars
Although it is important to consider how the law would handle liability with self-driving cars, it’s equally important not to become too concerned about this technology. The reason self-driving cars are being developed in the first place is that they are safer than human-operated vehicles. Unlike a person, an autonomous car can never drive drunk, get distracted, or fall asleep at the wheel. Furthermore, the pieces of driverless technology that already exist have proven extremely safe. We might not think of them as high-tech features, but elements such as anti-lock brakes, which significantly decrease crashes in wet conditions, and cruise control, which reduces speeding, are driverless features that keep us safe.

In fact, the NHTSA reports that only 18 fatalities have been associated with driverless cars, and each of these fatalities has been attributed to a Level 2 autonomous vehicle; as far as we know, no one has ever been killed by a fully autonomous car. Some may argue that this is only because the technology is new, but cars with partial automation have existed since the 1990s. A track record of 18 deaths over nearly 30 years would seem to suggest that, as the engineers designing these vehicles intended, autonomous cars are safer than those with a human pilot. Crashes involving self-driving cars make up 0.02% of all crashes that happen in the U.S. yearly, and these crashes cause as little as 0.005% of all car crash fatalities. The likelihood that your car will ever collide with a self-driving car is incredibly low, and the odds that you would be injured or killed in such a collision are even lower.
Injured in an Indiana Automobile Accident? Call Stewart & Stewart Attorneys
Did you or a loved one suffer a car accident in Indiana? The team at Stewart & Stewart Attorneys is here to fight for your rights and work hard to bring your car crash case to a fair resolution. Don’t wait; call 1-866-925-3011 anytime, 24/7, or contact us online for a free consultation with an experienced vehicle accident attorney. No fee unless you win.