Vehicle manufacturers are partnering with tech companies to put more fully autonomous cars on the road in the coming years. But who will be held responsible for the crashes that will still occur?
Our last post explained the five levels of autonomous driving capability, from minimal assistance with acceleration and braking to full automation. Now we offer several theories of liability that may apply when things go wrong and these vehicles cause serious harm.
The traditional theory of negligence
Negligence involves investigating the how and why of a crash. A driver's choice to look at a phone, for example, may cause a serious rear-end, multi-car pileup; the driver who made that poor decision is at fault. As cars become more autonomous, negligence remains an issue when a driver ignores a system alert to take over the controls.
In a fatal California crash involving a Tesla Model X, the NTSB investigation found that the driver received visual and audible alerts to take over from the Autopilot system. His hands were not detected on the wheel, however, in the seconds leading up to the crash.
But what if a crash occurs because one of the operative modules in the autopilot system (sensors, radar, cameras and the like) fails?
Manufacturers held accountable under products liability law
Vehicle and component-part manufacturers have a responsibility to design and produce safe products. These companies must make cost-benefit decisions as they bring new types of vehicles to market.
An illustration of what can go wrong is Ford's decision in the late 1960s not to include a fuel-tank safety feature on its Pinto that would have cost only $11 per car. An internal analysis famously valued the roughly 180 projected deaths at about $200,000 each and concluded that paying those claims would cost less than making the design change across millions of vehicles. Many people died unnecessarily.
Manufacturers must take steps to discover and prevent defects because they are in the best position to do so. Generally, when a defect makes a product unreasonably dangerous, strict liability rules apply, meaning that neither intent nor the exercise of reasonable care matters. This theory would allow victims to hold manufacturers and/or component-part suppliers liable when autopilot systems fail.
Breach of contract through the supply chain
The parts suppliers involved in putting together a vehicle are numerous. Contracts detail delivery timing and the specifications suppliers must meet. Quality control should occur on both the supplier and manufacturer sides, but sometimes a batch of defective sensors still gets through. Between companies, contract-based indemnification claims may provide another remedy.
Governmental liability
As states and cities work with manufacturers to put in place new infrastructure for autonomous vehicles, they could open themselves to liability. In the California Tesla case, an added issue was that the crash attenuator on the concrete lane divider had been damaged in an earlier collision and not repaired, which may have contributed to the severity of the crash. Was the state liable for failing to fix a dangerous divider?
Road designs may also need to change to account for pedestrians and cyclists who share the roadways, and a design flaw could lead to numerous fatalities. In Arizona, a popular testing ground for autonomous vehicles, an autonomous Uber struck and killed a pedestrian. In that case, the vehicle's perception system failed to correctly classify the pedestrian, and the human safety operator could not react in time. Had the cause instead been a flaw in a crosswalk system built by the city, it could have led to governmental liability.
The law is certain to evolve
Liability theories will continue to evolve as more autonomous vehicles share the road. Injury claims may become more complicated as they account for the new features of self-driving cars. If history is any guide, however, the law will find ways to hold responsible parties accountable for the injuries they cause.