[Image: a white, Waymo-brand autonomous car]

Does today’s hype around autonomous vehicles remind you of the flying-car predictions we heard back in the ‘90s? There is much debate about the timeframe for widespread adoption of autonomous vehicles, with Silicon Valley promising five years and Detroit predicting five decades. There is far less debate, however, about the societal benefits autonomous vehicles will provide. A few include:

  • Reduced frequency of auto accidents and the injuries and deaths that accompany them
  • Increased mobility for the elderly and those with disabilities
  • Reduced transportation costs due to vehicle sharing

Despite these benefits, there are several questions to consider, including:

  • How will we protect vehicles from hackers? Whose responsibility is it to do so?
  • How will autonomous vehicles interact with human drivers, pedestrians, and bicyclists?
  • Will autonomous vehicles function properly in heavy rain, sleet, and snow?

While these questions may seem difficult to answer, the most difficult question may be the one that is least discussed: who is liable when an accident involving an autonomous vehicle occurs? The answer will likely change as the technology in vehicles advances from driver assistance (adaptive cruise control, automatic braking, lane-keeping assist) to semi-autonomous (human intervention required in limited instances) to fully autonomous (no human intervention needed), and as the percentage of vehicles on the road that are fully autonomous climbs from 0.001% to 100%.

Let’s first address one of the easier liability questions. Who is to blame when an accident occurs with a fully autonomous vehicle? It’s hard to fathom a court assigning liability to a “driver” in these instances, so the liability must fall on the automaker. However, that leads to a deeper question that’s nearly impossible to answer: how should autonomous vehicles be programmed to operate when an accident is unavoidable?
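The liability shift described above can be sketched as a tiny illustrative model. The tier names and the liability rule below are hypothetical simplifications of the article’s reasoning, not legal doctrine or an industry taxonomy:

```python
from enum import IntEnum

class AutonomyTier(IntEnum):
    """Illustrative tiers matching the article's progression
    (hypothetical labels, not the formal SAE J3016 levels)."""
    DRIVER_ASSIST = 1     # adaptive cruise control, automatic braking, lane keeping
    SEMI_AUTONOMOUS = 2   # human intervention required in limited instances
    FULLY_AUTONOMOUS = 3  # no human intervention needed

def presumed_liable_party(tier: AutonomyTier) -> str:
    """Hypothetical rule of thumb following the article's argument:
    the more autonomous the vehicle, the more liability shifts
    from the human 'driver' toward the automaker."""
    if tier is AutonomyTier.FULLY_AUTONOMOUS:
        return "automaker"
    if tier is AutonomyTier.SEMI_AUTONOMOUS:
        return "disputed: driver and/or automaker"
    return "driver"

print(presumed_liable_party(AutonomyTier.DRIVER_ASSIST))     # driver
print(presumed_liable_party(AutonomyTier.FULLY_AUTONOMOUS))  # automaker
```

The semi-autonomous tier is the contested middle ground, which is exactly where the hardest court cases discussed below would land.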

For instance, if a tree falls in the road and an autonomous vehicle must decide whether to hit the tree or swerve onto the sidewalk and hit a pedestrian, which does it do? Would it be programmed to consider the ages of the driver and the pedestrian and take the action that preserves the youngest life? Would all automakers be required to program their vehicles to act in the same way? In the best-case scenario, society will collectively decide how autonomous vehicles should be programmed and each vehicle will act accordingly. In the worst-case scenario, certain automakers will program their vehicles to always preserve the life of the “driver.” If those automakers charge a premium for such vehicle behavior, the preservation of life would essentially be determined by wealth.

If assigning liability for a fully autonomous vehicle seems nearly impossible, it must be easier for a vehicle that is semi-autonomous, right? Not necessarily. What happens when a vehicle notifies its “driver” that human intervention is necessary? If that notification occurs three seconds before a crash, would it be up to a court to determine if three seconds was sufficient warning? What if a crash would have resulted regardless of the “driver’s” involvement? How will a court evaluate the adequacy of the evasive action the “driver” took?

A final liability question surrounds the inevitable period of time autonomous vehicles will share the road with human-driven vehicles. Autonomous vehicles will be programmed to behave cautiously and, in most instances, that’s a good thing. However, can you imagine being stuck behind an autonomous vehicle that’s trying to make a Michigan left on the East Beltline during rush hour? It could take hours for it to find enough space to pull out. A human driver has the distinct advantage of using eye contact and body language to communicate with other drivers while forcing into the flow of traffic.

Also, driving styles vary wildly across the country. An autonomous vehicle programmed to fit in with drivers in New York City would cause quite a ruckus on the streets of Holland, MI. Humans can adapt their driving techniques when needed – something autonomous vehicles won’t be able to do. Who is responsible when an autonomous vehicle causes an accident not because it broke a driving law, but because it just didn’t quite fit in with traffic?

However the determination of liability plays out, one thing is for sure – insurance will follow suit. In the meantime, let’s all do our best to achieve the main thing autonomous vehicles promise to accomplish – reduced car accidents. After all, for the foreseeable future, we’re all still liable!

This article was written by Derek Boer.