Liability

The Importance of New Liability Laws

Since AVs vary in their level of integrated automation, human drivers and automation components may share control of the vehicle, so liability requires careful legislative consideration. In a survey of Texans’ opinions on C/AVs, Bansal and Kockelman found that legal liability was the second-biggest concern in adopting self-driving vehicles (Bansal and Kockelman, 2017, p. 11). As Kockelman and Boyles note, “in light of the limited federal regulation of C/AV transportation, there are questions about the most useful role of states and local governments in overseeing this new technology” (Kockelman and Boyles, 2018, p. 14).

NHTSA introduced the term “highly automated vehicle” (HAV) and revised its classification to mirror the global industry reference for the six levels of driving automation. The model state policy articulated in 2013 states that motor vehicle liability and insurance rules are the responsibility of the states, and NHTSA stated that these general areas of responsibility should remain unchanged for HAVs. Kockelman and Boyles summarize the elements of a model state framework, which recommends that “states should consider how to allocate liability among HAV owners, operators, passengers, manufacturers, and others when a crash occurs” (Kockelman and Boyles, 2018, p. 206). However, “if responsibility is legislated to be mainly on the manufacturers and the federal government, manufacturers may avoid the insecurity of a state-by-state legal liability patchwork” (p. 212).

Current Liability Laws

Under current tort law, operators of vehicles must behave “reasonably” while driving; when they fail to do so, they can be held liable for the damages they cause. This is not always the case, of course: in crashes that result from a design defect, the plaintiff can sue and recover from the manufacturer of the defectively designed vehicle, as well as from the operator if the latter was also negligent. In the new world of AVs, however, product liability claims against manufacturers will become the rule rather than the exception. If a C/AV is a potential cause of a crash and was operating in automated mode, the manufacturer will be joined as a defendant in the litigation, and the primary claims brought against it will be complex product liability causes of action. For platooned vehicles, the C/AV industry could face a risk of tort liability for large-scale, multiple-car accidents beyond the existing risk of auto product liability claims (Kockelman et al., 2016, p. 47).

Security Conflicts

Determining fault or liability in an HAV collision requires access to the HAV’s proprietary machine learning models, data, and algorithms. As noted in the legal chapter of the book by Kockelman and Boyles on smart transport, “Legislators and agencies need to evaluate carefully whether mandating access to proprietary data is fair and/or necessary. If this problem is solved now among the stakeholders, it can save everyone time and money later on” (Kockelman and Boyles, 2018, p. 212).

Current Liability Standards

“Several states impose special insurance requirements on C/AVs before they can be tested or deployed on public roads. Both California and Nevada, for example, impose a $1-5 million insurance requirement before allowing testing of AVs on public roads. Michigan, by contrast, does not impose additional insurance requirements on AVs for testing or deployment purposes. Florida, Nevada, and the District of Columbia have liability protection for post-sale conversion of vehicles to AVs. Liability protection is given to OEMs whose vehicles are converted to C/AVs. California has no explicit mention of such liability protection” (Kockelman and Boyles, 2018, p. 227).

Criticisms of AV Ethics and Decision Making

Goodall (2014) examines various criticisms of machine ethics and automated vehicles, highlighting the following concerns about AV decision-making in ethically complex situations, particularly in the moments before a crash.

  1. Regardless of fault, an automated vehicle should behave ethically to protect not only its own occupants, but also those at fault.
  2. Crashes requiring complex ethical decisions are extremely unlikely.
  3. In level 2 and 3 vehicles, a human will always be available to take control, and therefore the human driver will be responsible for ethical decision making.
  4. Humans rarely make ethical decisions when driving or in crashes, and automated vehicles should not be held to the same standard.
  5. Overall benefits outweigh any risks from an unethical vehicle.


The design of AVs confronts a classic dilemma, first proposed by philosopher Philippa Foot, known as the Trolley Problem: you see a runaway trolley moving toward five people who are tied up (or otherwise incapacitated) on the tracks. You are standing next to a lever that controls a switch. If you pull the lever, the trolley will be redirected onto a side track, and the five people on the main track will be saved; however, a single person is lying on the side track. Should you do nothing and allow the trolley to kill the five people on the main track, or pull the lever, diverting the trolley onto the side track where it will kill one person? (Foot, 1967)

We would hope AV operating programs would choose the lesser evil, but it would be an unreasonable act of faith to assume that programming issues will sort themselves out without a deliberate discussion of ethics, such as which choices are better or worse than others. Is it better to save an adult or a child? What about saving two or three adults versus one child? We do not like thinking about these uncomfortable and difficult choices, but programmers will need to instruct an automated car exactly how to act across the entire range of foreseeable scenarios, as well as lay down guiding principles for unforeseen ones. Programmers will need to confront these decisions even if human drivers never face them in the real world. And it matters to questions of responsibility and ethics whether an act was premeditated (as in programming a robot car) or done reflexively without deliberation (as may be the case for human drivers in sudden crashes). Ethics by numbers alone seems naïve and incomplete; rights, duties, conflicting values, and other factors need to come into play (Lin, 2013).
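To make the programming challenge concrete, the minimal Python sketch below encodes a purely numerical “least harm” rule of the kind discussed above. Everything in it is a hypothetical illustration (the Outcome fields, the injury estimates, the candidate maneuvers), not any manufacturer’s actual logic; it shows how thin such a rule is, since it has no notion of fault, rights, or duties.

    from dataclasses import dataclass

    @dataclass
    class Outcome:
        """A hypothetical crash-avoidance option and its predicted harm."""
        action: str               # e.g., "stay in lane", "swerve left"
        expected_injuries: float  # predicted number of serious injuries
        occupants_at_risk: int    # vehicle occupants exposed to harm
        others_at_risk: int       # pedestrians/other road users exposed

    def naive_least_harm(options: list) -> Outcome:
        """Pick the option with the fewest predicted injuries.

        This "ethics by numbers" rule is exactly what the text calls
        naive: it ignores fault, rights, duties, and any moral difference
        between harming occupants and harming bystanders.
        """
        return min(options, key=lambda o: o.expected_injuries)

    if __name__ == "__main__":
        options = [
            Outcome("stay in lane", expected_injuries=5.0,
                    occupants_at_risk=0, others_at_risk=5),
            Outcome("swerve onto shoulder", expected_injuries=1.0,
                    occupants_at_risk=0, others_at_risk=1),
        ]
        print(naive_least_harm(options).action)  # -> "swerve onto shoulder"

Running the sketch prints “swerve onto shoulder,” the arithmetic minimum. Nothing in the rule can distinguish an innocent bystander from a person at fault, which is precisely why ethics by numbers alone is incomplete.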

See the Automated Vehicle Policy page for more information.