Tesla Autopilot Manslaughter Case Raises Larger Questions About Car Maker Liability
This week, Los Angeles County prosecutors filed two counts of vehicular manslaughter against the operator of a self-driving Tesla, which ran a red light, crashed into another car, and instantly killed two people.
This Tesla Autopilot manslaughter case is the first felony prosecution in the U.S. of a driver accused of causing a fatality while operating a vehicle with an engaged driver-assist system.
The fatal crash itself occurred on December 29, 2019. A Tesla Model S driven by Kevin George Aziz Riad, with Tesla's Autopilot system engaged, ran a red light and crashed into a Honda Civic driven by Gilberto Alcazar Lopez. Mr. Lopez and his passenger, Maria Guadalupe Nieves-Lopez, were killed instantly.
Mr. Riad has pleaded not guilty. According to the Los Angeles Times, the families of Mr. Lopez and Ms. Nieves-Lopez will be following the criminal case closely as they pursue a civil case against Mr. Riad and Tesla.
The case raises larger questions. Even if Mr. Riad bears responsibility for what occurred, what is Tesla's share of responsibility in this crash? As vehicle automation becomes ubiquitous on our roadways, don't automakers have a growing ethical, moral, and legal responsibility when their technology fails?
Many of the answers lie in the data extracted from vehicles involved in self-driving car crashes. Automakers, however, and Tesla in particular, are reluctant to share that data. Even more troubling, our government agencies aren't holding these automakers accountable or demanding more transparency.
The National Highway Traffic Safety Administration (NHTSA) seems to be giving automakers a free pass on liability. Although NHTSA has ordered some, but not all, automakers to report crash data on automated vehicles, it has also issued a statement saying that crashes involving automated technology are the human driver's responsibility.
This statement insulates automakers and points the finger outward instead of inward. It shifts the paradigm of responsibility away from the product and back onto the consumer, in a way that runs contrary to product liability law.
Other government agencies are also failing to hold automakers responsible. The California Department of Motor Vehicles allowed Tesla to opt out of a requirement that autonomous technology companies report crashes and system failures to the agency.
Shouldn’t state and federal governments be doing everything in their power to lean in on car makers, to prevent accidents like these?
Without more transparency, it will be hard to reach a point where autonomous technology is truly safe.
In his book Factfulness, Hans Rosling surveyed airplane crashes from the 1930s through 2016 and found that plane crash deaths had decreased dramatically over that span. Probing why, he discovered that flight authorities from around the world had met in Chicago in 1944, at the Chicago Convention on International Civil Aviation, and agreed to use a common form for incident reports. Once authorities agreed to share information about flight incidents, everyone in the aviation industry could learn from each other's mistakes. Fatal plane crashes have since plummeted to near non-existence.
If we do not learn from our past, the past will assuredly repeat itself, and Tesla Autopilot manslaughter cases could become commonplace.
If auto manufacturers will not voluntarily disclose self-driving crash data in order to safeguard the public, then our government agencies must demand that data.
And if our government agencies won't demand self-driving crash data, then plaintiffs' attorneys must pursue it through our judicial system. Our focus should be on improving technology and public safety, not on increasing company profits at the expense of human lives.