NHTSA Probes Tesla Autopilot Over Emergency Crashes

WASHINGTON, D.C. – August 16, 2021 – The National Highway Traffic Safety Administration (NHTSA) announced today it has launched a wide-ranging preliminary evaluation into Tesla’s advanced driver-assistance system, Autopilot, following a series of crashes involving emergency vehicles. The investigation covers approximately 765,000 Tesla Model S, 3, X, and Y vehicles from the 2014 through 2021 model years.

The federal safety agency’s probe was triggered by 11 specific collisions that occurred between December 20, 2018, and July 26, 2021. In each incident, a Tesla operating with Autopilot or Traffic-Aware Cruise Control engaged struck an emergency vehicle that was stationary at a roadside scene. These crashes involved a variety of first responder vehicles, including police cars, fire trucks, ambulances, and road maintenance vehicles. Tragically, these incidents have resulted in 17 injuries and one fatality.

NHTSA highlighted troubling commonalities among the crashes. Many occurred at night, and in almost all cases, the emergency vehicles had active warning lights, flares, or arrow boards clearly indicating their presence. Specific examples cited include a Tesla Model 3 striking a parked police car in The Woodlands, a suburb of Houston, Texas, and another collision with a police vehicle in Lansing, Michigan.

A preliminary evaluation is the first step in NHTSA’s investigation process. The agency aims to understand how Tesla’s Autopilot system detects and responds to emergency vehicles and their scene controls. The investigation will also delve into human factors, examining how drivers interact with and oversee the system, as well as the operational design limits of the Autopilot technology itself. Should the preliminary evaluation uncover significant safety concerns, it could escalate to an Engineering Analysis, a more intensive phase that can ultimately lead to a mandatory recall.

Tesla has consistently maintained that its Autopilot system requires active driver supervision and that drivers remain ultimately responsible for operating their vehicles safely. Nonetheless, the company began activating in-car cameras in May 2021 to monitor drivers for attention, a move seen by some as an acknowledgment that driver engagement needed to improve.

Critics have long argued that Tesla’s naming conventions for its driver-assistance features, such as “Autopilot” and “Full Self-Driving,” are misleading and can foster a dangerous sense of complacency or over-reliance among drivers. Consumer Reports, a prominent advocate for automotive safety, has repeatedly called for greater clarity in the marketing of these systems. The Center for Auto Safety has also voiced concerns about the potential for misuse. Earlier this year, Senator Richard Blumenthal (D-CT) urged the Federal Trade Commission to investigate Tesla’s marketing practices, citing concerns about potentially misleading claims regarding the capabilities of its driver-assistance technology.

This investigation marks a significant escalation in regulatory scrutiny for Tesla, whose advanced driver-assistance systems have been at the forefront of the debate surrounding the safety and regulation of automated driving technologies. The outcome could have profound implications for Tesla’s software development and marketing strategies, and could ultimately lead to mandated updates or changes across its fleet.
