On March 23, 2018, just after 9:30 am, a Tesla SUV plowed straight into a road divider as it was passing the exit ramp to Route 85 from Route 101 in Mountain View, California. The impact caused the battery to burst into flames, and the driver was trapped in his seat by his seat belt. He later died at the hospital.
Last week, the National Transportation Safety Board (NTSB) issued a preliminary report providing further details. Here are the key findings:
- The "AutoPilot" feature was switched on at the time of the crash. It was on continuously for the last 19 minutes of the drive (out of 32 minutes total).
- The driver's hands were not on the wheel in the last 6 seconds before the crash. In the last 60 seconds, hands were detected on three separate occasions, totaling 34 seconds.
- In the final 3 seconds, still with no hands on the wheel, the car accelerated from 62 mph to 71 mph, with "no pre-crash braking or evasive steering movement detected."
- The autopilot system was set to a cruise speed of 75 mph, which is above the speed limit of 65 mph.
- During the 19-minute stretch before the crash, the Autopilot system warned the driver three times to put his hands on the wheel; the last warning came more than 15 minutes before the crash.
- The car, carrying only the driver, was in the HOV lane, whose restrictions would have been in effect at that time of the morning.
These are all telling observations. What did the reporter covering the Tesla beat at the Wall Street Journal think was the most salient finding? Well - the title of their article says it all: "Tesla Autopilot System Warned Driver to Put Hands on Wheel, U.S. Investigators Say." (link)
It is hard to imagine how that particular finding is the most important... unless you work for Tesla, or for someone who wants to pin the blame on the driver, who could not defend himself.
The Autopilot was continuously turned on for almost 20 minutes prior to the crash. The last warning to the driver happened "more than 15 minutes" before the crash. When driving at 60 mph and above, 15 minutes is 15 miles or more from the crash site. For example, the last warning might have been issued when passing Palo Alto, while the crash happened 15 minutes later in Mountain View. Rather than supporting the view that the driver recklessly ignored repeated warnings, this finding raises the question of why the car did not detect the imminent collision and take evasive action.
Hands not on wheel is taken to be a finding of major importance, but here is the problem: every accident investigation will discover hands not on wheel. Why? Because our sample consists only of car-casses (pun intended).
If the driver had taken evasive steering action, it would have compensated for computer error, and the accident would have been avoided!
This is a reverse of the classical Wald paradox that I covered some years ago. In that example, statistician Abraham Wald warned that you couldn't inspect the damage on warplanes that returned home to determine which parts of the planes were most subject to damage - because your sample is missing the planes which were shot down!
In our example, we get to see the dead but not the survivors. In order to understand whether the computer made errors, we'd need to also include cases in which the driver prevented the Autopilot from crashing the car.
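The selection effect described above can be illustrated with a small simulation. All the numbers below are invented for illustration: an assumed share of hands-off driving, an assumed rate of dangerous computer errors, and an assumed chance that a hands-on driver steers out of trouble. The point is only that, once you condition on a crash having happened, hands-off cases dominate the sample even when most trips are driven hands-on.

```python
import random

random.seed(42)

# Hypothetical parameters -- not estimates of real Tesla behavior.
N = 100_000            # simulated trips
p_hands_off = 0.3      # assumed share of trips driven hands-off
p_error = 0.01         # assumed rate of dangerous computer errors per trip
p_save_hands_on = 0.95 # assumed chance a hands-on driver corrects the error

crashes_hands_off = 0
crashes_hands_on = 0
for _ in range(N):
    hands_off = random.random() < p_hands_off
    if random.random() < p_error:  # the computer makes a dangerous error
        # Hands-off drivers cannot correct; hands-on drivers usually do.
        if hands_off:
            crashes_hands_off += 1
        elif random.random() > p_save_hands_on:
            crashes_hands_on += 1

total = crashes_hands_off + crashes_hands_on
print(f"hands-off share of all trips:   {p_hands_off:.0%}")
print(f"hands-off share of crashes:     {crashes_hands_off / total:.0%}")
```

Even though only 30% of simulated trips are hands-off, roughly nine out of ten crashes in this setup involve hands off the wheel, because the hands-on near-misses never enter the accident files. An investigator who sees only the crashes will "discover" hands not on wheel almost every time, regardless of how common hands-off driving actually is.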
Given that the driver did not steer in the last seconds, why did the Tesla vehicle accelerate? Why was it allowed to exceed the speed limit at all? These questions point to technical challenges with the system itself, not just driver behavior.