A fatal crash involving a Tesla Model S operating in Autopilot mode is drawing widespread scrutiny both in the United States and overseas.
Joshua Brown was killed in May this year when a tractor-trailer made a left turn in front of his Tesla and the self-driving car failed to apply the brakes.
The National Highway Traffic Safety Administration (NHTSA) said it is investigating the incident and will examine the design and performance of the automated driving systems in use at the time of the crash.
The opening of the preliminary evaluation does not represent a finding that the Tesla vehicle was defective, the NHTSA said.
In a blog post, Tesla noted that this is the first known fatality in just over 130 million miles driven with Autopilot activated.
Tesla further noted that neither Autopilot nor the driver noticed the white side of the tractor-trailer against a brightly lit sky, so the brake was not applied.
As companies continue to innovate and invest in self-driving technology, the crash suggests that fully automated cars remain a thing of the future.
The crash also raises important concerns over regulation.
Both The New York Times and The Wall Street Journal have reported on the regulatory questions the crash raises.
The crash also highlights liability concerns regarding this emerging technology. Most car crashes are caused by human error, but presumably the NHTSA investigation will also evaluate potential product liability on the part of the manufacturer.
The crux of the issue is weighing the risk of crashes caused by self-driving technology against the crashes it helps avoid.
The Insurance Information Institute (I.I.I.) notes that liability laws might evolve to ensure that advances in autonomous vehicle technology are not brought to a halt.