We don’t live in a black and white world. Reality has a whole messy collection of grays too, not to mention stripes and plaids. And while we like simple answers that cut through the morass, real life doesn’t always provide them.
On Thursday the National Highway Traffic Safety Administration (NHTSA, pronounced “nit-sah”) announced plans to investigate the May 7 crash that killed a Florida man who was behind the wheel of, but not driving, a Tesla Model S. The car has an Autopilot feature that allows it to take full control of highway driving, and at the time of the accident, the car was in control.
So is Tesla at fault? The real answers are far from black and white.
Beta testing at 80 mph
Tesla’s Autopilot feature is a “beta” that’s disabled every time you turn the car off. This driver (and every driver who wants the feature) had to turn it on and click through the warnings. And there are many warnings. Among them is this one:
Warning: Traffic-Aware Cruise Control cannot detect all objects and may not detect a stationary vehicle or other object in the lane of travel. There may be situations in which Traffic-Aware Cruise Control does not detect a vehicle, bicycle, or pedestrian. Depending on Traffic-Aware Cruise Control to avoid a collision can result in serious injury or death.
Maybe the driver is responsible. That warning is pretty clear — but disclaimers are just that: disclaimers. You don’t get to absolve yourself of responsibility simply by posting a note saying you aren’t responsible. If a restaurant hung a sign saying “eat at your own risk,” would it be off the hook for food poisoning?
That said, what does “beta” mean in this context? Cars aren’t computers. We’re fine dealing with “beta” software on a computer, where crashes are as common as unpopped kernels in a bag of popcorn. Crashes on the highway don’t lead to rebooting; they lead to twisted metal. Simply by dint of the potential outcomes, unfinished software shouldn’t be released to drivers.
A note on Tesla’s website carries more than a tinge of defensiveness, as though a project manager at the company were already preparing to be excoriated for the death. The blog post is titled “A Tragic Loss,” but opens not with notes of sadness but with this comment on the incidence of collisions:
“This is the first known fatality in just over 130 million miles where Autopilot was activated. Among all vehicles in the US, there is a fatality every 94 million miles. Worldwide, there is a fatality approximately every 60 million miles.”
It’s as if the company were saying, “Hey, we didn’t do it! Lots of people die every year!!” Only in the final paragraph of the note does the company acknowledge that “the customer who died in this crash had a loving family and we are beyond saddened by their loss… We would like to extend our deepest sympathies to his family and friends.”
Humans will be humans
But it’s not Tesla’s fault, at least not completely. When Tesla enabled the Autopilot feature, people promptly posted videos of themselves climbing into the backseat while the car steered down the highway. One man was caught napping behind the wheel of his Tesla as the car blithely drove itself along. Even in a fully autonomous vehicle, which Tesla doesn’t claim to manufacture, we should be awake and alert as 5,000 pounds of steel, leather, and batteries zips us along at 80 miles per hour.
Cars aren’t toys, and cars that can steer themselves and avoid obstacles shouldn’t turn us into passengers or children.
Then there’s the driver’s record: he had racked up eight speeding tickets in six years. In theory, a self-driving car could turn him into a better driver, one who obeys the speed limits and doesn’t change lanes recklessly. That’s in the future, of course, when cars are fully autonomous. Today’s cars are hardly smart enough.
Perhaps the trillion-dollar question in this case, “Is it Tesla’s fault?”, should be rephrased as, “How do you deal with human nature?”
It’s inevitable that people will act recklessly — the videos of people pulling stupid stunts are evidence of that. How do self-driving cars (and the people who program them) deal with that? Google has said it wants to make its cars drive more like humans. After all, human drivers expect other vehicles on the road to act as they would, and humans aren’t good drivers. Imagine if the car in front of you came to a full stop at that yellow light as it’s supposed to, rather than tearing through as you would. Would that catch you by surprise? A car that anticipates human foibles, and knows enough to accelerate through a yellow light when braking hard would invite a rear-end collision, may actually reduce accidents.
A speed bump on the road to autonomy
The ultimate point of self-driving vehicles is just that: reducing accidents. Call them what they really are: collisions, and usually avoidable ones at that. More than a million people die every year in vehicle crashes, and the vast majority of them are caused simply because humans are human. We look at cell phones. We get distracted by others, our own thoughts, the radio, passing street signs, UFOs, whatever.
While this incident was a tragedy, it shouldn’t detract from the larger goal of reducing vehicular deaths. If designed right, computers will be much better drivers than we are — they never tire, they don’t get distracted, they come to a full stop and respect yellow lights. The road to complete autonomy for cars is potholed and full of obstacles. But let’s keep the destination in our sights.
The views expressed here are solely those of the author and do not reflect the beliefs of Digital Trends.