Recently a Tesla car in Autopilot mode collided with a fire truck in California. Meanwhile, a California man sued GM over an unrelated accident involving one of its autonomous vehicles. Both incidents feed into fears about the safety of driverless cars. It’s easy to be scared of new technologies, but what happens if we stop innovating?
Driverless car crashes are newsworthy not because the technology is unusually dangerous, but because they happen so rarely. If we let splashy headlines about something far less common than a conventional car crash sway our fears, we keep ourselves at greater risk.
In 2015, the National Highway Traffic Safety Administration recorded nearly 6.3 million police-reported traffic accidents. So when a standard car collides with a fire truck, it may not even make the local news. And while the typical driver has an accident about once every 165,000 miles (or about every 10 years), Google’s autonomous car logged more than twice that mileage without any collisions.
The odds of dying in a car accident are unacceptably high (about one in 112), yet we focus our fears on events that are far less likely to kill us, like plane crashes (one in 96,566). In some cases, the more catastrophic and rare an event is, the more likely we are to focus our safety concerns on it. After the September 11 terrorist attacks, many people feared air travel and switched to driving; academics have estimated that the shift resulted in more than 1,500 additional deaths in car accidents.
It’s not just misplaced focus that can make us less safe. Even our attempts to make a technology safer sometimes backfire, pushing us toward even more dangerous choices.
For example, a group of pediatricians found that a proposal to require children to ride in car seats on airplanes would cost $1.3 billion per life saved in the event of a plane crash, and would likely result in four additional infant deaths as parents inevitably chose to drive instead.
More generally, warnings on everything from hairdryers to Dr Pepper bottles make us less likely to take any warning seriously. As Dima Yazji Shamoun of the University of Texas Center for Politics & Governance has written, “When everything has a warning label, nothing has a warning label.”
So, while each new technology comes with risk, yesterday’s technologies are sometimes far more dangerous. Improving upon them — or even making them obsolete — is essential to making the world safer and more prosperous.
The more carefully we examine the risks of existing versus future technologies, the clearer it becomes that we engage in risky behavior every day. So, while robot cars can sound scary, a few things are certain: they won’t drive drunk, drowsy, or distracted, the failings behind so many accidents caused by human drivers. Eliminating those factors alone will eventually amount to a major public health victory.
More important, even when some accidents occur (and some certainly will), the algorithms and machine learning behind these cars will constantly improve through trial and error, making it less likely that the same accident happens twice. This, unfortunately, is not something we can say about people.
When new technologies come on the scene, let’s take a deep breath and put the benefits on equal footing with our fears and doubts. Saint Thomas Aquinas once noted, “If the highest aim of a captain were to preserve his ship, he would keep it in port forever.” Ship captains brave the high seas and take bold risks because they understand that a great reward is possible.
The same is true for individuals, organizations, and even society in general. There can be no reward without a certain degree of risk-taking.