Polls reveal most Americans fear artificial intelligence. Blame the culture, not the machines.
Start with Hollywood’s long love affair with robots running amok. Two iconic examples: the “thinking machine” HAL in Stanley Kubrick’s 1968 film “2001: A Space Odyssey,” and the rogue “Terminator” robots in the movie series that began in 1984.
Combine such dystopian images with ubiquitous forecasts claiming such “thinking machines” will eliminate work. True, computers outperform people at many tasks. Cars also outperform people walking or riding horses. Saving labor is why humanity keeps inventing such machines. Word processors replaced typists, and before that, calculating machines replaced clerks doing “ciphering.” But overall employment, and the economy as a whole, kept growing.
Computers began replacing “knowledge workers” 75 years ago. And thank goodness. British mathematician Alan Turing said that without the code-breaking machines he helped design during World War II, it would have taken “100 Britons working eight hours a day on desk calculators 100 years” to crack the German code.
But it wasn’t until 1997 that a computer, IBM’s Deep Blue, defeated a reigning world chess champion, Garry Kasparov. That event sparked a spate of apocalyptic visions of the imminent dominance of artificial intelligence (AI).
Today’s algorithms continue to amaze: they recognize pictures of cats or your face, drive cars (truthfully, only for short distances), and answer simple questions like “Siri, when was Paris Hilton born?” So of course pundits are at it again, honing the old theme that AI will soon take over everything.
When confronted with past failed predictions, forecasters insist this time it’s different. Actually, this time they have a point. Technology has finally become good enough to launch a new era of computing that promises to be truly useful. The first two eras were annoying and limited.
In the first era, costly mainframes were cloistered in rooms overseen by a priesthood of operators. The second era was better: machines became cheaper and shrank to hand size. But smartphones are still distracting and require humans to adapt to them, rather than vice versa. Computing remains too much in the foreground, demanding special attention and unnatural behaviors.
The ideal computer should operate intuitively and disappear into the background. Useful “thinking” machines adapt to how humans behave, communicate and operate. That’s how cars are designed. That’s what AI finally offers.
A lot of people already, unknowingly, embrace AI. It enables smartphone maps and streaming video and audio apps, and it underlies ride-sharing and social media. AI also powers the intelligent virtual assistant (IVA), a new product class typified by Amazon’s Alexa, Apple’s HomePod and Google Home.
As AI gets more adept, it can, for example, recognize your specific voice or face. That’s not only the fastest path to better cybersecurity; it also enables things like starting a car just by asking. Would it be so bad if an AI yelled at you when it noticed you were about to fall asleep while driving? AI as co-pilot will improve driving safety sooner, and more cheaply, than robo-cars.
AI will improve safety, accuracy and efficacy for doctors and nurses, factory workers and retail salespeople, too. For the most part it won’t replace workers; rather, it will facilitate their tasks through audio and visual interaction, including, where useful, projecting “augmented reality” around a task. AI will be especially useful for enhanced learning and skills training.
All of this, in economists’ terms, improves labor productivity, which throughout history has driven growth and created more work overall. But could thinking machines soon outstrip human intelligence?
Since we don’t know what intelligence actually is, or fundamentally how human brains operate, that’s a Hollywood trope rather than a real worry. Cars are no more artificial horses than jets are artificial birds or hammers are artificial hands. The invisibility and complexity of algorithms seem different to us, but people in every earlier era reacted similarly to the “magic” of new technology.
A more apt label for AI might be “savant computing,” borrowed from the (unfortunately named) condition once called “idiot savant,” now known as savant syndrome. Rare individuals are born with a mental disability paired with a superhuman skill in one narrow area, say calculating or painting, yet remain puzzlingly dysfunctional in everything else.
As AI gets better and cheaper — the hallmarks of computing — it will democratize supercomputing and raise everyone’s competence, in truly revolutionary ways. Of course AI will, as have all machines over history, change the nature of work.
Training and retraining will be needed. But this time, the very machines that make retraining necessary will also make it easier.