The leaked video of Google’s executives soothing Googlers (Google’s name for its own employees) after the 2016 presidential election shows how unaware one of the world’s best-informed companies can be.
Kent Walker, Google senior vice president for global affairs, explained that the election results show that “fear, I think, not just in the United States, but around the world is fueling concerns, xenophobia, hatred, and a desire for answers that may or may not be there.” The executives appeared completely confident that nearly every Googler was offended by the election, thought disparagingly of anyone who voted for the president, and believed that those voters’ opinions were simply wrong.
This is called living in a bubble.
The shock and dismay in the video raise a question: How could people who work for an organization that is (1) thought to have more information than perhaps any organization in history, and (2) reputed to have world-leading artificial intelligence (AI), have been in such a bubble?
It’s a lesson in the limits of AI and big data. Google and other tech companies do indeed have unprecedented, massive amounts of data. And they are constantly working to improve what they know about people’s wants so that they can place ads more effectively than anyone else. The companies appear quite proud of these accomplishments, and rightly so.
But their leadership appears to have massively missed what voters would do on Election Day. And based on the words and emotions the Google execs chose to describe the roughly 48 percent of voters who voted differently than (apparently) every Googler, they seem to have missed what many of those 48 percent are like. While I have not met the vast majority of Trump voters, I have yet to meet one who fits the descriptions embraced in the video.
Why did vast amounts of data and unprecedented analytical powers fail to penetrate the worldview bubble? The answers provide a window into the limitations of AI.
AI and big data are good at predicting outcomes when the future will be a lot like the past. When the world changes, the algorithms (the sets of instructions that tell the computers what to do) are caught flat-footed because AI learns only from past data. This is also why racial and gender biases appear so prevalent in AI systems: models trained on biased historical data reproduce those biases.
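To make that concrete, here is a minimal sketch in Python, using made-up, hypothetical data: a model fit to yesterday’s relationships keeps projecting them even after the world has changed, because nothing in its training data signals the shift.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# "The past": one relationship between a signal and an outcome.
x_past = rng.uniform(0, 1, size=(500, 1))
y_past = 0.4 + 0.2 * x_past[:, 0] + rng.normal(0, 0.02, size=500)

model = LinearRegression().fit(x_past, y_past)

# "Election day": the relationship flips, but the model has no way to know.
x_new = rng.uniform(0, 1, size=(500, 1))
y_new = 0.6 - 0.2 * x_new[:, 0] + rng.normal(0, 0.02, size=500)

print("average error on past data:   ", np.mean(np.abs(model.predict(x_past) - y_past)))
print("average error after the shift:", np.mean(np.abs(model.predict(x_new) - y_new)))
```

The model is not wrong about the past; it simply has no mechanism for noticing that the past has stopped being a guide.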
AI has no intuition, creativity or empathy. Most AI systems are based on statistical analyses that assign probabilities to future events. They lack the ability to make judgments based on alternative understandings of how the world might work. They cannot be surprised. And they cannot gain an understanding of people by feeling their emotions.
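The same point can be shown in a few lines (again a sketch with invented numbers): a statistical classifier only spreads probability over outcomes it has already seen, so an outcome outside its training data is not merely unlikely to it, it is unrepresentable.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Hypothetical training data: two outcomes the model has observed before.
X = np.array([[0.1], [0.2], [0.8], [0.9]])
y = np.array(["outcome_a", "outcome_a", "outcome_b", "outcome_b"])

clf = GaussianNB().fit(X, y)

print(clf.classes_)                # the model's entire universe of possible answers
print(clf.predict([[0.5]]))        # it must pick one of the outcomes it already knows
print(clf.predict_proba([[0.5]]))  # probabilities spread only over past categories
```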
And AI has no curiosity. It does not interpret; it answers only the questions it is asked. If the user lacks sufficient insight to ask the right question, AI at best provides results that are nonsensical within the user’s worldview.
This limited scope of AI presents a challenge for systems and organizations that are built around it. Humans are not very good at rethinking worldviews when they are challenged. As John Kenneth Galbraith is reputed to have said, “Faced with the choice between changing one’s mind and proving that there is no need to do so, almost everyone gets busy on the proof.”
This is true for big data projects: Gartner’s Nick Heudecker recently tweeted that the failure rate of big data projects is around 85 percent, and that the problem is with people, not technology.
If AI provides what appear to be illogical answers, execs and AI practitioners alike are tempted to ignore the results or twist the algorithms until the answers align with prior beliefs. It takes something more than AI for people to gain insight and wisdom. It takes humility and open engagement.
What can be done? Those relying on AI and big data should seek diversity and engage with people who make them uncomfortable. As Heineken demonstrated with its “Worlds Apart” ads, human interaction across viewpoints can make worlds of difference, even more than AI and big data can.
People who are worried that AI and big data give companies insurmountable market power can relax. Clearly these technologies are powerful, as demonstrated by the financial successes of tech companies. But the video demonstrates that the technologies are no more perceptive than the human minds that design them.
People in bubbles fail to appreciate the meaning of evidence that contradicts their worldviews. That is the market opportunity for the entrepreneurs and investors that are poised to dethrone today’s tech leaders.