The truth lies. In an information infrastructure of fake news, echo chambers, and social media filter bubbles, our governing rationale supporting the U.S. Constitution’s First Amendment—that perspectives from a range of disparate sources stock a marketplace of ideas from which society can collectively come to the truth—may be in need of repair. To decide whether the First Amendment currently has a bandwidth problem, though, we need a grip on two predicate questions: how new the problem of fake news is, and what we can do about it.
Of course, the notion that the media purveys its truth instead of the truth isn’t new. As far back as the 1870s, British newsmen critiqued their American counterparts as “literary Barnums” who provided the public “news without truth or reliability.” And claims that the public craves unbiased accountings of government policies and their potential effects seem overstated as well. Research shows that when we decide how to vote on economic issues, our choices are governed not by facts about those issues but by our loyalties to a political worldview—even when that worldview leads us to positions that are not only provably false but self-damaging.
Most of us believe what we believe, and except at the margins, that’s that. The receptionist at the doctor’s office whom you overheard responding to the latest bombshell revelation out of the White House with “they must be spying on Trump again”? You have about as much chance of persuading her she’s wrong as I do of persuading my 4-year-old son to put away his shoes.
So the reason a particular piece of information resonates with particular people is not that it tells them what they want to know. It’s that it affirms what they want to believe to be true. That has long been so, whether the affirming bit of news is real or fake. But something is different now. Even if the Internet and social media have not altered society’s collective commitment to the search for truth (a commitment that may never have actually existed), there’s a fair argument that the amount of information in circulation has reached interference-causing levels. Facebook and Twitter have leveled the knowledge and dissemination hierarchies that pervaded old media, but social media have also added a layer to the information infrastructure—the low-friction sharing of news. Studies show that a reader’s trust in the person who shared a piece of content, rather than trust in its author, determines whether the reader believes the content is true. Our information systems have been flooded with facts intended not to persuade, but to confirm. And with so much grist for the confirmation-bias mill, the entrenchment of our existing positions becomes more, well, entrenched.
Even if all this is (pardon my use of the term) true—if we have reached a state of pure subjective truth—it’s difficult to know whether that’s a First Amendment utopia, dystopia, or the status quo. And it’s even harder to decide what should come next. Do social media companies have an obligation to serve as arbiters of truth on their platforms? Is there a role for government in trying to solve this problem, as one California lawmaker believed when he proposed a law making it unlawful to spread fake politics-related news via the Internet? Is fact-checking a worthy endeavor, or a hopeless game of Whack-a-Mole? If the Supreme Court was correct in 1974’s Gertz v. Robert Welch, Inc., when it said “there is no such thing as a false idea,” is it worth arguing about truth at all? And if that’s true, shouldn’t we all just ¯\_(ツ)_/¯, go home, and finish binge-watching GLOW?
In the end, our modern First Amendment problem might lie not in our communications infrastructure, but in the constitutional rationale we use to support it. We have proceeded for decades under the presumption that a marketplace of ideas will lead to the truth, but we have never rigorously examined what we mean by “truth.” Maybe the marketplace takes place not primarily in the spaces where we dialogue with each other, but in the dialogues in our own heads—what the philosopher John Stuart Mill, writing 150 years before Facebook, deemed “the law of likings or dislikings” that should be left to the judgment of individuals, not of society. And if one person’s subjective truth turns out in some sense to be objectively fake, perhaps the best way forward is not to instinctively call that fakeness out on our feeds, or to ask Mark Zuckerberg to do it for us, but first to acknowledge that some of our own beliefs—our own truths—might be fake as well.