Big Tech companies like Facebook, Twitter and YouTube have content moderation policies warning users against uploading or posting “extremist” or “hateful” content. Every day, the companies’ algorithms flag posts, videos and tweets, some of which are taken down for violating those policies.
In response to the recent shootings in El Paso, Texas and Dayton, Ohio, President Donald Trump called for a summit on violent extremism with social media companies, which will convene Friday.
So how does Big Tech hope to stop more mass shooters from uploading their “manifestos,” or rants, to the internet?
“Our thoughts are with the victims and their families,” a Facebook spokesperson told InsideSources. “Content that praises, supports or represents the shooting or anyone responsible violates our Community Standards and we will continue to remove as soon as we identify it.”
Carl Szabo, vice president and general counsel for Big Tech lobbying group NetChoice, said that between July and December 2018, Facebook, Twitter and YouTube “took action against over 11 million accounts that had broken policies on hate speech and extremism.”
“The 11 million accounts, that’s the number you don’t hear about because it’s gone before you even see it,” Szabo told InsideSources. “The large platforms and even the small ones do work really hard to take down harmful content and do that through algorithms and bots, through user tagging, and through manual reviews.”
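In practice, those three channels — automated detection, user reports and manual review — typically feed one review pipeline. The Python sketch below is a generic illustration of that pattern, not any company’s actual system; every name and the placeholder phrase list are invented for the example.

```python
# Generic illustration of a moderation pipeline; all names are invented.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Post:
    post_id: str
    text: str
    flags: List[str] = field(default_factory=list)  # records who or what flagged the post


def algorithmic_flag(post: Post) -> None:
    """Stand-in for automated classifiers that scan every new post."""
    banned_phrases = ("example extremist phrase",)  # placeholder term list
    if any(phrase in post.text.lower() for phrase in banned_phrases):
        post.flags.append("algorithm")


def user_report(post: Post, reason: str) -> None:
    """A user flags the post for human attention."""
    post.flags.append(f"user:{reason}")


def manual_review(post: Post) -> str:
    """A human moderator makes the final call on anything flagged."""
    return "remove" if post.flags else "keep"
```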
The Internet Association, another Big Tech lobbying group, said “violent and terroristic speech … have no place either online or in our society.”
“IA members work every day to find dangerous content and remove it from their platforms,” IA President and CEO Michael Beckerman said in a statement emailed to InsideSources. “IA members are committed to continuing to work with law enforcement, stakeholders, and policymakers to make their platforms safer, and to prevent people from using their services as a vehicle for disseminating violent, hateful content.”
The rant of the El Paso gunman, who killed 22 people and injured 24 others at a Walmart in El Paso, Texas on Saturday morning, was found on the unmoderated social media site 8chan. Cloudflare, the web infrastructure company supporting 8chan, withdrew its services Monday, forcing the site off the internet.
In a YouTube video, 8chan owner Jim Watkins said the El Paso shooter’s rant was originally uploaded to Instagram and only later shared on 8chan.
Facebook, which owns Instagram, denies Watkins’ claim.
“We have found nothing that supports this theory,” a Facebook spokesperson told InsideSources.
Facebook also told InsideSources that the El Paso shooter’s Instagram account was disabled on Saturday and hadn’t been active for a year, and that the company is working with law enforcement.
“Once the platforms were able to identify the video, then automated systems are pretty good at preventing re-uploads,” Szabo said. “That’s typically the problem you see, it’s not just takedowns of content, it’s preventing content that’s trying to be re-uploaded either through fake accounts or other platforms.”
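That kind of re-upload filtering amounts to fingerprinting files that have already been removed and checking each new upload against the list. The Python sketch below is a simplified illustration of that idea — the names are invented, and the exact-hash matching shown here is a stand-in for the perceptual hashing real platforms use, which still matches after re-encoding or cropping.

```python
# Illustrative only: a toy re-upload filter keyed on exact file hashes.
# Real systems rely on perceptual fingerprints of video frames and audio;
# a plain SHA-256 digest changes if the file is re-encoded at all.
import hashlib

# Hypothetical blocklist of fingerprints for content already identified and removed.
known_violating_hashes = set()


def fingerprint(file_bytes: bytes) -> str:
    """Return a hex digest used as the file's fingerprint."""
    return hashlib.sha256(file_bytes).hexdigest()


def register_takedown(file_bytes: bytes) -> None:
    """After a takedown, remember the file so identical copies are caught on upload."""
    known_violating_hashes.add(fingerprint(file_bytes))


def allow_upload(file_bytes: bytes) -> bool:
    """Reject uploads whose fingerprint matches previously removed content."""
    return fingerprint(file_bytes) not in known_violating_hashes
```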
But critics of Big Tech say social media platforms aren’t doing enough to take down mass shooters’ screeds and similar content supporting such violence, which experts and law enforcement worry can lead to “copycat” shootings.
The Brookings Institution published an analysis Wednesday arguing that Big Tech cracks down on extremist content only after the fact and isn’t prepared for the ways its platforms will be abused a year from now.
“There are dozens of these barely-moderated havens for extremism online right now, and none of them will be affected by legislation proposed to supposedly rid US-based Big Tech of hatred,” wrote Megan Squire, a professor of computer science at Elon University, for Brookings. “In fact, most of these sites do not bother to conduct any meaningful self-regulation and are too small to qualify for monitoring under proposed legislation. Making things worse, when a ‘Big Tech’ site is removed, an ‘Alt-Tech’ clone rises up to take its place. Thus, the real danger slips right through the cracks.”
But much of the content posted by mass shooters and circulated among individuals who advocate violence against people of color, Jews, or LGBT people is legal in the United States (though it is illegal in some other democracies), and tech companies are within their legal rights not to take it down. (Section 230 of the 1996 Communications Decency Act shields tech platforms like Facebook and Google from liability for what their users post.)
Eric Goldman, a professor of law at Santa Clara University, said deciding what content to remove often forces Big Tech to draw a fuzzy line.
“The fundamental question is, how do you get rid of legal content? And my pushback would be, why? What problems do you want to solve? We saw this problem with the Christchurch video,” he told InsideSources, referring to the gunman who killed more than 50 people at two mosques in Christchurch, New Zealand in March and livestreamed his massacre on Facebook. “In the US, that’s legal content. We have to start with, what kind of content are we talking about? I personally am ok with that [El Paso shooter’s] rant not being widely disseminated. It’s pernicious content. I think there’s value to dissecting the rant to understand what happened, why it happened, and what we might do differently to prevent the next attack, so treating it as verboten might make it harder to diagnose the problem. I personally don’t want it widely available as a reference point for the next extremist.”
In 2017, Facebook, Microsoft, Twitter and YouTube founded the Global Internet Forum to Combat Terrorism (GIFCT), which Goldman said was formed “at the behest of the U.S. government.” GIFCT swiftly identifies and blacklists terrorist content, giving social media platforms guidance for keeping such material off their services.
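GIFCT’s best-known mechanism is a shared industry database of digital fingerprints (hashes) of known terrorist images and videos: when one member company identifies a file, the others can check new uploads against the same list. The sketch below is a toy illustration of that sharing pattern — the class and method names are invented for this example, and exact hashes again stand in for the perceptual fingerprints used in practice.

```python
# Illustrative sketch of an industry hash-sharing arrangement; the class and
# method names are invented for this example, not GIFCT's actual system.
import hashlib


class SharedHashDatabase:
    """A fingerprint list that several member platforms read and write."""

    def __init__(self) -> None:
        self._hashes = set()

    def contribute(self, file_bytes: bytes) -> None:
        """A member flags content; its fingerprint becomes visible to all members."""
        self._hashes.add(hashlib.sha256(file_bytes).hexdigest())

    def is_flagged(self, file_bytes: bytes) -> bool:
        """Any member can check an upload against the shared list."""
        return hashlib.sha256(file_bytes).hexdigest() in self._hashes


# Two hypothetical member platforms sharing one database.
shared_db = SharedHashDatabase()
manifesto = b"...bytes of a file one platform has identified and removed..."

shared_db.contribute(manifesto)         # Platform A flags the file.
print(shared_db.is_flagged(manifesto))  # Platform B's upload check prints: True
```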
The problem, Goldman said, is that GIFCT is a “black box” and there is no transparency about its methods for identifying and blacklisting what qualifies as terrorist content.
Goldman also worries about the U.S. government’s attitude toward tech companies’ moderation practices, specifically Sen. Josh Hawley’s (R-Mo.) bill, which would rescind Section 230 protections for tech companies that don’t moderate content in a “politically neutral way.”
“Section 230 was designed to do the socially valuable work of policing problematic content,” Goldman said. “So Section 230 is part of the solution, not part of the problem. If we want legal content offline, the government can’t ban it, but the internet companies might choose to suppress or obscure that content. Section 230 provides a path to a society-wide win. Hawley’s approach would effectively force internet companies to carry content like the terrorist’s rant even if they would choose to delete it. To me, his law is about the worst policy outcome we could imagine.”