Even if you apply First Amendment principles, the 17 companies (and counting) that deplatformed former President Trump are not acting unlawfully. But at a time when Big Tech has near-monopolistic influence over what we see and say, the power to censor should not be in the hands of so few.

Big Tech must self-regulate, and it is uniquely capable of doing so.

How can these companies protect free speech while drawing hard lines against hateful rhetoric and calls to violence, which are not protected under the First Amendment?

The answer: Use AI to enforce objective policies on users’ hateful, inflammatory, racist or aggressive speech; apply them consistently whether they result in limiting or labeling content; and be transparent about how decisions are made.

I know the first call to action, objectively tagging toxic language, is possible because we do it at Writer, an AI writing assistant for the enterprise. In our case, we're trying to keep colleagues from being passive-aggressive toward each other, but we applied the same principles to a second model trained exclusively on Twitter data.
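To make that concrete, here is a minimal sketch of what toxicity tagging looks like in practice. It is not Writer's model; it runs an off-the-shelf classifier from the Hugging Face `transformers` library over a couple of posts, and the model name and 0.5 cutoff are illustrative assumptions.

```python
# A minimal sketch of toxicity tagging, not Writer's production model.
# Assumes the Hugging Face `transformers` library and a publicly available
# toxicity classifier; the model name and 0.5 threshold are illustrative.
from transformers import pipeline

classifier = pipeline("text-classification", model="unitary/toxic-bert")

posts = [
    "Thanks for the thoughtful reply, I learned something.",
    "Get smart Republicans. FIGHT!",
]

for post in posts:
    result = classifier(post)[0]        # e.g. {"label": "toxic", "score": 0.93}
    is_toxic = result["score"] >= 0.5   # illustrative cutoff
    flag = "TOXIC" if is_toxic else "ok"
    print(f"{flag:5}  {result['score']:.2f}  {post}")
```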

Defining “toxic” as content that is aggressive, mean, offensive, racist, sexist or hateful (on either side of a debate), we analyzed millions of tweets from October 20, 2020, through January 6, 2021, the date of the insurrection at the Capitol. On that day, 32% of all tweets were toxic, a 40% jump from the previous average. Trump himself contributed “just” four toxic tweets that day, but with 88 million followers, tweets and retweets like “Get smart Republicans. FIGHT!” have an outsized impact. Indeed, nearly one-third of all toxic tweets in the U.S. on January 6 contained at least one of these four phrases: “stop the steal,” “stolen,” “rigged,” or “fight.”
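The phrase analysis behind that last figure is straightforward once each tweet carries a toxic/non-toxic label. Here is a rough sketch of how it could be computed; the field names and the tiny sample are hypothetical, stand-ins for the labeled tweet stream described above.

```python
# A rough sketch of the phrase analysis: given tweets already labeled
# toxic/non-toxic, measure what share of the toxic ones contain one of
# the four flagged phrases. Field names and sample data are hypothetical.
PHRASES = ("stop the steal", "stolen", "rigged", "fight")

def contains_flagged_phrase(text: str) -> bool:
    lowered = text.lower()
    return any(phrase in lowered for phrase in PHRASES)

def summarize(tweets):
    """tweets: iterable of dicts like {"text": str, "toxic": bool}."""
    total = toxic = toxic_with_phrase = 0
    for tweet in tweets:
        total += 1
        if tweet["toxic"]:
            toxic += 1
            if contains_flagged_phrase(tweet["text"]):
                toxic_with_phrase += 1
    return {
        "toxic_share": toxic / total if total else 0.0,
        "phrase_share_of_toxic": toxic_with_phrase / toxic if toxic else 0.0,
    }

sample = [
    {"text": "Get smart Republicans. FIGHT!", "toxic": True},
    {"text": "Beautiful sunrise this morning.", "toxic": False},
]
print(summarize(sample))
```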

It’s clear that Trump’s language on Twitter has been toxic for a long time, and like a deadly virus, his influence spread across the entire platform, increasing its overall toxicity.

The artificial intelligence behind our analysis is not rocket science: it rests on human labeling of online content that is itself hateful, inflammatory, racist or aggressive, our definition of “toxic.” Is “fake news” inflammatory? If it refers to a factually inaccurate “stolen election,” it is in our models. Yes, editorial decisions on datasets are required to build a robust, trustworthy model, as are frequent updates to capture ever-evolving dog whistles on both sides of a debate, whether “MAGA” as shorthand for “delusional” or “illegal alien” to refer to “immigrant.”
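The label-then-train loop itself is simple to illustrate. The sketch below uses a deliberately tiny scikit-learn pipeline; real systems rely on far larger labeled datasets and transformer models, and the four example sentences and their labels are invented for illustration. The point is that the editorial judgment lives in the labels, and updating the model means updating the labeled data.

```python
# A deliberately small sketch of the label-then-train loop: human annotators
# mark examples as toxic or not, and the model learns from those decisions.
# The examples, labels and pipeline here are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "the election was stolen, time to fight",     # labeled toxic by annotators
    "I disagree with this policy, here's why",    # labeled non-toxic
    "those people are delusional and dangerous",  # labeled toxic
    "great turnout at the town hall today",       # labeled non-toxic
]
labels = [1, 0, 1, 0]  # 1 = toxic, 0 = not toxic

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# As dog whistles evolve, annotators add and relabel examples and the model
# is retrained; the editorial decisions live in the dataset, not the code.
print(model.predict_proba(["stop the steal"])[0][1])  # probability of "toxic"
```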

A transparent, open, collective effort to create a standard for healthy communication is necessary. It's also how Big Tech can bring its AI expertise to bear on the problem we have on social media. By balancing First Amendment principles against a user's toxicity and reach (how much influence does that user's content have across the platform?), I hope we can make decisions like the one Twitter had to make more transparent and fair.
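One way to make the "toxicity and reach" idea concrete is a single influence score that weights how toxic an account's content is by how far it travels. The sketch below is a hypothetical scoring function, not an existing platform metric; the field names and the weighting are assumptions.

```python
# A hypothetical "toxic influence" score: toxicity rate scaled by reach.
# The fields and weighting are illustrative assumptions, not a standard.
from dataclasses import dataclass

@dataclass
class AccountActivity:
    toxic_posts: int        # posts flagged toxic in the review window
    total_posts: int
    followers: int
    reshares_of_toxic: int  # retweets/shares of the flagged posts

def toxic_influence(account: AccountActivity) -> float:
    """Toxicity rate scaled by reach (followers plus amplification)."""
    if account.total_posts == 0:
        return 0.0
    toxicity_rate = account.toxic_posts / account.total_posts
    reach = account.followers + account.reshares_of_toxic
    return toxicity_rate * reach

# The same handful of toxic posts carries very different weight at scale.
high_reach = AccountActivity(toxic_posts=4, total_posts=20,
                             followers=88_000_000, reshares_of_toxic=500_000)
low_reach = AccountActivity(toxic_posts=4, total_posts=20,
                            followers=300, reshares_of_toxic=2)
print(toxic_influence(high_reach), toxic_influence(low_reach))
```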

Jack Dorsey himself tweeted that banning Trump set a dangerous precedent and was the result of Twitter's failure to “promote a healthy conversation.” I couldn't agree more, and it's possible for the tech community to work together to create a new, open standard for what “healthy” means.

Though the boot for Trump is good riddance, an exodus of conservative voices from mainstream social platforms serves no one. Communities are stronger when more voices are represented. To earn back trust, Big Tech needs to be clear about what it will and won't tolerate in its terms of service, apply those rules consistently over time and across all people, and be utterly transparent about why and how it arrives at decisions. We have the technology and the know-how to get there. Do we have the will?

There are 88 million reasons that we should.