A senior U.S. intelligence official told reporters Monday that the U.S. is tracking social media disinformation campaigns from Iran, days after the Islamic Republic downed an unmanned U.S. reconnaissance drone, raising the threat of a U.S. military strike. President Donald Trump threatened to “obliterate” Iran on Tuesday, and Iran called him “mentally retarded.”

As foreign policy conflicts escalate, social media companies like Facebook and Twitter are increasingly caught in the middle. U.S. history is full of government-sanctioned disinformation campaigns against foreign adversaries, and the U.S. recently conducted its own social media disinformation campaign against Iran, one that went awry by targeting journalists and human rights activists and prompted the State Department to cut the program’s funding.

“Cyberwar is hacking networks; likewar is hacking people’s mindsets on social media platforms,” New America Senior Fellow Peter Singer told InsideSources. (Singer’s newest book, “LikeWar: The Weaponization of Social Media,” covers this subject.) “We’ve seen an evolution in how [social media] companies think about this problem. I liken them to parents going through the stages of grief about what happened to their baby. Initially they were in denial [about] anything bad going on and the significance of it, and now they’re in bargaining mode, saying yes, we realize these are problems and these are all the things we’re doing so you don’t have to make us do more.”

Social media disinformation campaigns from the U.S. and Iran have been going on for a while, Singer said, but whenever tensions rise, the disinformation campaigns escalate, too.

“There’s already been a variety of campaigns that Iran has carried out,” Singer said. “A range from planting false stories to elevating voices artificially to creating false personas that are then used as a way in for more traditional forms of cyber attacks … creating false Arabic personas or [driving] false stories viral in Arabic media and the like. Iran has been hitting U.S. networks that range from critical power infrastructure to oil and gas companies [going back] eight to 10-plus years.”

But, he said, the U.S. government does the same thing.

“We’ve run campaigns in terms of both cyber attacks, going back many, many years, to sabotage nuclear research, to some kind of attack on Iranian systems, and we’ve done the same thing on the social media side,” he said.

While Facebook and Twitter are taking down more posts and deactivating more fake accounts, Singer said they still don’t do nearly enough, partly because of how their business models work and partly because individuals still have to take responsibility for what they share on social media.

“Individuals are part of this battle whether they know it or not,” Singer said. “The weak links are the ignorant [social media users], those that don’t understand what’s going on. In this space there’s a major challenge: over 60 percent of social media users can’t tell the difference between real and fake news, or between an advertisement and an article, and don’t even know how the social media companies make money, and how they make money off of you. They share articles without ever reading more than the headline. We’re individually partly to blame.”

The other big problem is the U.S. government’s attitude toward social media disinformation campaigns. As Robert Mueller’s investigation into the 2016 election revealed, the White House is terrified that admitting social media disinformation campaigns exist, and do disrupt the political process, would threaten Trump’s legitimacy.

As a result, Singer said, the phrase “social media” doesn’t exist in the federal government’s cybersecurity strategy.

“That is despite the clearly proven threat that this presents to everything from national security to elections to public health, you name it,” he said. “If you don’t have a strategy, you don’t have division of labor or funding. Back in the Cold War we had the Interagency Active Measures Working Group, the info-sharing clearinghouse for identifying KGB disinformation campaigns. We don’t have a version of that today. A huge part of this is to stop framing it in terms of technology. You’re never going to get an algorithmic change; you need digital literacy. You’re not going to achieve this just through Cyber Command.”

To stop the spread of disinformation on social media, he said, individuals should take responsibility for where they get their news and be careful about what they share, repost or reblog.

But there’s still a role for social media companies — to educate their own users and reevaluate what kinds of products they introduce.

“Another option is to explore the vulnerabilities of pushing a product out there and seeing how it does, but also seeing how it could be abused,” Singer said. “An example is the live feed. Teenagers use it to livestream their suicides and killers their mass killings. Anyone who knows anything about teenagers or terrorists could have seen this [coming]. When a company can control the flow of information and shift algorithms to lower or elevate visibility, then you are in the middle. It doesn’t mean you’re bad or good, it’s just who you are, and you can see them coming to grips with this.”
