For the past several years, environmentalists have attacked ExxonMobil through the ongoing #ExxonKnew campaign. The two sides have been embroiled in a sprawling legal struggle for more than a year, arguing over how much the energy company knew about the risks of global climate change and whether it hid this information from investors. At the heart of this contention is an academic study by Harvard University’s Geoffrey Supran and Naomi Oreskes, both anti-fossil fuel activists, that used content analysis techniques to determine the themes of ExxonMobil’s communications over a 40-year period. That analysis was harshly criticized this week by Kimberly A. Neuendorf, Ph.D., herself an academic with an extensive background in the field of quantitative content analysis.

In a review submitted to the court by Exxon on Thursday, Neuendorf identified seven fundamental flaws in Supran and Oreskes’s analytical methods, including improper selection of documents, defects in the coding used to analyze those documents, and a lack of properly defined research questions. These flaws affected both which documents were chosen for analysis and how the authors’ own biases could have shaped the way they undertook the study.

At heart, these flaws mean that Supran and Oreskes’s research is insufficient to justify the claims they make, Neuendorf says.

“[Supran and Oreskes] improperly infer from content analysis that ‘ExxonMobil misled the public,’” writes Neuendorf. “Content analysis cannot legitimately be used to reach conclusions about the effect particular statements have on the public.”

She also criticized the study for relying on techniques that lack “reliability, objectivity, and validity” and that fall outside those generally employed in content analysis. One of the most egregious examples was the researchers’ own direct involvement in the coding and analysis process, which, given their outspoken opinions on global warming, likely tainted the results. Supran and Oreskes have both faced significant criticism for starting their study with a predetermined outcome of finding fault with Exxon.

“Content analysis coding ought to be conducted with coders who are at arm’s-length with regard to the research, in order to maximize objectivity. Optimally, coders should be blind to the research questions or goals,” writes Neuendorf. “In the [Supran and Oreskes] study, the coders were not blind. In fact, they were as non-blind as could be imagined.”

Supran and Oreskes had been criticized earlier for their involvement in the ExxonKnew campaign. Oreskes is credited with the idea for the 2012 La Jolla conference, which laid out a strategy to link oil companies to climate change modeled on the approach once used to connect tobacco firms to lung cancer. Supran was involved in the divestment movement and tweeted about how Exxon’s actions “may have imperiled all of humanity.”

Neuendorf explains that these biases carried over into the structure of the analysis.

The researchers’ methodology is all the more difficult to evaluate because the original study did not provide essential information that others would need to replicate it. The methodology the researchers did describe includes several techniques that diminished the reliability and objectivity of the analysis, among them having coders “skim” documents to locate codable material and encouraging coders to draw on contextual information.

“Given that different coders are likely to have different contextual knowledge approaching the coding task, this precludes objectivity and reliability of the content analysis,” Neuendorf concluded.
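In quantitative content analysis, reliability of this kind is typically demonstrated with a chance-corrected intercoder agreement statistic such as Cohen’s kappa, computed over codes assigned independently by each coder. The following Python sketch is purely illustrative; the coder labels are hypothetical and do not come from the Supran and Oreskes study.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders (Cohen's kappa)."""
    n = len(coder_a)
    # Observed agreement: fraction of items both coders labeled identically.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement, from each coder's marginal label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(freq_a) | set(freq_b))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes for ten passages: "A" = acknowledges climate risk,
# "D" = communicates doubt about climate risk.
coder_1 = ["A", "A", "D", "A", "D", "D", "A", "A", "D", "A"]
coder_2 = ["A", "D", "D", "A", "D", "A", "A", "A", "D", "A"]
print(f"kappa = {cohens_kappa(coder_1, coder_2):.2f}")  # ~0.58 for this toy data
```

A kappa near 1 indicates agreement well beyond chance; values near 0 suggest that the coding scheme, or the coders’ differing contextual knowledge, is producing inconsistent results.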

Released last summer, the original study analyzed the text of ExxonMobil documents spanning 1977 to 2014 that the company has made publicly available. Using these documents, the researchers attempted to determine whether the company hid information about global warming risks from the general public even as those risks were known to scientists working for ExxonMobil.

Neuendorf argued that the selection of documents used in the original study did not meet the standards of conventional content analysis, since the documents comprised neither a census of all available documents nor a statistically valid probability sample. The sampling problem was compounded by the study’s failure to account for major shifts in corporate organization over the roughly 40-year period.
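A probability sample in this sense is one in which every document in the corpus has a known, nonzero chance of selection, so that conclusions drawn from the sample generalize statistically to the whole. A minimal Python sketch of a simple random sample, using hypothetical document identifiers rather than the study’s actual corpus:

```python
import random

def simple_random_sample(documents, k, seed=0):
    """Draw a simple random sample: every document has an equal
    probability of selection, so sample findings generalize."""
    rng = random.Random(seed)  # fixed seed keeps the draw reproducible
    return rng.sample(documents, k)

# Hypothetical corpus of document identifiers spanning 1977-2014.
corpus = [f"doc_{year}_{i}" for year in range(1977, 2015) for i in range(5)]
sample = simple_random_sample(corpus, k=30)
print(f"{len(corpus)} documents in corpus; {len(sample)} sampled")
```

By contrast, an ad hoc selection gives some documents no defined chance of inclusion, which is why it supports neither census-level nor sample-based claims.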

The study has been criticized on a variety of grounds since its release, including the relatively small number of documents examined, the fact that the underlying scientific research had been published in academic journals at the time, and the study’s conflation of Exxon and Mobil, which existed as separate companies until their merger in 1999.

However, Neuendorf’s paper is one of the first formal academic analyses of the study.
