Editor’s Note: For another viewpoint, see Point: A Humbling Time for Pollsters

Conducting survey research is of paramount importance for purposes ranging from supplementing the U.S. Census to better understanding public opinion and consumer interests on a wide range of topics.

Pre-election polls are a subset of survey research that provide an opportunity to test new and old survey methodologies against real-world results, with the goal of improving and advancing the methods used to understand public sentiment.

At times, innovations in survey research lead to new forms of data collection, for example the introduction of random sampling in 1936, or the transition from face-to-face interviews to telephone interviews in the 1960s and 1970s.

The advancement and continuous evolution of communication technology provide survey researchers with new opportunities for collecting data, including cellphone and online sampling. However, the validity and reliability of newer methods remain in question, as they have not yet had the time and opportunities to be tested that traditional forms of data collection have had.

In light of the recent election backlash, it bears repeating that pre-election polls should never be viewed as precise predictions, but as instruments that provide a range within which to interpret the data. For example, a national poll that projects Joe Biden leading Donald Trump 51 percent to 47 percent may have a 3 percentage point margin of error.

This means that Biden’s expected vote could range between 48 percent and 54 percent, while Trump’s share of the vote could range between 44 percent and 50 percent. Therefore, Trump could theoretically win by as many as 2 percentage points, and the poll would have performed as expected, within the margin of error.
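The arithmetic behind those ranges is simple, and a minimal sketch makes it concrete. The figures below (51, 47, and a 3-point margin of error) are the illustrative numbers from the example above, not actual poll results.

```python
# Minimal sketch of the margin-of-error arithmetic described above.
# The poll numbers (51, 47) and the 3-point margin of error are the
# illustrative figures from the example, not actual polling results.

def support_range(point_estimate, margin_of_error):
    """Return the (low, high) band implied by a poll's margin of error."""
    return point_estimate - margin_of_error, point_estimate + margin_of_error

biden_low, biden_high = support_range(51, 3)   # 48 to 54
trump_low, trump_high = support_range(47, 3)   # 44 to 50

print(f"Biden: {biden_low} to {biden_high} percent")
print(f"Trump: {trump_low} to {trump_high} percent")

# At the extremes of the two bands, Trump could finish ahead by
# trump_high - biden_low = 2 points and the poll would still have
# performed within its stated margin of error.
print(f"Largest Trump lead still consistent with the poll: "
      f"{trump_high - biden_low} points")
```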

However, the public would likely conclude that this was a polling error, and would claim such a result as a failure of the pollster or the polling industry. Herein lies the disconnect between the media's and the public's expectations of polling and the reality of what can be extrapolated from the data.

As pollsters, our goal is for 95 percent of pre-election polls, as with surveys generally, to fall within each poll's margin of error. However, we recognize that pre-election polls can be rife with potential errors that are unique to them and uncharacteristic of traditional survey research.

For example, an essential first step in the pre-election polling process is determining who is most apt to vote in an election. In 2020, this issue was compounded by attempting to figure out who planned to vote early or by absentee ballot. Early voters appeared to be breaking heavily for Biden, while Election Day voters tended to be strong supporters of Trump.

In addition, record turnout was expected, but just how many people would vote was unknown, in stark contrast to surveys of populations that remain fairly static. This unique dilemma forced pollsters to estimate what percentage of the turnout would vote early and what percentage would vote on Election Day.

A miscalculation of this split would significantly throw off the validity of the poll.
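To see why, consider a hedged, hypothetical sketch of how the assumed early versus Election Day split feeds into a blended projection. All numbers here are illustrative, not figures from any actual 2020 poll; they simply show how a turnout assumption alone can move the headline number.

```python
# Hypothetical sketch of how the assumed early/Election Day split feeds
# into a blended projection. All numbers are illustrative, not figures
# from any actual 2020 poll.

def blended_support(early_support, eday_support, early_share):
    """Weight candidate support among early voters and Election Day
    voters by the assumed share of the electorate voting early."""
    return early_support * early_share + eday_support * (1 - early_share)

# Suppose a candidate polls at 60 percent among early voters and
# 40 percent among Election Day voters.
early_support, eday_support = 60, 40

# If pollsters assume 65 percent of ballots are cast early...
assumed = blended_support(early_support, eday_support, 0.65)   # 53.0

# ...but only 55 percent actually are, the blended estimate shifts.
actual = blended_support(early_support, eday_support, 0.55)    # 51.0

print(f"Projection with assumed split: {assumed:.1f} percent")
print(f"Projection with actual split:  {actual:.1f} percent")
print(f"Error from the turnout assumption alone: {assumed - actual:.1f} points")
```

In this toy case, a ten-point miss on the early-vote share alone shifts the projection by two points, before any error in measuring candidate support itself.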

While polls overall fell short of the 95 percent confidence standard this season, the general trends projected in the presidential race were consistent with the final results. It is within this context, and with the rich data we now have on polls that missed the mark, that we continue our quest to refine and perfect the polling process.

At Emerson College, we are not content to simply follow the science; we are committed to continually pushing and testing new modes and methods that will advance our ability to assess public opinion and thereby enhance trust and credibility in public polling.

As we move on to the next election, it would be wise not to throw the baby out with the bathwater, but to think of pre-election polls within the context of the proverb that close only counts in horseshoes, hand grenades, and polling.