
The margin of error is bigger than you think

A Stanford study of election surveys finds that common error patterns show up in separate polls by different companies.

It’s important for pollsters and voters alike to better understand what polls can reveal. | iStock/hermosawave

If you were shocked at how wrong the pollsters were in the 2016 elections, get ready for more surprises.

A new study led by Houshmand Shirani-Mehr and Sharad Goel, assistant professor of management science and engineering, finds that the actual error rates of polls are about twice as high as their reported “margin of error.”

To put that in concrete terms, it’s common for pollsters to say that their surveys should be accurate within 3 percentage points in each direction. But Goel and his colleagues estimate that the actual margin of error is 6 or 7 points.

That’s their conclusion after comparing more than 4,000 election polls from 1998 to 2014 with actual election results.

Doubling the margin of error greatly increases the uncertainty. If a poll shows that 52% of voters support a candidate, a 7-point margin of error means that the candidate’s actual share of the vote could be anywhere from 45% to 59%.
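
To see where the reported figure comes from, here is a minimal sketch (my illustration, not part of the study) of the conventional 95% margin-of-error formula for a simple random sample; the sample size of 1,000 and the 52% result are hypothetical.

```python
import math

def reported_margin_of_error(p, n, z=1.96):
    """Conventional 95% margin of error for a simple random sample.

    p is the observed vote share (e.g., 0.52) and n the sample size.
    This is the figure pollsters typically report; it captures only
    random sampling error, not the other error sources the study examines.
    """
    return z * math.sqrt(p * (1 - p) / n)

p, n = 0.52, 1000                       # hypothetical poll: 52% support, 1,000 respondents
moe = reported_margin_of_error(p, n)    # about 0.031, i.e. roughly 3 points
print(f"Reported interval: {p - moe:.3f} to {p + moe:.3f}")
print(f"Doubled interval:  {p - 2 * moe:.3f} to {p + 2 * moe:.3f}")   # roughly 0.46 to 0.58
```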

Although the study doesn’t analyze the 2016 presidential election, it highlights the kinds of errors that could have understated Donald Trump’s strength. Among them:

  • Inaccurate screening techniques to identify likely and unlikely voters, which can drastically throw off a forecast.
  • Misinterpretation of “non-response” rates. It turns out that people are much less likely to answer a survey if they think their candidate is behind.
  • Errors in weighting techniques. Pollsters often weight responses so that the mix of people in the survey better matches the overall population, but those adjustments can themselves be off (a simplified weighting sketch follows this list).
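
As a rough illustration of that weighting step (not the authors’ method), the sketch below reweights hypothetical respondents so the sample’s age mix matches assumed population targets; if the targets or the response patterns are wrong, the weighted estimate can still miss.

```python
# Minimal post-stratification sketch with made-up numbers: respondents are
# reweighted so the sample's age mix matches assumed population targets.
population_share = {"18-34": 0.30, "35-64": 0.50, "65+": 0.20}   # assumed census-style targets

sample = [                                                        # hypothetical respondents
    {"age": "18-34", "supports_candidate": True},
    {"age": "35-64", "supports_candidate": False},
    {"age": "65+",   "supports_candidate": True},
    {"age": "65+",   "supports_candidate": True},
]

# Share of each age group actually present in the sample.
counts = {g: sum(r["age"] == g for r in sample) for g in population_share}
sample_share = {g: counts[g] / len(sample) for g in population_share}

# Weight each respondent by population share / sample share for their group.
weights = {g: population_share[g] / sample_share[g] if sample_share[g] else 0.0
           for g in population_share}

weighted_support = (sum(weights[r["age"]] * r["supports_candidate"] for r in sample)
                    / sum(weights[r["age"]] for r in sample))
print(f"Raw support:      {sum(r['supports_candidate'] for r in sample) / len(sample):.2f}")
print(f"Weighted support: {weighted_support:.2f}")
```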

Unfortunately, the study also finds that different pollsters often fall into the same traps: polling errors are highly correlated, meaning the same error patterns show up in separate polls by different companies.

What that means, says Goel, is that forecasters can’t necessarily improve their accuracy by averaging the results of many different polls. It’s possible for all of the polls to be off in the same direction.
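
A toy simulation of that point, under assumptions of my own (a shared industry-wide bias plus poll-specific noise): averaging many polls washes out the independent noise but leaves the shared bias untouched.

```python
import random

random.seed(0)
true_support = 0.52
n_polls = 20

# Each poll's error = a shared, industry-wide bias (the correlated part)
# plus its own sampling noise (the independent part).
shared_bias = random.gauss(0, 0.02)    # e.g., every pollster mis-screens likely voters the same way
polls = [true_support + shared_bias + random.gauss(0, 0.03) for _ in range(n_polls)]

average = sum(polls) / n_polls
print(f"True support:    {true_support:.3f}")
print(f"Poll average:    {average:.3f}")
print(f"Remaining error: {average - true_support:+.3f}  (close to shared bias {shared_bias:+.3f})")
```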

Shirani-Mehr, Goel and two colleagues—David Rothschild of Microsoft Research and Andrew Gelman of Columbia University—presented their findings on February 18 at the annual conference of the American Association for the Advancement of Science. Their paper has been accepted for publication in the Journal of the American Statistical Association.

Quantifying the problem

“There is much more error than most people think,” says Goel. “Professional pollsters knew this was a problem, but they hadn’t quantified it and even I didn’t understand what a big deal it was before the 2016 election.”

One common issue, for example, stems from how pollsters try to screen out unlikely voters. If pollsters misread the clues, they can easily understate or overstate the votes a candidate is likely to get.

Another likely error, says Goel, is to misinterpret the significance of people who refuse to answer surveys.

In a previous study, Goel and his colleagues found that people are much less likely to answer surveys if the candidate they support appears to be trailing. If Hillary Clinton seemed to have a solid lead over Donald Trump, for example, polls might have reported a dip in Trump’s support that merely reflected the reluctance of gloomy Trump supporters to answer surveys.
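
To illustrate that mechanism with made-up numbers (not the earlier study’s data): if supporters of the trailing candidate are less likely to respond, the raw poll understates that candidate even though true support never changed.

```python
import random

random.seed(1)
true_support = 0.48                    # hypothetical true share for the "trailing" candidate
response_rate = {"trailing": 0.06,     # assumed: their supporters answer surveys less often
                 "leading": 0.09}

respondents = []
for _ in range(100_000):               # simulate who agrees to take the survey
    supports_trailing = random.random() < true_support
    rate = response_rate["trailing"] if supports_trailing else response_rate["leading"]
    if random.random() < rate:
        respondents.append(supports_trailing)

observed = sum(respondents) / len(respondents)
print(f"True support:     {true_support:.3f}")
print(f"Observed in poll: {observed:.3f}")   # lower, purely because of who answered
```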

The new study looked back at the accuracy of 4,221 polls in 608 elections for senators, governors and president, all of them conducted less than three weeks before election day.

The largest polling errors occurred in senatorial and gubernatorial races. If a poll’s reported margin of error were accurate, the actual election result should fall within it 95% of the time.

What Goel and his colleagues found, however, was that only 73% of Senate polls and 74% of gubernatorial polls landed within their stated margins of error. Presidential polls did somewhat better, at about 88%.
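
That 95% benchmark can be checked mechanically: for each poll, test whether the final result landed inside the poll’s reported interval, then compute the share of polls that did. A schematic version with invented records:

```python
# Schematic coverage check with invented records; the study applies the same idea
# to 4,221 real polls and finds coverage well below the nominal 95%.
polls = [
    # (poll estimate, reported margin of error, actual election result), as vote shares
    (0.52, 0.03, 0.48),
    (0.47, 0.03, 0.49),
    (0.55, 0.04, 0.50),
    (0.51, 0.03, 0.53),
]

covered = sum(abs(estimate - result) <= moe for estimate, moe, result in polls)
print(f"Empirical coverage: {covered / len(polls):.0%}  (nominal target: 95%)")
```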

Goel says there are no simple solutions, but both pollsters and the public would do well to better understand what polls are revealing.

“I’d like to see more transparency among the pollsters,” Goel says. “I don’t think people understand the difficulties in forecasting elections.”