
Sharad Goel: How the presidential election flummoxed the pollsters

The results on election night reflect systemic polling weaknesses, says a professor of management science and engineering.

Swing voter? | Reuters/Brendan McDermid


Donald Trump’s election victory surprised many, but it was a particular jolt to pollsters and election forecasters.

In the final week of campaigning, the overwhelming majority of polls gave Hillary Clinton a healthy lead nationwide and in many swing states. Leading independent forecasters placed Clinton’s chances of winning the Electoral College at anywhere from 71% to 99%.

Was it a statistical fluke, a surprise but within the polls’ margins of error? Not by a long shot, says Sharad Goel, assistant professor of management science and engineering at Stanford University. Goel, who recently co-authored two important pre-election studies of polling errors, admits that he was as surprised as anybody by Trump’s victory. But he is certain that the results on Nov. 8 reflected systemic polling weaknesses, and he has a strong suspicion about two weaknesses in particular. “This wasn’t just random noise in the polls,” Goel says. “This was a systemic failing.”

The first error, Goel says, is a mistaken belief about what he and his colleagues call the “mythical swing voter.”

Pollsters and political campaigns focus intensively on swing voters, those who appear either undecided or wavering in the home stretch of a campaign. And that focus seems reasonable on the surface: Polls in the final stretch often show what appear to be sudden shifts in favor of one candidate or the other.

In reality, Goel says, what really swings is not so much voters’ intentions as their willingness to participate in election surveys. Specifically, people who think their candidate is losing are less likely to answer a pollster’s questions, while those who think their candidate is winning are more eager to talk.

The result is a significant sampling error that can understate or overstate both the depth and the volatility of a candidate’s support. What looks like a sudden rise or drop in support, perhaps in the wake of a bad news development, turns out to be mainly a shift in supporters’ willingness to talk to pollsters.
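A toy simulation makes the mechanism concrete. The numbers below are invented for illustration (they are not from Goel’s studies): voter intentions stay fixed at 52%, but after a bad news cycle one side’s response rate drops, and the poll alone appears to show a large swing.

```python
import random

random.seed(42)

# Hypothetical electorate: intentions are FIXED at 52% for candidate A.
TRUE_SUPPORT_A = 0.52
N_CONTACTS = 100_000  # people the pollster attempts to reach

def poll(response_rate_a, response_rate_b):
    """Simulate one poll where each side's supporters answer at different rates."""
    a_responses = b_responses = 0
    for _ in range(N_CONTACTS):
        if random.random() < TRUE_SUPPORT_A:
            if random.random() < response_rate_a:
                a_responses += 1
        else:
            if random.random() < response_rate_b:
                b_responses += 1
    return a_responses / (a_responses + b_responses)

# Before a bad news cycle for A: both sides respond at 10%.
before = poll(0.10, 0.10)
# After: A's supporters are discouraged and respond at only 7%.
after = poll(0.07, 0.10)

print(f"apparent support for A before: {before:.1%}")  # ~52%
print(f"apparent support for A after:  {after:.1%}")   # ~43%, though no one changed their vote
```

A modest, one-sided dip in response rates is enough to manufacture a nine-point “swing” even when not a single voter changed their mind.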

Goel and his colleagues found dramatic evidence of the phenomenon in a newly published study of the 2012 presidential election titled “The Mythical Swing Voter.”

In September 2012, polls showed that President Obama was ahead of Mitt Romney by an average of 4 percentage points. In early October, right after Obama was criticized for a lackluster performance in the first presidential debate, polls showed that Romney suddenly had a 12-point lead. By late October, after the third debate, Obama had regained the lead.

Most forecasters and pundits ascribed Obama’s apparent swoon in popularity to his weak performance in the first debate, just as they ascribed his apparent rebound in part to better performances in the second and third face-offs.

But in analyzing the 2012 polls, Goel and his colleagues found an important clue that this conventional wisdom was wrong. When the same pollsters asked people whom they had voted for in the previous election, in 2008, 48% of those in the September polls said they had voted for Obama. But in early October, that share slipped to 42%. Likewise, the share of people who said they had voted in 2008 for the Republican nominee, John McCain, jumped from 32% in September to 42% in early October.

Unless a surprising share of people changed their own recollection of how they had voted four years earlier, that was a sign that the sample of respondents had shifted toward a less Democratic and more Republican group.
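The arithmetic behind that clue is simple. Using the recalled-vote shares quoted above:

```python
# Shares of respondents recalling their 2008 vote, from the figures in the article.
september = {"obama_2008": 0.48, "mccain_2008": 0.32}
october   = {"obama_2008": 0.42, "mccain_2008": 0.42}

def recalled_margin(sample):
    # Margin among respondents: recalled Obama voters minus recalled McCain voters.
    return sample["obama_2008"] - sample["mccain_2008"]

print(f"September sample: recalled-2008 margin {recalled_margin(september):+.0%}")  # +16%
print(f"October sample:   recalled-2008 margin {recalled_margin(october):+.0%}")    # +0%
# The 2008 result itself cannot change, so a 16-point swing in the recalled
# vote signals that the *composition* of who answered the poll had shifted.
```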

To dig deeper, Goel and his colleagues carried out their own series of large-scale online surveys during the final months of the 2012 presidential campaign. Using the Xbox platform to reach people, they conducted more than 750,000 interviews. To track people’s views over time, they focused on 83,000 respondents who had answered more than once and whose first response came before the first debate. On average, each of these people was interviewed four times.

Naturally, the demographic characteristics of Xbox users differ from those of the American electorate as a whole. For one thing, Xbox users are more likely to be male and younger than average.

When the researchers used standard statistical adjustments to correct for those demographic issues, the Xbox surveys produced essentially the same picture as most other polls: a sudden dip in Obama’s popularity after the first debate.

But it was also true that likely Democratic voters were much less likely than Republicans to take part in the survey in the two weeks after the first presidential debate. When the researchers added adjustments based on how people had voted in 2008 and how they described their party and ideological leanings, they found that the change in participation rates erased nearly all of Obama’s apparent swoon.
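A minimal sketch of that kind of adjustment, with made-up numbers chosen to echo the October skew described above: each respondent is weighted so that the sample’s recalled 2008 vote matches fixed population targets, which pulls the estimate back toward the under-responding group. (Real post-stratification adjusts on many variables at once; this shows the idea on one.)

```python
from collections import Counter

# Each respondent: (recalled 2008 vote, current 2012 intention). A hypothetical
# post-debate sample in which Obama supporters under-respond, so recalled
# McCain voters are over-represented (42% vs. 42%, as in the October polls).
sample = (
    [("obama_2008", "obama")] * 380 + [("obama_2008", "romney")] * 40
    + [("mccain_2008", "romney")] * 400 + [("mccain_2008", "obama")] * 20
    + [("other", "obama")] * 85 + [("other", "romney")] * 75
)

# Population targets for recalled 2008 vote (e.g. from earlier, balanced samples).
targets = {"obama_2008": 0.48, "mccain_2008": 0.32, "other": 0.20}

# One step of post-stratification: weight each group up or down to its target share.
n = len(sample)
group_share = {g: c / n for g, c in Counter(past for past, _ in sample).items()}
weights = {g: targets[g] / group_share[g] for g in targets}

raw = sum(1 for _, now in sample if now == "obama") / n
weighted = (
    sum(weights[past] for past, now in sample if now == "obama")
    / sum(weights[past] for past, _ in sample)
)
print(f"raw Obama share:      {raw:.1%}")       # 48.5%
print(f"weighted Obama share: {weighted:.1%}")  # ~55.6%
```

Once the sample is reweighted to look like the electorate on past vote, much of the apparent post-debate dip disappears, which is the pattern the researchers reported.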

In a separate study of senatorial election polls from 1998 to 2014, Goel and his colleagues found that the actual margin of error was much wider than the pollsters had estimated. The poll results were supposed to come within 2 or 3 percentage points of the actual results – the margin of error – 95% of the time. Instead, only 75% of the polls were within the margin of error.

On top of that, Goel says, was a second weakness: The polling errors were surprisingly correlated. Errors that showed up in one poll were likely to show up in many others. That’s a disturbing finding, because forecasters tend to assume that a larger number of polls with similar results reduces uncertainty.
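Why correlation is so damaging can be sketched with a small simulation (every parameter here is an illustrative assumption, not an estimate from the studies): averaging ten polls shrinks independent sampling noise, but a bias shared by every poll survives the averaging, so a naive 95% interval built on the independence assumption misses the true margin far more than 5% of the time.

```python
import random
import statistics

random.seed(0)
TRUE_MARGIN = 0.02       # candidate truly ahead by 2 points
SAMPLING_SD = 0.03       # per-poll sampling noise (roughly a 1,000-person poll)
SHARED_BIAS_SD = 0.02    # cycle-wide bias hitting every poll the same way
N_ELECTIONS, N_POLLS = 10_000, 10

missed = 0
for _ in range(N_ELECTIONS):
    bias = random.gauss(0, SHARED_BIAS_SD)  # identical across this cycle's polls
    polls = [TRUE_MARGIN + bias + random.gauss(0, SAMPLING_SD) for _ in range(N_POLLS)]
    avg = statistics.mean(polls)
    # Naive 95% interval that assumes the ten polls' errors are independent:
    moe = 1.96 * SAMPLING_SD / N_POLLS ** 0.5
    if abs(avg - TRUE_MARGIN) > moe:
        missed += 1

print(f"nominal miss rate: 5%, simulated miss rate: {missed / N_ELECTIONS:.0%}")
```

With these assumed parameters the naive interval misses roughly 40% of the time, echoing the gap between the advertised 95% coverage and the roughly 75% the researchers actually observed in Senate polls.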

Likewise, assumptions that turned out to be wrong in one state – such as the expected participation rates of women or white men – turned out to be wrong in other states as well.

Goel suspects that this kind of correlated error was important to understanding Trump’s victory, and that the voting rate for white men was probably well above most pollsters’ assumptions in many of the swing states. So while the polls showed Clinton solidly ahead in four swing states and a toss-up in five others, Trump ultimately won all but a handful of those battlegrounds.

Polling and forecasting have become steadily more sophisticated over the past century, Goel says, but it’s clear they both have a long way to go.
