Global Strategy Group
Nov 20, 2020

Polling After 2020

By Nick Gourevitch, Managing Director and Partner, GSG Politics

If you are reading this, odds are you have heard a lot of takes on the polls this year, and you probably have your own too. Frank Luntz says the “polling profession is done,” while Nate Silver says he’s “amazed that polls are as good as they are.” David Leonhardt of the New York Times wrote recently about 2020 being a “black eye” for polling but then asked a very good follow-up question: “Is part of the problem the public’s overly high expectation of precision?”

It’s been a roller-coaster few weeks — for the country obviously — but also for us pollsters! I’d be lying if I said my own feelings about polling haven’t gone through a wide range of emotions. On the Wednesday after the election, I was pessimistic about the future of polling and, frankly, I was upset. As the days have gone by (and more results have come in), my views have evolved, and I’ve come to appreciate that there’s far more nuance to the story than appeared at first glance.

I’m writing this note to you because we here at Global Strategy Group (GSG) — a firm that conducts polling for dozens of Democratic candidates, campaigns, and causes — are getting a lot of questions about the polls. What went right and what went wrong? And, more importantly — how are we going to make it better? We don’t have all the answers today, but we are working on it, and we are committed to improving the work we do. So let’s talk about what we know today.

What actually happened with the polls? A miss in one direction.

It is important to level-set about what actually happened. Our firm, and many others, made changes after the polling miss of 2016, and in the following midterm and off-year elections of 2017, 2018, and 2019, our polling was generally quite accurate.

In 2020, at the presidential level, our polls overstated Biden’s edge but not dramatically so. The bigger error occurred down-ballot in Senate and House races, where some polls significantly overstated the Democratic share of the vote. As has been well documented, we were not alone in this — nearly every other pollster in America had the same problem. The main problem with our polling was not that error existed, but that the error almost always overstated the Democratic share of the vote. We should always expect some error, but bias is more troubling.
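To make that distinction concrete, here is a minimal sketch using invented numbers (not actual polls or results) of how two sets of polls can miss by the same amount on average while only one of them leans consistently in one direction:

```python
# Illustrative only: invented poll-minus-result errors (Dem margin, in points) for five races.
balanced_errors  = [+3, -3, +2, -2, 0]   # errors scattered in both directions
one_sided_errors = [+3, +3, +2, +2, 0]   # errors (almost) all favoring the same party

def summarize(errors):
    avg_abs = sum(abs(e) for e in errors) / len(errors)  # typical size of the miss
    bias = sum(errors) / len(errors)                      # signed average: the systematic lean
    return avg_abs, bias

print(summarize(balanced_errors))   # (2.0, 0.0)  -> noisy, but not biased
print(summarize(one_sided_errors))  # (2.0, 2.0)  -> same-sized errors, all leaning one way
```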

So what went wrong? We don’t know exactly yet, but there are some theories.

There are lots of theories about this year’s polling error, and plenty of articles — like this one from Nate Cohn of the New York Times — that go into these theories in much more detail than I will here. I’d note that almost none of them are verifiable until we get more data about the election — including both the final results and the voter file records of who voted. But very broadly speaking, explanations for this error fall into two buckets:

Turnout error. This is a miscalibration of the composition of the electorate — who actually showed up to vote relative to who a poll predicted would show up to vote. Here’s an example: If our poll says there will be 40% Democrats, 40% Republicans, and 20% Independents and then on Election Day it’s actually 37% Democrats, 43% Republicans, and 20% Independents — then our poll is likely to be wrong by a few points (the sketch after this list walks through that arithmetic). In 2020, there was a massive turnout surge across the board, and very preliminary analysis suggests that higher-than-expected Republican turnout may have played at least a partial role in the polling error.

Measurement error. This is a miscalibration of attitudes within groups beyond the margin of error inherent in every poll. Here’s an example: If our poll says that Republicans would vote for Trump by a 90%-10% margin, but they actually vote for Trump by a 95%-5% margin — then our poll is likely to be wrong by a few points. There are lots of potential causes of measurement error that are being discussed in relation to 2020, but a few of them include:

— Voters opting out of polling due to lack of trust. This is a popular explanation right now — that some Republican voters, potentially the most ardent Trump voters, are not taking polls because they don’t trust pollsters — a direct result of Trump’s rhetoric about the media, about polls, and about institutions in general. Under this theory, we are missing the most die-hard Trump Republican voters and replacing them with less die-hard Republicans. This theory would help explain why our polling has been more accurate when Trump was not on the ballot and was also more accurate in 2020 in bluer states and districts.

— COVID-related error. Another explanation is a cousin of the one above, but holds more specifically that the error was COVID-related. Under this theory, those with more progressive attitudes on COVID (both Democrats and, more importantly, less conservative Republicans and Independents) were the ones social distancing, staying at home, and answering our poll calls. That meant those with more extreme attitudes (think anti-maskers and QAnon supporters) were responding to our polls at lower rates, and just replacing them with other white non-college voters wasn’t enough, leading to a bias in our polls.

— Late movement. This explanation is always at play in any polling miss and deserves some scrutiny due to the higher error in more volatile down-ballot races than in the more stable presidential campaign. Under this theory, polls are a snapshot in time and people’s attitudes can change from when they take our poll to when they go vote. Based on post-election callback surveys conducted in previous election cycles, GSG found down-ballot races moved against Democrats in multiple years and there is some evidence this happened again in 2020.
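To see how each bucket translates into a topline miss, here is a minimal sketch that plugs in the hypothetical numbers from the turnout and measurement examples above. The within-group margins are invented for illustration and are not estimates from our polling:

```python
# Illustrative arithmetic using the hypothetical numbers from the examples above.

def topline_dem_margin(party_shares, dem_margin_by_party):
    """Overall Dem-minus-Rep margin given electorate composition and within-group margins."""
    return sum(share * dem_margin_by_party[party] for party, share in party_shares.items())

# Invented within-group margins: Democrats 95-5 for Biden, Republicans 90-10 for Trump,
# Independents roughly even.
margins = {"D": 0.90, "R": -0.80, "I": 0.05}

# Turnout error: attitudes held fixed, but the composition of the electorate shifts.
assumed_electorate = {"D": 0.40, "R": 0.40, "I": 0.20}
actual_electorate  = {"D": 0.37, "R": 0.43, "I": 0.20}
print(topline_dem_margin(assumed_electorate, margins))  # ~ +0.05 (Biden +5)
print(topline_dem_margin(actual_electorate, margins))   # ~  0.00 -> off by ~5 points

# Measurement error: composition held fixed, but Republicans vote 95-5 instead of 90-10.
margins_more_gop = {"D": 0.90, "R": -0.90, "I": 0.05}
print(topline_dem_margin(assumed_electorate, margins_more_gop))  # ~ +0.01 -> off by ~4 points
```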

Both GSG and the entire polling community are digging into these theories — and others — to identify the culprits. Odds are the explanation will be a bit messy and it will not be just one thing — it never is.

How do we fix it? Embrace — and quantify — the uncertainty of our predictions.

Once we identify the problems of 2020, GSG — and every other pollster — will get to work addressing them for the future. But we need to think bigger than that because this is not going to be the last time pollsters face unexpected error. Whatever afflicted polls in 2020 might not be the same problem we face in a midterm election in 2022. Or it might not be the problem we face when we poll the New York City Democratic primary next year or some future special Congressional election.

So while fixing the polling problems of the last election is a necessary step, it is not sufficient to prevent us from ending up in a Groundhog Day of polling error, election after election, if we don’t also adapt our analytical tools moving forward.

In recent days, the answer to this question has become clearer — we need to embrace (and quantify) the uncertainty. Pollsters too often present things as certain when they simply are not. Worse, we know they are not! Take the issue of turnout. We declare a prediction for turnout based on a lot of analysis and smart thinking. But even the smartest electoral analysis could be wrong. Rather than doggedly stick to a single prediction on turnout, we need to think about the range of possibilities for turnout and how that might impact our predictions — and most importantly, our campaign strategy. We can’t hide behind uncertainty either, but to ignore it completely is a mistake.
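As one concrete (and deliberately oversimplified) illustration of what quantifying the uncertainty can look like, the sketch below reports a range of toplines across a band of turnout scenarios instead of a single number. The compositions and within-group margins are invented for illustration, not drawn from any actual poll:

```python
# Illustrative sketch: present a range of toplines across turnout scenarios, not one prediction.

margins = {"D": 0.90, "R": -0.80, "I": 0.05}  # invented within-group Dem-minus-Rep margins

def topline(shares):
    return sum(shares[group] * margins[group] for group in shares)

central = {"D": 0.40, "R": 0.40, "I": 0.20}

# Sweep the Republican share of the electorate +/- 3 points, with Democrats absorbing the shift.
scenario_toplines = []
for shift in (-0.03, -0.02, -0.01, 0.0, 0.01, 0.02, 0.03):
    shares = {"D": 0.40 - shift, "R": 0.40 + shift, "I": 0.20}
    scenario_toplines.append(topline(shares))

print(f"single-point prediction: {topline(central):+.3f}")   # about +0.05
print(f"range across scenarios:  {min(scenario_toplines):+.3f} to {max(scenario_toplines):+.3f}")
# roughly 0.00 to +0.10 -- the same poll supports anything from a toss-up to a comfortable lead
```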

This extends to issues around measurement error too — though dealing with the potential lack of trust in polling from conservative voters is no doubt a harder problem to solve. However, we can try to figure out who we are missing and put estimates around the potential error that might result from being unable to reach people. It will be harder — but not impossible.
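One hedged way to put estimates around that kind of error is a back-of-the-envelope nonresponse calculation: assume some fraction of a group is unreachable, and ask how much a given gap between the people we reach and the people we miss would move the topline. Every input below is an assumption chosen for illustration, not an estimate of what actually happened in 2020:

```python
# Illustrative nonresponse sensitivity check (all inputs are assumptions, not measurements).

def topline_shift(group_share, missed_frac, reached_margin, missed_margin):
    """How far the true Dem-minus-Rep topline sits from what the poll implies, if `missed_frac`
    of a group making up `group_share` of the electorate votes `missed_margin` instead of the
    `reached_margin` observed among the people who actually answer."""
    return group_share * missed_frac * (missed_margin - reached_margin)

# Example: Republicans are 40% of the electorate; the 20% of them we can't reach vote 100-0
# for Trump, while the Republicans we do reach vote 90-10.
shift = topline_shift(group_share=0.40, missed_frac=0.20,
                      reached_margin=-0.80, missed_margin=-1.00)
print(f"implied bias in the poll topline: {shift:+.3f}")  # -0.016 -> ~1.6 points too Democratic
```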

There’s a lot still to figure out, but the bottom line is this — we need to do a better job of explaining to our clients what the range of possible outcomes could be, and not just present every poll as a singular prediction. That sets up false expectations and, more importantly, it is simply not an accurate assessment of reality.

Further, I would be skeptical of any pollster who says they have the magic methodology or process to predict the future. Rely on “magic numbers” at your own peril. Don’t get me wrong — there is high-quality polling and low-quality polling — and how you poll makes a big difference. But the smart pollsters of the future will be the ones who accept the limitations of the tool and provide insights around the uncertainty that exists, rather than promise they are soothsayers with the only answer about the future.

If polling can’t perfectly predict the future, then what is it good for? Understanding the electorate.

Over the last decade or so, the proliferation of public polling and sites like FiveThirtyEight has focused the world on the predictive side of polling. But what makes for a good pollster in the private world of campaigns is not just about predicting the future — it’s about generating insights, learning about voters, and developing strategy based on that information. Consumers of polls — and of qualitative research tools like focus groups and their online equivalents — should refocus on their primary purpose: understanding the electorate.

As such, it’s important that we not dismiss everything we’ve ever learned from a poll just because the polls were biased a few points in one direction. The following things are still true and we know them because of polls:

— Trump was a historically unpopular president, though he also had a historically stable approval rating.

— Trump’s economic approval rating was higher than his overall approval rating, which was higher than his rating on handling the coronavirus. In other words, Trump lost the election in part because a group of voters who gave him a positive approval rating on the economy voted against him.

— That some voters de-prioritized the economy — traditionally the most important issue in presidential campaigns — is a fascinating finding and very important for understanding what just happened! It also has big implications for understanding future presidential elections. And it’s a point that would have been lost without polls.

Polls can also identify demographic problem spots, even if they cannot perfectly calibrate them. I have heard an argument that we should ignore polling and just rely on historical election returns to influence decision-making. But if we did that, we would miss any chance of capturing change from election cycle to election cycle. The polls before 2016 identified Trump’s strength with white non-college voters relative to Romney in 2012. The polls in 2018 identified suburban strength for Democrats relative to 2016. The polls in 2020 identified clear differences in the Latino vote for Biden and the Democrats — foreseeing weakness in Florida, while measuring strength in Arizona. In all of these cases, we may not have fully understood the scale or scope of these trends, but they were identified — and they would have been missed without polls.

Ultimately, we poll to learn what voters are thinking and to identify how they might be responding to campaigns — both our own and our opponents’. These remain critical functions for any campaign, any reporter, or any student of electoral politics.

I’ve heard all the critiques of poll-driven campaigns — that we Democrats focus too much on issues and not enough on emotions. That Republicans fight harder and dirtier than we do. That polling can’t measure the impact of disinformation. Some of these are fair. Some of them are not. But regardless, the answer is not to stop asking voters what they are thinking. It’s to adapt to new realities and to ask different and better questions.

It’s easy for me to say that as a pollster because our business depends on it, but I can’t think of a more dangerous reaction to 2020’s polling miss than to throw polls out the window, stop measuring the electorate, and stop asking voters questions about what they think. Because then what replaces that? It would just mean that the people who work in politics would project their personal views onto the electorate based on their own biased instincts and anecdotes.

As we move forward past 2020, polling and, importantly, qualitative research should continue to be tools for learning and understanding what the public is thinking. And when it comes to predicting? We should not ignore what the polls are saying — they can be a canary in the coal mine for important trends. But we also need to reset expectations about their ability to precisely predict the future — and to embrace the uncertainty of the tool in how we use it to inform decision-making.
