
A Conversation: How Researchers Make Sense of Polls in a Historic Election Year

September 24, 2024


2024 is shaping up to be one of the biggest election years in modern history, with campaigns taking place in major democracies like the UK, France, India, and, of course, the United States. With so many elections taking place, polling has become a hot topic, grabbing headlines and sparking debates.

Recent polling has tracked the tight race between Kamala Harris and Donald Trump on a national and state level, generating significant attention online. However, as polling becomes a centrepiece of political discussions, it’s essential to think critically about the elements of a survey to understand how reliable its results – and reporting – really are.

To help us navigate this complex landscape, DS Account Manager Gabriel Helfant sat down with Rosalie Nadeau, our VP of Insights. Rosalie has run hundreds of research polls in over 50 countries, including during elections at multiple levels of government. She shared her expertise on how we can better understand and interpret polls during this high-stakes election year.

Gabe: What kind of perspective do you take when reading about polls in the news or working on a campaign? 

Rosalie: One poll isn’t the full story, and it always has to be put in context. During election campaigns, when there’s a lot happening every day, numbers can sometimes change very quickly. So, we should think of a poll as a snapshot of one point in time — you usually need several of these snapshots to identify a real, significant shift.

Unless I see multiple polls over different days that paint the same picture, I remain very skeptical. 

Gabe: For those times when polling is valuable, what are the key elements of a survey, and when you’re reading an article, what kinds of things raise a red flag for you?

Rosalie: The first element is who conducted and paid for the poll. Is it a credible, well-known research firm? Does it have a strong partisan bias? This helps you put its results in the context of its previous findings, even if it doesn’t necessarily mean you should dismiss any poll outright.

The method is a core element. Is it an online survey or a phone survey? Does it use landline phones, cellphones, or a mix of both? Was it an IVR or a live interviewer asking the questions? The data collection mode has a huge impact on the type of people who answer a survey, and different modes—or a mix of modes—will work best with different audiences. 

The final key element is the sample size. Is the sample large enough for the population we’re trying to understand? We should doubt results based on very small samples.

There are other important elements such as quotas, timing of when the poll was conducted, and so on, but the pollster, method, and sample size are the first elements I look at. Good reporting should always offer some of this information, even if it’s not front and center.

Gabe: I want to dive a bit deeper into sample size because that one seems important when it comes to accuracy. How does a researcher decide the right sample size when they start a new poll or research project, and how does that impact the margin of error?

Rosalie: That’s a good point, and when you look back at some of the recent Times/Siena results, it’s not immediately clear who is in the lead… Most of the promising results are within the margin of error. When polling is consistent across multiple states, it’s still a very good directional result, but it may not be statistically significant at the state level.

When you plan a probability survey, you decide on your sample size based on the size of the population you’re surveying and the margin of error you want to achieve. Typically, the bigger the population, the bigger the sample.
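To make the sample-size/margin-of-error relationship concrete, here is a minimal sketch of the standard formula for the margin of error of a proportion from a simple random sample. The numbers are illustrative and not from the interview:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Margin of error for a proportion from a simple random sample.

    n: sample size
    p: observed proportion (0.5 is the conservative worst case)
    z: z-score for the confidence level (1.96 ~ 95%)
    """
    return z * math.sqrt(p * (1 - p) / n)

# A typical national poll of ~1,000 respondents:
print(round(margin_of_error(1000) * 100, 1))  # ~3.1 percentage points
```

Note that quadrupling the sample only halves the margin of error, which is why very precise state-level polls are expensive: `margin_of_error(400)` gives roughly 4.9 points, while `margin_of_error(1600)` gives roughly 2.5.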

Gabe: Even with a sufficient sample size, how do you make sure a poll actually reflects the population you’re studying? If you’re sampling Canadians, how do you ensure you get enough French speakers, for example, represented in your survey?

Rosalie: You’re going to do two things: 

First, you’re going to set quotas in your sampling plan (i.e., before you collect data) based on reliable demographic information from sources such as the census. You can set criteria for how many respondents you want from a given province, or how many women, for example. Depending on how precise these quotas are and how much time you have to collect your data, it can be easier or harder to meet these quotas exactly.
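As a rough sketch of what a sampling plan looks like in practice, quotas are just population shares multiplied by the planned sample size. The regional shares below are made-up illustrations, not real census figures:

```python
# Turning population shares into quota targets for a planned n = 1,000 survey.
# Shares are illustrative placeholders, not actual census data.
census_shares = {"Quebec": 0.22, "Ontario": 0.39, "Other": 0.39}
target_n = 1000

quotas = {region: round(share * target_n)
          for region, share in census_shares.items()}

print(quotas)  # {'Quebec': 220, 'Ontario': 390, 'Other': 390}
```

Real sampling plans cross several variables at once (region × age × gender, for instance), but the arithmetic is the same.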

Quotas are a good way of promoting polling accuracy ahead of a survey. Most times, though, you’ll also need to adjust the survey after your data has been collected. Say, for example, you field a survey with education quotas. During your analysis, you find that despite these quotas, your sample is quite imbalanced when it comes to income distribution — a variable you hadn’t set a quota for. In this case, results will be skewed unless you weight the survey results so that the sample reflects the population.
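The weighting step described above can be sketched as simple cell weighting: each respondent gets a weight equal to their group’s population share divided by its sample share, so over-represented groups count for less and under-represented groups count for more. All figures here are invented for illustration:

```python
# Cell weighting: weight = population share / sample share.
# Counts and shares below are illustrative, not real survey data.
sample = {"low_income": 500, "mid_income": 300, "high_income": 200}      # respondents
population = {"low_income": 0.30, "mid_income": 0.45, "high_income": 0.25}  # target shares

n = sum(sample.values())
weights = {group: population[group] / (sample[group] / n) for group in sample}

print(weights)  # {'low_income': 0.6, 'mid_income': 1.5, 'high_income': 1.25}
```

After weighting, each group contributes to estimates in proportion to its population share; production weighting schemes (raking, post-stratification on several variables) build on this same idea.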

With all these variables in mind, it’s a reminder of why you should always think about polls in context and always look for trends instead of just one proof point. 

Gabe: My final question is around that context. I think many of us want to get more and more info when a campaign heats up. Every morning, I’m trying to see the latest news on the US presidential election. Do you think we should take some of those surveys with a grain of salt when the ground is shifting so dramatically?

Rosalie: Absolutely. Timing is really important to consider – think about how perspectives and indicators can change after a debate and with the media coverage that follows — the recent US presidential debate is a really good example of that. There’s a tendency among political junkies to assume that everyone is caught up with the most recent news and that there will be an immediate impact, but that’s not always the reality for voters, especially outside of high-salience political periods like elections. So timing your poll correctly so it measures the full impact of the event you’re trying to understand is crucial. 

To keep us in check, campaigns don’t base decisions on just one poll but rely on tools like forecasts, which amalgamate diverse sources of data, including ongoing polls, to create a more comprehensive picture of how a given party or candidate is doing. The best forecasts will also leverage external signals – both from online sources and from the field – to provide a more holistic understanding of how a political environment is changing. These are all important tools that modern campaigns rely on, and that’s why they’re usually in the best position to predict future outcomes.