In the final week of the 1980 US presidential election, Republican challenger Ronald Reagan famously asked: “Are you better off than you were four years ago?”
The question reframed the incumbent Democratic president Jimmy Carter’s first-term achievements for voters, and Carter’s numbers tanked. Reagan won by a landslide on election day.
In this age of division in America, the Financial Times in collaboration with the Peterson Institute will attempt to answer the question: Does the economy still guide voters’ choices at the ballot box, or has it been overtaken by identity politics?
To do this, we have commissioned a series of monthly polls to gauge economic sentiment in the US in the lead-up to the 2020 election.
These polls are not aimed at working out who will win the 2020 election and do not ask respondents to state their preferred candidate. Rather, they provide a national picture of how likely voters feel about the US economy on issues including the equity bull run, global trade wars, personal finances and more.
How do we do it and why should you trust our results? Here is a brief overview of our approach:
Who conducts the polls?
The polls are conducted by a team of pollsters from Global Strategy Group on behalf of the Financial Times and Peterson.
Who are the pollsters speaking to?
The pollsters survey around 1,000 likely voters each month, aiming for a representative sample of the US voting base. This means they try to get responses from a group that looks as much like the US voting population as possible using factors including age, gender, ethnicity, education and geography.
The interviews are conducted online and all respondents are matched to a list of currently registered voters.
How are the polls conducted?
The polls are carried out using online surveys, which have become increasingly popular among pollsters as it becomes more difficult to get large samples of respondents over the phone. People are simply more reluctant to answer phone calls these days and fewer voters have landlines.
Global Strategy Group says online polls enable it to reach likely voters where they are, enabling larger sample sizes and correspondingly smaller statistical margins of error.
What are the pros and cons of online polling?
An ideal poll would be one that was equally likely to question any registered voter, with a sample large enough to all but ensure the respondents were a cross-section of the voting population.
In reality, some demographics are more difficult to poll than others, and online polls face concerns about their ability to capture representative samples. These concerns are not absent for live telephone polls either. Young people and those without a graduate degree are less likely to answer phone calls, for example, while older people may be less likely to respond to online surveys.
How are the results calculated?
When pollsters cannot get this naturally representative sample — perhaps because they are unable to reach a sufficient number of younger voters, for example — they “weight” their responses. This means giving more value to the responses of the young voters they did speak to.
If, for example, a pollster spoke to 100 people she might start by making each respondent worth 1 point — or 1 per cent of the sample. By extension, that respondent would be worth 1 per cent of the population.
Sometimes, however, the pollster might have a group, say 25 to 35-year-olds, who make up 10 per cent of the population, but only make up 5 per cent of the sample. If each one is valued at 1 point, then 25 to 35-year-olds will be under-represented in the sample. To compensate for this, the pollster will “weight” each response from a 25 to 35-year-old as worth more than 1 point, while adjusting the weight of other respondents that were over-represented.
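The weighting arithmetic above can be written out as a short calculation. This is a minimal sketch in Python using the article’s own figures (a 100-person sample in which 25 to 35-year-olds are 5 per cent of respondents but 10 per cent of the population); the two-group split is a simplification for illustration, not the pollsters’ actual method.

```python
# Each group's weight is its population share divided by its sample share,
# so under-represented groups count for more than 1 point each.
sample = {"25-35": 5, "other": 95}                 # respondents per group
population_share = {"25-35": 0.10, "other": 0.90}  # true population shares

total = sum(sample.values())  # 100 respondents in all

weights = {}
for group, count in sample.items():
    sample_share = count / total                   # 0.05 for the 25-35s
    weights[group] = population_share[group] / sample_share

# Each 25 to 35-year-old now counts for 2 points; each other
# respondent for slightly less than 1, keeping the total at 100.
print(weights)  # {'25-35': 2.0, 'other': 0.947...}
```

Note that the weighted points still sum to the original sample size, so the re-balancing changes the composition of the sample without inflating it.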
How the weighting calculations are worked out, and which specific factors pollsters weight for, is a large part of what makes a poll successful. These weighting processes have to be handled with care: if pollsters capture only a handful of respondents from a certain demographic and those responses happen to be atypical, extrapolating from such a small base can produce inaccurate conclusions.
The FT-Peterson Economic Monitor weights by age, gender, ethnicity, education, region and regional density. It also takes into account how likely certain groups are to turn out to vote on election day.
What problems can pollsters encounter when contacting respondents?
Analysts from Global Strategy Group say that Nevada and New Hampshire are generally the most difficult states to poll online, due to their small populations. Nevada also has many residents for whom English is not their first language.
However, these issues have not affected our polling so far.
Analysts also say it is important to pay close attention to potential changes in the composition of the 2020 electorate compared with previous cycles, and any potential changes in turnout patterns. In 2016, on average, African Americans were less likely to turn out to vote than in 2012, for example.
Can we trust the polls?
There has been a lot of discussion about whether polls can still be trusted after the 2016 election, when many polls predicted a victory for Hillary Clinton over Donald Trump. There are, however, reasons why polling can be trusted to give a good indication of underlying sentiment.
First, the polls in 2016 were correct at a national level: Mrs Clinton won the popular vote, despite losing the Electoral College. Since this poll tracks national sentiment rather than voters in battleground states specifically, the wide sweep of the sample makes it less likely to be drastically wrong.
It is important to remember that all polls are estimates based on calculations from the available evidence, and as such are unlikely to be accurate to the exact percentage point. That degree of accuracy matters more in a presidential two-horse race, where a miss of fewer than 5 points can call the wrong victor.
With sentiment polling, however, small margins are unlikely to drastically alter our understanding of voter concerns. If a poll says 61 per cent of Americans do not think the stock market has risen in 2019 but the true figure is 58 or 64, our conclusion that most Americans hold this view does not alter substantially.
What does polling 1,000 likely voters allow us to do?
Polling 1,000 people allows analysts to analyse results at two levels of depth (for example, the sentiment of Republican men) but not three (the sentiment of Republican men aged over 65), because at that point the margin of error becomes too large and any conclusions risk being spurious.
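The standard sampling formula shows why subgroups are the problem. This is an illustrative sketch, assuming the common 95 per cent confidence level and a hypothetical subgroup size of around 80 respondents; neither figure comes from the article.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95 per cent margin of error for a proportion p
    estimated from n respondents (z = 1.96 for 95 per cent)."""
    return z * math.sqrt(p * (1 - p) / n)

# Full sample of 1,000 likely voters: roughly +/- 3 points.
print(round(100 * margin_of_error(1000), 1))  # 3.1

# A hypothetical three-level subgroup of ~80 respondents
# (e.g. Republican men over 65): roughly +/- 11 points.
print(round(100 * margin_of_error(80), 1))    # 11.0
```

A swing of 3 points rarely changes the story of a sentiment poll, but an 11-point margin can, which is why the analysis stops at two levels of depth.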