Michael Bruter, a political scientist at the London School of Economics. Credit: Christian Lionel-Dupont

Pollsters are still rubbing their eyes at the unexpected result of the 7 May UK election. The Conservative Party romped to an outright majority, allowing incumbent Prime Minister David Cameron to form a new government. Yet polls had predicted a knife-edge race between the opposition Labour Party and the Conservatives. Labour's share of the nation's vote, at some 30.4%, was not only below the 35% prediction that UK polling company Ipsos MORI released on 7 May (on the basis of data collected over the previous two days); it was also outside the typical 3–4% margin of error. The British Polling Council, an association of polling organizations, announced on 8 May that it was setting up an independent inquiry into the failure.

Why did the polls get British voters’ intentions so wrong? Nature asked Michael Bruter, a political scientist who studies electoral psychology at the London School of Economics.

Are you surprised that the results were so different?

There was an obvious gap between what the polls predicted and the results, but I was not surprised. In our research we find in election after election that up to 30% of voters make up their minds within one week of the election, and up to 15% on the day itself. Some people either don’t know ahead of time or change their mind when they’re in the booth.

Usually, some of these people cancel each other out: some who thought they would vote Conservative end up voting Labour, and vice versa. What seems to have happened yesterday is that more people changed their minds in one direction than in the other.

Why did that happen in this election?

One of the main sources of information for voters is precisely what the pollsters tell them — and I think many pollsters do not take that into consideration.

In this case, the pollsters were predicting that there would be no overall majority: that Labour would be the second party but that it would still be able to form a coalition government.

Britain has had such a ‘hung parliament’ before, but never before in British history had pollsters predicted one ahead of an election.

So the question that voters were asking themselves was no longer ‘Which of the two parties do I want to win?’ but ‘Which coalition do I want?’ And that’s not something that the polls were equipped to deal with.

There is another important factor. We have found that when you ask people who they are going to vote for, they very often think about what is best for them. But when you go back to the same people after the election and ask them whom they voted for, you find that they voted much more on the basis of what they think is best for the country.

Perhaps in this election, even some people who have been left out by the [Conservative and Liberal Democrat] coalition’s policies in the past five years still voted Conservative because they decided — rightly or wrongly — that that was the best choice for the country.

Don’t polling companies take these factors into account?

Not really, and it would be unfair to blame polling companies for the way things are done. They always state quite openly that they are only taking a snapshot of public opinion at a particular point in time. My research uses very long questionnaires that can take 15–25 minutes to answer. This takes a lot of time and money. Election polls normally take one minute to answer.

Polls have margins of error that are supposed to account for their limitations, yet the error in Labour's share fell well outside them this time. Why?

Margins of error are a complicated thing to measure. The way we calculate them traditionally assumes the use of a random sample. But pollsters use quota samples, in which you try to create a representative mini-population based on a number of criteria: gender, age group, region and social class. In that case it’s problematic to talk about margins of error.
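To see where the familiar 3–4% figure comes from, here is a minimal sketch of the margin-of-error formula he is referring to, assuming simple random sampling, a 95% confidence level and an illustrative sample of 1,000 respondents (a typical poll size, not one stated in the interview). The point is that quota samples do not satisfy the formula's random-sampling assumption.

```python
# Minimal sketch: margin of error for a poll share under simple random sampling.
# Assumes a 95% confidence level (z ~= 1.96) and an illustrative sample size of
# 1,000 respondents; quota samples violate the random-sampling assumption,
# so this figure does not strictly apply to them.
import math

def margin_of_error(share: float, n: int, z: float = 1.96) -> float:
    """Half-width of the confidence interval for an estimated proportion."""
    return z * math.sqrt(share * (1 - share) / n)

# Labour's predicted 35% share with 1,000 respondents gives roughly +/-3 points,
# yet the actual result (30.4%) was about 4.6 points away.
print(round(margin_of_error(0.35, 1_000), 3))  # ~0.03
```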

When you do quota samples, you are making assumptions about what actually matters. From the point of view of the psychological behaviour of people, this isn’t correlated very much with gender, religion and so on. So the quota samples could be biased from the point of view of psychological behaviour.

Why don’t polls use a random sample?

Random samples are much more expensive. The other thing that companies use to drive the cost down is the mode of polling. Face-to-face polling would be much more expensive [than calling or doing Internet surveys], but you’d be more likely to have a random sample and to have a real representation [of the electorate]. Many people on the phone will refuse to answer a survey.

So random, face-to-face polls would be more accurate, but would cost more money and take more time. And it would still not change the fact that many people act differently when they are in the polling station.