Before the Iowa caucuses kicked off the US presidential primary elections on 19 January, most polling organizations were predicting a virtual tie between the top four Democratic candidates. But when the results were in, two men — Senators John Kerry and John Edwards — were far ahead. Vermont governor Howard Dean, a hot favourite in some nationwide polls, was a distant third.

What looked like a serious polling blunder was more likely due to a last-minute surge of support for Kerry, and the quirky rules of the Iowa caucuses, which allow voters to switch their support after the voting begins. But it was another reminder that poll results often need to be taken with a large pinch of salt.

Public opinion polls have been growing in popularity ever since market researcher George Gallup pioneered them in the 1930s as a way to sell newspapers. Most polls claim a margin of error of only 3%. But people tend to miss the fine print describing everything — from question wording to refusals by subjects to be interviewed — that can skew the results. A typical Harris poll disclaimer concludes: “It is impossible to quantify the errors that may result from these factors.”

Jon Krosnick, a psychologist and political scientist at Stanford University, would go further. “That margin of error you hear about is an illusion,” he says. All it really guarantees is that the people sampled were statistically representative of the larger population. In fact, he says, there are many other sources of error, such as interviewers who mis-enter responses and respondents who mis-hear questions or answer quickly just to get off the phone. “At some point, surveys need to stop publicizing that silly number,” Krosnick says.

Despite the theoretical problems, pollsters tend to do very well in predicting the outcomes of elections, particularly on the eve of election day when undecided voters make up their minds. By one count, 84% of the polls taken before the 2002 US Senate and gubernatorial elections differed from the actual vote by less than their theoretical margin of error.

Counted out

But pollsters live in perpetual fear of embarrassments such as the Iowa result. Or the 1992 British election, in which they wrongly predicted a Labour Party win. Or the 2002 French presidential contest, in which far-right candidate Jean-Marie Le Pen made a strong showing in the first round of voting, despite being counted out in pre-election polls.

To avoid nasty surprises, pollsters are always tweaking their methods. One hot area of research, says Krosnick, is aimed at determining which survey respondents are likely to vote. It's not as straightforward as it seems, as many who say they plan to vote do not. Pollsters know this, and sometimes use follow-up questions to get a better idea of what the respondent will do. Did you vote in the last election? Are you registered? Do you know where your polling station is? No one has worked out which filters work best, says Krosnick, but he and other researchers have found something curious: “The more people you throw out of the sample — that is, the smaller the group that you pick as likely voters — the more accurate you get, even when you get too small to be representative of the country.”
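A minimal sketch of what such a screen might look like, with invented respondent records and an invented pass/fail rule (no pollster's actual filter is implied):

```python
# Hypothetical likely-voter screen: keep only respondents who answer "yes"
# to enough of the follow-up questions. Fields and threshold are illustrative.

respondents = [
    {"id": 1, "plans_to_vote": True, "voted_last_time": True,
     "is_registered": True, "knows_polling_station": True},
    {"id": 2, "plans_to_vote": True, "voted_last_time": False,
     "is_registered": True, "knows_polling_station": False},
    {"id": 3, "plans_to_vote": True, "voted_last_time": True,
     "is_registered": False, "knows_polling_station": True},
]

QUESTIONS = ["plans_to_vote", "voted_last_time",
             "is_registered", "knows_polling_station"]

def is_likely_voter(respondent, required_yeses=3):
    """A respondent passes the screen with at least `required_yeses` yes answers."""
    return sum(respondent[q] for q in QUESTIONS) >= required_yeses

likely_voters = [r for r in respondents if is_likely_voter(r)]
print(f"{len(likely_voters)} of {len(respondents)} respondents pass the screen")
```

Raising `required_yeses` shrinks the pool of “likely voters”, which is exactly the kind of aggressive filtering Krosnick describes.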

Another option available to pollsters is to weight survey results to cancel out known or suspected biases in the sample, such as an under-representation of minorities or an excess of Republicans. Weighting is especially important for the tracking polls that have proliferated in recent elections. Produced by companies such as Zogby International, tracking polls now appear almost every day in newspapers and on the Internet. These snapshots of opinion generally use smaller sample sizes than weekly polls, and academic survey researchers say that they tend to be less accurate, despite readers' perception that one poll is as good as another.
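As a rough illustration, one simple form of weighting scales each demographic group by the ratio of its share in the population to its share in the sample. The numbers below are invented, and real schemes are far more elaborate (and, as discussed next, rarely published):

```python
# Toy post-stratification weighting: respondents in an under-represented group
# count for more, so the weighted result matches known population shares.
# All figures below are made up for illustration.

population_share = {"group_x": 0.30, "group_y": 0.70}  # e.g. from census data
sample_share     = {"group_x": 0.15, "group_y": 0.85}  # what the poll actually got

weights = {g: population_share[g] / sample_share[g] for g in population_share}
# group_x respondents count double (0.30 / 0.15 = 2.0); group_y count for less.

# Support for a candidate within each group, before and after weighting.
raw_support = {"group_x": 0.60, "group_y": 0.40}

unweighted = sum(raw_support[g] * sample_share[g] for g in sample_share)
weighted   = sum(raw_support[g] * sample_share[g] * weights[g] for g in sample_share)
print(f"unweighted: {unweighted:.0%}, weighted: {weighted:.0%}")  # 43% vs 46%
```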

Weighting formulas, like those used to determine probable voters, are idiosyncratic and often proprietary. They are the black boxes of the polling business, with the exact formulas seldom being disclosed, says Michael Traugott, a professor of communications studies at the University of Michigan, Ann Arbor, and past president of the American Association for Public Opinion Research. He says they should receive greater public scrutiny, in part because tracking polls are often interpreted as real swings in opinion that can lead candidates to change their tactics or messages.

Although the public seems hungry for instant polls, it is increasingly reluctant to participate in surveys of any kind — a problem not just for pollsters, but for all survey researchers, from telemarketers to social scientists. Pollsters are having to work harder than ever to get the 1,000-person sample needed to achieve 95% confidence of being accurate to within 3%.
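The arithmetic behind those numbers is standard: for a simple random sample, the 95% margin of error on a proportion is about 1.96 standard errors, and it is widest when opinion is split 50–50. A quick check (this covers sampling error only, not the other problems described above):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion p from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

print(f"{margin_of_error(1000):.1%}")  # about 3.1% for a 1,000-person sample
```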

Some say that market researchers have poisoned the well for survey research. Answering machines and caller-identification technology have made it easier to avoid unwanted calls, and those who do answer the phone are less cooperative. By one estimate, only 35% of people reached by phone during the 2000 campaign answered pollsters' questions, compared with 65% in 1985.

Hanging up

This makes polling harder. But do falling response rates increase the risk of error? Probably not, according to one study, which showed that people who hang up on pollsters give similar answers to those who cooperate. Scott Keeter, of the Pew Research Center for the People and the Press in Washington DC, and his colleagues compared the results from a quick, five-day survey of adults who happened to answer the phone to those from a more rigorous, eight-week survey that tracked down and interviewed elusive subjects. Keeter found that attitudes across 91 categories varied little between the easy-to-reach and hard-to-reach groups [1].

Even with 100% response rates and perfect knowledge of who will vote, pollsters would still find it hard to call the winner of an election as close as the 2000 US presidential race, where 48.6% of 105 million votes went to Al Gore, and 48.3% went to George Bush. To accurately predict a contest decided by only 0.3% would require surveying about 100,000 people. “And under current economic conditions, nobody's even talking about interviewing 10,000 people,” Krosnick says.
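One way to read that figure: rearranging the same sampling-error formula gives the number of respondents needed to pin a candidate's share down to within about 0.3 percentage points (a back-of-the-envelope sketch that ignores every source of error except sampling):

```python
import math

def required_sample_size(margin, p=0.5, z=1.96):
    """Respondents needed for a given 95% margin of error on a proportion."""
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

print(required_sample_size(0.03))    # about 1,070 for a 3% margin
print(required_sample_size(0.003))   # about 107,000 for a 0.3% margin
```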

Nobody, that is, except web-based pollsters. The Internet offers the benefit of potentially huge sample sizes at much lower cost than phone surveys. But most web surveys introduce a whole new bias. Instead of being picked randomly, respondents sign up to participate. As a result, “researchers have absolutely no idea where the respondents came from”, Krosnick says. Even with weighting factors, it's just not possible to get a truly representative sample, he adds.

On the plus side, research done by Krosnick and others has shown that web respondents tend to be more careful and precise in their answers than phone respondents, who are more likely to misunderstand questions and give rushed answers. Web respondents also tend to be more honest, even on sensitive topics.

Knowledge Networks, a research firm in Menlo Park, California, founded in 1998 by two Stanford professors, combines the features of phone and web surveys. The company selects its sample through random digit dialling, which allows it to reach both listed and unlisted numbers; it then provides willing participants with computer access to answer questions on the web.

In last year's California gubernatorial election, Knowledge Networks flagged eventual winner Arnold Schwarzenegger's rise long before conventional pollsters did. The reasons for this are not clear, says Krosnick. “Maybe respondents were a little embarrassed to say that they were going to vote for a weightlifting actor.”

Points of order

Once pollsters find willing opiners, there's the matter of what to ask and how to ask it. Surveys of all kinds are vulnerable to errors introduced by question order and wording [2]. An analysis of polls during the 2000 campaign by Monika McDermott of the University of Connecticut, Storrs, and Kathleen Frankovic, director of surveys for CBS News in New York, found that it even makes a difference whether the question “Who will you vote for?” comes at the beginning or end of the interview. The percentage of undecided voters dropped sharply when it came last, leading the researchers to conclude that the placement of this question accounts for at least some of the variance in poll results [3].

None of this would be so bad if the public knew about the inaccuracy and the bias. But Susan Herbst, a political scientist at Temple University in Philadelphia, doubts that they do. Under the guise of scientific objectivity, polls have diminished public involvement in the political process, she says. Expressions of public opinion that are harder to quantify, such as town meetings or letters to the editor, are now all but ignored. “Journalists have less incentive to highlight a political demonstration by 100 people when a professionally executed random sample survey on the same issue indicates that the demonstrators are a minority,” she says.

And so the love–hate relationship with political polls continues. According to Traugott, who has studied attitudes towards polling, the public is generally dismayed about the proliferation of polls. But the single most important factor in people's judgments of a poll's accuracy is whether it agrees with their own view.