Nature | News

How to find the right answer when the 'wisdom of the crowd' fails

A new algorithm succeeds by asking members of large groups how they think others will respond.


David Ramos/Getty Images

The best answer isn't always the most popular one.

Ask a group of people to guess a stranger’s weight, the thinking goes, and the average answer should be close to correct. This idea, termed the 'wisdom of crowds', holds that in a large group, individual errors of judgement cancel each other out.

But there are situations in which this classic theory falls apart. If you ask a group of people whether Philadelphia is the capital of Pennsylvania, most will incorrectly answer yes. That’s because they know one set of facts: Philadelphia is a large city in Pennsylvania, and capital cities are large. But another, smaller group will give the correct answer: Harrisburg.

A new algorithm could help pull the correct answer out of a crowd, even when the most popular answer is wrong, a team led by Dražen Prelec, a social scientist at the Massachusetts Institute of Technology in Cambridge, reports on 25 January in Nature¹.


Reporter Noah Baker finds out how to make crowds a little wiser.


The team asked study participants to answer a given set of questions. Then the researchers asked those respondents to guess how other people would answer. The algorithm then looked for answers that were ‘surprisingly popular’, or more popular than most respondents thought they would be. In most cases, the answers that exceeded expectations were the correct ones.
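For a single yes-or-no question, the comparison the article describes can be sketched in a few lines. This is an illustrative simplification, not the paper's exact procedure: it assumes each respondent supplies a vote plus a predicted fraction of "yes" votes, and picks whichever answer beats the crowd's expectation.

```python
from statistics import mean

def surprisingly_popular(votes, predicted_yes_share):
    """Pick the 'surprisingly popular' answer to a yes/no question.

    votes: list of bools, each respondent's own answer.
    predicted_yes_share: list of floats in [0, 1], each respondent's
        guess at the fraction of the group that will answer yes.
    Returns True if "yes" is surprisingly popular, False if "no" is.
    """
    actual_yes = sum(votes) / len(votes)      # observed popularity of "yes"
    expected_yes = mean(predicted_yes_share)  # popularity the crowd expected
    # "Yes" wins only if it is more popular than respondents predicted;
    # otherwise "no" exceeded expectations and is selected instead.
    return actual_yes > expected_yes

# Philadelphia example (numbers hypothetical): 65% wrongly answer "yes",
# but respondents expect 75% to say yes, so "yes" falls short of
# expectations and the algorithm selects "no" -- the correct answer.
votes = [True] * 65 + [False] * 35
predictions = [0.75] * 100
print(surprisingly_popular(votes, predictions))  # False -> answer is "no"
```

The intuition: people who know the correct but unpopular answer (Harrisburg) also know that most others will get it wrong, so their predictions inflate the expected popularity of the wrong answer, leaving the right one "surprisingly" well supported.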

“In society, I think there is an assumption that the average opinion is generally right, and that’s been supported by past statistical arguments on crowd wisdom,” says Prelec. “But that’s not the way evidence works. There are specialists with special knowledge, like doctors. This lets us identify that knowledge.”

What's it worth?

Prelec and his colleagues asked groups of 20–51 participants a variety of questions. Some were simple geographical ones, such as naming capital cities; others asked people to estimate the value of art, or asked dermatologists to identify skin lesions. Prelec says that most of the time, the algorithm was 21–36% more effective at identifying the correct answer than other methods, such as relying on the most popular answer or ranking answers by confidence. It was better at answering yes-or-no questions, such as the Philadelphia one, than at estimating the value of art.

“This is a very clever technique, and a very simple way of polling people,” says Mark Steyvers, a cognitive scientist at the University of California, Irvine. Steyvers notes that in real life, people ask each other about their backgrounds and skills to determine the validity of their information. But for anonymous polling situations, he says, Prelec’s method can be a good way to identify views informed by specialized knowledge.

And Prelec and Steyvers both caution that this algorithm won’t solve all of life’s hard problems. It only works on factual topics: people will have to figure out the answers to political and philosophical questions the old-fashioned way.

Journal name: Nature


  1. Prelec, D., Seung, S. H. & McCoy, J. Nature (2017).
