Nature | Comment

Policy advice: Use experts wisely

Policymakers are ignoring evidence on how advisers make judgements and predictions, warn William J. Sutherland and Mark A. Burgman.

Illustration by David Parkins

Many governments aspire to evidence-based policy and practice. The predominant, conventional approaches to using experts are either to seek the advice of one highly regarded individual, or to convene a panel with diverse expertise across relevant areas. For example, quarantine services worldwide routinely rely on expert judgement to estimate the probability of entry, establishment and spread of pests and diseases.

The accuracy and reliability of expert opinions is, however, compromised by a long list of cognitive frailties1. Estimates are influenced by experts' values, mood, whether they stand to gain or lose from a decision2, and by the context in which their opinions are sought. Experts are typically unaware of these subjective influences. They are often highly credible, yet they vastly overestimate their own objectivity and the reliability of their peers3.

Happily, a large and growing body of literature describes methods for engaging with experts that enhance the accuracy and calibration of their judgements4, 5. Unhappily, these methods are rarely used to support public policy decisions. All the methods strive to alleviate the effects of psychological and motivational bias; all structure the acquisition of estimates and associated uncertainties; and all recommend combining independent opinions. None relies on the opinion of the best-regarded expert or uses unstructured group consensus.

The cost of ignoring these techniques — of using experts inexpertly — is less-accurate information, and thus more frequent and more serious policy failures.

Knowns and unknowns

For an important subset of questions, expert technical judgements about facts play a part in policy and decision-making. (We appreciate that political context may determine what comprises relevant, convincing evidence, and that such evidence rarely leads directly to policy and action, because decision-makers must balance a range of political, social, economic, practical and scientific issues.)

Policymakers use expert evidence as though it were data. So they should treat expert estimates with the same critical rigour that must be applied to data. Experts must be tested, their bias minimized, their accuracy improved, and their estimates validated with independent evidence (see ‘Eight ways to improve expert advice’). That is, experts should be held accountable for their opinions.

Eight ways to improve expert advice

Use groups. Their estimates consistently outperform those of individuals.

Choose members carefully. Expertise declines dramatically outside an individual's specialization or experience.

Don't be starstruck. Age, number of publications, technical qualifications, years of experience, memberships of learned societies and apparent impartiality do not explain an expert's ability to estimate unknown quantities or predict events. This finding applies in studies from nuclear-safety systems and geopolitics to ecology.

Avoid homogeneity. Diverse groups tend to generate more-accurate judgements.

Don't be bullied. People who are less self-assured and assertive, and who integrate information from diverse sources, tend to make better judgements.

Weight opinions. Calibrate experts' performance with test questions. This improves risk estimates in many domains, including earthquakes and nuclear-safety systems.

Train experts. Training can improve experts' abilities to estimate probabilities of events, quantities or model parameters.

Give feedback. Chess players, weather forecasters, sports people, gamblers, intensive-care physicians and physicists solving textbook problems generally make accurate judgements, probably as a result of rapid feedback from mistakes that are visible and personal. Experts deserve the same — give them immediate and unambiguous feedback.
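The 'weight opinions' step above can be sketched in code. This is a minimal illustration of performance weighting, assuming a simple inverse-error scoring rule on test ('seed') questions whose answers are known; it is not Cooke's full classical model, and all names and numbers are invented.

```python
# Sketch: weight experts by their accuracy on seed questions with known
# answers, then aggregate their estimates of an unknown quantity.

def calibration_weight(errors):
    """Inverse mean absolute error on seed questions (small floor avoids /0)."""
    mae = sum(abs(e) for e in errors) / len(errors)
    return 1.0 / (mae + 1e-9)

def weighted_estimate(experts):
    """experts: list of (estimate_for_target, seed_question_errors)."""
    weights = [calibration_weight(errs) for _, errs in experts]
    total = sum(weights)
    return sum(w * est for w, (est, _) in zip(weights, experts)) / total

# Expert A was close on the seed questions; expert B was far off.
experts = [
    (10.0, [0.5, -0.4, 0.3]),   # well-calibrated expert
    (30.0, [5.0, -6.0, 4.0]),   # poorly calibrated expert
]
print(weighted_estimate(experts))  # pulled strongly towards 10.0
```

The aggregate lands near the well-calibrated expert's estimate, which is the intended effect: informative experts count for more, uninformative ones for less.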

For example, experts who are confident and routinely close to the correct answer provide more information than do experts who regularly deviate from the correct answer or are under-confident. Highly regarded experts are routinely shown to be no better than novices at making judgements. Opinions from more-informative experts can be weighted more heavily, whereas the opinions of some experts may be discarded altogether6. These strategies will illuminate where advice is robust, and where it is contradictory, self-serving or misguided. This will generate evidence for policy decisions that is more relevant and reliable. Roger Cooke, a risk-analysis researcher at Delft University of Technology in the Netherlands, and his colleagues have used this approach effectively to better predict the implications of policy for transport and nuclear-power safety4.

Experts themselves must make explicit the sensitivity of their decisions to scientific uncertainty, assumptions and caveats. When invited to advise, they should demand that state-of-the-art techniques are used to harvest and process what they offer. If not, all involved risk wasting substantial time, resources and opportunities.

Importantly, all parties must be clear about what they expect of each other: estimates of facts, predictions of event outcomes or advice on the best course of action. Properly managed, experts can help with the first two. Providing advice assumes that the expert shares the same values and objectives as the decision-makers.

Several processes have been shown to improve experts' performances on estimates of facts and predictions of event outcomes. When using specialists to weigh up the best course of action, both the researchers and the policymakers consulting them should first identify all possible changes, options and threats, a process known as horizon scanning. They should then list all known candidate solutions, drawing on a wide group of experts and on the literature, to reduce the risk that valuable alternatives are overlooked (known as solution scanning7).

Deliberations should be underpinned by a systematic collection of evidence, an assessment of its relevance, and an identification of the knowledge gaps that might change the decision. The information can be collated in advance so that it is ready for use, rather than assembled in response to a policy need, as is being done for biodiversity8.

Rules of engagement

A few more rules of engagement, routinely applied, will enhance the quality and reliability of expert judgements.

Ensure that questions are fully specified and unambiguous, so that language-based uncertainties do not cloud judgements. For example, a seemingly straightforward question such as 'How many diseased animals are there in the region?' could be interpreted differently by different people. The question does not specify whether to include only those animals that are known to be infectious, or also those that have died, have recovered, are diseased but yet to be identified as such, and so on.

Structured question formats counter tendencies towards over-confidence for individual estimates. For example, Andrew Speirs-Bridge at La Trobe University has shown9 that questions that elicit four responses — upper and lower bounds, a best guess and a degree of confidence in the interval — generate estimates that are relatively well calibrated. Consider a range of scenarios and alternative theories. Ensure that several experts answer each question.
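The four-response format described above is often paired with a post-processing step that rescales each expert's interval to a shared confidence level, so that intervals stated at different confidences become comparable. The linear rescaling rule below is one common convention for doing this, not the only one, and the numbers are invented.

```python
# Sketch: standardise a four-point elicitation (lower bound, upper bound,
# best guess, stated confidence) to a shared target confidence level by
# linearly stretching the interval about the best guess.

def standardise_interval(lower, upper, best, confidence, target=0.9):
    """Rescale an expert's interval to the target confidence level."""
    factor = target / confidence
    new_lower = best - (best - lower) * factor
    new_upper = best + (upper - best) * factor
    return new_lower, new_upper

# An expert who is 80% sure the true value lies between 40 and 70,
# with a best guess of 50:
print(standardise_interval(40, 70, 50, 0.8))  # (38.75, 72.5)
```

Because under-confident and over-confident experts stretch by different factors, the standardised intervals can be overlaid and aggregated on a common footing.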


Unstructured group interactions are subject to 'groupthink': the group gravitates towards an initial or even an arbitrary estimate; dominant individuals drive the outcome; or individuals are ascribed greater credibility than they deserve because of their appearance, manner or professional background. Structured, facilitated interactions counter factors such as these, which distort estimates3.

Review assumptions, reconcile misunderstandings and introduce new information. Ensure that decision-makers do not rely on experts to choose between options but rather use an appropriate decision tool. One such is structured decision-making, in which experts populate decision tables with estimates of the expected outcomes for each criterion under each policy option, but do not decide the best option.
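A decision table of the kind just described can be sketched as follows. The separation of roles is the point: experts fill in the expected outcomes, while decision-makers supply the value weights and choose. The options, criteria, weights and scores here are all invented for illustration.

```python
# Sketch: structured decision-making with a decision table. Experts estimate
# outcomes per criterion per option; decision-makers weight the criteria.

options = ["cull", "vaccinate", "monitor"]

# Decision-makers' value weights over the criteria (sum to 1).
criteria = {"disease_reduction": 0.5, "cost_saving": 0.25, "public_support": 0.25}

# Expert-estimated outcomes, scored 0-100 per criterion (illustrative).
table = {
    "cull":      {"disease_reduction": 80, "cost_saving": 40, "public_support": 20},
    "vaccinate": {"disease_reduction": 70, "cost_saving": 30, "public_support": 80},
    "monitor":   {"disease_reduction": 20, "cost_saving": 90, "public_support": 60},
}

def score(option):
    return sum(weight * table[option][c] for c, weight in criteria.items())

best = max(options, key=score)
print(best, score(best))  # vaccinate 62.5
```

Changing the weights (a values question for decision-makers) can change the chosen option without touching the experts' estimates, which keeps the two roles cleanly apart.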

For example, an analysis6 of volcano-eruption risks by Willy Aspinall, an Earth scientist at the University of Bristol, UK, used structured interactions. These substantially improved the quality of estimates: well-specified questions were answered by several experts, in a way that avoided or mitigated the psychological tripwires that compromise many group interactions.

Similarly, a study led by conservation ecologist Marissa McBride10 at the University of Melbourne in Australia engaged with groups of experts remotely, using structured questions and group interactions to assess the conservation status of threatened Australian birds. The team used telephone conferences to outline the context and purpose of the interactions, which was to reassess the International Union for Conservation of Nature's Red List assessments for a suite of threatened species. They then used e-mail to elicit initial judgements, clarify questions, and introduce further data and explanations. Finally, they circulated a spreadsheet and compiled a second round of private, anonymous judgements.
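The remote, multi-round process just described resembles a modified Delphi protocol: private first-round estimates, an anonymous group summary, then private revision. A minimal sketch, assuming a median aggregate and a simple revision rule (both illustrative choices, not the study's actual method):

```python
# Sketch: a two-round, Delphi-style elicitation. Experts estimate privately,
# see an anonymous group summary, then privately revise; the final answer
# aggregates round two.
from statistics import median

def delphi_two_round(round_one, revise):
    """revise: maps (own estimate, group summary) -> revised estimate."""
    summary = median(round_one)
    round_two = [revise(e, summary) for e in round_one]
    return median(round_two)

# Each expert moves halfway towards the group median after seeing it.
final = delphi_two_round([10, 20, 60], lambda e, m: e + 0.5 * (m - e))
print(final)  # 20.0
```

Keeping the second round private and anonymous is what distinguishes this from unstructured group consensus: experts can revise in light of others' views without being dominated by them.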

In many cases, incorporating the formal stages described here will improve decision-making. The benefits are substantial improvements in the reliability of judgements, relatively free of personal biases and values. The costs in time and resources are modest.



  1. Tversky, A. & Kahneman, D. in Judgement Under Uncertainty: Heuristics and Biases (eds Kahneman, D., Slovic, P. & Tversky, A.) 23–30 (Cambridge Univ. Press, 1982).

  2. Englich, B. & Soder, K. Judgm. Decis. Mak. 4, 41–50 (2009).

  3. Burgman, M. A. et al. PLoS ONE 6, e22998 (2011).

  4. Cooke, R. M. Experts in Uncertainty: Opinion and Subjective Probability in Science (Oxford Univ. Press, 1991).

  5. O'Hagan, A. et al. Uncertain Judgements: Eliciting Experts' Probabilities (Wiley, 2006).

  6. Aspinall, W. Nature 463, 294–295 (2010).

  7. Sutherland, W. J. et al. Ecol. Soc. 19, 3 (2014).

  8. Sutherland, W. J. et al. What Works in Conservation (OpenBooks, 2015).

  9. Speirs-Bridge, A. et al. Risk Analysis 30, 512–523 (2010).

  10. McBride, M. F. et al. Methods Ecol. Evol. 3, 906–920 (2012).

Further reading

Biosecurity Australia. Final Import Risk Analysis Report For Fresh Apple Fruit From The People’s Republic Of China (Biosecurity Australia, 2010).

Fischhoff, B., Slovic, P. & Lichtenstein, S. Am. Statistician 36, 240–255 (1982).

Tversky, A. & Kahneman, D. in Judgement under Uncertainty: Heuristics and Biases (eds Kahneman, D., Slovic, P. & Tversky, A.) 23–30 (Cambridge Univ. Press, 1982).

Slovic, P., Monahan, J. & MacGregor, D. G. Law Human Behav. 24, 271–296 (2000).

Benson, P. G., Nichols, M. L. Decis. Sci. 13, 225–239 (1982).

Kahneman, D. & Tversky, A. Econometrica 47, 263–291 (1979).

Campbell, L. M. Ecol. Appl. 12, 1229–1246 (2002).

Englich, B. & Soder, K. Judgm. Decis. Mak. 4, 41–50 (2009).

Burgman, M. A. et al. PLoS ONE 6, e22998 (2011).

Meiri, S., Mace, G. M. PLoS Biology 5, 1385–1386 (2007).

Poynard, T. et al.  Ann. Intern. Med. 136, 888–895 (2002).

Cooke, R. M. Experts in Uncertainty: Opinion and Subjective Probability in Science (Oxford Univ. Press, 1991).

O’Hagan, A. et al. Uncertain Judgements: Eliciting Experts’ Probabilities (Wiley, 2006).

Morgan, M. G., Henrion, M. Uncertainty: A Guide to Dealing with Uncertainty in Quantitative Risk and Policy Analysis (Cambridge Univ. Press, 1990).

Morgan, M. G. Proc. Natl Acad. Sci. USA 111, 7176–7184 (2014).

Cooke, R. M., Goossens, L. H. J. Radiation Protection and Dosimetry 90, 303–309 (2000).

Aspinall, W. Nature 463, 294–295 (2010).

Sutherland, W. J. et al. Trends Ecol. Evol. 30, 17–24 (2015).

Sutherland, W. J. et al. Ecol. Soc. 19, 3 (2014).

Cartwright, N. & Hardie, J. Evidence-Based Policy (Oxford Univ. Press, 2012).

Sutherland, W. J. et al. What Works in Conservation (OpenBooks, 2015).

Dicks, L. V. et al. Conserv. Lett. 7, 119–125 (2013).

Speirs-Bridge, A. et al. Risk Analysis 30, 512–523 (2010).

Koriat, A., Lichtenstein, S. & Fischhoff, B. J. Exp. Psychol. Hum. Learn. Mem. 6, 107–118 (1980).

Kruger, J. & Dunning, D. Psychology 1, 30–46 (2009).

Gregory, R. & Failing, L. Structured Decision-Making (Wiley-Blackwell, 2012).

McBride, M. F. et al. Meth. Ecol. Evol. 3, 906–920 (2012).

Galton, F. Nature 75, 450–451 (1907).

Burgman, M. A. et al. Conserv. Lett. 4, 81–87 (2011).

Tetlock, P. Expert Political Judgment: How Good Is It? How Can We Know? (Princeton Univ. Press, 2005).

Evans, D. Risk Intelligence (Atlantic, 2012).

Chi, M. T. H. in The Cambridge Handbook of Expertise and Expert Performance 21–30 (eds Ericsson, K. A. et al.) (Cambridge Univ. Press, 2006).

Winkler, R. L. & Poses, R. M. Mgmt Sci. 39, 1526–1543 (1993).

Author information


  1. William J. Sutherland is professor of conservation biology in the Department of Zoology, University of Cambridge, UK.

  2. Mark Burgman is professor of botany and managing director of the Centre of Excellence for Biosecurity Risk Analysis, School of BioSciences, University of Melbourne, Parkville, Australia.
