Worldwide and across many fields, there lurks a hidden assumption about how scientific expertise can best serve society. Expert advice is often thought most useful to policy when it is presented as a single 'definitive' interpretation. Even when experts acknowledge uncertainty, they tend to do so in ways that reduce unknowns to measurable 'risk'. In this way, policy-makers are encouraged to pursue (and claim) 'science-based' decisions. It is also not uncommon for senior scientists to assert that there is no alternative to some scientifically contestable policy. After years researching — and participating in — science advisory processes, I have come to the conclusion that this practice is misguided.

A UK crop circle, created by activists to signify uncertainty over where genetic contamination can occur. Credit: G. GRAF/GREENPEACE

An overly narrow focus on risk is an inadequate response to incomplete knowledge. It leaves science advice vulnerable to the social dynamics of groups — and to manipulation by political pressures seeking legitimacy, justification and blame management. When the intrinsically plural, conditional nature of knowledge is recognized, I believe that science advice can become more rigorous, robust and democratically accountable.

A rigorous definition of uncertainty can be traced back to the twentieth-century economist Frank Knight1. For Knight, “a measurable uncertainty, or 'risk' proper ... is so far different from an unmeasurable one that it is not in effect an uncertainty at all”. This is not just a matter of words, or even methods. The stakes are potentially much higher. A preoccupation with assessing risk means that policy-makers are denied exposure to dissenting interpretations and the possibility of downright surprise.

Of course, no one can reliably foresee the unpredictable, but there are lessons to be learned from past mistakes. Consider the belated recognition that seemingly inert and benign halogenated hydrocarbons were interfering with the ozone layer, or the slowness to acknowledge the possibility of novel transmission mechanisms for spongiform encephalopathies in animal breeding and in the food chain. In the early stages, these sources of harm were not formally characterized as possible risks; they were 'early warnings' offered by dissenting voices. Policy recommendations that miss such warnings court overconfidence and error.

The question is how to move away from this narrow focus on risk towards broader and deeper understandings of incomplete knowledge. Many practical quantitative and qualitative methods already exist (see 'Uncertainty matrix'), but political pressure and expert practice often prevent them from being used to their full potential. Choosing between these methods requires a more rigorous approach to assessing incomplete knowledge, one that avoids the temptation to treat every problem as a risk nail to be reduced by a probabilistic hammer. Instead, experts should pay more attention to neglected areas of uncertainty (in Knight's strict sense), as well as to the deeper challenges of ambiguity and ignorance2. For policy-making purposes, the main difference between the 'risk' methods shown in the matrix and the rest is that the others discourage single 'definitive' policy interpretations.

Any justification

There are still times when 'risk-based' techniques are appropriate and can yield important information for policy. This can be so for consumer products in normal use, general road- or airline-safety statistics, or the epidemiology of familiar diseases. Yet even in these seemingly familiar and straightforward areas, unforeseen possibilities and over-reliance on aggregation can undermine probabilistic assessments. There is a need for humility about science-based decisions.

For example, consider the risk assessment of energy technologies. The second graphic (see 'The perils of "science-based" advice') summarizes 63 studies of the economic costs arising from the health and environmental impacts of different sets of energy technologies. The aim of such studies is to help policy-makers identify the options likely to have the lowest impact. This is one of the most sophisticated and mature fields for quantitative risk-based comparison. Individual policy reports commonly express their findings as if there were little room for doubt; many of the studies present no, or tiny, uncertainty ranges. But taken together, these 63 studies tell a very different story3, one usually hidden from policy-makers. The discrepancies between equally authoritative, peer-reviewed studies span many orders of magnitude, and the overlapping uncertainty ranges can support almost any ranking order of technologies, justifying almost any policy decision as science based.

Figure 1 | The perils of 'science-based' advice. (Source: ref. 3)
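
To see why overlapping ranges undermine a single 'science-based' ranking, a minimal sketch helps. The Python fragment below uses invented cost ranges and point estimates (they are not values from the studies in ref. 3); the point is only that when ranges overlap, equally defensible point estimates drawn from within them can support opposite orderings.

# Hypothetical illustration only: overlapping cost ranges (arbitrary units)
# for three energy options. The numbers are invented and are not taken from
# the 63 studies summarized in ref. 3.
cost_ranges = {
    "technology A": (0.1, 50.0),
    "technology B": (0.5, 20.0),
    "technology C": (1.0, 80.0),
}

# Two equally 'defensible' sets of point estimates, each lying inside the
# ranges above, as a peer-reviewed study might report them.
study_1 = {"technology A": 0.2, "technology B": 10.0, "technology C": 60.0}
study_2 = {"technology A": 40.0, "technology B": 15.0, "technology C": 2.0}

def ranking(point_estimates):
    """Order options from lowest to highest assumed cost."""
    return sorted(point_estimates, key=point_estimates.get)

for study in (study_1, study_2):
    # Confirm every estimate falls within the published-style range.
    assert all(cost_ranges[t][0] <= c <= cost_ranges[t][1] for t, c in study.items())
    print(ranking(study))
# Output: ['technology A', 'technology B', 'technology C']
#         ['technology C', 'technology B', 'technology A']
# The same overlapping ranges can justify opposite 'lowest impact' rankings.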

This is not just a problem with quantitative analysis. Qualitative science advice is also usually presented in aggregated and consensual form: there is always pressure on expert committees to reach a 'consensus' opinion. This raises profound questions over what is most accurate and useful for policy. Is it a picture asserting an apparent consensus, even where one does not exist? Or would it be more helpful to set out a measured array of contrasting specialist views, explaining underlying reasons for different interpretations of the evidence? Whatever the political pressures for the former, surely the latter is more consistent both with scientific rigour and with democratic accountability?

I believe that the answer lies in supporting more plural and conditional methods for science advice (the non-risk quadrants shown in 'Uncertainty matrix'). These methods are plural because they even-handedly illuminate a variety of reasonable alternative interpretations, and conditional because, for each alternative, they explicitly explore the associated questions, assumptions, values or intentions4. Under Knightian uncertainty, for instance, pessimistic and optimistic interpretations can be treated separately, each explicitly associated with the assumptions, disciplines, values or interests behind it, so that these can be clearly appraised. This approach reminds experts that absence of evidence of harm is not the same as evidence of absence of harm. It also allows scenario and sensitivity analysis, enabling more accountable evaluation. For example, it could allow experts to highlight conditional decision rules aimed at maximizing best or worst possible outcomes, or at 'minimizing regrets'5.
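
As a rough illustration of such conditional decision rules, the short Python sketch below applies three classical rules (maximin, maximax and minimax regret) to a hypothetical payoff table. The options, states and payoffs are invented; the point is that each rule embodies a different, explicitly stated stance towards Knightian uncertainty, and each can recommend a different option.

# Hypothetical payoff table: rows are policy options, columns are possible
# future states of the world. No probabilities are assigned: this is
# Knightian uncertainty, not risk.
payoffs = {
    "option A": [3, 3, 3],
    "option B": [6, 4, 2],
    "option C": [9, 0, 2],
}

def maximin(table):
    """Pessimistic rule: pick the option with the best worst-case payoff."""
    return max(table, key=lambda opt: min(table[opt]))

def maximax(table):
    """Optimistic rule: pick the option with the best best-case payoff."""
    return max(table, key=lambda opt: max(table[opt]))

def minimax_regret(table):
    """Pick the option whose maximum regret (shortfall from the best payoff
    achievable in each state) is smallest."""
    states = range(len(next(iter(table.values()))))
    best_in_state = [max(table[opt][s] for opt in table) for s in states]
    regret = {opt: max(best_in_state[s] - table[opt][s] for s in states)
              for opt in table}
    return min(regret, key=regret.get)

print(maximin(payoffs))         # 'option A': best worst case (pessimistic)
print(maximax(payoffs))         # 'option C': best best case (optimistic)
print(minimax_regret(payoffs))  # 'option B': smallest worst-case regret

Presenting all three results, together with the stance each embodies, is the kind of plural, conditional output that keeps the underlying value judgements visible to decision-makers.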

The few sporadic applications of this approach show that it can be practical. One particularly politicized and high-stakes context for expert policy advice is the setting of interest rates. The Bank of England's Monetary Policy Committee, for example, describes its expert advisory process as a "two-way dialogue", with a priority placed on public accountability. Great care is taken to inform the committee not just of the results of formal analysis by the sponsoring bodies, but also of complex real-world conditions and perspectives. Reports detail contrasting recommendations by individual members and explain the reasons for their differences6. Why is this kind of thing not normal in science advice?

When scientists are faced with unmeasurable uncertainties, it is much more usual for a committee to spend hours negotiating a single interpretation across a spread of contending contexts, analyses and judgements. From my own experiences of standard-setting for toxic substances, it would often be more accurate and useful to accept these divergent expert interpretations and focus instead on documenting the reasons. In my view, concrete policy decisions could still be made — and possibly more efficiently. Moreover, the relationship between the decision and the available science would be clearer and the inherently political dimensions more honest and accountable.

Problems of ambiguity arise when experts disagree over the framing of possible options, contexts, outcomes, benefits or harms. Like uncertainty, these cannot be reduced to risk analysis, and demand plural and conditional treatment. Such methods can highlight — rather than conceal — different regulatory questions, such as: “what is best?”, “what is safest?”, “is this safe?”, “is this tolerable?” or (as is often routine) “is this worse than what we have now?” Nobel-winning work in rational choice shows that when ambiguity rules there is no guarantee, as a matter of logic, that scientific analysis will lead to a unique policy answer7. Consequently, definitive science-based decisions are not just potentially misleading — they are a fundamental contradiction in terms.
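
The logic can be made concrete with a small sketch. Suppose, purely for illustration, that three framings of the regulatory question each yield an internally consistent ranking of three options; aggregating those rankings by pairwise majority, as in the Python fragment below, produces a cycle, so no unique ordering follows from the analysis alone.

from itertools import combinations

# Three hypothetical framings (for example, "what is safest?", "what is
# cheapest?", "what is most reversible?"), each producing an internally
# consistent ranking of the same three options. The rankings are invented.
rankings = [
    ["A", "B", "C"],   # framing 1
    ["B", "C", "A"],   # framing 2
    ["C", "A", "B"],   # framing 3
]

def majority_prefers(x, y):
    """True if a majority of framings rank x above y."""
    votes = sum(r.index(x) < r.index(y) for r in rankings)
    return votes > len(rankings) / 2

# Pairwise majorities form a cycle: A beats B, B beats C, yet C beats A,
# so no single aggregate ranking emerges from the evidence alone.
for x, y in combinations("ABC", 2):
    print(x, ">", y, ":", majority_prefers(x, y))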

Methods that work

Multicriteria mapping is one practical way to be plural and conditional in the consideration of questions and options, as well as in the derivation of answers; other participatory and deliberative procedures include interactive modelling, scenario workshops, Q-method and dissensus methods. Multicriteria mapping uses simple but rigorous scoring and weighting procedures to reveal how overall rankings depend on divergent ways of framing the possible options. In 1999, Unilever funded me and colleagues to use multicriteria mapping to study the perspectives of different leading science advisers on genetically modified (GM) crops8. The backing of this transnational company helped draw high-level UK government attention. A series of civil servants told me, in quite colourful terms, that results mapped out in plural, conditional fashion would be "absolutely no use" in practical policy-making. Yet when a chance finally emerged to present the results to Mo Mowlam, the relevant cabinet minister, the reception was very positive. She immediately appreciated the value of having alternative perspectives laid out for a range of policy options. It turned out, in this case, that the real block to a plural, conditional approach was not the preferences of the decision-maker herself, but those of some of the people around her.
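
The following sketch conveys the basic arithmetic of such an approach, using invented options, criteria, scores and weights; it illustrates the general weighted-scoring idea rather than the specific multicriteria mapping protocol used in the study. Each perspective keeps its own weights, and the dependence of the resulting ranking on those weights is reported rather than averaged away.

# Hypothetical multicriteria scoring: options scored 0-10 against criteria.
# All names and numbers are invented for illustration.
scores = {
    "crop strategy X": {"yield": 8, "ecology": 3, "reversibility": 2},
    "crop strategy Y": {"yield": 5, "ecology": 6, "reversibility": 6},
    "crop strategy Z": {"yield": 3, "ecology": 9, "reversibility": 8},
}

# Different perspectives assign different weights to the same criteria.
perspectives = {
    "industry adviser": {"yield": 0.7, "ecology": 0.2, "reversibility": 0.1},
    "ecology adviser":  {"yield": 0.1, "ecology": 0.5, "reversibility": 0.4},
}

def ranked(weights):
    """Rank options from highest to lowest weighted score."""
    totals = {
        option: sum(weights[c] * s for c, s in criteria.items())
        for option, criteria in scores.items()
    }
    return sorted(totals, key=totals.get, reverse=True)

# A plural, conditional report presents each ranking alongside the weights
# (framing assumptions) that produced it, instead of a single average.
for name, weights in perspectives.items():
    print(name, "->", ranked(weights))
# industry adviser -> ['crop strategy X', 'crop strategy Y', 'crop strategy Z']
# ecology adviser  -> ['crop strategy Z', 'crop strategy Y', 'crop strategy X']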

In my experience, it is single definitive representations of science that are most vulnerable to political manipulation. Plural, conditional approaches are not immune, but they can help make political pressures more visible. Indeed, this is what happened during another GM policy process in which I was involved: the 2003 UK science review of GM crops. Reporting included explicit discussion of uncertainties, gaps in knowledge and divergent views, and was described as "neither a red nor a green light" for GM technology. A benefit of this more open approach was that it helped GM proponents and critics to work together more effectively during the committee deliberations, without a high-stakes, 'winner takes all' dynamic. There was more space to express alternative interpretations, free from the implication that one party or another was wrong. This matters in a highly politicized area such as GM science, where there are entrenched interests on both sides. Yet this unusual attempt to acknowledge uncertainty was not universally popular. Indeed, it was also the only occasion, to my knowledge, on which the minutes of a UK science advisory committee formally documented covert attempts to damage the career of one of its members (me, in this case)9. Perhaps for political, rather than scientific, reasons, this experiment in plural and conditional advice has not been repeated.

A further argument for more plural approaches arises from the state of ignorance, in which 'we don't know what we don't know'. Ignorance typically looms in choices over which of a range of feasible, economically viable future paths to support for emerging technologies, whether through funding or regulation. In a finite and globalizing world, no single path can be fully realized without detracting from the potential for others. Even in the most competitive consumer markets, for instance, development routinely 'locks in' to dominant technologies such as the QWERTY keyboard or VHS tape. The same is true of infrastructures, such as narrow-gauge rail, AC electricity or light-water reactors. This is not evidence of inevitability, but of the 'crowding out' of potential alternatives. Likewise, lock-in occurs in the prioritizing of certain areas of scientific enquiry over others. The paths taken by scientific and technological progress are far from inevitable. Deliberately or blindly, the direction of progress is inherently a matter of social choice10.

A move towards plural, conditional advice would help avoid erroneous 'one-track', 'race to the future' visions of progress. Such advice corrects the fallacy that scepticism over a specific technology implies a general 'anti-science' sentiment. It defends against simplistic or cynical support for some particular favoured direction of change that is backed on the spurious grounds that it is somehow synonymous with 'sound science', or uniquely 'pro innovation'.

Instead, plural, conditional advice helps enable mature and sophisticated policy debate on broader questions. How reversible are the effects of a particular path, if we learn later that it was ill-advised? How flexible are the associated industrial and institutional commitments, allowing us later to shift direction? How adaptable are the innovation systems? What part might be played by the deliberate pursuit of diverse approaches — to hedge ignorance, defend against lock-in or foster innovation — in any given area?

Thus, such advice provides the basis for a more equal partnership between social and natural science in policy advice. Plural and conditional advice may also help resolve some polarized fault-lines in current debates about science in policy. It shows how we might better integrate quantitative and qualitative methods, articulate 'risk assessment' with 'risk management', and reconcile 'science-based' and 'precautionary appraisal' methods.

A move towards plural and conditional expert advice is not a panacea. It cannot promise escape from the deep intractabilities of uncertainty, the perils of group dynamics or the perturbing effects of power. It differs from prevailing approaches in that it makes these influences more rigorously explicit and democratically accountable.