It's now widely accepted that government policy, at least in democratic nations, should be 'evidence based', meaning that it ought to rest on careful consideration of the facts. For more complex issues in particular, where the probable consequences of any action are far from clear, policy should use the best available science to establish cause–effect relations and practically useful 'if ... then' propositions, enabling the policymaker to make the best decision. Evidence is crucial, whether it's in the pursuit of better financial regulation, safe drug development or environmental protection.

This seems like an obviously sensible idea, given the alternative of making decisions on the basis of something other than evidence. Even so, some scientists and philosophers argue that the matter isn't so cut and dried (http://arxiv.org/abs/1607.07398). The rhetoric of evidence-based logic can seem so persuasive, they suggest, that discussions get prematurely diverted into narrow measurement exercises and policy often ends up being designed as an optimal solution to the wrong problem, rather than a crude but decent solution to the right one.

As Andrea Saltelli and Mario Giampietro point out, the adoption of an evidence-based approach to policy hasn't diminished the controversy over a host of contemporary issues: the use of pesticides and their possible role in the disappearance of bees, whether culling badgers in the UK is a wise way to counter bovine tuberculosis, and the benefits of shale gas fracking. Even more obviously, questions over how best to respond to climate change remain mired in argument and confusion despite decades of analyses proposing supposedly optimal policy responses, all claiming to be based on 'the evidence'. Why isn't the evidence winning out?

One all-too-easy answer is to blame the participants in these debates, who may not be acting in good faith or who, for various reasons, refuse to accept legitimate evidence. Self-interested and powerful lobbies exert huge influence over policy on smoking or climate change, for example, and some people believe things that have little if any empirical support — that vaccines cause autism, for instance. Yet this is only a superficial answer, Saltelli and Giampietro argue, and the trouble actually lies much deeper — in lingering disagreements over implicit and unshared assumptions about the goals of policy, or over what kinds of evidence get included or excluded from an analysis.

For example, take the politically charged issue of foods grown using genetically modified organisms (GMOs). The corporations that make them, most economists and many others — most of whom see themselves as models of rationality — argue that these foods present no human health risks and should therefore be produced and sold like other foods. In 2014, an article in The Economist stated that GMO foods had been “declared safe by the scientific establishment” (http://go.nature.com/2baSywo). Superficially, the argument seems sound: the science does apparently support the safety of such foods, even if the body of empirical evidence is considerably thinner than proponents make out.

And yet, as Saltelli and Giampietro point out, whether such foods are safe is only one aspect of a host of larger issues surrounding GMOs. Surveys find that people worried about GMOs cite a number of other concerns and questions. They wonder, for example, who decided that GMOs should be developed, and why. Will these foods benefit people, or are they being pursued mainly for corporate profit and control? They also question why such foods appeared on the market without their knowledge, and why they weren't given more information so they could choose whether or not to buy them. The evidence on food safety doesn't touch on any of these matters, and framing the issue purely as a question of safety leads to a simple answer that is actually too simple. It's rhetorically useful for some parties, but steers around the most important issues.

A similar problem arises in the consideration of climate change and what to do about it. Economists generally take as their starting point that the goal of any policy should be to maximize global consumption over the infinite future (Nat. Phys. 11, 984; 2015). Most non-economists don't regard this as a sensible goal, and so see little value in the supposedly optimal policies that economists derive from analyses built on this assumption. The approach seems rational to economists, but it is rational only within one arbitrary and narrow framing of the problem we face.
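
To see how narrow that framing is, here is a minimal sketch of the kind of objective such analyses typically assume, in the spirit of a Ramsey-style discounted-utility criterion; the symbols below are illustrative and are not taken from the article or the cited analysis.

% A sketch of the discounted-utility objective commonly assumed in
% economic analyses of climate policy (illustrative notation only):
\[
  \max_{c(t)} \; W \;=\; \int_{0}^{\infty} e^{-\rho t}\, U\!\big(c(t)\big)\, \mathrm{d}t
\]
% c(t): global consumption at time t; U: a chosen utility function;
% rho > 0: the discount rate. The 'optimal' policy depends strongly on
% the choices of U and rho, which are value judgements entering the
% calculation as if they were technical parameters.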

This is why, according to Saltelli and Giampietro, the pursuit of evidence-based policy hasn't resolved issues in quite the straightforward way we might have naively hoped it would. Evidence isn't enough, as the most serious and contentious issues arise from differing values, from clashing views of the world and what we want from it. Evidence-based policy analysis, to be carried through at all, typically requires a radical oversimplification and narrowing of focus, which inevitably turns discussion away from these primary clashes. It ends up too frequently as a distraction from the more important disagreements, but one that suits the aims of some actors all too well.

Saltelli and Giampietro quote the social theorist Steve Rayner of the University of Oxford, who coined the useful phrase 'socially constructed ignorance' to describe how narrow worldviews, even if unrealistic, help people make decisions in relative comfort. In Rayner's words: “To make sense of the complexity of the world so that they can act, individuals and institutions need to develop simplified, self-consistent versions of that world. The process of doing so means that much of what is known about the world needs to be excluded from those versions — particularly knowledge that is in tension or outright contradiction with those versions, which must be expunged.”

The result can be the pursuit of decisions through unsophisticated calculations — cost–benefit analyses in many cases — which appear entirely rational, yet achieve that appearance only by excluding from consideration most of what really matters. Structuring complex problems in this way is, as the philosopher Lewis Mumford put it, a form of “mad rationality”.
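
For concreteness, a generic net-present-value form of such a cost–benefit calculation is sketched below; it is a textbook expression rather than one used by the authors, and it makes the exclusion mechanical: whatever is never monetized simply never enters the answer.

% Generic net-present-value form of a cost-benefit calculation
% (textbook sketch, not taken from the article):
\[
  \mathrm{NPV} \;=\; \sum_{t=0}^{T} \frac{B_t - C_t}{(1+r)^{t}}
\]
% B_t and C_t are the monetized benefits and costs in year t, and r is
% the discount rate. Anything that resists being expressed as a B_t or
% a C_t is excluded from the result by construction.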

So doubts over the perceived superiority of evidence-based policy needn't be irrational. The argument for evidence-based policy is that a decision, D, should rest solely on the evidence, E. To deny this is not to reject the use of evidence, but only to say that D should not rest solely on E. This makes a world of difference.
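
Put schematically, and only as an illustrative formalization (the notation beyond D and E is not the authors'):

\[
  \text{evidence-based claim:}\quad D = f(E)
  \qquad\qquad
  \text{critics' claim:}\quad D = g(E, V)
\]
% V stands for values, goals and framing choices. The critics' point is
% that E alone does not determine D, not that E should be ignored.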