Introduction

As the world struggles with complex problems that affect all aspects of human civilisation—from climate change and the loss of ecosystems and biodiversity, to overpopulation, malnutrition and poverty, to disease, ill health and ageing populations—it has never been more important to base government policy upon scientific evidence. In this article, we outline a methodology for integrating the process of scientific investigation with political debate and social discourse in order to improve the science–policy interface.

Science advisors and advisory bodies with scientist representation have steadily increased in number (Gluckman and Wilsdon, 2016); in the UK, for example, in the form of the Food Standards Agency (FSA), the Human Fertilisation and Embryology Authority (HFEA) and the National Institute for Health and Care Excellence (NICE), or globally within the commissions and advisory bodies associated with the United Nations and the technical review panels of bodies such as The Global Fund to Fight AIDS, Tuberculosis and Malaria. The well-established Intergovernmental Panel on Climate Change (IPCC) is a model for other panels, such as the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services. However, the process by which scientific evidence becomes part of a policy is complicated and messy (Gluckman, 2017; Malakoff, 2017), and there are many examples to support the view that this results in fundamental failings to deal quickly or effectively with major global challenges. For example, the decades-long lag between the science establishing that greenhouse gas emissions are changing the climate and the beginning of meaningful climate action in the COP21 Paris Agreement illustrates the difficulty of evidence-based policy making. It also calls into question the ultimate effectiveness of such policy making, since there is evidence to suggest that COP21 may be too little, too late (Rockström et al., 2017). Similarly, the continued EU embargo on the use of food from genetically modified (GM) crops shows a serious disconnect between public opinion and the large body of scientific evidence showing that the environmental and health risks are negligible.

What is evidence?

The reasons for these apparent failures are complicated and numerous, but one key issue is what constitutes evidence. Even when the problem is clearly one where science can provide a solution, evidence is derived not only from scientific investigation but also from the political, cultural, economic and social dimensions of the issue, resulting in arguments about relative validity and worth. Hence, bias and prejudice are difficult to remove, and evidence is often cherry-picked, only lightly consulted, partially worked into policy (if at all), and/or side-stepped in favour of ideological preferences. Even when evidence is abundant and clear, it is often ignored as we enter a ‘post-truth’ era in which the opinions of experts are viewed with scepticism and populist solutions predominate (e.g., a 140-character tweet can brand a piece of sound scientific evidence as ‘fake news’).

The ready availability and sharing of information through the internet and social media, which in some sense democratise evidence by increasing the diversity of inputs, should be a positive and welcome development. Condorcet’s mathematical Jury Theorem suggests that ‘larger groups make better decisions’ and that more, and more diverse, input leads to better ‘collective intelligence’ (Condorcet, 1785). Thus, the increase in diverse information should foster ‘the wisdom of crowds’ (Surowiecki, 2005) towards ‘the better argument’ (Landemore and Elster, 2012). However, online content is personalised through algorithms designed to harvest and respond to existing preferences. The internet therefore often fosters an ‘echo chamber’ effect that limits cognitive diversity and increases ‘group think’ by providing and linking information based solely on the entrenched preferences of the user and like-minded individuals (Grassegger and Krogerus, 2016).

In addition, scientific investigation is often viewed as opaque, conducted outside the public sphere and purposefully elitist. This gives rise to conspiracy theories about who produced the evidence and for what purpose, eroding epistemic authority. As a result, highly personalised preferences are reinforced by selective information, despite the fact that this information may amount to misinformation, exaggeration, falsehood and degraded or ‘cherry-picked’ evidence. Hence, rational policy development is thwarted: governments are tempted to use whatever evidence concurs with the preconceived views of their constituents and their own existing political mantras, or confirms public perceptions and aspirations, whether or not this mirrors the best available evidence.
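For reference, the Condorcet result can be stated compactly. The formulation below is the standard independent-voter version of the theorem, added for illustration rather than drawn from the works cited above:

```latex
% Condorcet's Jury Theorem (standard formulation; n odd to avoid ties).
% Each of n independent voters is correct with probability p.
% The probability that the majority verdict is correct is
\[
  P_n \;=\; \sum_{k=(n+1)/2}^{n} \binom{n}{k}\, p^{k}\,(1-p)^{\,n-k}.
\]
% If p > 1/2, then P_n increases with n and tends to 1 as n grows;
% if p < 1/2, it tends to 0. Larger groups therefore help only when
% individual inputs are better than chance, which is precisely what the
% 'echo chamber' effect described above undermines.
```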

The problem with scientific evidence

For scientists this is a particularly difficult problem to deal with. Science establishes facts, such as the fundamental physics proving that increasing levels of CO2 in the atmosphere will result in an increased greenhouse effect. Even when proof is elusive (such as knowing exactly how, where and when this greenhouse effect will be translated into changes in climate), the notion of evidence is sacrosanct: it is derived from objective analysis, evaluation, testing, experimentation, retesting and falsifiability. To see hard-won evidence ignored, distorted or diluted in favour of what seem to be ill-informed subjective views leads to frustration and anger. However, a more constructive and positive response would be to realise that the evaluation of scientific evidence cannot be divorced from the political, cultural and social debate that inevitably and justifiably surrounds most major issues. In the two examples above, the long and sometimes tortuous pathway to the COP21 climate change accord results from the difficult economic trade-offs involved and the very different socio-political perspectives of the nations of the world. In the case of GM, the emotional context of food consumption that may favour natural foods cannot be treated dismissively, nor can the legitimate concerns about the increased power and control that GM might give to multinational agri-businesses. As stated elsewhere (Cairney, 2016), scientific investigation defines problems, but often does not identify policy-acceptable, scalable and meaningful solutions. Scientists are often not effective in communicating their findings to audiences outside academia and frequently hold the naive assumption that good evidence will be readily accepted and can quickly contribute to policy. Not appreciating the complexity and non-linearity of many of the intractable problems that science is addressing (so-called wicked problems—DeFries and Nagendra, 2017) is often the root cause of this failure. Thus, the question often asked is whether we can improve the ways in which scientific evidence is constructed, integrated and communicated, so that it contributes more effectively, efficiently and quickly to policy formulation, in ways that combat the problems of a ‘post-truth’ era.

Producing evidence

Ideas for a policy intervention follow identification of a particular societal problem and may be initiated by a variety of organisations—governments, agencies of government such as research funding bodies, political parties, pressure groups, NGOs, think-tanks or groups of concerned academics (Fig. 1). It may be top-down or bottom-up. This is then followed by the production of evidence about the operation, implementation and effectiveness of the policy idea, commissioned or carried out by the policy proposer. The process of evidence production normally follows a number of steps, which are depicted in Fig. 1A as a MAVS cycle—an iterative process of mapping, analysis, visualisation and sharing (Horton et al., 2016).

Fig. 1: An integrated process for policy development. (A) MAVS—an iterative process for obtaining evidence for policy development. The first step is to map the component processes and participants in the issue, if appropriate as a system-wide exercise. The issue is then analysed using data, information and appropriate tools, such as Life Cycle Assessment together with the tools of social science. The results of this analysis are visualised in transparent form as a dashboard, ready for sharing among all stakeholders and, when appropriate, through publication in academic journals and/or news media. The data produced may well identify other dimensions of the issue and initiate further cycles. (B) A problem or opportunity that requires a policy intervention is identified and put on the agenda. The first task is to assemble evidence using the MAVS protocol. This evidence is evaluated using a two-pronged process: independent scrutiny and testing of the evidence, and the use of deliberative forums to address key issues arising. Repetition of this process leads to a policy ready for implementation.
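To make the control flow of the cycle concrete, the sketch below renders MAVS as a simple loop. It is a hypothetical illustration only: every function is a placeholder stub standing in for a real method (systems mapping, Life Cycle Assessment, dashboard design, stakeholder engagement), and none of the names come from a published toolkit.

```python
# A minimal, hypothetical sketch of the MAVS cycle of Fig. 1A as a control
# loop. All functions are illustrative stubs, not a published toolkit; a
# real study would replace them with domain methods (e.g., Life Cycle
# Assessment in the analysis step, as in the bread example below).

def map_system(issue):
    """Mapping: identify the component processes and participants."""
    return {"issue": issue, "actors": ["growers", "processors", "retailers"]}

def analyse(system_map):
    """Analysis: apply quantitative and social-science tools to the map."""
    return {"finding": f"dominant factor identified for '{system_map['issue']}'"}

def visualise(results):
    """Visualisation: condense the results into a transparent dashboard."""
    return f"DASHBOARD | {results['finding']}"

def share(dashboard):
    """Sharing: circulate to stakeholders; return any new dimensions raised."""
    print(dashboard)
    return []  # an empty list means stakeholders raised no new dimensions

def mavs(issue, max_cycles=3):
    """Iterate the cycle until sharing raises no further dimensions."""
    for _ in range(max_cycles):
        dimensions = share(visualise(analyse(map_system(issue))))
        if not dimensions:
            return  # evidence is ready for the evaluation stage (Fig. 1B)
        issue = f"{issue}; {dimensions[0]}"  # fold new dimension into next cycle

mavs("environmental impact of bread production")
```

The essential point the sketch captures is the feedback edge: sharing can redefine the issue and trigger another pass through mapping and analysis.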

The usefulness of formalising evidence generation in this way was demonstrated in addressing a specific policy question: how to reduce the environmental impact of the production of bread (Goucher et al., 2017; Horton, 2017). Mapping identified all the key actors in the wheat-bread supply chain, from whom data were obtained. This complete data set was then analysed by a standardised process of Life Cycle Assessment. The evidence clearly showed that fertiliser was the dominant source of greenhouse gas emissions; this finding was presented in easily visualised form and shared via publication in a peer-reviewed academic journal (Goucher et al., 2017), press releases and a summary article in The Conversation (Horton, 2017). These were read and discussed across a wide variety of media. The evidence was subsequently taken up by commercial bodies in the wheat-bread industry, which are now seeking ways towards a ‘sustainable bread’.
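As a toy illustration of the analysis step, the snippet below shows how an attributional Life Cycle Assessment aggregates stage-level data into a product footprint and exposes the dominant hotspot. The stage names and emission factors are invented placeholders, not the values reported by Goucher et al. (2017):

```python
# Toy product-footprint aggregation in the spirit of an LCA hotspot analysis.
# All numbers are hypothetical placeholders, NOT published results.

# kg CO2-equivalent per loaf, by supply-chain stage (illustrative only)
stage_emissions = {
    "fertiliser production and use": 0.30,
    "other cultivation (fuel, seed)": 0.10,
    "milling": 0.05,
    "baking": 0.15,
    "packaging and transport": 0.08,
}

total = sum(stage_emissions.values())
# Rank stages by contribution to reveal the hotspot (here, fertiliser)
for stage, kg_co2e in sorted(stage_emissions.items(), key=lambda kv: -kv[1]):
    print(f"{stage:32s} {kg_co2e:5.2f} kg CO2e  ({100 * kg_co2e / total:4.1f}%)")
print(f"{'TOTAL':32s} {total:5.2f} kg CO2e")
```

It is this kind of ranked, shareable summary, rather than the underlying inventory tables, that the visualisation and sharing steps of MAVS are designed to deliver.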

We suggest that the MAVS methodology could be similarly useful in evidence gathering for many other policy purposes. In such cases the evidence might be much more complex than in the above example, because policy more often than not addresses complex, multidimensional wicked problems rather than purely technical ones. One challenge is how to integrate scientific evidence, which is usually quantitative data, with the qualitative data obtained by the social sciences. For this, further development of social indicators is crucial, including indicators of well-being, values, agency and inequality (Hicks et al., 2016). Furthermore, we suggest that evidence production should not be limited to presenting analytically coherent statements about ‘facts’, ‘truth’ and ‘solutions’. Evidence also needs to be generated in direct response to existing preferences, as a means to support or falsify those preferences in a way that speaks to them, not over them. Here, interdisciplinary incorporation of social science techniques adds to the scientific data by providing stakeholder analysis, preference identification and social categorisation.

Lessons could be learned from recent experiments aimed at improving health policy outcomes through participatory research models, which incorporate stakeholders into the design (mapping), evaluation (analysis), communication (visualisation and sharing) and implementation phases of research. This approach has several distinctive features. Firstly, stakeholders are able to provide ‘on the ground’ insights about the problems or misunderstandings the research needs to address; the research questions are thus tailored to these needs and the final aims of the research are made transparent. Secondly, including stakeholders throughout the process creates ‘buy-in’ and a better understanding of how the evidence was created, increasing epistemic authority while undermining conspiratorial speculation and claims of elitism. Thirdly, inclusion naturally builds trust in the results, which in many cases in health research has allowed for better policy translation and outcomes, since people are more willing to adopt the rationale for a policy if they feel that they were involved in the process. As an example, positive policy results have been witnessed in a number of cases where health research linked circumcision to reduced rates of HIV infection in Africa. Although circumcision remains a highly contentious issue in many parts of the world, the inclusion of political, religious and cultural leaders in the research process in many cases helped to alleviate existing fears and misunderstandings, which facilitated more exact communication and acceptance of the source of evidence (WHO, 2016).

Visualisation and sharing are particularly important steps of the MAVS process. All too often, evidence production and analysis result in lengthy and impenetrable reports, which make transparent evidence sharing difficult and can be counter-productive. For example, Howarth and Painter (2016) describe the problems of translating the information contained in IPCC reports into local action plans. Research is therefore urgently needed to find the best ways to visualise and then communicate evidence, for example using clever infographics and other digital techniques. There is huge potential for evidence sharing via web-based national and international events and new online publishing models (e.g., Horton, 2017). Most important of all, people with expert knowledge need to be active and proactive rather than passive and reactive; indeed, one might argue they have a responsibility to be so. Jeremy Grantham, founder of the philanthropic Grantham Foundation for the Protection of the Environment, once stated: ‘Be persuasive. Be brave. Be arrested (if necessary)’ (Grantham, 2012). Sharing experience and approaches is also vital, to find out what works and what does not, creating networks where appropriate, such as the International Network for Government Science Advice (INGSA), or less formal and spontaneous movements such as that which resulted in the March for Science. Supplementing evidence with powerful stories from ‘real life’ can also increase the effectiveness of communication. One key implication here is that what may previously have been regarded as research (in a university, for example) may become an activity in which the end result, in terms of impact, advocacy and implementation, is not just an optional ‘add-on’ but an integral and obligatory part of the project.

Evaluating evidence

The next step in our methodology is evidence evaluation. This is an open and transparent process that questions the validity of the evidence. Who leads this evaluation will depend upon who is leading the policy initiative. Given their reputation for impartiality, transparency and interdisciplinary thinking, universities could play a key independent role, so long as they have procedures to include all stakeholders, particularly those directly affected by a policy intervention. This is not always straightforward, especially when research depends upon funding from governments and various external bodies. The key is to break away from the traditional model of the ‘expert panel of mostly white male senior academics’ and strive towards diversity of experience, ethnicity and gender. Again, the aim of such assessment is not to produce an impenetrable report, but to follow the principles of visualisation and sharing set out above.

Evidence evaluation simultaneously and equally combines discussion, debate and deliberation with further independent scientific scrutiny of that evidence, including the peer-review procedures well established in academic science (Fig. 1B). Evidence from scientific investigation rarely constitutes proof and, furthermore, does not always meet high standards of objectivity, quality or neutrality. It therefore has to be independently assessed, including by consideration of evidence available from other sources and studies. Within the evaluation process it is important not only to locate where evidence is lacking, inconclusive or ambiguous, but also to understand how evidence is perceived, misunderstood or ignored. For example, the same piece of evidence can be interpreted in different ways by different stakeholders, leading to disagreement and conflict (discussed in Horton et al., 2016). These points of contention then become focal points in deliberative forums that consider the tension between different actors and stakeholders.

The use of stakeholder deliberative forums within the evidence policy process not only allows misconceptions and ideological stances to be located and understood, but also provides deliberative opportunities for various ideological positions to be held to public scrutiny by other stakeholders. Stakeholders with particularly entrenched preferences are asked to share these preferences and give their best defences and evidence to support them. This includes having stakeholder positions tested against the best evidence available and against mutual requests for reason-giving from other stakeholders. Deliberative forums help to undermine enclave thinking and force ideology testing via the need for public reason-giving. They have had empirical success in creating intersubjective meta-understandings between stakeholders, which, over time, allow crucial agreements on key factual elements within contested public policy.

There are already many cases of governments instituting deliberative forums for key policy discussions in an effort to generate policy consensus, rather than relying on aggregative preference-tallying models that only measure existing preferences and pit them against each other in simplistic minority/majority binaries. For example, there have been successful deliberative experiments trialled by the Western Australian Department of Planning and Infrastructure, in British Columbia’s Citizens’ Assembly, in Ireland during the Irish Constitutional Convention, and in the state of Oregon’s Citizens’ Initiative Review (Rosenberg, 2007).

Although deliberative forums have largely been physical meetings facilitated by researchers, governments or experts, using the internet to broaden their scope could be a promising innovation. It could allow much wider participation and larger sets of data to be collected and evaluated, aided by artificial intelligence techniques. This is an area to which future research should be directed (Neblo et al., 2017).

Transforming knowledge into policy

The results of this two-pronged evaluation are viewed together in the process by which the evidence associated with a policy idea is transformed into a policy plan, as depicted in Fig. 1B. The policy plan can then be evaluated again and again, step by step, until all evidence has been validated and all stakeholder viewpoints have been reasonably satisfied or properly discredited. The policy is then ready for implementation. The anticipation here is that stakeholder ‘buy-in’ will remove barriers to policy implementation, and that the use of evidence within these deliberations shapes that ‘buy-in’. Although politicians could still ignore an evidence-based policy consensus, they would have less incentive to do so if that consensus demonstrated clear ‘buy-in’ by key stakeholders and the public. In addition, deliberative forums often involve policy makers as key participants and thus can deliver preference alteration, particularly if policy makers’ preferences shift at the same time as those of other constituent stakeholders.

Can the methodology we describe have an effect on the development of evidence-based policy in general? Combining scientific analysis, participation and deliberation among multiple stakeholders has been proposed to address the problems of water sustainability (Garrick et al., 2017), food security (Horton et al., 2017) and health (Lucero et al., 2018), and it is in such domains that we foresee it being particularly applicable. However, in many cases the full complexity and messiness of the problem may make strict adherence to this methodology difficult, and here evidence sharing through advocacy, stakeholder outreach and campaigning becomes particularly important. Politicians often take note only when public pressure mounts, for example because of intense activity in the popular press, as in the recent policy proposals in the UK surrounding plastic bottles, coffee cups and plastic pollution of the oceans. It is perhaps less clear whether our methodology can make an impact in more politically charged policy areas such as climate change, where the evidence is clear but vested interests, often operating through ‘post-truth’ and ‘fake news’, work to undermine it. Nevertheless, having a formal framework could be a source of stability, discipline and confidence building, a recourse when problems arise, and a way to break through logjams and overcome barriers. By establishing trust between scientists, government and the public, it could help build a more effective science–policy interface.