In our attempts to increase the use of evidence in policy, are we neglecting the evidence system?

There is a disconnect between the system that generates research evidence, ‘the evidence system’, and the system that generates public policy, ‘the policy system’. Efforts to increase evidence-informed policy (EIP) have shifted from an emphasis on the dissemination of research by producers to a focus on the use of research by decision-makers (Oliver et al. 2014; Boaz et al. 2011). This focus has sharpened to include efforts to ensure evidence is routinely integrated into the policy system, better understood as the ‘institutionalisation’ of EIP. There has been an emphasis on unpacking policy processes, documenting the phases of institutionalisation of EIP in government bodies, and researching the facilitators of and barriers to such institutionalisation (Dayal, 2016; Langer et al. 2019). The institutionalisation of EIP within the evidence generation system, however, is rarely, if ever, mentioned.

This paper argues that in order to transform evidence for policy we need to focus at a systems level, that in doing so there is a danger of focusing on only the policy system, and that the evidence system has been overlooked in the drive for change. It proposes ten steps for transformation, as a start.

Why is a systems lens important?

There is a recognition that in order to fulfil the potential for EIP to increase impact, improve accountability, transparency, and governance, and reduce the risk of wasting public resources, we need to move beyond a focus on individuals, teams and even organisations (Stewart, 2015), and focus on systems change (Koon et al. 2020). We need to understand and influence change across the complete evidence ecosystem (Stewart et al. 2019). Within this ecosystem, there are at least two important systems: the system for evidence generation – the evidence system – and the system of public policy making – the policy system (Footnote 1). If EIP is to contribute to societal impact, understanding these systems, and their relationship to one another, is paramount.

There has been investment in, and a focus on, the policy system by some within the field of EIP. Indeed, the holy grail of the evidence movement is arguably the established and routine consideration and use of evidence not just within one policy theme, but across the policy system itself. Funders of EIP activities recognise the power of policy change and regularly seek policy influence as an ultimate goal (Altshuler and Lucas, 2021; Hewlett Foundation, 2018). Knowledge brokers also talk about the need for government colleagues to take ‘ownership’ of evidence-promotion activities, and to ‘embed’ externally funded programmes within their policy structures. EIP researchers are seeking to understand and promote the institutionalisation of evidence within policy systems (Zida et al. 2017). Examples of the institutionalisation of evidence-use are frequently cited, including the Socio-Economic Impact Assessment System (SEIAS) in South Africa (Stewart et al. 2019; DPME, 2021), and the relationship between the Department of Health and NICE within the UK (NICE, 2021).

Despite interest in the policy system (DuMont, 2019; Shearer et al. 2016), there is limited acceptance of the central role that the actors in the policy system play (Greenhalgh and Russell, 2006). The language and terminology used are not the language of governments, and there seem to be few efforts to understand policy environments in terms of evidence use. Government frameworks for knowledge generation, curation, or strategic knowledge management are rarely mentioned (Parkhurst, 2017). Cursory attention is paid, if any, to researchers who work within governments, to the research that governments commission, or to the commissioning processes themselves. Little credit is given to activities within governments to ensure that policies are based on the best available evidence (Head, 2010), even though these internal efforts are, by definition, more cognisant of the policy environment. This environment constantly seeks to coordinate policies across sectors and to establish policy coherence through vertical and horizontal integration, with related structures and processes that define the way governments across the world operate (Footnote 2). Evidence advocates consider themselves successful if they have a relationship with a policymaker, with little consideration of the structural and institutional relationships between the evidence and policy systems that must ultimately be connected. For these connections to become routinely established, there is a need to accept the central importance of the policy system and the institutions, structures and players within it (Langer and Stewart, 2016).

There is also a need to accept that the policy system is largely outside of academia’s sphere of influence due to its political nature. As such, a focus predominantly on how the policy system must change to use evidence in a routine manner shifts attention away from changes needed in the system of evidence generation. If EIP is fundamentally a relationship between evidence and policy, then the institutionalisation of EIP requires changes within both the policy system and the evidence system towards the same goals of development and societal impact. It is not enough for researchers to observe, instruct, or support policymakers with what they need to change. Unless the evidence system is fit-for-purpose to support EIP, the policy system will be held back in its ability to use evidence systematically, and vice versa, thereby constraining the broader institutional context of knowledge production. As researchers, we must get our own house in order, and therefore start the process of self-reflection for change amongst ourselves.

Why is lobbying for findings of individual studies not sufficient?

As national research funders increasingly focus on research impact, largely measured in terms of academic publications, citations and engagement within the academic community (Penfield et al. 2015; Tijssen and Kraemer-Mbula, 2018), we are beginning to see critical analysis suggesting that research should also have a societal impact (Leydesdorff et al. 2016; NRF, 2021; Penfield et al. 2015; Woolston, 2021). This change has not, as far as we can tell, led to review of, or reflection on, the system for evidence generation itself. (Let us not forget that even this limited focus on societal impact is a radical shift for many of those working in academia, particularly outside of the health field.) It has largely led to recommendations that the findings of individual studies be communicated to policy audiences in more creative ways, in addition to communication to academic ones (Jensen and Gerber, 2020), without really addressing the gap of relevance and legitimacy in the evidence generation process. As a result, research dissemination and the production of policy briefs are increasingly common (Arnautu and Dagenais, 2021). These are produced in the hope that they will end up on the desks of decision-makers and feed into policy processes. We know, however, that such dissemination alone is not effective, and must be combined with activities to build opportunity and motivation to use evidence (Langer et al. 2016). Whilst academia continues to prioritise the publication of research findings in highly cited journals, the chances of effective evidence uptake at every stage of the policy process, combined with deliberative engagement, remain slim. Even worse, this valuing of academic publication inevitably leads to knowledge locked behind costly paywalls imposed by academic publishers, effectively freezing out non-academic audiences altogether (Else, 2021).
Such norms and values, which define continued practice, must be challenged as part of the necessary review of our own evidence system and of the obstructive roles that this system plays in building effective and inclusive relationships between evidence and policy.

Why is lobbying for the findings of bodies of evidence still not enough?

We argue above that the first necessary step towards rethinking the role of the evidence system in EIP is to recognise the flaws of lobbying for the consideration of findings from individual studies (Gough et al. 2020) – a key performance area that is promoted and incentivised by current research systems – and to start doing away with this practice outright (except in exceptional circumstances where no relevant body of evidence is available). However, despite the strengths and opportunities of systematically collated evidence bases, lobbying for the findings of bodies of evidence is also not enough.

We know that research findings need to be transparent, unbiased and comprehensive. We know that individual studies can fail on all these criteria, and that systematic review (also referred to as evidence synthesis) methodology enables us to counter these risks and present comprehensive evidence bases to decision-makers (Gough et al. 2020). We also know that the evidence synthesis community is continuously innovating to ensure they provide complete evidence bases in timely and responsive ways (Gough et al. 2017; Langer et al. 2018). These developments are glimmers of hope from within the evidence system for more effective generation of evidence for policy.

Systems for commissioning, funding, tracking and producing systematic reviews are well established in health, and are gradually being established in small but increasing ways in environmental management, education, international development and other areas of social and economic policy (Bernes et al. 2013; Boaz et al. 2011). These systems are innovating to respond to the requirements of the policy system, with recent adaptations designed to support policy coherence: examples include co-produced evidence maps, responsive evidence services (Dayal and Langer, 2016; Mijumbi et al. 2014) and living reviews (Elliott et al. 2014). However, those who conduct systematic reviews, and who are driving these more recent innovations, remain a minority. Furthermore, collation of bodies of evidence, whether in evidence maps or in systematic reviews, has tended to focus on individual policy decisions, or occasionally on clusters of policies, but not on systems for policy generation. For example, reviews have been used to inform COVID-19 responses around the world, and available reviews have been deliberately summarised and published via evidence portals for this purpose (Brainard, 2020; Grimshaw et al. 2020). Whilst the collation of bodies of evidence has improved the relationship between evidence and policy, it has largely taken place outside of, or at best on the side of, the systems for evidence generation.

There are some indications that routine practices in the commissioning and reporting of research have shifted, with systematic reviews more routinely used to inform policy, but these are limited to the health sector (Niforatos et al. 2019; Raftery et al. 2016). These shifts have included the requirement by those commissioning new randomised controlled trials that applicants refer to existing systematic reviews to identify gaps and justify applications for funding for new research. Another example has been the greater standardisation of research reporting to enable the process of systematic review: published papers of trials are now expected to use titles of the form ‘An RCT of x on y in z population’. Nevertheless, not only are these developments within health research not yet complete (Glasziou and Chalmers, 2018), but such efforts have barely begun outside of the health sector.

The increase in the production of evidence maps and systematic reviews that collate complete bodies of research evidence for policy has helped to negotiate the relationship between evidence production and the policy decisions it seeks to inform. It has not, however, resulted in fundamental shifts within the evidence system, nor in consideration of the relationship between the evidence system and the routine use of evidence in policy. What is needed is for both the evidence and policy communities to advocate for the generation and use of bodies of evidence across sectors and between the different levels of policy development and implementation.

Have we left the evidence system behind?

Our conclusion, arising from the arguments made above, is a resounding yes: the evidence system has been left behind. Traditionally the system for generating evidence, in particular in academia, barely considers societal impact at all, and certainly has limited – if any – relationship to policy priorities. The majority of research commissioning lacks comprehensive, unbiased and transparent consideration of the existing body of research prior to the funding of new research (Morciano et al. 2020), and, as such, fails to identify known research gaps and risks duplication of effort (Chalmers et al. 2014; Sarewitz, 2018). Most researchers work within a system that rewards publications and citations without consideration of wider societal impacts. Dissemination of research findings to anyone outside of the research community remains the exception. Collation of evidence-bases into evidence maps and systematic reviews remains a niche activity that is underfunded and poorly understood by many within the research sector and broader evidence system. The EIP movement is driven by a relatively small community of researchers who aim to facilitate the relationship between evidence and policy, but for whom change in the policy system remains the goal, whilst the evidence system is neglected. We think it is time for this neglect to be addressed.

What might a “transformed evidence system” look like?

In recognition that researchers cannot seek to reform the policy system without also reforming the evidence system of which they are part, this paper proposes some building blocks for a transformed evidence system and invites further dialogue about these proposals. If evidence-use were to be institutionalised within the evidence system, we would hope to see:

  1. Evidence-informed policy activities focused on understanding and integrating the relationship between the evidence and policy systems, including clearer channels for communication, learning and collaboration amongst those within the two systems.

  2. Greater value given to collaborative, interdisciplinary and transdisciplinary research than to single-author, single-discipline work. Societal challenges are not solved by individuals within single, specialist fields dissecting the problem and attempting to find solutions within separate domains – a reflection point for problem-focused research.

  3. Formal recognition of the value of policy-relevant research, including participatory research activities and co-production that include service-users, practitioners and policymakers.

  4. Formal recognition of the value of research synthesis, including evidence maps and systematic reviews that explicitly aim to summarise the evidence-base to inform policy and identify gaps for future research.

  5. Research agenda setting that is inclusive in its processes, based on a comprehensive understanding of the existing evidence-base, and informed by national and international policy priorities.

  6. Rigorous research into the strengths and weaknesses of the science-policy advice approach, which risks perpetuating the promotion of individual research studies and individuals’ perspectives.

  7. Both chief scientific advisors to the policy system and chief policy advisors for the evidence system, in both cases embedded within wider structures and not operating independently.

  8. Establishment of a new generation of evidence centres that focus on methods for generating bodies of evidence (through systematic evidence synthesis) to meet specific policy priorities across different sector domains. Such centres should not only innovate with partners on methods but should also sensitise and train the next generation of researchers in these methods and this orientation.

  9. Greater emphasis on building research infrastructure and processes which support the generation of bodies of evidence, facilitate intersection and collaboration between evidence and policy systems, and thereby institutionalise EIP.

  10. Ring-fenced resources to rigorously monitor and evaluate the effectiveness of these steps to transform the evidence system for the institutionalisation of EIP.

We hope that the wider evidence community will join this conversation, and take the dialogue into the depths of the evidence generation community, as it is only together that we can transform the evidence system for EIP.