Introduction

Evidence-based practice is seen as a cornerstone of modern medicine and healthcare more broadly.1 It describes a process in which there is 'explicit and judicious use of current best evidence in making decisions about the care of individual patients'.2 The whole of the dental team has a key part to play, and the question we ask in this paper is: when and how should we account for the input of patients, the public, dental professionals, commissioners and policy-makers in the evidence generation process? We also make a plea to consider implementation during, rather than after, the evidence generation process.

The process of generating evidence in the traditional model of evidence-based healthcare has largely been viewed as beginning with randomised controlled trials of clinical interventions, because of their ability to determine causality. Any observed effects are then pooled statistically across a number of similar trials using a technique called meta-analysis (where possible), and the evidence is then synthesised to create evidence-based policies.3 This process of creating and distilling the available evidence forms the approach taken by groups such as Cochrane, the University of York's Centre for Reviews and Dissemination and the National Institute for Health and Care Excellence. The systematic reviews produced sit at the pinnacle of the hierarchy of evidence (Fig. 1) to 'provide accessible, credible information to support informed decision-making'.4

Figure 1: The hierarchy of evidence.

From Hospital Medicine Clinics, 4, Lee C K et al., Understanding Medical Literature, 106–107, 2015, with permission from Elsevier

Once the evidence has been produced, the next logical step is seen to be the translation of this evidence into routine practice. However, changing clinical behaviour is not straightforward. For example, a survey examining general dental practitioners' (GDPs') behaviour before and after the publication of guidance on the use of fluoride varnish demonstrated no significant changes.5 Subsequent research found a number of barriers and facilitators to its use, which included: awareness of recommendations; professional identity; social influences; and whether it was something the GDP wanted to do.6 Issues relating to the implementation of antibiotic prescribing guidance followed a similar pattern: the production of guidelines did not result in a direct change in GDP behaviour.7 Indeed, simply educating GDPs or incentivising clinical behaviour was found to be similarly limited in effect.8 This highlights a key concern for funders of medical research. If research is not to be wasted, it must be designed appropriately and make an impact in real life. New studies should account for the lessons learnt from previous research, which in turn should be reported accurately.9,10 Modern trials undertaken in a dental context now conform to the design principles laid down by the Medical Research Council,11,12,13 but challenges remain in implementing the evidence they generate.

These problems have led to a rapid growth in 'implementation science', also known as 'knowledge translation' or 'knowledge mobilisation'. Many different definitions exist, but there is general agreement that it describes the 'scientific study of methods to promote the uptake of research findings into routine healthcare in clinical, organisational or policy contexts'.14 Recognised frameworks used in implementation science include Promoting Action on Research Implementation in Health Services (PARIHS) and Knowledge-To-Action (K2A).15,16 PARIHS maps out the elements that need attention before, during and after the process of implementation. It proposes that successful implementation depends on the complex interplay of the evidence to be implemented (how robust it is and how it fits with local experience), the local context in which implementation is to take place (the prevailing culture, leadership, and commitment to evaluation and learning) and the way in which the process is facilitated (how and by whom).17 The K2A framework describes a cycle of problem identification, local adaptation and assessment of barriers, implementation, monitoring and sustained use.6 Within the cycle, attention is paid to the knowledge creation process, to developing knowledge syntheses and tools, and to tailoring these to the local context, although common interpretations view the action cycle as the process of getting the evidence into practice once it has been generated, ie implementation is construed as a linear process that follows evidence generation.

This form of thinking also pervades many interpretations of behaviour change theories, where the problem is again commonly seen to lie at the interface between the end of the evidence production process and clinical practice. Behaviour change theories are then used to influence clinicians' behaviour so that they adopt this evidence, or to understand why it is not being adopted. For example, Michie et al.'s COM-B model is often over-simplified to explore a clinician's capability, opportunity and motivation to change.18,19 Another theory used is normalisation process theory (NPT). NPT identifies four determinants of embedding (ie, normalising) evidence into clinical practice: coherence or sense-making, cognitive participation or engagement, collective action, and reflexive monitoring.20 Again, the emphasis is on 'normalising' new evidence into practice after the evidence has been generated.

Despite the growing interest in frameworks to enhance the implementation process, the traditional approach of generating evidence and then implementing it into practice is increasingly seen as too simplistic. As argued by Raine et al. (2016), 'the value of shifting from the traditionally used binary question of effectiveness, towards a more sophisticated exploration' is warranted, understanding the 'characterisation of interventions and their contexts of implementation'.21 As highlighted later in the same report, knowledge translation is not a passive process. Many clinicians do not always engage with evidence-based practice, and the effectiveness of interventions varies across different contexts.22,23,24,25 This leads to research waste, because evidence from funded studies does not translate into the desired change in clinical practice.26 As highlighted above, problems in implementation commonly occur because the interpretation of evidence is socially constructed, ie interpreted differently across and within professions. In addition, evidence is often 'weighed up' alongside other clinical factors, and experiential knowledge can be privileged.27,28,29 As a result, the production of evidence is not, in its own right, sufficient to facilitate translation.30

A plea to consider implementation during the evidence generation process

Over ten years ago Glasziou & Haynes described the stages that lead to change in clinical practice.31 They argued that the adoption of a new practice requires seven separate stages:

  1. There has to be an awareness of the problem

  2. There needs to be an acceptance of the need to change current practice

  3. The intervention should be applicable to the right group

  4. It should be able to be delivered

  5. It should be acted on by clinicians

  6. It should be agreed to by patients

  7. It should be adhered to by patients.

This is represented diagrammatically in Figure 2. If we assume an 80% transitional probability at each stage, then the likelihood that the intervention will be adopted in clinical practice is only 21.0% (or a little over one in five). Although a number of assumptions are made in this model (eg, that each stage follows another in a linear fashion), it highlights the impact of not taking context into account, or of not involving different stakeholders at the very beginning of the evidence creation process.
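As a minimal worked illustration of this figure (assuming, purely for illustration, that the seven stages are independent and that each is passed with the same 80% probability), the overall likelihood of adoption is the product of the stage probabilities:

$$P(\text{adoption}) = 0.8^{7} \approx 0.21$$

Under the same assumptions, raising each transitional probability to 90% would more than double the overall likelihood of adoption ($0.9^{7} \approx 0.48$), which is one way of seeing why attention to every stage during evidence generation matters.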

Figure 2: The path from research to improved health outcomes

The central argument of this paper is that, if evidence is to be successfully translated into clinical practice, far more attention needs to be paid to the context, mechanisms and conditions that lead to the generation of this evidence (particularly when the intervention is complex and human factors are important to its success). This either ensures that the evidence created is more relevant to the patient and to the clinician, or provides researchers and policy-makers with a better understanding of why evidence is not being adopted. If more attention is paid to context, the likelihood that the intervention will be adopted in clinical practice should, in theory, improve. As Moore et al. recently highlighted, 'effect sizes do not provide policy makers with information on how an intervention might be replicated in their specific context, or whether trial outcomes will be reproduced'.32 Rather than waiting for the evidence to be produced and then engaging implementation frameworks and behaviour change strategies to translate complex interventions into clinical practice, the emphasis should ideally move to using implementation frameworks to understand the context, mechanisms and conditions before, and as, the evidence is being generated.

Equally, the co-production of interventions is increasingly seen as important. Here, explicit attention is given to patients co-producing interventions with researchers and clinicians, particularly when the interventions are complex, for example, in how services are designed.33,34 This approach, along with greater patient and public involvement (PPI), potentially improves the transitional probabilities at each stage of Glasziou & Haynes' model by ensuring 'buy-in' from patients and clinicians alike. Examples of co-production in healthcare include:

  1. Co-commissioning of services

  2. Co-design of services

  3. Co-delivery of services

  4. Co-assessment.35,36

In Scotland, a workshop involving over 600 patients (entitled 'Moving on Together') and 900 health professionals (entitled 'Working in Partnership') developed an educational tool for improving communication skills, strategies for articulating goals, collaborative problem solving and action planning and monitoring.37 Likewise, 'ImproveCareNow' has resulted in the development of an electronic infrastructure to alter how patients, parents, clinicians and researchers engage with the healthcare system.38

Considering implementation during the evidence generation process also has a knock-on effect on how we design trials, ensuring that PPI and co-production are at the centre of feasibility studies and of pre-, peri- and post-trial processes. Here, the potential of using implementation frameworks more broadly before and during trial evidence generation, rather than after the evidence has been generated, is an emerging area of research that is currently being examined.39

Trial design

Implications for trial design when implementation is considered as a forethought

Patient and public involvement

The active use of PPI in trials is increasing and has been associated with higher recruitment rates in mental health studies.40,41,42 Reasons for better outcomes include the type of language used in patient-facing information, insights into appropriate or least burdensome study designs, and awareness of patient involvement improving willingness to take part.43 PPI should be carefully planned before the research is designed, incorporating an iterative process where appropriate, with clear guidance about roles.44 Despite this, funding in this area is limited, and standard operating procedures for PPI in clinical trial units (CTUs) have been restricted to post-funding activities.45 Challenges ahead include developing an appropriate common language (to make trials understandable to patients),46 providing support at a CTU level to promote 'pipeline to proposal' infrastructure,47 setting priorities, developing PPI within core outcome sets, and understanding how to embed co-design and co-production principles into trial design.48,49

Feasibility and pilot studies

We also argue that factors associated with implementation could be considered earlier, at the feasibility stage. Feasibility studies are commonly conducted before definitive trials to test recruitment, retention, and the acceptability and fidelity of the intervention in the planned trial.50 For trials of complex interventions, an opportunity exists to explore how implementation frameworks could be used to inform the design of the definitive trial. This offers an opportunity to provide a theoretical underpinning to an exploration of 'context', thereby giving a better understanding of the pathway to impact along Glasziou & Haynes' stages.31 Methodological research into this, and into how feasibility studies inform definitive trials, is currently being undertaken.39

Process evaluations

Although trials remain the best method for making causal inference and providing a reliable basis for decision-making, they often struggle to determine how or why a complex intervention (as opposed to an intervention that relies simply on pharmacodynamics) does or does not achieve its outcomes. As a result, process evaluations are used alongside trials to help understand 'the causal assumptions underpinning the intervention and use of evaluation to understand how interventions work in practice'.27 These are often run as parallel qualitative studies that explain 'discrepancies between expected and observed outcomes, to understand how context influences outcomes, and to provide insights to aid further implementation'.51

Process evaluation can usefully investigate how the intervention was delivered, providing decision-makers with information about how it might be replicated.

Realist approaches to process evaluation are also increasingly being used. These have a particular focus on 'what works, for whom, why and in what circumstances'.52 Again, such an approach can help address many of the stages in Glasziou & Haynes' model. Health service interventions commonly consist of a number of components that can act both independently and interdependently.53,54 They are also heavily influenced by the fidelity of the clinician, where learning effects can lead to non-linear processes.8,55,56 It is increasingly recognised that, irrespective of whether the intervention is complicated (detailed but predictable) or complex (detailed and unpredictable), an understanding of the range of factors that influence the adoption of evidence is critical.32,57

Implications of using implementation frameworks as part of trial design

Intervention implementation (its features and effectiveness) tends to be studied retrospectively (eg, Damschroder & Lowery58). However, in one example, Rycroft-Malone et al. conducted a prospective process evaluation of implementation processes that provided an explanation for the findings of a large implementation randomised controlled trial in acute care focused on reducing peri-operative fasting times.59 Using theory-informed approaches or frameworks as part of trial design can help us to understand the conditions or features that support an intervention's effectiveness and its implementation and, ideally, how to achieve sustained practice change.

As highlighted by Bain et al., research is increasingly emphasising the 'many ways and levels at which context shapes service development'.60 Again, implementation research is seen as increasingly important for determining the barriers and enablers to translation, and how patients experience the intervention compared with how it was designed.61 Although NPT and other frameworks have been used, many place too much emphasis on understanding change at an individual level rather than at a system level.10,11,62,63,64,65 There is now an argument for moving beyond this limited micro-level focus towards system factors and broader processes at the meso and macro levels, ensuring that implementation science contributes to intervention development and to pre-, peri- and post-trial processes. As argued by Fitzpatrick & Raine, we have 'reached the point now where attention in terms of articulating, refining and developing principles can be given to a much wider array of methods, over and above the classic approach of a definitive trial and systematic review'.66 Table 1 suggests a range of methodologies to consider for future research.

Table 1: Key issues to consider in the production of evidence

Conclusion

The use of implementation as forethought has the potential to reduce the gap between the evidence generated and clinical practice, ensuring that Glasziou & Haynes' stages are given due consideration during (not after) evidence generation. It also has implications for policy-makers and, in theory at least, could enable them to make better-informed decisions.67