Introduction

A search on www.google.com (28 May 2005) for ‘evidence-based medicine’ (EBM) returned approximately 758 000 hits. Combining it with ‘spinal cord injury’ (SCI) reduced the number to 7180, and adding ‘rehabilitation’ reduced it further to 3610. A search in PubMed for EBM found 17 794 references, including 5583 reviews.

Clearly, this is an issue that has created a lot of interest over the years. Interest in EBM has grown, not least because of the enormous and exponentially increasing amount of medical literature published.1 In addition, many sections of new textbooks are nearly outdated by the time they are published. EBM may therefore be seen as a natural consequence of the explosion in the amount, and the increasing speed of development, of new knowledge within the medical field. This changing pattern in the information clinicians need also links EBM closely with information technology.

Background

The concept of EBM was developed partly against the background of Archibald Cochrane's accusation ‘that many of the treatments, interventions, tests and procedures used in medicine had no evidence to demonstrate their effectiveness, and may in fact be doing more harm than good’.2

The development took place in the 1970s and 1980s at McMaster University in Canada by a group of clinical epidemiologists including David Sackett. This led to the establishment of the Cochrane Collaboration for preparing, maintaining, and disseminating up-to-date reviews of randomised controlled trials (RCTs) of health care. In addition, the intention was that epidemiological principles should be used to incorporate the latest results of these reviews into physicians' training and practice of patient care, that is, EBM.2, 3

The movement under the name of EBM began in the early 1990s in Canada, the United Kingdom, and the United States. The breakthrough for the term EBM came with the 1992 JAMA article from the EBM Working Group, which stated that EBM ‘de-emphasises intuition, unsystematic clinical experience, and pathophysiologic rationale as sufficient grounds for clinical decision making and stresses the examination of evidence from clinical research’.4

Delimitation of EBM

Sackett et al3, 5 state that EBM is ‘the integration of best research evidence with clinical expertise and patient values’.

The emphasis is, therefore, on

  • best research evidence,

  • clinical expertise,

  • patient values.

Individual clinical expertise is emphasised and ‘mean[s] the proficiency and judgment that individual clinicians acquire through clinical experience and clinical practice’. With growing expertise, the clinician diagnoses more efficiently and becomes better at the ‘identification and compassionate use of individual patients' predicaments, rights, and preferences in making clinical decisions about their care’.5

Good judgment in every patient contact requires both the individual perspective and the best available external evidence; neither alone is sufficient. We must be aware that the best possible evidence may, in certain situations, not be usable or appropriate for the particular patient. On the other hand, it is imperative that we, as the patients' advisers, constantly keep ourselves updated on the relevant evidence; otherwise we will not be able to provide the evaluation or treatment that the patient can rightly expect.

What is best practice?

Best practice is a broader term than EBM. Perleth et al6 reviewed the literature and found that the concept of ‘best practice’ can be broken down into three types of systematic reviews:

  • Health Technology Assessment;

  • EBM;

  • Clinical Practice Guidelines (CPGs).

The evidence is obtained on the basis of clinical research, clinical epidemiology, health economics, and health service research. They conclude that ‘resources should be devoted to increase quality and quantity of both primary and secondary research as well as the establishment of networks to synthesise, disseminate, implement, and monitor ‘best practice’’.

CPGs allow clinicians to practice EBM with minimal effort, as these guidelines have been written by panels of experts.7

Why is EBM needed?

Mozlin,7 with reference to others, gives the following reasons why EBM is needed:

  • New types of evidence are being generated that have the potential to change health care, that is, a shift from case studies to, for example, RCTs and meta-analyses. Meta-analysis is a quantitative method for combining the results of independent studies (a minimal illustrative sketch follows this list).

  • Information on new evidence relevant to daily practice is not received by the clinicians.

  • Failure to obtain the new evidence results in deterioration of knowledge, and consequently of clinical performance, over time.
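
As a purely illustrative aside, and not part of the cited work, the quantitative idea behind a fixed-effect meta-analysis can be sketched in a few lines of Python: each study's effect estimate is weighted by the inverse of its variance, and the weighted average is taken as the pooled result. The effect sizes and standard errors below are hypothetical.

    # Illustrative sketch only (not from the cited literature): fixed-effect
    # meta-analysis by inverse-variance weighting of independent study results.

    def pool_fixed_effect(effects, standard_errors):
        """Combine independent effect estimates into one pooled estimate."""
        weights = [1.0 / se ** 2 for se in standard_errors]  # inverse-variance weights
        pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
        pooled_se = (1.0 / sum(weights)) ** 0.5  # standard error of the pooled estimate
        return pooled, pooled_se

    # Hypothetical example: three small trials reporting mean differences.
    estimate, se = pool_fixed_effect([0.40, 0.25, 0.55], [0.10, 0.15, 0.20])
    print(f"pooled effect = {estimate:.2f} (SE {se:.2f})")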

Since the RCT, and the systematic review of several RCTs, is more likely to inform us and less likely to mislead us, it has become the ‘gold standard’ for judging whether a treatment does more good than harm. It should, however, be remembered that some questions about therapy do not require an RCT or cannot wait for the trials to be conducted. If no RCT is available for our patient's predicament, we should use the next best external evidence.

Steps in the EBM process2, 3, 7, 8

To integrate EBM into daily clinical practice, clinicians should follow a five-step process:

  • Convert the need for information into a precise, structured question.

  • Find the best evidence to answer the question.

  • Critically evaluate the evidence for its validity, impact, and applicability.

  • Integrate the critical appraisal with one's clinical expertise and with the patient's unique biology, values, and circumstances.

  • Evaluate the process, and seek ways to improve effectiveness and efficiency next time.

Levels of evidence and recommendations

Levels of evidence have for years been grouped in accordance with Table 1, and based on these levels the following grades of recommendation (A–D) have been given:

Table 1 Oxford Centre for Evidence-Based Medicine levels of evidence (May 2001) (http://www.cebm.net/levels_of_evidence.asp)

Grade A. Consistent level 1 studies.

Grade B. Consistent level 2 or 3 studies or extrapolations from level 1 studies.

Grade C. Level 4 studies or extrapolations from level 2 or 3 studies.

Grade D. Level 5 evidence or troublingly inconsistent or inconclusive studies of any level.

In the latest CPG, published in April 2005 by the Consortium for Spinal Cord Medicine,9 the methodology team chose to use a broader scope for the evaluation of the scientific literature. Their evaluations were based on the Scottish Intercollegiate Guidelines Network checklists for systematic reviews and meta-analyses, RCTs, cohort, and case–control studies10 (http://www.sign.ac.uk). None of these checklists was found appropriate for pre–post, case series, or cross-sectional studies, which are all commonly used in the spinal cord injury rehabilitation and outcomes literature. Therefore, additional checklists were created.9 The reviewers evaluated internal validity, subject selection, randomisation, confounding, outcome assessment instruments, and other relevant aspects of each study, leading to an overall assessment of study quality as very strong (++), strong (+), or weak (−).

The studies were rated according to the following:

  1. Systematic review (or meta-analysis) of RCTs.

  2. RCT.

  3. Systematic review (or meta-analysis) of observational studies (case–control, prospective cohort, and similar strong designs).

  4. Single observational study (case–control, prospective cohort, or similar strong designs).

  5. Case series, pre–post study, cross-sectional study, or similar design.

  6. Case study, nonsystematic review, or similar very weak design.

If a study was rated ‘++’, it was assigned the level corresponding to its basic design. If the rating was ‘+’, it was downgraded one level from its nominal rank, and with a rating of ‘−’ it was downgraded two levels.
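
To make the downgrading rule concrete, the logic just described can be sketched as below. This is an illustration of the rule as stated in the text, not code taken from the guideline, and the cap at level 6 is an assumption.

    # Illustrative sketch of the downgrading rule described above (not taken
    # from the guideline). Level 1 is the strongest design and level 6 the
    # weakest, so downgrading means moving to a higher level number.

    QUALITY_PENALTY = {"++": 0, "+": 1, "-": 2}  # '-' stands for the weak (−) rating

    def assigned_level(design_level, quality_rating):
        """Adjust the level implied by the basic design for the quality rating."""
        level = design_level + QUALITY_PENALTY[quality_rating]
        return min(level, 6)  # assumption: the scale stops at level 6

    # Example: an RCT (design level 2) rated '+' is assigned level 3,
    # and the same design rated '-' is assigned level 4.
    print(assigned_level(2, "+"), assigned_level(2, "-"))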

In addition, the evaluation of the entire body of scientific evidence focused on the quality, quantity, and consistency of the evidence,11 and the strengths of recommendation were given as:9, 12

Level A: Very strong support for recommendation.

Level B: Strong support for recommendation.

Level C: Intermediate support for recommendation.

Level D: Weak support for recommendation.

Finally, each recommendation was given a ‘strength of panel opinion’ rating on a scale from 1 to 5, where 1 reflected disagreement and 5 strong agreement.9

Limitations regarding the evidence

Gupta13 reviewed various limitations in the evidence itself. To begin with, the design and statistical analyses of studies may be flawed or insufficient, making the reliability of the results questionable. There is also the important issue of research funding: a potential bias is obvious if a study is financed by, for example, a pharmaceutical company. Another common bias relates to what can be, and is, published; significant results are more likely to be published than nonsignificant results. Further, there is a tendency to do research in areas we know how to investigate, which to a large extent excludes practical experience and information from intuition. These biases may to a certain degree systematically affect the final evidence and skew it in favour of experimental and commercially profitable interventions.

Likewise, Zou et al14 summarise possible biases that may eventually lead to erroneous conclusions in meta-analyses. Besides the publication bias mentioned above, there are language and citation biases, because among published studies those with significant results are more likely to be published in English and to be cited. They also describe a database bias, that is, in less developed countries studies with significant results may be more likely to appear in a journal indexed in a literature database. Finally, the criteria for inclusion of studies in a meta-analysis may be biased by knowledge of the results of potential studies.

In a paper by Rycroft-Malone et al,15 the nature of evidence is debated and the use of a broader evidence base in the implementation of patient-centred care is argued for. They claim that the practice of effective nursing can be achieved only by using evidence from research, clinical experience, patient experience, and information from the local context. This requires that the external, scientific and the internal, intuitive are brought together. Along the same lines, Buetow and Kenealy16 suggest that EBM should acknowledge multiple dimensions of evidence, including scientific, theoretic, practical, expert, juridical, and ethics-based evidence, and accommodate evidence produced both within and outside science.

Limitations to EBM

The criticisms of EBM have been thoroughly discussed and summarised by Cohen et al2 under five main themes:

  • EBM is based on empiricism, misunderstands or misrepresents the philosophy of science, and is a poor philosophic basis for medicine.

  • The EBM definition of evidence is narrow and excludes information important to clinicians (see above).

  • EBM is not evidence based, that is, it does not meet its own empirical tests for efficacy.

  • The usefulness of applying EBM to individual patients is limited.

  • EBM threatens the autonomy of the doctor/patient relationship.

According to Kohatsu et al,8 the major disadvantage of EBM is that it de-emphasises patient values, perspectives, and choices, and fails to account for individual social and biological variation. In addition, clinical judgement may be devalued by EBM guidelines. On the other hand, advocates of EBM emphasise that clinicians always have to use their experience and expertise! One response to this is that there is nothing new in EBM, just a new name for how medicine has always been practised.2

Sackett et al3 themselves point towards limitations universal to science and medicine: ‘the shortage of coherent, consistent scientific evidence; difficulties in applying any evidence to the care of individual patients; barriers to any practice of high-quality medicine’. They also agree that limitations specific to the practice of EBM exist: the need to learn to search for and critically appraise the scientific evidence; the limited time busy clinicians have to do this; and the often inadequate instant access to the necessary sources. Evidence that EBM actually ‘works’ is limited.8

Sackett et al3 do, on the other hand, find that criticisms of EBM such as that it ‘denigrates clinical expertise, is limited to clinical research, ignores patients’ values and preferences, or promotes a cookbook approach to medicine’ are pseudo-limitations, which can be agreed with when reading what EBM is (see above).

Wente et al17 review the specific challenges for surgeons in relation to EBM. They find it imperative that surgeons realise that RCTs are applicable to the operative specialties on a large scale. They stress that if no sham operation is performed, investigators should be aware of bias through the placebo effect inherent in all surgical procedures. They point out that it remains difficult to standardise the surgical procedures tested, because these evolve continuously and the frequency of complications decreases with learning and experience. In addition, surgeons vary in surgical skill and experience. In surgical trials, blinding of patients and surgeons may be more difficult, and restrictive exclusion criteria can invalidate the study in relation to the typical patient population. Finally, they find that financial support for surgical clinical studies is limited.

Considering the increasing expenditure in health systems, EBM can be used to restrict economic resources. Health system administrators may decline to support non-evidence-based interventions on ethical grounds, as the cost would not constitute a just allocation of resources given the demands for health services. This may imply a lack of flexibility for the individual health care professional, not least in the light of the incomplete state of evidence for the majority of health problems. Further, is it ethical to deny a patient an intervention because it does not carry EBM approval, in particular if no evidence-based treatment is available?13

Therefore, EBM may have unwanted effects on the access to health services and may increase the influence of public as well as private interests at the expense of patient interests.

EBM in treatment and rehabilitation of SCI

Obtaining the evidence is the first major hurdle for all busy clinicians.

For subspecialists in SCI there are no secondary evidence-based summary publications. One may build a current awareness service, for example by having the title pages of relevant journals sent from services such as Current Contents, MEDLINE, and Silverplatter. However, the electronic media are for most clinicians more accessible, better indexed, and in particular more up to date, and at the same time they offer unlimited linkages to related information.3

MEDLINE is the most sensitive source of evidence, owing to its comprehensiveness and up-to-date maintenance, and the best preappraised evidence sources are the Cochrane Library and Best Evidence. New services are constantly being developed, and it is possible to keep updated via http://www.library.utoronto.ca/medicine/ebm/, where they will be cited as soon as they appear.3

For individuals working within the field of SCI treatment and rehabilitation there are some Cochrane reviews of interest, and those most obvious are shown in Table 2.

Table 2 Cochrane reviews which may be of particular interest for individuals working within the field of SCI treatment and rehabilitation

Guidelines should provide a summary of the evidence for the particular field and instructions on how to use the evidence for the sake of the patients. According to Sackett et al,3 for a guideline to be judged valid it should include a comprehensive, reproducible literature review carried out within the past 12 months, and each of its recommendations should be tagged with the level of evidence upon which it is based and linked to a specific citation.

The Consortium for Spinal Cord Medicine has over the years produced several CPGs, which can be downloaded from www.pva.org (under publications) for a certain fee. Some of these have also been published. Other guidelines can be found through other web sites (see Table 3).

Table 3 Some useful web sites

Textbooks are, as mentioned, soon outdated, but they may provide integrated information from a wide variety of sources on a specific subject and may therefore still be of value for a busy, time-challenged clinician.3

Limitations and dangers related to EBM in SCI treatment and rehabilitation

In SCI treatment and rehabilitation it is clear that many studies, for example on physiotherapy, specific rehabilitation techniques, treatment or rehabilitation devices, as well as within the surgical field, may be difficult to carry out in practice. Secondly, it can be difficult to obtain financial support for such studies. One may therefore fear that many relevant possibilities for the treatment and rehabilitation of individuals with SCI are not investigated at all and are thus missed by those who could have benefited from them. Rehabilitation technology is also expensive, and there is a great risk that private companies will not find it attractive to support this kind of development because the investment may not pay off. An example is all the frustrating problems related to the FreeHand System. It is therefore necessary that government institutions and agencies accept this responsibility to a greater extent in the future.

There is yet another risk: that our administrators conclude that we can only introduce new treatments if they have the highest level of evidence. This would mean that many treatments have little chance of ever coming into use. Such a development would particularly favour the large pharmaceutical companies and their products, as they have the resources and the means to conduct the necessary large RCTs.

In the present climate of EBM it is also important for everyone working clinically with individuals with SCI to be aware of the necessity of demonstrating clinical effectiveness by regular audit. Such audits should include the use of valid and reproducible outcome measures in relevant areas of SCI treatment and rehabilitation. The use of checklists may improve the quality of the information collected on the patients, and electronic patient records with structured data may further increase the possibilities for evaluating treatment and rehabilitation.
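
As a hedged illustration of what ‘structured data’ for such an audit might look like, a minimal sketch is given below. The record fields, scores, and the aggregation shown are hypothetical examples for illustration only, not a proposed standard or an existing system.

    # Hypothetical sketch of structured outcome records for clinical audit;
    # field names and scores are illustrative only.
    from dataclasses import dataclass
    from datetime import date
    from statistics import mean

    @dataclass
    class OutcomeRecord:
        patient_id: str
        assessment_date: date
        measure: str        # a valid, reproducible outcome measure used locally
        score: float

    records = [
        OutcomeRecord("A1", date(2005, 1, 10), "function_admission", 42.0),
        OutcomeRecord("A1", date(2005, 4, 20), "function_discharge", 71.0),
        OutcomeRecord("B2", date(2005, 2, 5), "function_admission", 35.0),
        OutcomeRecord("B2", date(2005, 5, 15), "function_discharge", 66.0),
    ]

    # A simple audit question: mean score at admission versus discharge.
    def mean_score(measure):
        return mean(r.score for r in records if r.measure == measure)

    print(mean_score("function_admission"), mean_score("function_discharge"))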

In the search for evidence and effectiveness it is important that we do not overlook another kind of audit, that is, the reporting of errors. Today, errors may be reported as case stories in journals, but the process of writing a complete article with background, references, and so on may be too demanding for many busy clinicians. A possibility could therefore be for journals to have a section for reporting errors in a short format that does not require much of those who report the case. To make it possible for all of us to learn from errors, it ought to be a professional obligation to report them.

Summary

Biller-Andorno et al18 conclude: ‘Our moral responsibility requires us to use the positive potential of EBM for fairness and quality in health care as well as to reflect critically on its limits’. In practice, health care professionals generally agree that they should strive to use evidence-based interventions. However, we may also agree with Cohen et al2 that the important issue is to find the best way to incorporate evidence into the multifaceted clinical decision-making process. There is no doubt that EBM is an important factor in a very complex context. We should always remember to incorporate the individual patient's values and circumstances, and current evidence must not restrict a patient's access to health care.2

Guidelines based on the best possible knowledge are very valuable, in particular when appropriately updated, not least for those new to the field.