COMMENT

A fresh approach to evidence synthesis

Systematic reviews have transformed medicine, but a more cost-effective means of appraisal is needed for fields in which data are sparse and patchy, argue William J. Sutherland and Claire F. R. Wordley.
William J. Sutherland is professor of conservation biology at the University of Cambridge, UK, and the founder and director of Conservation Evidence.

Claire F. R. Wordley is a postdoctoral research associate on the Conservation Evidence project at the University of Cambridge.

Storks in Malpartida de Cáceres, western Spain, nest on purpose-built poles in a conservation area. Credit: J. Policinski/Getty

In 1990, researchers conducted a systematic review of studies investigating the use of corticosteroids in women who were at risk of giving birth prematurely1. (The steroids were administered to reduce the chances of the women’s pre-term babies experiencing respiratory issues.) The results of the first of these clinical trials, published in 1972, had indicated that the treatment worked2. But it was not widely adopted, because of concerns about potential side effects and the quality of the evidence. Indeed, the effectiveness of corticosteroids was conclusively established only with the 1990 review. Tens of thousands of lives have probably been saved since then because of this intervention.

Among the countless methods regularly used to pull together evidence from different sources, only systematic reviews critically appraise findings in a comprehensive manner. Reviews such as the corticosteroid one, which can include meta-analysis (whereby data from multiple studies are pooled and analysed together), have transformed medicine. Moreover, the improvements to medical practice have inspired the use of systematic reviews in other fields, such as policing3.

Yet such reviews are enormously time-consuming and expensive. We think that in fields in which data are sparse or patchily distributed, or where studies vary greatly in design and generalizability — as is the case in biodiversity conservation, international development and education, for example — a different approach might often be more appropriate. In our view, given the expansion of available studies, it has never been more important to have a large-scale, cost-effective way of rigorously appraising information for applied fields.

To address this need, we have developed a method that we call subject-wide evidence synthesis. Here, we lay out how it works.

Searchable synopses

Provided enough studies exist, systematic reviews and meta-analyses are invaluable for providing clear answers to focused questions. But it is hard to conduct a meta-analysis if only a few studies exist, or when those that do exist use different methods or measure different variables. Furthermore, such analyses are labour-intensive and expensive: in medical fields, systematic reviews generally take about a year to conduct and can cost between US$30,000 and $300,000 each4.

Another approach, called systematic mapping, is more broad-brush. But this typically does not describe the findings of the research, and so cannot be used to answer questions about policy (see ‘Review or map?’).

Review or map?

In systematic reviews, investigators generally pose a focused question, such as: ‘Is surgery an effective treatment for knee osteoarthritis?’ They then write an a priori protocol paper to lay out their criteria for including studies, and explain how they will conduct their analysis before carrying out the review itself.

For the actual review, a PubMed or Scopus search for the terms ‘knee surgery’ or ‘knee osteoarthritis’, say, might give several thousand hits. By reading article titles and abstracts, researchers pare down their selection to those studies that meet the predetermined criteria, and then conduct a qualitative analysis or meta-analysis on those.

In systematic maps, search terms are used to address open-framed questions, such as how many studies have been conducted on a particular topic, or how those studies have been conducted. This technique, which also often involves the publication of a priori methods, is used in fields from education to sustainability. In particular, it can identify knowledge gaps and hot spots for research, and so indicate priorities for future efforts10,11.

The approach we’re advocating — subject-wide evidence synthesis — combines elements of systematic reviewing and mapping, along with other techniques, to provide an industrial-scale, cost-effective way to synthesize evidence. It is not intended to replace the use of systematic reviews, but it does provide a rigorous way to synthesize information when data are unevenly or thinly distributed, or highly variable in focus.

We have developed this approach in a project called Conservation Evidence (www.conservationevidence.com), which aims to assess the impact of conservation interventions for all species and habitats worldwide5.

After identifying broad subject areas such as bird conservation or reptile conservation, we established which interventions are likely to be relevant, and defined the criteria for including papers. For this, we drew on the expertise of advisory panels; for instance, 16 specialists in bird conservation helped us to draw up a list of 455 conservation interventions relevant to birds — from the use of model birds to lure species towards a safe location for nesting, to the use of signs and access restrictions to protect nesting birds from human disturbance.

Collating the required information involves manually searching every paper in every issue of the journals we deemed relevant. Many papers can be excluded from our database simply by reading their title. For others, it’s necessary to read the abstract, or even the entire paper, before deciding whether it should be extracted, tagged and stored.

For each intervention, a paragraph summarizes the key findings of all the studies that have been conducted; each study is summarized in an additional paragraph that provides information about the design, sample sizes and location. An expert assessment of the likely effectiveness of the intervention is also provided. All this information is collated in a ‘synopsis’; for bird conservation, the current print version of the synopsis is a 466-page book.

The idea is to provide users with intervention assessments that they can use to pursue both broad and narrow questions. For example, conservationists might ask, ‘How can we reduce fulmar by-catch at sea?’ They could look at the assessments of five interventions designed to reduce seabird by-catch, each of which has been tested on species including fulmars. Or they could look at the assessments of 22 interventions that aim to reduce seabird by-catch but that haven’t all been tested on fulmars specifically. Alternatively, a practitioner asking, ‘What can be done to conserve seabirds?’ might want to read about all 48 interventions pertaining to the conservation of seabirds.
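This kind of broad-and-narrow querying can be sketched as a set of tagged records that are filtered by threat, taxon or species. The schema, intervention names and tags below are purely illustrative, not the actual Conservation Evidence database:

```python
from dataclasses import dataclass, field

@dataclass
class Intervention:
    """One conservation intervention, tagged for retrieval (hypothetical schema)."""
    name: str
    threats: set = field(default_factory=set)    # e.g. {"by-catch"}
    taxa: set = field(default_factory=set)       # broad groups it targets
    tested_on: set = field(default_factory=set)  # species it has actually been tested on

interventions = [
    Intervention("streamer lines on longlines", {"by-catch"}, {"seabirds"}, {"fulmar", "albatross"}),
    Intervention("night setting of fishing lines", {"by-catch"}, {"seabirds"}, {"albatross"}),
    Intervention("signs and access restrictions", {"disturbance"}, {"seabirds"}, {"tern"}),
]

def narrow_query(species, threat):
    """Narrow question: interventions for this threat that were tested on this species."""
    return [i.name for i in interventions if threat in i.threats and species in i.tested_on]

def broad_query(taxon):
    """Broad question: every intervention pertaining to a taxon."""
    return [i.name for i in interventions if taxon in i.taxa]

print(narrow_query("fulmar", "by-catch"))  # ['streamer lines on longlines']
print(broad_query("seabirds"))             # all three interventions
```

The same records answer both the fulmar-specific question and the general seabird one, which is the point of tagging each study once at entry.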

Since 2004, our international team of researchers has searched every issue of nearly 250 journals for tests of conservation interventions. And we’ve conducted more than 1,700 assessments of interventions5 (see ‘Cost-effective’).

Source: Conservation Evidence

Sharpening the focus

For most conservation interventions, insufficient evidence exists for investigators to be able to conduct a meta-analysis for each species or genus. However, by being apprised of studies that examine how a particular intervention has worked for an order — for birds in general, say — practitioners can better weigh up the chances of success for their intended programme.

Because subject-wide evidence synthesis entails scanning all the papers from every issue of the journals selected, it can unearth unusual interventions that would not necessarily have been identified on the basis of predetermined criteria for paper inclusion. For example, in our searches, we came across a study in which researchers had added snakeskins to nest boxes to deter mammal predators6. This was not on our original list of interventions.

Moreover, in subject-wide evidence synthesis, all papers relevant to the broader discipline (in this case, biodiversity conservation) are extracted, tagged and stored when searching a journal. This means that when the next synopsis is written (on amphibian conservation, say), those producing it just need to add the specialist amphibian journals to the ‘bank’ of journals that have already been searched.
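The ‘bank’ of journals amounts to a simple set operation: a new synopsis triggers searches only of the journals not yet covered. A minimal sketch, with illustrative journal names:

```python
# Journals whose every issue has already been searched and tagged (illustrative names).
searched_bank = {"Bird Conservation International", "Biological Conservation", "Oryx"}

def journals_to_search(synopsis_journals, bank):
    """Return only the journals not yet in the bank, and add them to it."""
    new = set(synopsis_journals) - bank
    bank |= new
    return new

# For an amphibian synopsis, only the specialist title needs searching;
# the generalist journal is already in the bank.
amphibian_list = {"Herpetological Journal", "Biological Conservation"}
print(journals_to_search(amphibian_list, searched_bank))  # {'Herpetological Journal'}
```

Because the bank only ever grows, the marginal search effort for each new synopsis shrinks, which is where the cost-effectiveness of the approach comes from.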

Because the literature on an entire subject area has already been searched and summarized, focused topics can be investigated more nimbly. Also, subject-wide evidence syntheses can be easily updated, because the format for reporting results is standardized. Every new edition of each journal can be searched, a summary paragraph uploaded for each new paper, and the key messages concerning specific interventions changed to reflect the latest findings. For the Conservation Evidence project, the ideal would be to update every synopsis every second year. We are currently updating the first editions for birds and bats.

Lastly, subject-wide evidence synthesis can provide a starting point for different kinds of review. For instance, it identifies areas that are rich in evidence and thus suitable for systematic reviews.

Ultimately, subject-wide evidence synthesis should result in a resource that is concise, easy to navigate and comprehensible to non-scientists. In 2017, the Conservation Evidence website had 15,000–25,000 page views each month.

Because this method relies on searching selected journals rather than using search terms, some papers in obscure journals might be missed. At Conservation Evidence, we aim to continually expand the range of journals we search to reduce this risk. We are also trying to include relevant ‘grey’, or unpublished, literature in our searches, for instance by asking organizations such as Scottish Natural Heritage to share reports. Another concern is that evaluations of interventions might be biased, depending on the expertise of the assessors. We try to minimize this by using a large team of assessors and multiple anonymous rounds of scoring (the Delphi technique)7. And we report median rather than mean scores, because medians are less influenced by outliers.
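The preference for medians over means can be illustrated with a toy round of effectiveness scores containing a single outlier; the numbers are hypothetical:

```python
import statistics

# Hypothetical Delphi round on a 0-100 effectiveness scale: four assessors
# broadly agree, one gives an outlying score of 95.
scores = [40, 45, 50, 50, 95]

print(statistics.mean(scores))    # 56.0 -- pulled upwards by the outlier
print(statistics.median(scores))  # 50   -- unaffected by it
```

The median reflects the panel’s central judgement even when one assessor scores far from the rest, which is why robust summary statistics suit small expert panels.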

Collating data on such a scale is expensive; our bird-conservation synopsis, the result of searching 35 journals, has cost nearly £350,000 (US$467,000). But once the papers have been extracted, many synopses can be produced. Indeed, the costs of producing assessments of interventions and synopses decline over time as each investigator builds on the efforts of others (see ‘Cost-effective’). Overall, this approach is much more cost-effective than standard systematic reviews that rely on the use of search terms.

A multipurpose tool

As the scientific literature continues to grow, locating and collating new papers in evidence syntheses is becoming increasingly challenging8. Advances in artificial intelligence and machine learning could make it easier to perform tasks such as locating papers for defined topics using search terms, categorizing papers as relevant for further consideration, and producing systematic maps9. But for all fields, assessing the quality of individual studies, writing up summaries and so on will continue to require skilled humans, at least for the foreseeable future.

Because the cost-effectiveness of subject-wide evidence synthesis kicks in only when a large enough evidence bank has been developed, long-term funding to develop, sustain and update subject-wide evidence synthesis projects will be crucial.

So far, Conservation Evidence has been supported mainly by philanthropists, alongside research councils, industry and the UK government. Other projects might require core government funding, as has been provided to the UK What Works centres, which help to ensure that high-quality evidence shapes public-sector decision-making.

Our hope is that subject-wide evidence synthesis will prove as useful in disciplines such as international development as it seems to be in conservation. Ultimately, the proven usefulness of this approach in a range of fields will persuade practitioners that it is an indispensable part of the toolkit when it comes to collating knowledge to inform policy decisions10,11.

Nature 558, 364-366 (2018)

doi: 10.1038/d41586-018-05472-8

References

1. Crowley, P., Chalmers, I. & Keirse, M. J. BJOG Int. J. Obstet. Gynaecol. 97, 11–25 (1990).
2. Liggins, G. C. & Howie, R. N. Pediatrics 50, 515–525 (1972).
3. Sherman, L. W. Evidence-based Policing (Police Foundation, 1998).
4. Dicks, L. V., Walsh, J. C. & Sutherland, W. J. Trends Ecol. Evol. 29, 607–613 (2014).
5. Sutherland, W. J., Dicks, L. V., Ockendon, N., Petrovan, S. O. & Smith, R. K. (eds) What Works in Conservation (Open Book, 2018).
6. Medlin, E. C. & Risch, T. S. Condor 108, 963–965 (2006).
7. Mukherjee, N. et al. Methods Ecol. Evol. 6, 1097–1109 (2015).
8. Westgate, M. J. et al. Nature Ecol. Evol. 2, 588–590 (2018).
9. O’Mara-Eves, A., Thomas, J., McNaught, J., Miwa, M. & Ananiadou, S. Syst. Rev. 4, 5 (2015).
10. James, K. L., Randall, N. P. & Haddaway, N. R. Environ. Evid. 5, 7 (2016).
11. McKinnon, M. C. Nature 528, 185–187 (2015).
