Introduction

The annual number of medical research citations has more than doubled since 2000, reaching nearly one million citations in 2017 [1]. This increase is to be expected considering that global investment in biomedical research reached US $240 billion in 2010 [2]. In recent years, however, the United States has seen stagnation in biomedical research funding, particularly from the National Institutes of Health (NIH) [3, 4]. Given the trajectory of biomedical research funding and the drastic increase in publication output, minimizing research waste should be a top priority.

Research waste has become far too common; one study suggests that as much as 85% of research could be considered wasteful [5]. Chalmers and Glasziou define research waste in four general categories [5,6,7]: research that has little clinical significance, research that lacks a quality design, research that never reaches publication, and research presented in biased or unusable reports.

For the purposes of this study, research waste is defined as a lack of quality design, specifically conducting clinical research for which evidence supporting the hypothesis already exists. For example, one study found that over 50% of the research studies analyzed did not consult a systematic review before the study was designed [5]. Systematic reviews are the gold standard in evidence-based medicine. They aim to compile, summarize, and critically analyze the current literature on a specific topic to enable physicians and researchers to make informed decisions [8]. The American Academy of Ophthalmology states that guidelines underpinned by systematic reviews are considered “level 1 evidence” [9]. In this way, systematic reviews are used as a basis to shape clinical care. For example, antioxidant vitamin and mineral supplements were being marketed for improvements in age-related macular degeneration; however, a recent systematic review concluded that these supplements provided no such improvement [10].

Systematic reviews should be consulted not only at the time of trial design, but also when presenting individual randomized controlled trial (RCT) data in the context of the current literature. However, numerous studies have shown that many researchers are not consulting systematic reviews at all [8]. For example, when Rosenthal et al. analyzed systematic review citations in surgical trials, they found that 65% of trials cited a systematic review somewhere in the manuscript, 16% of which cited one in the introduction, yet none of these studies specifically used a systematic review as justification for conducting the trial [11].

Ophthalmology and optometry research is not immune to research waste. A major contributor to research waste within ophthalmology is a lack of standardization of RCT primary outcomes. For example, one study found extensive diversity among the primary outcomes of registered uveitis trials [12]. However, the Core Outcome Measures in Effectiveness Trials (COMET) initiative has been a leader in standardizing outcome measures in ophthalmology, particularly in glaucoma trials [13]. Given these areas of concern regarding research waste within ophthalmology and the lack of systematic review use to inform trial design in other areas of medicine, more studies are needed to uncover how trial design may influence research waste within ophthalmology and optometry. The aim of this study is to evaluate the use of systematic reviews to justify conducting RCTs in top ophthalmology and optometry journals.

Methods

One of us (TT) searched PubMed on December 5, 2018 for RCTs published in one of the top five Google Scholar h-5 index journals within ophthalmology and optometry. These journals were Ophthalmology, Investigative Ophthalmology & Visual Science, American Journal of Ophthalmology, JAMA Ophthalmology, and Retina. We used the Cochrane highly sensitive search strategy for RCTs: “(((((((((randomized controlled trial[Publication Type]) OR controlled clinical trial[Publication Type]) OR randomized [tiab]) OR placebo [tiab]) OR drug therapy [sh]) OR randomly [tiab]) OR trial [tiab]) OR groups [tiab]) AND (“2016/01/01”[PDat]: “2018/12/05”[PDat]) AND Humans[Mesh])) AND ((((((“Retina (Philadelphia, Pa.)”[Journal]) OR “American journal of ophthalmology”[Journal]) OR “Ophthalmology”[Journal]) OR “Investigative ophthalmology & visual science”[Journal]) OR “JAMA ophthalmology”[Journal]) AND (“2016/01/01”[PDat]: “2018/12/05”[PDat]) AND Humans[Mesh])”. We used Rayyan (https://rayyan.qcri.org) to screen the records and excluded any trials that were not phase III RCTs.
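For illustration, the sketch below shows how such a PubMed query could be run programmatically with Biopython's Entrez module. This is a minimal sketch under stated assumptions: Biopython is installed, the email address is a placeholder required by NCBI, and the query string is a lightly reformatted rendering of the published search terms. Screening in Rayyan remains a manual step.

```python
# Minimal sketch of running the PubMed search programmatically with Biopython.
# The email address is a placeholder; NCBI requires a contact address.
from Bio import Entrez

Entrez.email = "researcher@example.org"  # placeholder

# Cochrane highly sensitive search strategy terms, restricted to the five journals,
# the date window, and human studies (lightly reformatted for readability).
query = (
    '(randomized controlled trial[Publication Type] OR controlled clinical trial[Publication Type] '
    'OR randomized[tiab] OR placebo[tiab] OR drug therapy[sh] OR randomly[tiab] '
    'OR trial[tiab] OR groups[tiab]) '
    'AND ("Retina (Philadelphia, Pa.)"[Journal] OR "American journal of ophthalmology"[Journal] '
    'OR "Ophthalmology"[Journal] OR "Investigative ophthalmology & visual science"[Journal] '
    'OR "JAMA ophthalmology"[Journal]) '
    'AND ("2016/01/01"[PDat] : "2018/12/05"[PDat]) AND Humans[Mesh]'
)

handle = Entrez.esearch(db="pubmed", term=query, retmax=2000)
record = Entrez.read(handle)
handle.close()

pmids = record["IdList"]
print(f"{record['Count']} records retrieved; first PMIDs: {pmids[:5]}")
# The resulting PMIDs would then be exported and screened manually in Rayyan.
```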

We used a pilot-tested Google Form for all extractions. We searched each RCT for the number of systematic reviews cited in the introduction, methods, and discussion. Systematic reviews and meta-analyses were considered synonymous for the purposes of this study. Each systematic review was then given the designation of “used verbatim as justification for conducting RCT,” “inferred use as justification for conducting RCT,” or “not used as justification for conducting RCT,” based on the context in which the systematic review was cited. The rationale for each designation was recorded for later resolution of discrepancies. We also extracted trial characteristics, consisting of funding source, type of intervention, number of trial centers, location of study, type of trial, and efficacy of results. Finally, for each RCT that cited a systematic review as justification for conducting the trial (verbatim or inferred), we extracted the total number of included studies and the patient population of each cited systematic review. Two of us (BJ, SE) completed the extraction in a blinded, duplicate fashion. All discrepancies were resolved by consensus. This study does not meet the regulatory definition of human subjects research and is therefore excluded from IRB oversight.
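As an illustration of the extraction schema, the sketch below models one record from the pilot-tested form as a Python data structure. The field names and category labels are hypothetical stand-ins, not the form's actual wording.

```python
# Hypothetical sketch of one extraction record from the pilot-tested form.
# Field names and category labels are illustrative only.
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class Justification(Enum):
    VERBATIM = "used verbatim as justification for conducting RCT"
    INFERRED = "inferred use as justification for conducting RCT"
    NOT_USED = "not used as justification for conducting RCT"

@dataclass
class SystematicReviewCitation:
    section: str                 # "introduction", "methods", or "discussion"
    designation: Justification
    rationale: str               # recorded for later resolution of discrepancies

@dataclass
class TrialExtraction:
    pmid: str
    journal: str
    funding_source: str
    intervention_type: str
    n_centers: int
    location: str
    trial_type: str
    efficacious: bool
    citations: List[SystematicReviewCitation] = field(default_factory=list)

    def justified(self) -> bool:
        """True if any cited systematic review served as verbatim or inferred justification."""
        return any(c.designation in (Justification.VERBATIM, Justification.INFERRED)
                   for c in self.citations)
```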

The data were recorded in an Excel spreadsheet and sorted by journal, manuscript section, justification designation (verbatim, inferred, or not used as justification), and trial characteristics. Using Stata version 15.1 (StataCorp), we ran a logistic regression relating trial characteristics to whether a systematic review was used as justification for conducting the trial (verbatim and inferred designations were both counted as “justified”).
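Although the published analysis was run in Stata 15.1, the sketch below shows an equivalent logistic regression in Python with statsmodels, assuming the extracted data have been exported to a CSV; the file path and column names are hypothetical placeholders for the extracted trial characteristics.

```python
# Equivalent logistic regression sketch in Python/statsmodels
# (the published analysis used Stata 15.1; column names here are hypothetical).
import pandas as pd
import statsmodels.formula.api as smf

trials = pd.read_csv("trial_extractions.csv")  # placeholder path

# Outcome: 1 if a systematic review was cited as justification (verbatim or inferred), else 0.
model = smf.logit(
    "justified ~ C(funding_source) + C(intervention_type) + C(trial_type) "
    "+ C(location) + multicenter + efficacious",
    data=trials,
).fit()

print(model.summary())  # odds ratios can be obtained by exponentiating model.params
```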

Results

The search retrieved 1667 records, of which 1515 were excluded because they were either inaccessible or not phase III RCTs. Three hundred fifty-nine records were accessible and screened for inclusion, and 152 phase III RCTs were included in the analysis. The most common journals in the sample were Ophthalmology (60 of 152) and American Journal of Ophthalmology (46 of 152). The majority of trials were conducted in a single country (88 of 152) and were multicenter trials (87 of 152). A drug was the most common type of intervention (73 of 152). Trial characteristics are reported in full in Table 1.

Table 1 Logistic regression analysis of trial characteristics and systematic review citations as justification of trial conduction (a).

After analysis, we found that 22.4% (34 of 152) of phase III ophthalmology clinical trials cited a systematic review as justification for conducting the trial (Table 2). Fourteen trials cited at least one systematic review as verbatim justification for conducting the trial, and 28 trials cited at least one systematic review as inferred justification. Some trials received the verbatim designation in the introduction and the inferred designation in the discussion (or vice versa) and were therefore counted once in each category. Characteristics of the systematic reviews used to justify (verbatim or inferred) RCTs are included in Table 3. Thirty-four RCTs cited systematic reviews that were not used as justification for conducting the trial.

Table 2 Analysis of trial characteristics and systematic review citations per journal and sections - n (%).
Table 3 Characteristics of the systematic reviews used to justify (verbatim or inferred) RCTs.

A total of 102 systematic reviews were cited in the 152 RCTs. Fifty-seven of the 152 included RCTs (37.5%) cited at least one systematic review somewhere in the manuscript. The logistic regression did not yield any statistically significant results.

Discussion

Less than one-quarter of phase III RCTs cited a systematic review as justification for conducting the RCT. We also found that only 37.5% of phase III RCTs cited a systematic review anywhere in the manuscript. This finding aligns with a similar study in anesthesiology, which reported that less than one-fifth of the analyzed studies cited a systematic review as justification for the RCT and that only 44% referenced at least one systematic review in the manuscript [8]. Another study, in top general medicine journals, substantiates these findings, reporting a lack of improvement in authors providing the evidence necessary to justify conducting an RCT [14].

In 2014, a study concluded that if researchers had evaluated systematic reviews for their respective research questions, many of the studies would not have been conducted and a significant number of adverse outcomes could have been avoided [2]. To mitigate this issue, we recommend that trialists be required to conduct a thorough literature search, including systematic reviews and meta-analyses, when applying for funding. For example, the National Institute for Health Research in the United Kingdom requires all grant applications to detail the existing evidence for the proposed project [15].

Taking this a step further, journals should consider requiring proof at submission that a formal literature review was completed prior to conducting the trial. We anticipate that this step would lead to fewer studies being conducted where evidence for the proposed intervention already exists, as well as eliminate trials that would add little value to the current literature. Fewer unnecessary studies would, in turn, minimize the risk and potential harm incurred by patients enrolled in RCTs. Eliminating research waste is not only a researcher’s scientific responsibility but also an ethical one. Conducting a clinical trial without proper justification through a thorough evaluation of the literature, i.e., systematic reviews, raises ethical questions, as such an unjustified trial may subject participants to unwarranted risks and complications [16].

We feel our methodology is robust and ensures reliability through a blinded, duplicate extraction process. However, we acknowledge a few limitations. First, our findings are limited to the top five ophthalmology journals and may not be generalizable across the entirety of the ophthalmology literature. Second, there is a subjective component to determining whether a systematic review was cited as verbatim justification, inferred justification, or not as justification for the RCT, which may affect the results. Finally, we recognize that reproducibility is important in science and that corroborating a result is paramount for reliable data; thus, replication studies should be labeled as such and not be considered research waste.

Our findings suggest an oversight on the part of some ophthalmology researchers in the evaluation of systematic reviews prior to designing an RCT. We believe that placing a higher priority on justifying RCTs with systematic reviews would go a long way toward minimizing research waste within ophthalmology and optometry.

Summary

What was known before

  • Research waste has become far too common; one study suggests that as much as 85% of research could be considered wasteful. One study found that over 50% of the research studies analyzed did not consult a systematic review before the study was designed. A major contributor to research waste within ophthalmology is a lack of standardization of RCT primary outcomes. For example, one study found extensive diversity among the primary outcomes of registered uveitis trials.

What this study adds

  • Less than one-quarter of phase III RCTs cited a systematic review as justification for conducting the RCT. This aligns with a similar study in anesthesiology. To mitigate this issue, we recommend that trialists be required to conduct a thorough literature search, including systematic reviews and meta-analyses, when applying for funding. We anticipate that this step would lead to fewer studies being conducted where evidence for the proposed intervention already exists, as well as eliminate trials that would add little value to the current literature. Fewer unnecessary studies would, in turn, minimize the risk and potential harm incurred by patients enrolled in RCTs.