
A well-performed meta-analysis can revive treatment options once considered ineffective or reveal the drawbacks of practices previously regarded as the gold standard. For example, initial reports suggested that the use of erythropoiesis-stimulating agents in patients with cancer could reduce the number of patients needing blood transfusion for anemia by as much as half1. But a recent meta-analysis including almost 14,000 patients from 53 trials demonstrated that this treatment in fact increases mortality by 17% in the period up to 28 days after the end of the active study phase2. Meta-analyses can thus play a crucial role in everyday clinical practice.

Unfortunately, not all meta-analyses draw on such a vast body of literature to offer insight. Some build on very scant information, and, as a result, conducting a meta-analysis has become an easy route to publication. Worse, some authors choose to write a meta-analysis during the early days of an interventional treatment, when the field has had little time to amass data. This runs counter to a primary goal of meta-analysis: to resolve questions on which the existing evidence is contradictory.

A simple sense of how far the field has deviated from the original concept of meta-analysis can be gained by scrutinizing PubMed for articles published in July 2012 in the Cochrane Database of Systematic Reviews, the largest registry of systematic reviews. Of the 61 systematic reviews published during this one month, 15% included one or zero trials, with the latter explicitly noting the lack of data needed to perform any analysis. In addition, half of the systematic reviews in the issue included fewer than 1,000 randomized patients. Furthermore, half of those published in the July issue were updates of reviews previously published between 2000 and 2012. Interestingly, 11 of these 31 updated reviews included the same number of trials and participants as the earlier review they sought to bring up to date.

This is a widespread issue that goes beyond any one journal or discipline. Over the last decade, the number of meta-analyses in the biomedical sciences has exploded. The number of reports in PubMed classified under the meta-analysis 'publication type' grew from 849 in 2000 to 4,720 in 2011, a more than fivefold increase. This explosion cannot be interpreted as substantial progress, because in many cases it reflects analyses that include few studies with limited numbers of participants, as well as updates of systematic reviews that add nothing new.

Several key steps are necessary to ensure that the flood of meta-analyses does not water down the quality of these reports. First and foremost, authors should be dissuaded from conducting meta-analyses based on a restricted number of trials that include only a limited number of participants. They must resist the urge to write up a meta-analysis merely to boost their own scientific impact metrics. It is exceptionally hard to set definitive benchmarks for how much data a systematic review should include, as the parameters vary among disciplines and interventions, but as a general starting point one might expect the report to analyze at least three or four trials with a minimum of 1,000 participants in total for common diseases in which large trials are feasible. There will, of course, be worthwhile reviews that do not meet these targets; the thresholds simply serve as a starting point for discussion.

Second, scientists setting out to conduct a meta-analysis early in the lifetime of a drug's development should check registries such as ClinicalTrials.gov to see whether any large studies will deliver data in the near future. If results from big trials are on the way, it is better to wait for that information before completing the meta-analysis. We also maintain that a meta-analysis should not be updated at all if there are no new data to add from more recent trials.

Even if there are a number of trials to analyze, authors need to think twice about going ahead with a meta-analysis if the trials themselves are small. As explained in the most-downloaded article in the history of the Public Library of Science, “Why most published research findings are false”3, by John Ioannidis of Stanford University School of Medicine, “a research finding is less likely to be true when the studies conducted in a field are smaller and when the effect sizes are smaller.” Meta-analyses built on such small trials rest on a rocky foundation. There are, of course, exceptional circumstances, such as the study of a treatment for a rare disease, in which patient numbers are inherently small and scientists have no choice but to work with limited data. In these cases, utmost attention should be given to the statistical methods applied and to their limitations.

For their part, medical editors and reviewers must also be called into action. Irrespective of a journal's impact, common and stricter criteria should be adopted for when and how meta-analyses are accepted. Increased citations and higher impact factors do not necessarily signify scientific merit. Instead, plausible scientific evidence, more likely to be replicated and to remain valid in the future, must become the priority and should guide decision-making when it comes to accepting a meta-analysis.

Finally, the most important step toward high-quality meta-analysis in medical research is coordinated action by the entire scientific community. Biomedical researchers should apply stricter criteria when deciding which meta-analyses to cite, paying closer attention to details such as the methodology and the number of trials and patients included. Only through such concerted action will the meta-analysis be preserved as an important decision-making tool for the benefit of patients and clinicians.