What is publication bias and why is it important?

The late and distinguished Alvan Feinstein expressed concern about the proliferation of positive associations with the ‘menace[s] of everyday life’ in risk factor epidemiology.1 Publication bias is a long-standing problem in clinical research, in which studies that demonstrate a significant health benefit of a treatment are more likely to be published than those that do not.2,3 The problem was highlighted in the 1950s, when a survey of empirical studies published in leading psychology journals found that an astonishing 97% reported statistically significant results.4 Meta-analyses that include only published studies are therefore likely to be biased.

The impact of publication bias on the integrity of medical research and on the findings of meta-analyses has been widely discussed and evaluated.2,3,5,6,7,8 In an early example (1986), Simes compared the benefit of treating advanced ovarian cancer with an alkylating agent alone versus combination chemotherapy, conducting one meta-analysis of published trials only and another of all registered trials.2 The pooled median survival ratio for the published trials was 1.16 (95% CI 1.06 to 1.27), suggesting improved survival with combination therapy, whereas the pooled ratio for the registered trials was 1.06 (95% CI 0.97 to 1.15), a smaller and less optimistic improvement that was not statistically significant. The difference between the two pooled estimates was attributed to publication bias.2

Publication bias in medical research continues to be a problem today, as shown by a recent meta-analysis comparing the effectiveness of reboxetine with selective serotonin reuptake inhibitors (SSRIs) or placebo for treating major depression.8 The investigators found that 74% of the data had not been published and that ‘published data overestimated the benefit of reboxetine versus placebo by up to 115% and reboxetine versus SSRIs by up to 23%, and also underestimated harm’.8 Previous findings of reboxetine's effectiveness were reversed.

In dentistry, Scholey and Harrison demonstrated that more than 50% of studies presented at international dental health conferences remained unpublished five years after the conference, suggesting that a meta-analysis in a dental health-related area could be biased if a search for unpublished studies is not included.9 A review of five orthodontic journals showed that 88% of published studies reported statistically significant results,10 and a similar study of maxillofacial surgery journals reported a figure of 77%.11 As in other health care domains, publication bias is clearly a problem in dental research.9,10,11,12

The funnel plot as a tool to detect publication bias

The funnel plot is a commonly used graphical device for detecting publication bias in systematic reviews. Originally advocated by Light and Pillemer,13 it is a plot of each study's effect estimate against an estimate of that study's precision. Precision may be assessed in various ways, for example as a function of the standard error of the effect measure or simply as the sample size of each study in the review.13,14,15,16,17 Effect estimates include the risk ratio (relative risk), odds ratio, absolute risk difference and logarithmic transformations of these measures.
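To make the construction concrete, the following minimal sketch (in Python, using matplotlib) plots a handful of hypothetical study results, with the log odds ratio on the horizontal axis and precision (the inverse of the standard error) on the vertical axis. The study values and the pooled estimate shown are illustrative assumptions, not data from any real review.

```python
# A minimal sketch of constructing a funnel plot, assuming each study
# reports a log odds ratio and its standard error. All values below are
# hypothetical and for illustration only.
import matplotlib.pyplot as plt

# (log odds ratio, standard error) for each hypothetical study
studies = [
    (0.10, 0.05),   # large, precise study
    (0.15, 0.10),
    (0.05, 0.12),
    (0.30, 0.20),
    (0.45, 0.30),   # small, imprecise study
]

effects = [est for est, se in studies]
precision = [1.0 / se for est, se in studies]   # precision = 1 / standard error

plt.scatter(effects, precision)
plt.axvline(0.12, linestyle="--", label="pooled estimate (illustrative)")
plt.xlabel("Log odds ratio")
plt.ylabel("Precision (1 / standard error)")
plt.title("Funnel plot")
plt.legend()
plt.show()
```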

In the absence of publication bias, the points will be symmetrically distributed around the true effect in the shape of an inverted funnel (Figure 1).13,14,15,16,17 As the sample size of the studies in an unbiased meta-analysis increases, their effect estimates become more precise. The scatter of effect estimates around the true effect is therefore expected to be widest at the bottom of the plot, where the smaller studies are located, and to become progressively narrower as the studies increase in size towards the top of the plot. In principle, the pooled effect estimate should reflect the true effect. In the presence of publication bias the funnel plot will be asymmetric, with null or negative effect estimates from smaller studies missing from the plot and the pooled effect estimate diverging from the true effect (Figure 2).18

Figure 1: Symmetrical funnel plot

Figure 2: Asymmetric funnel plot
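The asymmetry described above can be illustrated with a small simulation. The sketch below is a toy model, not a description of any real literature: it generates studies scattered around an assumed true log odds ratio and then applies a crude ‘publish only if significant’ rule to the smaller studies. Plotting all studies gives a symmetric funnel; plotting only the ‘published’ studies gives an asymmetric one.

```python
# A sketch, under simplified assumptions, of how suppressing small
# non-significant studies produces funnel-plot asymmetry. The true effect,
# the spread of study precisions and the publication rule are all
# illustrative assumptions.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
true_log_or = 0.2                        # assumed true effect (log odds ratio)
se = rng.uniform(0.05, 0.5, size=300)    # a spread of study standard errors
estimates = rng.normal(true_log_or, se)  # sampling error around the true effect

# Crude publication rule: large studies are always published,
# small studies only if the result is 'significant'
z = estimates / se
published = (se < 0.15) | (z > 1.96)

fig, axes = plt.subplots(1, 2, sharex=True, sharey=True)
axes[0].scatter(estimates, 1 / se, s=10)
axes[0].set_title("All studies (symmetric)")
axes[1].scatter(estimates[published], 1 / se[published], s=10)
axes[1].set_title("Published only (asymmetric)")
for ax in axes:
    ax.axvline(true_log_or, linestyle="--")   # assumed true effect
    ax.set_xlabel("Log odds ratio")
axes[0].set_ylabel("Precision (1 / SE)")
plt.show()
```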

However, empirical evidence strongly suggests that visual inspection alone is not a reliable way to assess funnel plot asymmetry.19

Statistical tests to assess shape of funnel plot

Various statistical tests have therefore been developed to evaluate the shape of the funnel plot more objectively and to test for the presence of publication bias. Regression models are the most commonly used;20 they test for an association between each study's effect estimate and its precision, measured for example by the inverse of the standard error.21 Publication bias may be present if the fitted regression model suggests that the less precise (smaller) studies have larger effect estimates than the more precise (larger) studies.21 A weakness of these tests is their low statistical power, which means they sometimes fail to detect publication bias when it actually exists. The Cochrane Handbook for Systematic Reviews of Interventions, available online at http://www.cochrane-handbook.org/, provides a list of different regression models.20
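As one concrete example of this family of tests, the sketch below implements an Egger-style regression in Python with statsmodels: the standardized effect (estimate divided by its standard error) is regressed on precision (1 / standard error), and an intercept that differs markedly from zero is taken as evidence of small-study effects. The study values are hypothetical, and this is only one of the several regression approaches listed in the Cochrane Handbook.

```python
# A minimal sketch of an Egger-style regression test for funnel-plot
# asymmetry. The effect estimates and standard errors below are
# hypothetical; in practice they come from the review's extracted data.
import numpy as np
import statsmodels.api as sm

log_or = np.array([0.10, 0.15, 0.05, 0.30, 0.45, 0.60, 0.25])  # hypothetical
se     = np.array([0.05, 0.10, 0.12, 0.20, 0.30, 0.35, 0.15])  # hypothetical

z = log_or / se            # standardized effect estimates
precision = 1.0 / se       # precision = 1 / standard error

X = sm.add_constant(precision)   # intercept plus precision term
fit = sm.OLS(z, X).fit()

intercept, slope = fit.params
print(f"Egger intercept: {intercept:.2f} (p = {fit.pvalues[0]:.3f})")
# An intercept far from zero (small p-value) suggests small-study effects,
# which may reflect publication bias -- or heterogeneity or chance.
```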

The results of these regression models can vary, depending on the choice of effect measure and the choice of precision measure used to construct the funnel plot.15 This issue has been discussed in various contexts for many years15,16,19,22,23,24,25,26 and is beyond the scope of the present Toolbox article. The reader is advised to refer to the Cochrane Collaboration's latest recommendations, which are reviewed regularly and updated periodically.20

Publication bias as part of a more general phenomenon

It should be noted that publication bias is part of a more general type of bias:

Reporting biases arise when the dissemination of research findings is influenced by the nature and direction of results. Statistically significant, ‘positive’ results that indicate that an intervention works are more likely to be published, more likely to be published rapidly, more likely to be published in English, more likely to be published more than once, more likely to be published in high impact journals and, related to the last point, more likely to be cited by others.20

The Cochrane Handbook provides a detailed discussion and typology of reporting biases.20 Reporting bias (including publication bias), true heterogeneity and chance could all account for an asymmetric funnel plot.15,20,27 True heterogeneity refers to genuine variation in effect size according to the size of the study. This could happen if the characteristics of patients in small trials differ from those in large trials, eg small trials may enrol higher-risk patients than large trials, and effect size may depend on the underlying patient risk.27,28,29 Finally, the play of chance could also result in an asymmetric plot.

Implications for dental health research

As in other health care domains, publication bias is a serious problem in dentistry.9,10,11,12 Dental health researchers might quite understandably think of the funnel plot and its associated statistical tests as tools for detecting publication bias alone. However, a funnel plot can be asymmetric for a number of reasons, and the statistical tests used to assess reporting bias have their limitations. In an ideal world there would be universal access to trial data; in its absence, these tests serve as an aid to interpretation.30,31 Dental health researchers and practitioners should therefore assess funnel plots with the above caveats in mind when reading systematic reviews, and should seek advice from experienced systematic reviewers on the use of funnel plots and associated tests when conducting their own reviews.