Publication bias

Bias can exist at all stages of the research process, but an important and often forgotten bias concerns whether research material becomes fully published in a readily accessible medium such as a journal.

Publication bias has been defined as the tendency on the part of investigators to submit, or of reviewers and editors to accept, manuscripts based on the direction or strength of the study findings.1 This definition focuses on the fact that the strongest and most positive studies are the most likely to be published. However, a broader and arguably better definition of publication bias is any influence that reduces the amount of good science appearing in the literature.2

Publication bias in healthcare research

Early work on publication bias began in the fields of education and psychology.1 Further research on the existence, causes and possible effects of publication bias in medicine followed in the 1980s,3 but, to date, there has been little research into publication bias within the dental literature.

The most common method of assessing publication bias is to review the publication rate of research originally presented at speciality conferences and meetings. Although this provides baseline knowledge of publication rates, it is likely to underestimate the publication bias affecting all the research conducted in a particular speciality, as the number of conferences and presenters is small compared with the overall number of active researchers.

Previous assessments of the publication rate for abstracts presented as summary reports at conferences range from around 20% to 50% (Table 1).4,5,6,7,8,9

Table 1

Only a single study looking at dentistry has been identified. Corry4 investigated a 10% random sample of IADR abstracts from 1983 and 1984 and found that the proportion of abstracts that proceeded to full publication (21.6% and 24.2% respectively) was substantially lower than that quoted for other health disciplines.

Time lag bias

There is often an interval between presentation of research findings at a scientific meeting and publication of a full report. This may be due to final collation of results, delay in writing up, time taken for peer review and correction of reports, or simply a journal's publication backlog. If such a time lag becomes 'excessive' it can produce another form of publication bias by delaying the appearance of relevant results and depleting the pool of available current research findings. It may also skew the apparent significance of results, as trials with positive findings are likely to be published before those with negative findings. Most research is published within 3 to 4 years of presentation at a conference, although there is variation across medical specialities (Table 2).4,5,6,8,10,11 Studies published late risk being obsolete because they may have been superseded by more recent publications. But what should now be considered too late? From the published literature, an acceptable time limit appears to be 30 months, after which a research publication could be considered delayed.4,5 However, not all areas of research will be equally susceptible to time lag bias, so editors and referees of journals should assess the usefulness of 'late research'. In turn, it is their duty to be aware of this bias, to promote early writing up, and to maintain an optimum knowledge base of their subject matter.

Table 2

Publication language bias

Language can also prove a barrier to the publication and collation of salient research findings. Nylenna et al.12 investigated the effect of varying the language of two fictional papers on their assessment by referees. One of the papers was assessed less favourably when presented in the referee's national language than when presented in English. This suggests that papers submitted in English may be published more frequently than those submitted in other languages.

Egger et al.13 looked at pairs of randomised controlled trial reports published in German and English. No differences were found between the German- and English-language papers with regard to design characteristics or quality of reporting. However, a greater proportion of the articles published in English (62%) reported significant differences in outcome than those published in German (35%). If only reports written in English are included in reviews and meta-analyses, the inferences could be skewed by the predominance of positive results. To investigate this potential bias, Gregoire et al.14 replicated 28 meta-analyses that had been restricted to papers published in English and found that the conclusion of at least one meta-analysis would have been different if even one excluded German or Swiss paper had been included.

Moher et al.15 investigated the justification for excluding trials published in languages other than English when conducting systematic reviews. No difference in completeness of reporting was found between papers published in English and those published in French, German, Italian or Spanish. They recommended that all trial reports, irrespective of language, should be included in systematic reviews to increase precision and reduce systematic error in future reviews.

Why is publication bias important?

Development of a biased pool of evidence

If current research is not published, or its publication is delayed, a biased pool of evidence can develop. Without access to all the information on a particular subject, there is the potential to find only the most positive results, which are readily available in journals. Analysis of this evidence could then yield biased conclusions in systematic reviews and bias the point estimate of effect in meta-analyses. For example, in 1986 Simes3 looked at research into the effect of combination chemotherapy versus a single agent in the treatment of advanced ovarian cancer. An analysis of unregistered published trials favoured combination therapy. However, a subsequent analysis of all registered trials, which included both published and unpublished data, revealed no benefit to combination therapy.
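The mechanism behind such skewed pooled estimates can be illustrated with a simple simulation, sketched below in Python. All numbers here are invented for demonstration and are not drawn from the studies cited above: many small trials of a treatment with no true effect are simulated, and a crude 'publication filter' retains only those with a clearly positive observed effect. Pooling only the 'published' trials then overstates the effect.

```python
# Illustrative sketch (hypothetical numbers): how selective publication
# of positive trials biases a pooled effect estimate.
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.0   # the treatment actually does nothing
N_TRIALS = 1000     # number of simulated trials
N_PER_TRIAL = 30    # small trials, as is common in the literature

def run_trial():
    """Return the observed mean effect of one small trial."""
    samples = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N_PER_TRIAL)]
    return statistics.mean(samples)

effects = [run_trial() for _ in range(N_TRIALS)]

# Crude 'publication filter': only trials whose observed effect is
# clearly positive make it into journals.
published = [e for e in effects if e > 0.2]

pooled_all = statistics.mean(effects)          # close to the true value, 0.0
pooled_published = statistics.mean(published)  # noticeably above 0.0

print(f"Pooled estimate, all trials:     {pooled_all:+.3f}")
print(f"Pooled estimate, published only: {pooled_published:+.3f}")
```

This is, of course, a caricature of real editorial decisions, but it captures the core point of the Simes example: a meta-analysis of only the published subset can indicate a benefit where an analysis of all trials shows none.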

Complete assessment of the evidence

Without the process of peer review and our own interpretation of the findings we will be unable to assess where the results sit in the research hierarchy

Only when we have access to the full report do we have the potential to evaluate the research methodology and determine the usefulness of this research in relation to the whole body of evidence. Without the process of peer review and our own interpretation of the findings we will be unable to assess where the results sit in the research hierarchy, whether the statistics are correct, and whether the conclusions fit the results.

Publication as an ethical imperative

In any study, trust has been placed in the researchers to test, for example, a new procedure or material, in the faith that the results will be used to benefit others. Agencies that fund research and volunteers who consent to participate in investigations do so with the understanding that the work will contribute to scientific knowledge. This is particularly true for randomised controlled trials, where participants may potentially receive an inferior treatment. The failure to publish results, particularly when the work involves human or animal subjects, has therefore been deemed a form of scientific misconduct.16,17,18 There is also the risk of wasting resources on a research project that has already been capably carried out. Even with negative findings, publication of 'good' research can prevent duplication of effort or provide a baseline methodology that can be adapted or improved. Alternatively, the results can be used to calculate sample sizes for prospective studies or be included in systematic reviews. The value of well-conducted laboratory-based and retrospective research must not, therefore, be underestimated. Data from these studies can be used to develop protocols and undertake power calculations for future prospective studies using rigorous research methods.

Factors affecting publication bias

A number of reasons have been cited for publication bias, including:

  • Poor quality of research design

  • Small sample size

  • External funding

  • Negative findings

  • Failure of authors to submit manuscripts

  • Rejection of manuscripts by journal editors

Rejection of manuscripts by journals has been found to be less important than failure of researchers to actually submit their work for publication

Perhaps the most common misconception is that most work remains unpublished because of rejection by the chosen journals. However, investigators have found the contrary to be true: rejection of manuscripts by journals has been found to be less important than failure of researchers to submit their work for publication in the first place.17,19,20 Failure to submit manuscripts can be caused by a number of factors, including results perceived as unimportant, incomplete analysis, and investigators being too busy or having lost interest. This lack of enthusiasm to publish has been coined the 'file drawer problem'.21

Presentation of research at a conference or meeting is insufficient to allow proper appraisal of the scientific method. Information available from the abstracts of papers presented at conferences must be treated with caution when read in isolation. Soffer22 warned against the uncritical acceptance of reports from conferences because of the difficulty of establishing the true status of the research from a 200-word abstract designed to serve as a basis for acceptance. Conference abstracts contain only limited detail of the research process and, although frequently assessed prior to acceptance, do not undergo the detailed formal peer review that most papers experience when submitted for publication. Critical peer review of the content of conference abstracts is therefore lacking, and without structured guidelines inconsistencies have been shown to arise between the abstract and the later full version of the study report.23 It is therefore important that abstracts proceed to full publication, which provides access to detailed investigational methods. Only then can the research design, context and results be properly evaluated and their significance related to the overall body of available evidence.

Is there a problem in dental research?

Despite the growing collection of information in medical journals on publication bias and the factors affecting publication, there is still little information on how publication bias affects dentistry and its specialities.

A recent study24 found that 41.6% (252 out of 546) of abstracts presented at leading dental conferences proceeded to full publication, with a median time lag of 18 months (IQR 9–30 months). These figures are similar to the findings of other studies of publication bias in medical specialities.

There is a need for further work to determine the publication rates of dental research and to raise awareness of the pitfalls of ignoring publication bias within the literature. Such data would provide baseline information to allow continued prospective research into the reasons behind failure to publish, and permit comparisons with other fields of healthcare. Any significant degree of bias at this level may have important consequences for the availability of potentially salient research findings and could affect the way in which evidence-based dentistry is conducted.

Conclusion

Publication bias in medical specialities has been shown to influence the estimate of treatment effect. The level of publication bias and the time lag to publication appear to be much the same in dentistry as in medicine.

Dental researchers need to be aware of the potential problems that arise from publication bias and protect against it

Systematic reviews of dental interventions are therefore just as likely to be affected by publication bias as medical ones. Dental researchers need to be aware of the potential problems that arise from publication bias and protect against it.