The starting point for many drug discovery programs is a published report on a new drug target. Assessing the reliability of such papers requires a nuanced view of the process of scientific discovery and publication.
Papers trumpeting new drug targets are a staple of biomedical journals such as Nature Medicine, but their claims are viewed skeptically by many drug development professionals. This skepticism has only been reinforced by a recent attempt to put some numbers on the reproducibility of such reports in a real-world setting (Nat. Rev. Drug Discov. 10, 712, 2011). The authors, from Bayer Healthcare, surveyed their in-house scientists tasked with experimentally validating publications on new drug targets. In only 21% of the 67 cases examined were the results considered completely in line with the literature.
A reproducibility rate of 21% seems remarkably low, and the Bayer authors didn't find any obvious trends to explain this. Given the limited information provided in their report, it's difficult to make an independent evaluation of their findings, and a more fine-grained look at each of the cases might yield further insights. Notably, 70% of the cases examined involved oncology drug targets, so the situation might be different in other therapeutic areas.
Why should published research on new drug targets be so challenging to replicate? Although it doesn't seem plausible that outright fraud could account for a high fraction of the failure rate, less serious forms of bias—so-called 'questionable research practices', such as selective presentation of data—are likely a more pervasive problem (see PLoS ONE 4, e5738, 2009). Statistical errors made by authors, such as the use of insufficient sample sizes, may arise simply from ignorance but have the potential to undermine the conclusions of many papers. Even a basic statistical procedure—comparing the difference between two experimental effects—is widely botched (Nat. Neurosci. 14, 1105, 2011).
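The error described above, concluding that two effects differ because one is statistically significant and the other is not, can be made concrete with a small numerical sketch. The summary statistics below are hypothetical, chosen only to illustrate the fallacy; the correct procedure is to test the difference between the effects directly.

```python
import math

def z_test_p(effect, se):
    """Two-sided p-value for effect/se under a normal approximation."""
    z = abs(effect) / se
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0))))

# Hypothetical summary statistics for one treatment measured in two conditions.
effect_a, se_a = 1.0, 0.45   # condition A
effect_b, se_b = 0.9, 0.50   # condition B

p_a = z_test_p(effect_a, se_a)   # ~0.03: A is "significant"
p_b = z_test_p(effect_b, se_b)   # ~0.07: B is "not significant"

# The fallacy: inferring from (p_a < 0.05, p_b > 0.05) that A and B differ.
# The valid test compares the effects directly:
se_diff = math.sqrt(se_a**2 + se_b**2)
p_diff = z_test_p(effect_a - effect_b, se_diff)   # ~0.88: no evidence A != B

print(p_a, p_b, p_diff)
```

Although effect A crosses the conventional significance threshold and effect B does not, the two estimates are nearly identical, and the direct comparison gives no evidence of a difference between them.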
Journals, including Nature Medicine, obviously have a large stake in the reliability of what they publish. With work that breaks new ground, whether on new scientific concepts or drug targets, missteps are easy to make, and an effective peer review process is essential. Innovation and originality are not always close bedfellows with rigor and reliability, but they should at least be on speaking terms. Our editors and peer reviewers put an emphasis on mechanistic insight, which may help avoid publication of striking but ultimately irreproducible findings.
Peer review is not infallible. We strive to involve experts who will provide insightful and critical comments, but they are busy scientists in their own right and don't have unlimited time and energy to invest. More fundamentally, there is a tension between making sure the science is right—at the extreme, requiring that the results of a paper be experimentally reproduced in another lab before publication—and getting the results out to the scientific community. Indeed, high-profile journals have recently come under fire for asking too much of authors before allowing publication (Nature 472, 391, 2011). Clearly, it is the job of the editor to strike the right balance. It should also not be overlooked that journals are in competition with each other, and a more stringent review process could conceivably hurt the ability of a journal to attract the most innovative work.
A systematic problem in scientific publishing that is often seen as perpetuating erroneous findings is the difficulty in publishing negative results. A bevy of newly launched journals in which to publish such results may overcome this difficulty. The confidentiality of the work of industry scientists also contributes to the problem—indeed, we don't know which targets the Bayer scientists attempted to validate or how these attempts failed—and companies should seek ways to divulge as much information as possible on these sorts of early-stage preclinical investigations. More systematic efforts to analyze patterns in data reproducibility—for example, focusing on the reliability of specific classes of targets or experimental models—might also help. For our part, we are well aware of our responsibility to air data that call into question the reliability of our published papers, and we routinely do so, typically in our Correspondence section.
It would be naive to think that the first publication of a paper on a new target or technique is definitive, and independent validation of the results is part and parcel of the scientific method. Peer review and publication do not guarantee scientific 'truth', and both authors and journals should be careful not to oversell their findings. Best practices by scientists, peer reviewers and editors should be used at every step, and there certainly is a need for improvement over a 21% success rate in reproducing published results. But there is, no doubt, a lack of complete alignment between the process of scientific discovery and the desire of the drug industry for plug-and-play drug targets.