The Oxford English Dictionary defines research as “searching again or repeatedly.” In practice, this element of repetition has become a fundamental part of high-quality experimental science. Despite this standard, the published scientific record has recently come under fire for a significant number of reports that, in whole or in part, are not reproducible. The scientific community must act to change the attitudes and behaviors that contribute to this problem. Recognizing that journals have an essential role in this process, Nature Chemical Biology and the other Nature journals are adopting editorial measures to improve transparency in experimental design and data presentation in the articles that we publish.

Two reports have identified low reproducibility in published data as a major concern for translational research, one that contributes to the high costs and failure rates of drug development. In one report, Bayer scientists quantified the outcomes of in-house efforts to reproduce published data prior to launching a drug development program on a new target, reporting that only 20–25% of published data were consistent with their in-house findings (Nat. Rev. Drug Discov. 10, 712, 2011). Begley and Ellis discussed a similar analysis performed at Amgen, which yielded an even lower reproducibility rate of 11% (Nature 483, 531–533, 2012). The authors of both reports discussed key reasons for their observations, touching on the incorrect or inappropriate use of statistics and the limits of preclinical models, as well as on selective data presentation and poor study design.

Funding agencies are taking steps to address these concerns. In 2012, the US National Institute of Neurological Disorders and Stroke (NINDS) and the US National Cancer Institute (NCI) sponsored workshops on reporting standards for preclinical studies, with the aim of increasing the reproducibility of published data. The NINDS workshop emphasized improved reporting of methodological details, particularly in animal studies (Nature 490, 187–191, 2012), including disclosure of sample sizes, whether and how samples were randomized, and how the data were subsequently handled. The NCI workshop (http://cdp.cancer.gov/docs/checklist_draft_guidelines.docx) addressed a wider array of issues, including sample preparation and quality control; assay design, validation and reproducibility; and clinical trial design. Collectively, these workshops revealed a common need for improved study design and reporting.

At Nature Chemical Biology, we have established several procedures to ensure that we publish robust and well-documented studies, for example by requiring that full chemical characterization is available for all compounds used in a paper. Beginning in May, we and the other Nature journals will expand these practices to improve the quality of study design and subsequent data management in the articles we publish. The key elements of this initiative, discussed below, are outlined on the shared policy page for the Nature journals (http://www.nature.com/authors/policies/reporting.pdf).

We are removing length restrictions on methods sections so that authors can describe their experimental designs and methods in sufficient detail for readers to interpret and replicate them. As an aid to authors, the Nature journals have created a checklist that draws attention to several design elements that are critical for the interpretation of research results and that are often reported incompletely. For example, authors will need to describe methodological parameters that may introduce bias or influence robustness and to continue to provide precise characterization of key reagents, such as chemical compounds, cell lines and antibodies. We will require more precise descriptions of statistics and, at the editor's discretion, will engage statisticians as consultants on certain papers. The checklist also consolidates existing policies on data deposition and data presentation. Authors will receive a personalized checklist when a revision is requested after the first round of review, and referees will then be asked to comment specifically on the checklist points along with the other requested revisions.

These efforts are a small response to a larger challenge that will need to be tackled by the scientific community. It may be unrealistic to assume that simply raising awareness and calling on scientists to amend some of their habits will be sufficient to solve the problem. Looking forward, the community needs to determine the most productive strategy for sharing the responsibility for research reproducibility among scientists, publishers, funding organizations and research institutions.

All stakeholders must commit to raising the standards of research design, execution and reporting and to developing the mechanisms needed to support these standards. Agreement on standards is a critical first step. The reproducibility concerns outlined here identify common issues, but the community needs to have broader conversations about general standards across the life sciences and within emerging disciplines. Enforcement of higher standards presents a second challenge. Checklists and reminders during peer review can help ensure that published research has been examined from all angles, but deferring rigorous validation until a paper has been written is far too late in the process. Principal investigators and institutions need to ensure that studies are held to the highest standards from conception through the execution and reporting of experiments. Education is a key element of this approach: many scientists do not receive adequate training in statistics and other quantitative aspects of their subject, and mentoring of young scientists on matters of rigor and transparency is inconsistent at best. Funding agencies and research institutions have mandated training courses to address shared challenges in the past (for example, the ethics training required by the US National Institutes of Health, http://1.usa.gov/hm4yvF); comparable required courses in statistics and experimental design should be considered for all basic science and biomedical researchers.

The scientific community shares the responsibility for ensuring the robustness of published scientific studies. We hope that our initial effort, focused on reproducibility and the reporting of data in the life sciences, will translate into noticeable improvements in this area. We trust that our authors will grasp the significance of this step, we hope that other publishers will adopt similar initiatives, and we urge scientists, their funders and their institutions to continue their efforts to support high-quality, transparent scientific research. Ultimately, the public's trust in science is at stake, and scientists, research institutions, funders and publishers alike rely on that goodwill for their success.