In an attempt to ensure high standards of transparency and reproducibility, Nature Plants is introducing a plant-specific reporting checklist for authors — and making it a requirement for all refereed papers.
Scientists are, by nature, sceptics. We do not like to accept anything as fact unless it can be tested repeatedly, at least in principle. Moreover, furthering scientific knowledge is not a static process but one of revision and reassessment fuelled by discovery. It is possible that when Sir Isaac Newton wrote “if I have seen further, it is by standing on the shoulders of giants” in a letter to Robert Hooke in 1676, it was a subtle insult referring to the latter's short stature — nonetheless, the phrase perfectly encapsulates the way that research builds on the work of the past, even when that ‘past’ is as close as last year or last month.
For this to be possible, information in scientific papers must be sufficiently complete as to make them reproducible. At Nature and the Nature Research Journals, we are always striving to ensure transparency and the absence of ambiguity in our published research; part of that drive has been the introduction of a reporting checklist.
Since 2013, the Nature journals have been taking specific steps to raise the standards of methodological reporting in life science papers. In part, we were inspired by a National Institute of Neurological Disorders and Stroke (NINDS) workshop in 2012 on poor reporting standards in preclinical studies, the discussions from which culminated in recommendations for increasing reporting standards for animal experiments (Nature 490, 187–191; 2012).
Our policies are essentially an articulation of what any conscientious scientist would be doing anyway. To encourage data transparency, we ask for the inclusion of original data for gels and blots in supplementary information when edited or manipulated versions appear in the main report. We mandate the deposition of certain kinds of datasets (atomic structures, gene sequences and microarrays) in community-endorsed repositories, and when specialist repositories are not available, we encourage submission of large datasets to generalist repositories such as figshare or Dryad. Indeed, Scientific Data was established by Nature Publishing Group specifically for the publication of descriptions of datasets (known as Data Descriptors), allowing such data to be more easily shared and reused, and granting due recognition to the originators of the data. (Details of our data availability policies can be found online.)
A key feature of our push for greater reproducibility has been the development of multi-point reporting checklists, which serve as reminders to authors of the expected reporting standards as well as being a handy location for reviewers to find crucial details about the experiments and analyses being presented. A generic checklist consolidating methodological details common to a number of disciplines in the life sciences is available, but we have produced a version more targeted to the plant sciences. For all primary research that is sent for peer review, we will ask authors to provide a completed version of one of these checklists. We hope most authors will find them a useful aide-mémoire for what should be included in their report, and will therefore be able to supply this information at initial submission, avoiding possible delays later in the peer review process.
Along with data deposition, another important aspect of reproducibility is a proper definition of the system used in experimentation. In the Nature Plants checklist, this is addressed in a single question concerned with the identification and validation of any antibodies used. Poor antibody validation and characterization pollutes the literature with false-positive and unquantifiable results. Although there is no agreed register, we ask authors to provide details of antibody sources and validation, and we will be following the ongoing discussions on how best to address the issue of insufficient antibody characterization.
Our colleagues who deal with animal research also have to contend with the multiplicity of cell lines, and they have bodies such as the International Cell Line Authentication Committee (ICLAC) and the National Center for Biotechnology Information (NCBI) to help with such issues. In plant research, far less work is performed on cell lines, and we lack equivalents to the immortalized lines that have almost gained the status of independent model organisms in some areas of animal cell biology. However, the principles remain the same. The organism on which research has been performed must be clearly stated at least to the level of species, and preferably variety or ecotype for heavily studied plants such as Arabidopsis thaliana or rice.
The largest section of the checklist is devoted to the reporting of statistics. This is because statistical information is often extremely poorly reported. I hesitate to suggest that the actual application of statistics is itself poor, only because the way it is presented in papers can make it impossible to tell what tests have been used, let alone whether they are appropriate. And yet without such basic information as the number of times an experiment has been performed, the sample size, whether replicates were performed on the same or independent samples, whether standard deviations, standard errors of the mean or confidence intervals are the basis of error bars, or what the P value actually is, it is impossible to know how much trust to put in a set of results.
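The distinction between these quantities matters in practice: for the same data, error bars built from the standard deviation, the standard error of the mean or a confidence interval have very different widths and meanings, which is why a figure legend must say which one is shown. As a minimal illustration (the function name and sample values are hypothetical, and the interval uses a normal approximation rather than a t-based one, which would be wider for so few replicates):

```python
import math
import statistics

def summarize(values, z=1.96):
    """Summarize a sample: n, mean, SD, SEM and an approximate 95% CI.

    Uses the normal approximation (z = 1.96); for small samples a
    t-distribution critical value would give a wider, more honest interval.
    """
    n = len(values)
    mean = statistics.mean(values)
    sd = statistics.stdev(values)   # sample standard deviation (n - 1 denominator)
    sem = sd / math.sqrt(n)         # standard error of the mean
    ci95 = (mean - z * sem, mean + z * sem)
    return {"n": n, "mean": mean, "sd": sd, "sem": sem, "ci95": ci95}

# Example: three hypothetical independent biological replicates
print(summarize([4.1, 3.8, 4.4]))
```

The point of the sketch is simply that SD, SEM and CI are computed from the same numbers yet answer different questions (spread of the data, precision of the mean, plausible range for the true mean), so reporting "error bars" without naming which one is ambiguous.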
For anyone feeling uncertain about how to correctly employ statistics in their work, a good starting place would be the ‘Statistics for Biologists’ web collection of articles from across the Nature journals addressing both simple and more complicated statistical issues. Alongside news articles, reviews and practical guides, there is also a complete archive of the Nature Methods column ‘Points of Significance’, which, since 2013, has provided a basic introduction to core statistical concepts and methods, including experimental design.
We understand that submitting a study for publication is already a taxing experience, and we do not want to make it any more bureaucratic than is absolutely necessary. Rather, we hope that this checklist will be a useful reminder of the minimum standards of scientific reporting and a succinct reference for reviewers to judge the reliability of what they are reading.
Every scientific study builds the foundations on which future advances rest; the responsibility lies with us all to ensure that, for want of accurate reporting, they do not crumble under such weight.