All scientific discoveries are built on previous observations. Recently, the US National Institute of Neurological Disorders and Stroke convened a workshop to draw attention to widespread deficiencies in methods reporting in life science articles and published recommendations for a core set of reporting standards for study design1. Although these recommendations were aimed mainly at preclinical or purely translational studies, steps to increase the transparency of methods reporting in all experiments, whether purely exploratory or translational, are welcome. To help readers fully understand how the experiments reported in Nature Neuroscience were conducted, we have developed a set of guidelines for reporting basic methods information in our pages, along with a checklist for authors (available on our website). Authors will be asked to fill out any relevant portions of this checklist before their paper is reviewed, and the editors will work with authors after acceptance to help ensure that all crucial methods details are described.

Many of these guidelines are existing requirements in our guide to authors and are parameters that editors and referees already evaluate before a paper is accepted for publication. For example, authors are required to report their statistical evidence clearly: what tests were used, how many samples were evaluated in each condition, what comparison was carried out and what significance level was found. Studies that include gels or blots should also include the full-length gel with appropriate loading controls in the Supplementary Information, and when 'representative' experiments are shown, it should be clearly stated how many times the experiment was run. As it can be difficult for authors to keep track of all of these guidelines, the checklist consolidates many of our previous policies and serves as a reminder to authors to describe their experiments as clearly as possible.
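As a purely illustrative sketch (not part of our guidelines), the snippet below shows the kind of self-documenting statistical reporting we have in mind; the data, group names and sample sizes are hypothetical, and the test shown (Welch's t-test from scipy.stats) stands in for whatever analysis a given study actually requires:

```python
import numpy as np
from scipy import stats

# Hypothetical data: firing rates (Hz) for two groups of recorded cells.
rng = np.random.default_rng(seed=0)
control = rng.normal(loc=10.0, scale=2.0, size=12)  # n = 12 cells
treated = rng.normal(loc=12.5, scale=2.0, size=11)  # n = 11 cells

# Two-sided Welch's t-test (does not assume equal variances).
t_stat, p_value = stats.ttest_ind(control, treated, equal_var=False)

# Report the test, sample sizes, comparison and significance level together.
print(f"Welch's two-sided t-test, control (n={control.size}) vs. "
      f"treated (n={treated.size}): t = {t_stat:.2f}, p = {p_value:.4f}")
```

The point is simply that the test used, the sample size in each condition, the comparison made and the resulting significance level all appear together in the reported output.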

Our decision to implement this checklist grows out of a project we embarked on in June 2012. For each paper that we sent out for review, editors filled out a basic checklist that included most of the information mentioned above. We attached this form to the electronic record and asked the referees to comment on whether they found it useful when evaluating papers, and we followed up with over 100 referees who had reviewed papers between July and August to get their feedback. About half of the surveyed referees found the form useful, saying that it helped them locate the necessary information in the paper and aided their reviews. About a quarter did not find a generic form useful at all, and the remaining quarter felt that, although a checklist could be useful in principle, they were not sure it was strictly necessary or did not feel that it addressed the specific issues that can be crucial in evaluating papers in their field. We recognize that many referees evaluate papers using their own system and that, for many, a checklist may not add much. Nonetheless, the reviews we received during this period were, in general, considerably more detailed in their comments on statistics and methods design.

The results of this pilot experiment also made it clear that many papers, at least at the time of submission, did not report all of the details that they should have and that, in many cases, there was considerable ambiguity about some of the experimental parameters. By asking authors to refer to the checklist, we hope that this information will be clear to the editors, referees and readers. We recognize that no single checklist can effectively address the myriad techniques used in the neurosciences (and indeed, for some disciplines, such as computational neuroscience, much of this checklist will not apply), but the goal is to capture the main elements of the types of papers that we most commonly publish. For example, many papers use custom statistics matched to their problem, and asking for a specific statistic in these cases, as the form does, is unhelpful. Still, for experiments using statistics, we urge authors to think about the question; the statistical hypothesis; whether this was an a priori hypothesis or part of an exploration of multiple hypotheses (and if so, how the multiple testing was accounted for); how many tests were carried out and whether independent data were used for each test; the assumptions the test relies on and whether these are known to hold; and the test statistics and inference procedures themselves. Likewise, we urge authors to clearly define their samples and explain exactly how they were collected (for example, five slices from five animals from five different litters) and, for representative experiments, to indicate how many times the experiment shown was replicated in the laboratory. By providing as much detail as possible, we hope that the methods used will be crystal clear to anyone who reads the paper.
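To make the multiple-testing point concrete, here is a brief hypothetical sketch of one standard way to account for several tests, using the Holm step-down correction from statsmodels; the p values are invented for the example:

```python
from statsmodels.stats.multitest import multipletests

# Hypothetical raw p values from four planned, independent comparisons.
raw_p = [0.012, 0.034, 0.041, 0.220]

# Holm's step-down procedure controls the family-wise error rate at alpha.
reject, corrected_p, _, _ = multipletests(raw_p, alpha=0.05, method="holm")

for raw, corrected, significant in zip(raw_p, corrected_p, reject):
    print(f"raw p = {raw:.3f} -> corrected p = {corrected:.3f}, "
          f"significant at alpha = 0.05: {significant}")
```

Reporting the raw p values, the corrected values and the correction method together leaves readers in no doubt about how the inference was drawn.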

A common complaint from authors is that including all such details in their methods is inconsistent with journals' length limits for methods. At Nature Neuroscience, we have never edited out relevant methods-related details, and we are happy to accept papers with methods longer than 2,000 words when this is genuinely necessary. We continue to believe that methods, like the rest of the paper, are most effective when written succinctly and clearly, and we feel that, for most papers, all the relevant details can be comfortably accommodated within 2,000 words. For papers that do require lengthier methods, we will continue to exercise editorial discretion in allowing longer online methods on a case-by-case basis. We hope these changes will help authors craft clearer papers. We welcome comments on our checklist or these policies and invite you to e-mail us at neurosci@us.nature.com.