Many areas of neuroscience are now critically dependent on computational tools to help understand the large volumes of data being created. Furthermore, computer models are increasingly being used to help predict and understand the function of the nervous system. Many of these computations are complex and usually cannot be concisely reported in the methods section of a scientific article. In a few areas there are widely used software packages for analysis (for example, SPM, FSL, AFNI, FreeSurfer and Civet in neuroimaging) or simulation (for example, NEURON, NEST, Brian). However, we often write new computer programs to solve specific problems in the course of our research. Some of these programs may be relatively small scripts that help analyze all of our data, and these rarely get described in papers. As authors, how best can we maximize the chances that other scientists can reproduce our computations, find errors or reuse our methods on their data? Is our research reproducible1?

To date, the sharing of computer programs underlying neuroscience research has been the exception (see below for some examples) rather than the rule. However, there are many potential benefits to sharing these programs, including increased understanding and reuse of your work. Furthermore, open-source programs can be scrutinized and improved, whereas the functioning of closed-source programs remains forever unclear2. Funding agencies, research institutes and publishers are all gradually developing policies to reduce the withholding of computer programs relating to research3. The Nature family of journals has published opinion pieces in favor of sharing whatever code is available, in whatever form4,5. Since October 2014, all Nature journals have required papers to include a statement declaring whether the programs underlying central results are available. In April 2015, Nature Biotechnology offered recommendations for providing code with papers and began asking referees to give feedback on their ability to test code that accompanies submitted manuscripts6. In July 2015, F1000Research stated that “software papers describing non-open software, code and/or web tools will be rejected”7. Also in July 2015, BioMed Central introduced a minimum-standards-of-reporting checklist for BMC Neuroscience and several other journals, requiring submissions to include a code availability statement and to cite code using a DOI or similar unique identifier8. We believe that all journals should adopt policies that strongly encourage or even mandate the sharing of software relating to journal publications, as this is the only practical way to check the validity of the work.

What should be shared?

It may not be obvious what to share, especially for complex projects with many collaborators. As advocated by Claerbout9 and Donoho10, for computational sciences, the scholarship is not the article; the “scholarship is the complete software [...]”10. So, ideally, we should share all code and data needed to allow others to reproduce our work, but this may not be possible or practical. Even so, the key parts of the work, such as implementations of novel algorithms or analyses, should be shared. At a minimum, we suggest following the recommendation for submissions to ModelDB11: share enough code, data and documentation to allow at least one key figure from your manuscript to be reproduced. However, by adopting appropriate software tools, as described in the next section, it is now relatively straightforward to share the materials required to regenerate all figures and tables. Code that already exists, is well tested and documented, and is reused in the analysis should be cited. Ideally, all other code should be communicated, including code that performs simple preprocessing or statistical tests and code that deals with local computing issues such as hardware and software configurations. While this code may not be reusable, it will help others understand how analyses are performed, find potential mistakes and aid reproducibility. Finally, if the work is computationally intensive and requires a long time to run (for example, many weeks), one may prefer to provide a small 'toy' example to demonstrate the code.

By getting into the habit of sharing as much as possible, not only do we help others who wish to reproduce our work (which is a basic tenet of the scientific method), but we also help other members of our laboratory, and even our future selves. By sharing our code publicly, we are more likely to write higher-quality code12, and we will know where to find it after we have moved on from the project13, rather than having the code disappear on a colleague's laptop when they leave the group or suffer some misfortune14. We will also be part of a community and benefit from the code shared by others, thus reducing software development time for ourselves and others.

Simple steps to help you share code

Once you have decided what to share, here are some simple guidelines for how to share the work. Ideally, these principles should be followed throughout the lifetime of the research project, not just at the end when we wish to publish our results. Guidelines similar to these have been proposed in many areas of science15,16,17, suggesting that they are part of norms that are emerging across disciplines. In the 'Further reading' section (Box 1), we list some specific proposals from other fields that expand on the guidelines we suggest here. Box 2 describes several online communities for discussing issues around code sharing.

Version control

Use a version control system (such as Git) to develop the code18. The version control repository can then be easily and freely shared with others using sites such as http://github.com19 or https://bitbucket.org. These sites give you fine-grained control over private versus public access to your code. This means that you can keep your code repository private during its development and then publicly share the repository at a later stage (for example, at the time of publication), although we recommend opening the code from the start of the project. It also makes it easy for others to contribute to your code and to adapt it for their own uses.

Persistent URLs

Generate stable URLs (such as a DOI) for key versions of your software. Unique identifiers are a key element in demonstrating the integrity and reproducibility of research20, and they allow you to reference the exact version of your code used to produce figures. DOIs can be obtained freely and routinely with sites such as http://zenodo.org and http://figshare.com. If your work includes computer models of neural systems, you may wish to consider depositing these models in established repositories such as ModelDB11, Open Source Brain21 or NITRC22. Some of these sites allow for private sharing of repositories with anonymous peer reviewers. Journal articles that include a persistent URL to code deposited in a trusted repository meet the requirements of level two of the “analytic methods (code) transparency” standard of the Transparency and Openness Promotion guidelines15.

License

Choose a suitable license for your code to state how you wish others to reuse it. For example, to maximize reuse, you may wish to use a permissive license such as MIT or BSD23. Licenses are also important for protecting you if others misuse your code. Visit http://choosealicense.com/ for a simple overview of which license to choose or http://www.software.ac.uk/resources/guides/adopting-open-source-licence for a detailed guide.

Etiquette

When working with code written by others, observe Daniel Kahneman's 'reproducibility etiquette'24 and have a discussion with the authors of the code to give them a chance to fix bugs or respond to issues you have identified before you make any public statements. Cite their code in an appropriate fashion.

Documentation

Contrary to popular expectations, you do not need to write extensive documentation or a user's guide for the code to still be useful to others4. However, it is worth providing a minimal README file to describe what the code does and how to run it. For example, you should provide instructions on how to regenerate key results or a particular figure from a paper. Literate programming methods, in which code and narrative text are interwoven in the same document, make documentation semiautomatic and can save a lot of time when preparing code to accompany a publication25,26. However, these methods admittedly take more time to write in the first instance, and you should be prepared to rewrite documentation when rewriting code. In any case, well-documented code allows for easier reuse and checking.
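
As an illustration, a short, self-documenting script that regenerates one key figure can serve as both minimal documentation and an entry point for readers. The following sketch assumes a hypothetical analysis in Python; the file names, data and figure number are purely illustrative.

    """Regenerate Figure 2 (spike-count histogram). File names are illustrative.

    Usage: python make_figure2.py
    Requires: numpy, matplotlib; expects data/spike_counts.csv in the repository.
    """
    import numpy as np
    import matplotlib.pyplot as plt

    # Load the spike counts that accompany the code (one value per trial).
    counts = np.loadtxt("data/spike_counts.csv", delimiter=",")

    # Recreate the histogram shown in the paper and save it to disk.
    fig, ax = plt.subplots()
    ax.hist(counts, bins=30, color="grey")
    ax.set_xlabel("Spikes per trial")
    ax.set_ylabel("Number of trials")
    fig.savefig("figure2.png", dpi=300)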

Tools

Consider using modern, widely used software tools that can help with making your computational research reproducible. Many of these tools have already been used in neuroscience and serve as good examples to follow, for example, Org mode27, IPython/Jupyter28 and Knitr29. Virtualization environments, such as VirtualBox appliances and Docker containers, can also be used to encapsulate or preserve the entire computational environment so that other users can run your code without having to install numerous dependencies30.
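
Even without a full container, a few lines of code can record the software environment used for an analysis, which complements the approaches above. The sketch below uses only the Python standard library (Python 3.8 or later); the output file name is arbitrary.

    """Record the Python interpreter and installed package versions in a text file."""
    import sys
    from importlib.metadata import distributions

    with open("environment.txt", "w") as out:
        # The interpreter version comes first, e.g. "python 3.11.4".
        out.write(f"python {sys.version.split()[0]}\n")
        # Then one line per installed package, e.g. "numpy 1.26.4".
        for dist in sorted(distributions(), key=lambda d: (d.metadata["Name"] or "").lower()):
            out.write(f"{dist.metadata['Name']} {dist.version}\n")

Committing such a file alongside the code gives readers a record of the dependency versions even if they choose not to rebuild the full environment.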

Case studies

In addition to the examples listed above in “Tools”27,28,29, there are many prior examples to follow when sharing your code. Some prominent examples of reproducible research in computational neuroscience include Vogels et al.31 and Waskom et al.32; see https://github.com/WagnerLabPapers for details. The ModelDB repository contains over 1,000 computational models deposited with instructions for reproducing key figures from the corresponding papers; for example, see https://senselab.med.yale.edu/ModelDB/showModel.cshtml?model=93321 for a model of activity-dependent conductances33.

Data

Any experimental data collected alongside the software should also be released or made available. Small data sets can be stored alongside the software, although it may be preferable to deposit experimental data separately in an appropriate repository. Both PLOS and Scientific Data maintain useful lists of subject-specific and general repositories for data storage; see http://journals.plos.org/plosbiology/s/data-availability#loc-recommended-repositories and http://www.nature.com/sdata/policies/repositories.

Standards

Use of (community) standards, where appropriate, should be encouraged, particularly use of nonproprietary formats to enable long-term accessibility. In computational neuroscience, for example, PyNN34 and NeuroML35 are widely used formats for making models more accessible and portable across multiple simulators. Neuroimaging data and results can be organized using BIDS36.
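
For instance, a model written against the PyNN API can in principle be run on several simulators without modification. The sketch below is illustrative only: it assumes a PyNN 0.9-style installation with the NEST backend, and the population sizes and parameter values are arbitrary.

    """A minimal simulator-independent model using the PyNN API (illustrative values)."""
    import pyNN.nest as sim  # other backends, e.g. pyNN.neuron, can be substituted

    sim.setup(timestep=0.1)  # ms

    # A small population of integrate-and-fire neurons driven by Poisson input.
    neurons = sim.Population(100, sim.IF_cond_exp())
    noise = sim.Population(100, sim.SpikeSourcePoisson(rate=20.0))
    sim.Projection(noise, neurons, sim.OneToOneConnector(),
                   synapse_type=sim.StaticSynapse(weight=0.01))

    neurons.record("spikes")
    sim.run(1000.0)  # ms

    neurons.write_data("results.pkl")  # spike trains saved in Neo format
    sim.end()

Because the model definition does not depend on a particular simulator, readers can check the result with whichever supported backend they have installed.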

Tests

Testing code has long been recognized as a critical step in the software industry, but the practice has not yet been widely adopted by researchers. We recommend including test suites demonstrating that the code produces the correct results37. These tests can be at a low level (testing each individual function, called unit testing) or at a higher level (for example, testing that the program yields correct answers on simulated data)38. When the underlying data are public, it is often straightforward to add a test verifying that published results can be recomputed. Linking tests to continuous integration services (such as Travis CI, https://travis-ci.org) allows these tests to be automatically run each time a change is made to the code, ensuring that failing tests are immediately flagged and can be dealt with quickly.
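
For example, a small pytest suite can combine a unit test of an individual function with a higher-level test that recomputes a published value from the shared data. The sketch below is hypothetical: the analysis module, the firing_rate function, the data file and the expected value are all placeholders for your own code and results.

    """Example test suite (pytest style); names and values are placeholders."""
    import numpy as np
    import pytest

    from analysis import firing_rate  # hypothetical function under test


    def test_firing_rate_simple_case():
        # Unit test: 10 spikes in 2 seconds should give 5 Hz.
        assert firing_rate(n_spikes=10, duration=2.0) == pytest.approx(5.0)


    def test_mean_rate_matches_published_value():
        # Higher-level test: recompute a value reported in the paper from the shared data.
        counts = np.loadtxt("data/spike_counts.csv", delimiter=",")
        rates = [firing_rate(n_spikes=int(n), duration=2.0) for n in counts]
        assert np.mean(rates) == pytest.approx(4.7, abs=0.1)  # expected value is illustrative

Even one or two such tests make silent changes in results much easier to detect.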

User support

Although some people are eager to provide support for their code after it has been published, others may feel that they do not want to be burdened by, for example, feature requests. One simple way to reduce this burden is to establish a user community for the code39. This could be as simple as creating a mailing list or asking for issues to be posted on a GitHub repository.

Closing remarks

Efforts to change the behavior of neuroscientists so that they make their code more available will likely be resisted by those who do not see the community benefits as outweighing the personal costs of the time and effort required to share code40. The community benefits, in our view, are obvious and substantial: we can more robustly and transparently demonstrate the reliability of our results, we can more easily adapt methods developed by others to our data and we can increase the impact of our work as others can similarly reuse our methods on their data. Thus, we will endeavor to lead by example and follow all of these practices in our future scientific publications. Even if the code we produce today will not run ten years from now, it will still be a more precise and complete expression of our analysis than the text of the methods section in our paper.

However, exhortations such as this article are only a small part of making code sharing a normal part of doing neuroscience; many other activities are important. All researchers should be trained in sound coding principles; such training is provided by organizations such as Software Carpentry38 or Data Carpentry and through national neuroinformatics initiatives such as http://python.g-node.org. Furthermore, we should request code and data when reviewing, and we should submit to and review for journals that support code sharing. Grant proposals should be checked for mentions of code availability, and we should encourage efforts toward openness in hiring, promotion and letters of reference41. Funding agencies and editors should also consider mandating code sharing by default. This combination of efforts on a variety of fronts will increase the visibility of research accompanied by open-source code and demonstrate to others in the discipline that code sharing is a desirable activity that helps move the field forward.

We believe that the sociological barriers to code sharing are harder to overcome than the technical ones. Currently, academic success is strongly linked to publications, and there is little recognition for producing and sharing code. Code may also be seen as providing a private competitive advantage to researchers. We challenge this view and propose that code be regarded as a research product and as part of the publication, shared by default, and that those conducting publicly funded research have an obligation to share their code. We hope that in the future code sharing becomes the norm. Moreover, we advocate for code sharing as part of a broader culture change embracing transparency, reproducibility and the reusability of research products.

Author contributions

All authors contributed to discussions and to writing and editing the manuscript.