The machine learning community continues to grow in size and reach. Ever more research areas are adopting machine learning methods to tackle scientific problems and to develop applications, complementing traditional approaches. However, training and validating machine learning models is a complicated process, involving many rounds of trial and error and numerous choices made by the user. The complexity of these choices, often embodied in specialized data processing steps, increases as machine learning studies become more interdisciplinary. Such subtleties are not always fully reflected in research articles that report on a particular method or application. For reviewers to properly assess these papers, and for readers to replicate and build on the work, authors should share as much detail as possible about their code, making explicit their processes for model training, hyperparameter selection, and validation.
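As an illustration of what making these processes explicit can look like in practice, the sketch below records the random seed, the chosen hyperparameters and the resulting validation score in a plain file that can be shared alongside the code. It is only a minimal example: the model, parameter names and file paths are assumptions for illustration, not a prescribed workflow.

```python
import json
import random

import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Fix the seed so the data split (and any stochastic training step) is reproducible.
SEED = 0
random.seed(SEED)
np.random.seed(SEED)

# Hyperparameters chosen during development; recording them makes the
# selection behind a reported result explicit for reviewers and readers.
hyperparams = {"C": 1.0, "max_iter": 200}

X, y = load_iris(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.3, random_state=SEED
)

model = LogisticRegression(**hyperparams).fit(X_train, y_train)
val_accuracy = model.score(X_val, y_val)

# Write a small run record next to the code so the exact configuration
# behind the reported validation score can be inspected and rerun.
with open("run_record.json", "w") as f:
    json.dump(
        {"seed": SEED, "hyperparameters": hyperparams, "val_accuracy": val_accuracy},
        f,
        indent=2,
    )
```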

Numerous organizations have taken steps to encourage code sharing. A survey presented at NeurIPS (Neural Information Processing Systems) last year revealed that 55% of reinforcement learning papers at NeurIPS, ICML (the International Conference on Machine Learning) and ICLR (the International Conference on Learning Representations) in 2018 provided a link to the code. This year, ICML became the first major machine learning conference to introduce a code sharing policy. While not mandatory, the policy encouraged authors to provide code alongside the paper, and the ICML committee opted for an inclusive approach, allowing pseudocode as well. The policy has paid off: code was provided for 67% of accepted papers. The organizers of NeurIPS are also experimenting this year with a code sharing policy, again a non-mandatory one: code submission is expected but not enforced.

There are a number of barriers for authors who want to share their code. A substantial amount of work may be required to render research code understandable, let alone reusable, by others. Industry researchers may be restricted in sharing their methods due to proprietary hardware or software. An encouraging insight from the ICML initiative is that authors from industry were as likely to provide code as authors from academia. It is an open question whether stricter code sharing requirements would hinder the ability of industry researchers to publish their results.

Knowing that others will be able to build on your work (and receiving the accompanying community kudos) is enough motivation for most researchers to share their code. However, with only relatively mild incentives in place for code sharing, these good intentions can quickly melt away when pitted against an impending submission deadline or reluctant collaborators. Explicit journal and conference policies, along with the knowledge that reviewers will assess code as part of the peer review process, provide the right balance of incentives for authors.

The policy of Nature Machine Intelligence is that whenever code is central to the results, as is the case for many of our papers, it should be provided. The code needs to be shared with reviewers when a paper is sent out for review, and must be made publicly available at the time of publication.

To help reviewers evaluate code, we encourage authors to make it executable by using Code Ocean, a platform that anyone can use to create standalone compute capsules based on Docker images. The capsules bring together metadata, code, datasets and software dependencies so that other users can run the code either in the cloud or on their local machine. Three Nature research journals, including Nature Machine Intelligence, launched a trial last year in collaboration with Code Ocean in which reviewers get dedicated computing resources for evaluating the code and verifying claims made by the authors. Referees remain anonymous when accessing the code, and professional support is available to assist authors and to verify that capsules are in working order. The published paper is accompanied by a link to the code capsule, which is assigned a digital object identifier (DOI).
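The ingredients that such a capsule bundles can also be sketched in a few lines of code: before sharing, an author might snapshot the exact package versions and a checksum of the data used, so that anyone rerunning the analysis, in a container or otherwise, can confirm they have the same environment and inputs. The snippet below is only an illustration of that idea, with assumed package names and file paths; it is not Code Ocean's capsule format.

```python
import hashlib
import json
import platform
from importlib.metadata import version

# Packages the analysis depends on; pinning their versions in a manifest
# lets others recreate the same software environment (for example, inside
# a Docker image).
DEPENDENCIES = ["numpy", "scikit-learn"]


def file_checksum(path: str) -> str:
    """Return a SHA-256 checksum so a shared dataset can be verified."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()


manifest = {
    "python": platform.python_version(),
    "packages": {name: version(name) for name in DEPENDENCIES},
    # "data/train.csv" is a placeholder path for the dataset shipped with the code.
    "data_sha256": file_checksum("data/train.csv"),
}

# The manifest travels with the code and data, much as a compute capsule
# bundles metadata, code, datasets and software dependencies.
with open("environment_manifest.json", "w") as f:
    json.dump(manifest, f, indent=2)
```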

To date, the authors of 38 of the 67 papers (57%) that we have selected for external review, and for which code forms a crucial component of the methodology, have used Code Ocean to produce a compute capsule for reviewers. The remaining authors provided code via repositories such as GitHub or Zenodo. Three of the four research articles published in this issue provide code through Code Ocean capsules; see, for example, the capsule by Tong Wang et al. The fourth paper, by Zeng et al., provides a link to a public repository.

It may not always be practical or possible to make code executable, and a one-size-fits-all approach can be counterproductive. Code can depend on proprietary software or on highly specialized hardware, as in prototype robotic systems. The goal is for authors to provide as much detail as possible in a code availability statement and, where code is essential to the main results, to ensure that it is available and reusable. The complexity of the choices made while developing or applying machine learning models cannot always be captured in a code repository, so a strict code sharing policy alone does not guarantee transparent and reproducible research. But we believe it is a crucial step in the right direction.