From next week, scientists who submit grant applications to the US National Institutes of Health (NIH) will be asked to take a little more care. As part of an increasing drive to boost the reliability of research, the NIH will require applicants to explain the scientific premise behind their proposals and defend the quality of their experimental designs. They must also account for biological variables (for example, by including both male and female mice in planned studies) and describe how they will authenticate experimental materials such as cell lines and antibodies.

These demands are timely, sensible and, if researchers have been following the advice of their scientific societies, will sound familiar. Over the past year, a string of organizations have published their own statements and guidelines to boost the reproducibility of research.

Collectively, the message is: show your work, and don’t fool yourself with unreliable reagents or shoehorned data. Updated guidelines from the Federation of American Societies for Experimental Biology, for example, call for standard ways to cite antibodies and animal-care practices. The Society for Neuroscience has asked for random sampling of everything from subjects to cell parts whenever an entire population is not studied. The American Psychological Association has called for infrastructure and policies to promote data sharing. The Biophysical Society has detailed how to make experimental data widely accessible. And the American Society for Cell Biology has called for subdisciplines to create community standards for assays. More guidelines are in the works, and funders and journals have weighed in too.

These guidelines will help to make studies more sound. Other biomedical funders should follow the lead of the NIH and introduce similarly tangible requirements.

The NIH has stated that communication and awareness are crucial to address the lack of reproducibility in research. But unreliable work has many causes, and aspects of today’s scientific landscape can thwart quality research. Competition for funding and faculty positions, and for the publications necessary to secure them, encourages uncritical acceptance of results. All too often, it is better to be first but wrong than scooped and right. Journals, including this one, have gone some way towards acknowledging and acting on their responsibility for this.

At the same time, experiments have become increasingly outsourced. Kits and reagents bought from commercial vendors allow scientists to do more research in a fraction of the time needed for ‘home-brew’ experiments with reagents created in the lab. These resources are invaluable, but they leave scientists less able to anticipate and identify artefacts.

Guidelines can cut down on mistakes from rushed publications and help to disseminate knowledge that would otherwise have to be gained by experience. But the real power of such recommendations lies less in their specific contents and more in the values that produced them.

Civil society depends on people acting according to societal norms even when transgressions are unlikely to be punished. Citizens’ motivation is not to gain material rewards but to maintain their integrity. Similarly, the scientific community depends on researchers who adhere to the values embodied in guidelines and recommendations. Doing quality research offers intrinsic rewards.

Guidelines work best when they build a culture that makes proper behaviour second nature. They can help to make researchers aspire to the values that produced them. They are valuable not only because scientists follow them but also because they can inspire researchers to uphold their identity and integrity.