
Scientific write-ups and science news are supposed to be objective accounts of past work, or of future plans based on current results. Yet subjectivity and unintentional bias can creep in. For example, a therapy tested in mice is claimed to be ‘straightforward to translate’ on the basis of clinical trials of presumably comparable therapies; an approach is deemed general without supporting evidence in more than one system or set of conditions; results are presented as statistically significant in an underpowered study on the basis of P values that are slightly below 0.05.

Bias and exaggeration are inherently human. Researchers, reviewers, editors and science journalists can get overly excited by promising results, and may find it difficult to avoid overstating the significance of exciting findings or ideas. Sometimes, hype is not easy to detect, especially when information is limited. The need to ‘sell’ one’s research to funding agencies, journals, employers and colleagues to get ahead and craft a successful career can heighten exaggeration and unchecked leaps of faith. It is therefore crucial to be conscious of typical biases and statistical pitfalls.

Reporting biases fall into various classes: sensationalism, shaky evidence, neglect of relevant information, and insufficient accuracy or clarity. Sensationalism is most acute in stories written to seek attention — often through shrewd use of clickbait — at the expense of accurate reporting. The academic peer-reviewed literature also has its share of hyped claims, such as cancer treatments with excellent efficacy at curing immunocompromised mice (and not much else), sensors that can detect ultralow concentrations of a molecule (yet only under controlled laboratory conditions), superior diagnostic methods (only shown to work for carefully processed samples) and biological findings that can be exploited in a wide range of biomedical applications (remaining to be defined).

Shaky evidence is more problematic, and in biomedical fields is often a result of insufficient statistical proficiency or of weak research standards. Sample or patient numbers that are not representative of a population, inappropriate use of statistical tests, missing controls, and unchecked assumptions about the distribution of the data or about data-exclusion criteria can all lead to wrong conclusions. Such deficiencies can lie undetected for a long time, particularly if the findings happen to meet expected outcomes or if the work happens to be largely inconsequential. Research collaborators, reviewers and editors can’t always catch all the issues.
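To make the point about underpowered studies concrete, the following minimal simulation sketches the so-called ‘winner’s curse’: when groups are small, the few experiments that do cross P < 0.05 tend to overestimate the true effect. The numbers used here (a true standardized effect of 0.3, 15 animals or patients per group) are illustrative assumptions, not taken from any particular study, and the sketch uses Python with numpy and scipy.

```python
# A minimal sketch (illustrative numbers) of effect-size inflation in
# underpowered studies: among runs that reach P < 0.05, the apparent
# effect is systematically larger than the true effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect = 0.3        # assumed true standardized mean difference
n_per_group = 15         # small groups, hence low statistical power
n_simulations = 20_000

significant_effects = []
for _ in range(n_simulations):
    control = rng.normal(0.0, 1.0, n_per_group)
    treated = rng.normal(true_effect, 1.0, n_per_group)
    t_stat, p_value = stats.ttest_ind(treated, control)
    if p_value < 0.05:
        significant_effects.append(treated.mean() - control.mean())

power = len(significant_effects) / n_simulations
print(f"Estimated power: {power:.2f}")
print(f"Mean effect among 'significant' runs: {np.mean(significant_effects):.2f} "
      f"(true effect: {true_effect})")
```

Run with these assumed parameters, only a small fraction of simulated experiments reach significance, and those that do report an effect roughly twice the true one; a marginal P value in a small study is therefore weak grounds for a strong claim.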

Neglecting to mention constraints, assumptions and shortcomings can also lead to hyped reporting. For example, if patients drop out of a study, failing to disclose the underlying reasons might obscure drawbacks of the technology or treatment under study. And if a procedure involves complex steps and precise handling of sample preparation, a lack of sufficient procedural detail will hinder reproducibility. Costs of materials or of labour are not always reported when relevant.

Even when methods and protocols are thoroughly described, a lack of accuracy and clarity can lead to money and time wasted on unfruitful repetitions. The wrong nucleotide sequence, antibody concentration or machine setting can lead an experiment astray. Even bugs in old versions of software can on rare occasions affect outcomes negatively. Careful and accurate reporting, which is often perceived as boring and is not easily done well, should be viewed as vital. In this regard, this journal asks authors of original research to fill in the Nature Research life sciences reporting summary before peer review, and works with authors and reviewers to ensure factual accuracy and the appropriateness of context and claims for the content that we publish.

How to avoid human fallibility in reporting? A general piece of advice is to assume that overstatements and inaccuracies always sneak in, and therefore to purposely look for them. Ask co-authors or informed colleagues to double-check graphs, schematics, tables and prose. When evidence is preliminary or at the proof-of-concept level, say so and discuss possible limitations and how they could be overcome. If a study is designed to test safety, feasibility, improved outcome or patient benefit, make this clear and discuss any caveats. When reporting on findings in fields that are prone to be hyped in the media (such as cancer immunotherapy, genome editing and precision medicine), take particular care to discuss caveats such as side effects, risks and costs. A case study is not solid proof that the therapy, diagnostic method or device works. A mechanism associated with a phenomenon doesn’t necessarily explain it.

Readers can take the same advice. Beware of news stories and reports that don’t mention device, treatment or implementation costs, that treat surrogate endpoints as if they were primary endpoints, that confound observational, retrospective and prospective studies, that extrapolate anecdotal evidence, that fail to discuss any caveats, limitations and relevant previous research, that confuse correlation with causality, or that include numbers without the relevant context. In science, hype shouldn’t be typed or shared.