The evaluation of science should be straightforward. You read a paper and you either agree or disagree with the conclusions derived from the data presented. Perhaps you'll cite the paper in your future publications—in order to refute, support or extend the findings. But you assume, in this process, that you have all the information necessary to judge the legitimacy of the authors' claims.

Why wouldn't you? Today, with online supplementary information and public databases serving as repositories for large datasets, the reader can often repeat the authors' experiments in silico or, at the very least, scrutinize each and every result on screen.

Thanks to the ability to manipulate figures online, to magnify and to cross-compare, readers can often identify errors that authors, referees and editors may all have missed. Similarly, authors may notice mistakes in the published versions of their articles that went unnoticed during final production.

Nature Medicine is no stranger to these sorts of issues, which we address in a case-by-case manner. If the error can be legitimately explained and corrected, and the true data restored to the paper, readers will once again be able to evaluate for themselves the validity of the work.

A darker possibility here is that of fraud, but even in such instances there may be a trail visible to the canny reader. A recent high-profile case in point is an article on patient-specific embryonic stem cells published in Science in May 2005 by Woo Suk Hwang and his colleagues. What started as an instance of duplicated images has quickly turned into a far more serious problem following accusations of data fabrication. An investigation into what actually occurred is under way at the authors' institution. In the meantime, Hwang has decided to retract the paper while firmly denying the allegations.

The problem with trusting what you read in the scientific literature acquires a whole new dimension in the case of clinical trials. In 2000, the New England Journal of Medicine (NEJM) published the VIGOR study, which was designed to monitor gastrointestinal side effects in individuals with rheumatoid arthritis treated with the drugs Vioxx (rofecoxib) or Naprosyn (naproxen). The authors reported 17 heart attacks among individuals taking Vioxx, corresponding to a relative risk of myocardial infarction of 4.25. On 8 December 2005, the journal's editors published an expression of concern over these results upon learning that three additional heart attacks had been omitted from the published data; including those events raises the relative risk of heart attack associated with Vioxx treatment to roughly fivefold. The relevant data were removed from the manuscript before its submission to the journal.
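As a rough sketch of the arithmetic behind these figures (assuming comparable numbers of patients and follow-up in the two treatment arms, and inferring from the published relative risk that about four heart attacks occurred in the Naprosyn group, a number not stated here), the reported and corrected estimates work out as:

$$\mathrm{RR}_{\text{published}} \approx \frac{17}{4} = 4.25, \qquad \mathrm{RR}_{\text{with omitted events}} \approx \frac{17 + 3}{4} = 5.0$$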

Although the researchers reported the additional data to the US Food and Drug Administration, the conclusions published in NEJM, derived as they were from an incomplete count of heart attacks, understate the risk of Vioxx use and misinform the reader.

Although evaluating scientific data is arguably straightforward, evaluating the integrity of those data can be a thornier issue. Peer review is one tool for judging the data in hand. Even when basic research data are misrepresented, or the conclusions drawn from them are simply faulty, efforts to replicate published results, however time-consuming and frustrating, can often correct the scientific record. But clinical trial data are less amenable to independent verification, owing to the enormous cost, effort and restrictions involved in obtaining them. Given the impact on human health and lives, the scrupulous treatment of clinical results is therefore essential. The numbers, and not their interpretation, must speak for themselves.