
Swiss survey highlights potential flaws in animal studies

Poor experimental design and statistical analysis could contribute to widespread problems in reproducing preclinical animal experiments.


Many Swiss scientists who conduct research with animals are unaware of guidelines for reducing potential bias, a survey shows.

Fewer than one-fifth of Swiss scientists who sought permission to experiment on animals in 2008, 2010 and 2012 reported using methods to curb bias and reduce chance findings, according to a study published this month in PLOS Biology1. Research planned without such precautions can produce distorted findings, which could be contributing to the widespread difficulty in reproducing published results in biomedical science.

Hanno Würbel, an animal behaviourist at the University of Bern, Switzerland, and his colleagues analysed 1,277 applications for animal experiments, along with some of the publications that resulted from them. The scientists found that relatively few applicants reported the use of methods such as sample randomization when designing their experiments. And in a companion paper published in PLOS ONE, Würbel's team presents the results of a detailed survey of animal researchers in Switzerland that asked about measures to lessen the risk of bias.

The survey was sent to all 1,891 researchers registered for animal experimentation in the country; 302 of the 530 responses were complete enough to be included in the analysis. The responding scientists reported a greater use of bias-reducing methods than is found in the published literature2.

“What we learned from the survey and interviews we conducted with scientists is that there’s still a lack of awareness of the problem,” says Würbel. “Many scientists think that this so-called reproducibility crisis is exaggerated and it’s not all that bad.”

In Switzerland and in European Union countries, scientists who want to conduct experiments on animals — including vertebrates and some invertebrates — must seek authorization from local or national authorities. In most countries, the process involves an ethical review, including a harm–benefit analysis that also assesses the experiments' scientific validity: for example, whether they have statistical analysis plans and whether the study methods are suitable for achieving the expected benefit.

Design choices

The PLOS Biology study reveals that only 8% of the applications stated whether a sample-size calculation had been performed, a check that the number of animals to be studied would be large enough to robustly detect the effect the researchers were looking for. Only 13% of the applications mentioned whether the animals would be assigned randomly to treatment groups, and just 3% reported whether the researchers would measure the outcome of the experiment without knowing which treatment group each animal belonged to (blinding).
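As a rough illustration of what those three safeguards involve in practice, the sketch below works through a hypothetical two-group experiment. None of it comes from the study itself: the use of Python with NumPy and statsmodels, and the assumed effect size, significance level and power, are choices made only for the example.

```python
# Illustrative sketch only: assumed effect size, alpha and power for a
# hypothetical two-group animal experiment.
import numpy as np
from statsmodels.stats.power import TTestIndPower

# 1. Sample-size calculation: animals per group needed to detect an
#    assumed effect of Cohen's d = 0.8 with 80% power at alpha = 0.05.
n_raw = TTestIndPower().solve_power(effect_size=0.8, alpha=0.05, power=0.8)
n_per_group = int(np.ceil(n_raw))                # roughly 26 per group

# 2. Randomization: allocate animals to treatment or control at random.
rng = np.random.default_rng(seed=1)
animals = [f"animal_{i:02d}" for i in range(2 * n_per_group)]
rng.shuffle(animals)                             # shuffles the list in place
groups = {"treatment": animals[:n_per_group],
          "control":   animals[n_per_group:]}

# 3. Blinding: outcomes are scored under neutral codes, so the assessor
#    cannot tell which group an animal belongs to until the code is broken.
codes = {animal: f"code_{i:03d}"
         for i, animal in enumerate(rng.permutation(animals))}
```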

In contrast, 69% of researchers who responded to the survey published in PLOS ONE said that they had performed sample-size calculations, 86% said that they used randomization and 47% said that they used blinding. But far fewer said that they had reported these measures in their latest publication: 18%, 44% and 27%, respectively.

The situation “is probably even worse than these papers suggest,” says Ulrich Dirnagl, a neurologist at the Charité Medical University in Berlin. He says that some scientists simply tick a box on the application form to say that they randomized their animal groups, without knowing what that means. Others do a ‘sample-size samba’: manipulating the expected size of the measured effect to justify the desired sample size, which reverses the usual statistical calculation.
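To see why this is a reversal, consider a minimal numerical sketch; the effect sizes and group sizes below are assumptions chosen for the example, not figures from the papers. In a proper power calculation, the expected effect is fixed first and the sample size follows from it; inflating the assumed effect lets a researcher 'justify' whatever small group size was already planned.

```python
# Illustrative only: assumed numbers, not taken from the papers.
from math import ceil
from statsmodels.stats.power import TTestIndPower

power_calc = TTestIndPower()

# Forward (proper) direction: fix the expected effect first, then ask
# how many animals per group are needed for 80% power at alpha = 0.05.
n_honest = power_calc.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"d = 0.5 -> about {ceil(n_honest)} animals per group")   # ~64

# 'Sample-size samba': inflate the assumed effect until the calculation
# yields the small group size that was decided on in advance.
n_samba = power_calc.solve_power(effect_size=1.2, alpha=0.05, power=0.8)
print(f"d = 1.2 -> about {ceil(n_samba)} animals per group")    # ~12
```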

Researchers in many different fields face problems like this in every phase of their experiments, from design to data analysis, says Jelte Wicherts, a psychologist at Tilburg University in the Netherlands who has written a checklist to guide scientists' planning3.

But it is not clear that such advice is reaching researchers. The UK National Centre for the Replacement, Refinement and Reduction of Animals in Research (NC3Rs) in London developed reporting guidelines in 2010, called 'Animal Research: Reporting of In Vivo Experiments' (ARRIVE). They were widely disseminated, says Nathalie Percie du Sert, an experimental-design specialist at the NC3Rs, but Würbel’s survey showed that half of the Swiss respondents didn't even know about them.

Würbel says that increasing the number of scientists who preregister their studies, declaring in advance how they will be done and analysed, would help to reduce the risk of bias. Wicherts agrees, arguing that preregistration makes it less likely that a scientist will focus solely on obtaining a significant result or a desired outcome from a study.

Journal name:
Nature
DOI:
10.1038/nature.2016.21093

References

  1. Vogt, L., Reichlin, T. S., Nathues, C. & Würbel, H. PLoS Biol. 14, e2000598 (2016).

  2. Reichlin, T. S., Vogt, L. & Würbel, H. PLoS ONE 11, e0165999 (2016).

  3. Wicherts, J. M. et al. Front. Psychol. 7, 1832 (2016).
