Nathalie Percie du Sert, who developed the Experimental Design Assistant. Credit: Wellcome Trust

A free online tool that visualizes the design of animal experiments and gives critical feedback could save scientists from embarking on poorly designed research, the software's developers hope.

Over the past few years, researchers have picked out numerous flaws in the design and reporting of published animal experiments, which, they warn, could lead to bias. In response, hundreds of journals have agreed to voluntary guidelines for reporting animal studies: checklists of best practice, such as what statistical calculations to use to ward off error.

But these lists kick in after scientists submit a paper, says Nathalie Percie du Sert, who specializes in experimental design at the National Centre for the Replacement, Refinement and Reduction of Animals in Research (NC3Rs) in London. “When you get to the reporting stage, that's a bit too late,” she says. “We want researchers to think about these issues at the design stage.”

Percie du Sert's solution is a programme called the Experimental Design Assistant (EDA), which launched in October 2015. She hopes that it will help to improve the quality of animal research and perhaps even become an integral part of the conduct of animal studies.

The EDA allows scientists to create a visual representation of an experiment by laying out its key elements (hypothesis, experimental method and planned analysis) in logically connected, coloured boxes. The software then uses a built-in set of rules to spot potential problems and suggests refinements. These may be simple, such as flagging that the researcher hasn't specified how animals will be randomized to the control or treatment arm, or more complex, such as identifying potential confounding variables across the two arms. The tool can also assist scientists with randomization itself, or with calculating the sample size needed to ensure a statistically robust result.
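To give a feel for the arithmetic behind those last two features, here is a minimal sketch in Python. It is not the EDA's own code: the function names (`sample_size_per_group`, `randomize`) are illustrative, and it assumes the textbook normal-approximation formula for a two-sided, two-group comparison rather than whatever method the EDA actually implements.

```python
# A minimal sketch (not the EDA's implementation) of two calculations the
# tool assists with: a sample-size estimate for a two-group comparison,
# and random allocation of animals to study arms.
import math
import random
from statistics import NormalDist


def sample_size_per_group(effect_size: float, alpha: float = 0.05,
                          power: float = 0.80) -> int:
    """Animals needed per group for a two-sided, two-sample comparison,
    using the standard normal approximation; effect_size is Cohen's d."""
    z = NormalDist().inv_cdf
    n = 2 * ((z(1 - alpha / 2) + z(power)) / effect_size) ** 2
    return math.ceil(n)


def randomize(animal_ids, arms=("control", "treatment"), seed=None):
    """Shuffle the animals, then deal them round-robin into the arms."""
    rng = random.Random(seed)
    ids = list(animal_ids)
    rng.shuffle(ids)
    return {arm: ids[i::len(arms)] for i, arm in enumerate(arms)}


# Detecting a large effect (Cohen's d = 1.0) at 80% power and 5% alpha
# needs roughly 16 animals per group under this approximation:
print(sample_size_per_group(1.0))        # -> 16
print(randomize(range(1, 21), seed=42))  # reproducible allocation
```

Note that the normal approximation slightly understates the numbers an exact t-test calculation gives at small sample sizes, which is one reason dedicated tools are preferable to back-of-the-envelope estimates.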

There's nothing fundamentally new in the EDA, says Percie du Sert: it builds on existing knowledge of good experimental design. But it can aid scientists who have little training in the area, she says, and guide them through the design choices they need to make.


Since the EDA's launch, around 400 accounts have been created to use it, producing between 50 and 100 experimental diagrams in total each month, says Percie du Sert. She does not have access to detailed information about its users; the sensitivities around animal research and the need to protect researchers mean that data on who is using it, and how, are secured.

The Wellcome Trust's Sanger Institute, a genome-research centre in Cambridge, UK, is rolling out an internal training programme that includes lessons on experimental design and on using the EDA; the institute is also encouraging staff to use the software to present experiments to ethical-review committees, says Natasha Karp, a biostatistician there. Karp took part in the working group that oversaw the tool's development, and says that she uses it to visualize the experiments of the biologists whom she supports.


The EDA is not the only software that aims to improve research quality and reproducibility. Other tools check manuscripts before publication for issues such as formatting errors or omitted P values: Penelope is a paid-for service aimed at journal publishers, and WebCONSORT, which is not yet freely available, is being tested as a way to improve the reporting of clinical trials. Protocol Navigator, a free web application created by scientists at Cardiff University, UK, also produces visual experiment maps that can be shared.

But the EDA specifically targets animal research, and as such, is unique in its ability to give a rapid overview of the design and analysis of animal experiments, says Karp. “There isn't anything else quite like this system.”

Percie du Sert hopes that a visual representation of experiments could become common practice, used in research papers or lab meeting presentations. Eventually, the EDA might even produce time-stamped versions to prove that an experiment was conducted and analysed as designed, she adds, rather than being the product of a scientist searching for meaning in data after the fact — a frowned-upon practice sometimes called HARKing ('hypothesizing after the results are known').

The online tool can seem a little complicated, says Jeffrey Mogil, who studies pain at McGill University in Montreal, Canada. “But I actually think that people might get a big kick out of using this,” he says. “It looks like a cool way to break in new grad students or teach the scientific method to undergrads.”