Multi-lab efforts point the way to shoring up the reliability of field studies.
In an unusual reproducibility effort, 14 ecology labs across Europe have teamed up to watch grass grow, using identical soil and seeds shipped round the continent. Their study, posted on the bioRxiv preprint server on 8 August (A. Milcu et al. Preprint at bioRxiv http://dx.doi.org/10.1101/080119; 2017), is part of a budding movement to bolster trust in ecological research.
Following the reproducibility crisis that has gripped psychology and biomedical science, ecologists are starting to recognize that their own field might not be immune to doubts over the reliability of its findings. But few replication studies have been done. Ecologists Clint Kelly of the University of Quebec in Montreal, Canada, and Rob Lanfear of the Australian National University in Canberra found only a handful of replication efforts among the thousands of studies they analysed, Kelly told the annual meeting of the Ecological Society of America last week in Portland, Oregon.
Some ecologists have proposed replication efforts. In May, Emilio Bruna, a plant ecologist at the University of Florida in Gainesville, circulated a proposal to replicate ten seminal experiments in tropical biology that have never been repeated. They include, for instance, a 1967 study showing how ants and acacia trees benefit one another (D. H. Janzen Univ. Kansas Sci. Bull. 47, 315–558; 1967), which has influenced theories about other mutualistic relationships.
But many ecologists are sceptical about repeating field experiments, Bruna says — arguing that the results of ecological field studies can never come out the same. A study conducted in a Panamanian forest, for example, may not translate to Costa Rica; even if the trees are the same species, herbivores may vary.
“There are people who say, ‘Every study is a snowflake. If you replicate my study and you don’t find the same results, it’s because conditions are changed and it’s meaningless,’” says Tim Parker, an ecologist at Whitman College in Walla Walla, Washington, who is working on a teaching module that would enlist undergraduates to help with replication efforts.
The objection holds even for replicating simplified model-ecosystem experiments conducted in the lab, says Alexandru Milcu, an ecologist who led the grass-growing study at the Ecotron in Montferrier-sur-Lez, France, run by the country’s basic-research agency, the CNRS. “We can try to standardize everything but we will never take into account all the variables that affect an experiment — some of which we don’t even know,” he says.
So Milcu’s suggestion, at least for lab work, is to make a virtue of including variation in experiments, which may make conclusions more robust. That idea is inspired by a 2010 mouse study, which found that behavioural tests yielded fewer spurious results when factors such as the age of the mice or the size of their cages were deliberately varied (S. H. Richter et al. Nature Meth. 7, 167–168; 2010).
Milcu and his colleagues examined the tenet that grass grows better when it is paired with legumes (which snatch nitrogen from the atmosphere and enhance soils) — a widely supported theory that is the basis for crop rotation using leguminous plants such as clover. In one set of experiments, the 14 labs controlled every variable as best they could; in the second set, they deliberately planted mixed strains of grass in varying soil conditions.
The study, which has not yet been peer reviewed, finds that labs were more likely to come to similar conclusions about the benefits of growing grasses and legumes together in the less-controlled experiments. When each lab’s data are made noisier, a robust effect that generalizes across many conditions is more likely to stand out, Milcu says. The strategy could lead to ecologists missing small but significant local effects in particular systems, he acknowledges — but he says that this could be remedied by doing larger experiments. How the idea could be translated to field work isn’t clear, although Milcu thinks it is worth exploring.
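The intuition behind deliberate variation can be seen in a toy simulation (this is an illustrative sketch, not the authors’ analysis; the functions, effect sizes and noise levels are invented). Suppose the measured benefit of pairing grass with a legume depends on an unmeasured environmental condition. If each lab fixes its own conditions, labs estimate the effect precisely but at different points in condition space, so their answers disagree; if each lab deliberately varies conditions across pots, every lab averages over the condition space and the estimates converge:

```python
import random
import statistics

random.seed(42)

def legume_effect(condition):
    # Hypothetical relationship: the growth benefit of a legume
    # partner depends on an unmeasured condition (e.g. soil moisture),
    # here drawn from a standard normal distribution.
    return 1.0 + 0.8 * condition

def lab_estimate(conditions):
    # A lab's estimated effect: mean over its pots, each pot adding
    # a little measurement noise to the true effect for its condition.
    return statistics.mean(
        legume_effect(c) + random.gauss(0, 0.2) for c in conditions
    )

n_labs, n_pots = 14, 30

# Standardized design: each lab holds one (lab-specific) condition fixed.
standardized = [
    lab_estimate([random.gauss(0, 1)] * n_pots) for _ in range(n_labs)
]

# Heterogenized design: each lab deliberately varies conditions across pots.
heterogenized = [
    lab_estimate([random.gauss(0, 1) for _ in range(n_pots)])
    for _ in range(n_labs)
]

# Between-lab spread of estimates: smaller spread means labs are more
# likely to reach the same conclusion.
print("standardized spread: ", statistics.stdev(standardized))
print("heterogenized spread:", statistics.stdev(heterogenized))
```

In this sketch the heterogenized design shows a much smaller between-lab spread, mirroring the study’s finding that labs agreed more often in the less-controlled experiments — at the cost of each lab’s own data being noisier.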
Bruna’s ecology replication initiative — proposed in association with the Center for Open Science in Charlottesville, which has coordinated high-profile replication initiatives in psychology and cancer biology — still lacks funding. He estimates that it could be pulled off for US$600,000. Such funding would be better spent rigorously testing fundamental theories using updated approaches, rather than repeating old studies, argues Stefan Schnitzer, an ecologist at Marquette University in Milwaukee, Wisconsin. “I think we need to replicate ideas and fundamental theory rather than replicate a single study. I’m not sure how that will advance the field,” he says.
But Bruna thinks that some ecologists’ resistance to replication studies comes from a misunderstanding over their goals, as well as their consequences for ecological theory. Replication studies needn’t be perfect facsimiles to be useful, he says: just because an effect observed in one tropical ecosystem doesn’t seem to hold true in another does not mean the original study was wrong. “In ecology we want to explore why there is so much variance,” Bruna says. “No pun intended here, but people don’t seem to be able to see the forest for the trees.”
Callaway, E. Why 14 ecology labs teamed up to watch grass grow. Nature 548, 271 (2017). https://doi.org/10.1038/548271a