When psychologist Courtenay Norbury came across a paper this week that had similar conclusions to research she published 12 years ago, she turned to social media with a question. Norbury, who studies children with autism spectrum disorders at University College London, tweeted:
How many times does a research finding need to be replicated before the field says "ok, how do we move this forward?"— Courtenay Norbury (@lilacCourt) February 23, 2016
Dorothy Bishop, a developmental neuropsychologist at the University of Oxford, UK, who helped to write a report on how to improve the reliability of biomedical research, tweeted in response that some fields can get stuck on the same research questions:
the opposite of the reproducibility crisis! Stasis. And yup it's a problem in some areas https://t.co/tzXMUQ6h8C— Dorothy Bishop (@deevybee) February 23, 2016
The problem of irreproducibility in science has gained widespread attention, but one aspect that is discussed less often is how to strike the right balance between replicating findings and building on ones that are already well established. Norbury has for years reported that people who have autism and poor language skills can find it difficult to make inferences, decipher ambiguous phrases and understand metaphors or jokes. She has also shown that this is not necessarily the case for people with autism who are more linguistically capable1, 2. The latest study, published in Research in Developmental Disabilities on 18 February3, mirrored her work by suggesting that people with autism who have good language skills do well on such tasks.
Psychologists Melanie Eberhardt at the University of Cologne in Germany and Aparna Nadig at McGill University in Montreal, Canada, authors of the latest study, acknowledge that this evidence is clear and convincing for researchers in this area, but say: “We find that unfortunately there are many lay, professional and academic circles where this result is still not understood.” They add, “Therefore we found it important to add to the convergent evidence on this question.”
“Replication is important but it would be nice sometimes to take a bigger leap forward,” says Norbury. She adds that the next “obvious” step in this research involves intervention, which is challenging to do. “But I’d love to start thinking of how to overcome these obstacles, rather than just repeatedly demonstrating that language impairment has negative impacts for children with autism,” she says.
Brett Buttliere, a research assistant at the Leibniz Institute for Knowledge Media in Tübingen, Germany, tweeted:
@deevybee this is also a large problem in Psychology as well, in my opinion! :D— Brett (@BrettButtliere) February 23, 2016
“It is obvious that not publishing when something doesn’t work is bad, but so is doing the same thing over and over again without learning something new,” Buttliere said in an interview.
Virginia Barbour, executive officer of the Australasian Open Access Support Group in Brisbane, Australia, also posted her observations on Twitter.
In response, Bishop later tweeted a link to a paper published in The Lancet that looked at reducing waste in biomedical research4. The 2014 article said that many studies are done without referencing systematic reviews of the literature, which leads to waste.
Paul Glasziou, a clinician and researcher at Bond University in Queensland, Australia, who is a co-author of the Lancet paper, says that the bar for reproducibility can be set at different heights. For example, he says, the US Food and Drug Administration requires a minimum of two positive randomized controlled trials to show the effectiveness of a new drug — a rule that Glasziou says is “reasonable” as long as the trials are “well done and adequately powered”. He adds, however, that clinical studies are usually repeated more than once, pointing to one analysis of systematic reviews that found that a review cited on average 16 similar papers5.
One issue could be that researchers are unaware of similar studies. For example, a study of 1,523 clinical-trial reports published between 1963 and 2004 found that, on average, each report cited fewer than 25% of the relevant previous trials6.
Barbour added in an interview that one way of deciding whether a claim has been reproduced enough could be to analyse reviews and meta-analyses of studies from the same field.
One way to address the problem would be for journals to use novelty as a criterion for publication, notes Norbury, who is an editor of the Journal of Child Psychology and Psychiatry. Many researchers, by contrast, have called for journals to de-emphasize new findings and instead publish numerous replications; Norbury says that she agrees “to a certain extent”.
But, she adds, “my heart sometimes sinks when I see yet another paper exploring what I consider to be well-trodden ground. I’m not looking for ‘sexy’ findings, but I am looking for something that has the potential to change practice or move the field forward.”