Barbara A. Spellman hails an analysis of reproducibility in psychology by a champion for change.
The Seven Deadly Sins of Psychology: A Manifesto for Reforming the Culture of Scientific Practice
An angiogram and computed-tomography scan of a man's brain, used to locate his language centre.
Chris Chambers's portrait should sit high on the wall of heroes in the movement to reform science. A cognitive neuroscientist and psychologist, Chambers has had an important role as an editor and advocate in identifying, challenging and changing practices responsible for the reproducibility crisis.
Many fields of science — social, life, physical and medical — have had to acknowledge in recent years that much of their published research is not replicable (see Nature 543, 619–620; 2017). Psychological science was hit hard by that problem early this decade. But it quickly joined the vanguard of reform. In The Seven Deadly Sins of Psychology — part history, part autobiography, largely manifesto — Chambers identifies some “sins”, from biased reasoning to outright fraud, that led us to this point. And he describes specific reforms, some already well under way, that will make science more transparent, accessible and reproducible. As he shows, the sins are (mostly) not those of individual scientists, but of the processes and incentive structures under which scientists work.
Chambers ably illustrates these failings with tales from psychological science. The first chapter describes a 2011 paper by social psychologist Daryl Bem that reported nine experiments demonstrating evidence of precognition — the ability to predict the future (J. Pers. Soc. Psychol. 100, 407–425; 2011). Published in the American Psychological Association's prestigious Journal of Personality and Social Psychology, it left many psychologists outraged. The article had followed the rules of scientific practice. Its studies all supported the same hypothesis; its methods included randomization and standard-looking data analyses. But close scrutiny of the paper and subsequent failures to replicate the studies (plus the unwillingness of journals to publish those failures) revealed many of the sins.
The sin that makes the biggest news splash is outright fraud — changing or fabricating data, or making up an entire study. It may be the least interesting (and, we hope, the least prevalent) sin, but it is illustrated by Chambers's tale of a psychologist who did just that — at least 58 times. If you stop to ask why a scientist would commit fraud, the perverse nature of scientific processes and incentives is revealed. To get jobs, promotions, grants or fame, a scientist must publish in high-visibility journals. That is not straightforward. It is not enough to invent an experiment; design, run and analyse it flawlessly; and write a paper that describes the results clearly. What's also needed are beautiful data that tell an unblemished, consistent story in support of the hypothesis. No amount of care or cleverness can guarantee such results.
Even honest researchers may consciously or unconsciously engage in lesser sins to get better-looking (rather than better) results. They might use biased reasoning and hidden flexibility in deciding how to collect (or not) more data, in reporting (or not) all measures and findings, and in modifying their hypotheses to fit the data. Such decisions might be good for scientists in the short run. They are not good for science.
Chambers is a fan of preregistration as a corrective. Under this model, scientists describe a proposed study in detail, submit it to a journal for review and potentially get an 'in-principle acceptance'. After they run the study and make the data publicly available, a manuscript that fulfils the proposal will be published — regardless of how the data turn out. This could help to reduce many of the sins, such as bias, flexibility and some fraud; file drawers filled with studies that 'don't work'; and bean counting, or evaluations concerned with numbers (such as quantity of publications) rather than quality.
Preregistration can't ameliorate all the sins, but Chambers provides examples of concrete steps that a variety of stakeholders can take: be more aware of potential biases; share methods and data; push for and reward transparency and openness.
This book is written for anyone curious about how science might repair itself. It should be required reading in university courses on research methods. And it's for publishers, grant funders, journalists and science writers. Enabling, creating and disseminating good science is a vast cooperative endeavour.
Is there still a crisis? Certainly more research will be found to be unreplicable and more theories will unravel. Yet it's key to recall that 'crisis mode' in an epidemic can dissipate as successful treatments evolve, even when new cases arise. As Chambers shows, we think we know the causes and can abate much of the problem. Crisis phase over (in my view). But there is still much work to be done.