

I critiqued my past papers on social media — here’s what I learnt

Every year in June, I discover that the most self-critical scientists are final-year undergraduates. In the results section of their dissertations, they mercilessly apply the rules that we teach them. Their discussions are largely of limitations, catalogues of failure. Their conclusions can be brutal.

Somewhere between graduating and beginning our careers, we researchers seem to lose this flair for self-criticism. We become more invested in publications, grants and jobs. These incentives drive us on. But they also narrow our vision. As investment in our work deepens, we become blind to its faults.

The incentives of academic life seem to require that we abandon self-criticism. Papers are typically written after the results are known, as if everything worked as expected. Manuscripts are often submitted without acknowledgement of limitations, perhaps to be added later if reviewers request. Guidelines for funding applications offer little opportunity to talk about error, uncertainty or failure. Candidates applying for academic jobs rarely discuss experiments that didn’t replicate, rejected papers or unsuccessful grant applications. It is as though any admission of fallibility will be treated harshly by reviewers.

A scientific record that includes only successes is incomplete. Failure, error, reflection and self-correction are too rarely published. If we are not honest about our mistakes, scientific progress will be slowed.

On Good Friday this year, traditionally a time of self-reflection in the Christian calendar, I began critiquing my own scientific record — writing down something critical about each of my publications. Much of my career, my writing and now my podcast, ‘The Error Bar’, has been spent criticizing others’ work.

In 57 tweets, I recalled the worst things about each of my publications. What I did wrong, what I wouldn’t repeat, what would work better. “The effect size we studied was too small to be worth any further study,” I wrote of one. “Too many behavioural tasks,” I noted of another.

I didn’t know it at the time, but I was following the example of psychologists engaged in the Loss-of-Confidence Project, who seek to encourage scientific self-correction (J. M. Rohrer et al. Perspect. Psychol. Sci.; 2021).

Like them, I found this reflection enlightening — it highlighted my mistakes and removed a weight of self-doubt. I now worry less that I’ve missed something big, or got something very wrong.

Most illuminating were other scientists’ reactions. Some told me it was “brave” or “crazy”. I understand why, but that reaction is troubling. Self-criticism in science is desirable, so a system that discourages authors from doing it needs fixing. How should we begin?

To start, be your own harshest critic. On social media or PubPeer, be explicit about the weaknesses of your work. When reviewing or editing others, remember your own failings, the constraints on your work and the incentives that drove you. If you discover serious errors, be willing to correct or retract. This could be a positive process of study, development and communication rather than hiding, moving on or doubling down. One excellent example is from Sam Schwarzkopf’s laboratory at University College London, which retracted a brain-imaging paper after discovering, studying and publicizing their analytical errors (B. de Haas Nature 589, 331; 2021).

In addition, the systems of publication, funding and employment need to nurture and reward such honesty. Open Science lends itself to self-criticism and self-correction. When you pre-register a study, you specify what you’re going to do and how. When you publish, you either confirm that you did it or explain why you didn’t. This mode of publishing encourages honesty and transparency. On publication, making data freely available improves transparency; acknowledging errors improves the public’s perception of trustworthiness in science. The Journal of Trial and Error, which launched last year, encourages authors to reflect on errors and discuss them when they inevitably come to light. But science shouldn’t need separate journals for reporting failures — it must be part of normal practice.

People applying for research funds could be required to include criticism of their own work, and to deal explicitly with alternative hypotheses. Those who are lucky enough to receive funding should then report on the project’s errors and failures, not just on its outputs and successes. Hunting out our own weaknesses will make us better scientists.

In 2019, my institution, the University of Nottingham in the United Kingdom, signed the San Francisco Declaration on Research Assessment. This committed us to stop using journal impact factors and similar metrics to assess individuals. This changed our hiring, evaluation and promotion criteria. To build on these improvements, we could ask candidates to engage in self-criticism, to say what they would now do differently. We could request a ‘negative CV’ — a list of failed applications and rejected papers.

By rediscovering our inner undergraduate, reflecting on our errors and opening up our science to scrutiny, we can free ourselves from the fear that our failings might be uncovered.

Nature 595, 333 (2021)


Competing Interests

The author declares no competing interests.

