In a recent lab meeting, we discussed whether it would be a greater setback to science to lose all published papers or to lose the collected knowledge of living scientists. My answer was the latter, on the premise that real knowledge and judgment—about which ideas are true, which findings are important, which experiments are worth doing, etc.—exist in the collective minds of scientists, not in the published literature, which is only an imperfect and incomplete record of how we developed that knowledge and understanding. This answer is my basis for arguing that forcing PhD students to publish—whether explicitly as a formal degree requirement or implicitly in the way we guide students and make hiring decisions—is cynical and corrosive.


Narrow emphasis on publication is cynical in that it pushes students to think in instrumental terms about their research: experiments as a way to get publications rather than to learn; impact factor mistaken for scientific value. Knowledge and understanding then become merely happy by-products of achieving the goal of publishing, when it should be precisely the other way around. Disseminating findings and ideas is clearly a key part of what we do as scientists, and PhD students should be trained to present their research in a variety of media. But if our fundamental aim—for ourselves and for the training we give students—is to develop understanding and scientific judgment, then this aim should be reflected directly in the way we evaluate and incentivise PhD students: they should be allowed the time and space to cultivate their ideas and insights, knowing that this is the proper basis for their scientific career.

As academics, we often resent being judged according to simplified metrics: how many papers, in which journals, with what impact factors, based on how many dollars of funding. We likewise lament when our undergraduate students focus on their grades and ask what they need to know for the test, rather than engaging in deeper scholarship. We can't make these complaints with integrity if we simultaneously choose to support and implement systems for evaluating PhD students that similarly confuse proxies for scientific progress (i.e., published papers) with progress itself, and that cynically allow us to lean on crude metrics rather than expending the time and effort to exercise our academic judgment.

Worse still, forcing students to publish—rather than encouraging them to publish when their ideas are mature—is corrosive to our scientific culture. A few years ago, in more innocent times, the pressure to publish seemed problematic mostly in terms of diluting the literature with an ever-rising tide of weaker, less well-thought-through papers, inducing a sinking feeling that it is impossible to keep up and, behind the scenes, adding to a growing burden of time, energy and money spent on the review and publication process. But it didn't seem wrong as such, just dispiriting, and the arguments above could, perhaps reasonably, have been dismissed as misplaced nostalgia or idealism.

Now, as concerns about reproducibility continue to grow, it seems implausible that pressure on students to publish hasn't been part of the cultural problem. The statistician Andrew Gelman calls this the Armstrong principle: if you push people to promise more than they can deliver, they're motivated to cheat (https://statmodeling.stat.columbia.edu/2017/06/08/lance-armstrong-principle/). In this case, we are over-promising on behalf of students (an even worse sin) when we expect or require them to achieve outcomes that are outside their control, given the vagaries of data and peer review. As such, we are incentivising students to cheat in all the subtle and self-defeating ways that are being painfully exposed as endemic in our scientific culture.

What can we do to resist this cynicism and corrosion? At an individual level, there is no value in martyring our own students by wilfully ignoring the criteria by which, in reality, they will be judged. Similarly, although I've toyed with the idea of declining to examine for any PhD programme that formally requires students to publish, I don't see a plausible way that this could effect useful change. So instead I try to help my own students by giving them the time and confidence to resist the perceived pressure to publish too thinly and too quickly. And, in hiring decisions, I look beyond the publication lists on CVs to focus instead on skills-based assessments (such as reviewing a manuscript or completing a programming exercise) and on my own evaluation of candidates' written work—as it appears in thesis chapters and preprints, as well as in published papers—judging it for its creativity, depth and rigour, not for the impact factor of the journal it appeared in or the number of citations it has attracted.

I’ll continue to refine this approach as I wait for the cultural shift around reproducibility to exert its influence on the evaluation of PhD students’ research. I’m optimistic that this shift will force a re-evaluation of the values that we apply to and instil in PhD students, with less weight placed on publications as an unhelpful proxy for successful training and future promise, and with greater weight placed on other, equally important ways that students can contribute to the collective understanding that is the proper goal of scientific research: exercising their hard-earned academic judgment constructively in their choice of research questions, in their scientific integrity and rigour, in their interactions with colleagues in departments and at conferences, in their peer reviews, and in the way that they in turn train their own students.