We can quantify exactly how much faster Usain Bolt is than the next-fastest sprinter. It's much harder to say who is the best scientist, let alone how much better they are than the next-best scientist. Deciding who deserves recognition is, at least in part, a judgement call.

On my optimistic days, I can believe that, despite all the noise, there's still a reliable signal: that we mostly manage to publish, fund and hire the people who do the best research. As an editor, peer reviewer and grant reviewer, I have spent hours making consequential choices about others' work. It would be demoralizing to believe that I might as well have flipped a coin.

On my more cynical days, I worry that we scientists have far too much faith in our abilities to distinguish the truly excellent. Too often we assume that researchers with more grant money, awards, publications and citations must be better than the rest. Eminence, by which I mean prestige for a specific accomplishment, position or award, is given much more weight than it deserves.

I don't deny that most eminent scientists are very good at what they do. But I think that is equally true for tens of thousands of scientists who toil away mostly in obscurity. Science is difficult and important, and we should recognize the people who do it well. But concentrating recognition among a select few might not be justified, and it could damage science.

For eminence to be a meaningful signal, judgements of excellence need to be accurate, and it turns out that excellence is hard to judge (see go.nature.com/2qalo2l). There's often spirited disagreement about which manuscripts to accept, which grants to fund or which candidates to hire. As a journal editor, I once handled a paper that all the reviewers judged unworthy of publication because it lacked novelty. I thought it was scientifically sound and published it anyway. To my surprise, that paper received more attention than almost any other from that journal. It was covered in dozens of news articles and downloaded thousands of times within six months.

The fact is that separating shoddy work from solid work is much more straightforward than distinguishing the top 5% of solid work from the next 5%, yet it is that finer distinction that often makes the difference between a favourable and an unfavourable decision. Given these difficulties, decisions about who gets rewarded cannot come down to quality alone. So, what else drives them?

Scientists are human, and thus susceptible to biases. One of the most powerful is status bias, in which recognition is awarded partly on the basis of past recognition (so a scientist is more likely to get a publication accepted if he or she has a track record of good publications). This is essentially the 'rich get richer' phenomenon, or 'Matthew effect', described nearly half a century ago by the sociologist of science Robert Merton (R. K. Merton Science 159, 56–63; 1968).

When eminence begets eminence, noise in the system gets amplified. There's an element of luck to who ends up having the most success, and that luck will build on itself.

Even if status bias did not exist, other personal biases would factor into decisions. When there is no objective basis for choosing one qualified candidate over others, people naturally fall back on subjective preferences. A selection committee might consciously or unconsciously favour certain research topics, groups of people or even individuals. From there on, past awards increase the chance of future awards. That can widen inequalities.

Favouring elite scientists when evaluating papers and proposals is like giving Usain Bolt a 10-metre head start in his next race because he won his last five. It incentivizes scientists to present themselves and their results in the best light possible, to shun transparency and to deflect criticism. Those tendencies contribute to reproducibility problems.

What's the solution? We cannot eliminate prestige. One partial cure is to admit up front that judgements of eminence are often subjective. From there, we can move on to a harder task: rather than relying on heuristics such as the prestige of a researcher's university or their previous recognition, let's read people's work and evaluate each study or proposal on its merits.

One trick I use to avoid status bias is to keep myself blind to the authors' identities for as long as I can — a strategy that many journals in social and personality psychology have also adopted. Once I tried this, I realized just how much I had been using authors' identities as a short cut. Assessing research without this information — knowing that I might be harshly criticizing a famous person's work — is nerve-racking. But I'm convinced it's the best way to evaluate science.

Whenever possible, the scientific community should look for ways to reward work by making solid, broad distinctions and avoiding fine-grained judgements about who is the best of the best. In fact, two biomedical researchers have proposed that grant reviewers should strive to identify only the top fifth of grant proposals, with the final awards decided by lottery (F. C. Fang and A. Casadevall Science 352, 158; 2016).

Eliminating status bias completely might be impossible, but I recommend that everyone try. Let's focus less on eminence and more on its less glamorous cousin, rigour.
