Almost every academic newsletter I receive nowadays includes announcements of somebody winning an award. But beneath their shiny surface, such announcements carry a whiff of ambiguity. I start thinking about how winners are selected, who is left out and why, and whether the research community could do this award-giving business better.

Research that my colleagues and I have done shows that things could indeed be done a whole lot better. Scientific prizes are plagued by opaque and seemingly biased selection criteria. This needs to change. Done right, such awards could provide an opportunity to recognize and value transparent and robust research, and to build a more inclusive and trustworthy way of doing science.

Big awards, such as Nobel prizes, are often surrounded by public controversy because of a lack of diversity among the winners, and because the selection processes and policies are inequitable or lack transparency. We focused instead on the innumerable smaller awards, administered mainly by journals and learned societies, with categories such as ‘best paper’ or ‘most promising young researcher’. These don’t get as much attention, but they are often stepping stones for career advancement, especially for early- and mid-career researchers. Such awards can filter and reinforce what is considered excellent research.

My team and I — a diverse group of volunteers representing six continents and many career stages — set out to gather data about the transparency and declared values of these smaller, research-focused awards.

We started with an international selection of 13 ‘best researcher’ and 10 ‘best paper’ awards in my area of research, ecology and evolution. The results were published this year (M. Lagisz et al. Nature Ecol. Evol. 7, 655–665; 2023). A larger team has since expanded the assessment to a broad sample of 222 best-paper awards across all disciplines, the results of which were posted as a preprint on 12 December (M. Lagisz et al. Preprint at bioRxiv https://doi.org/k8rr; 2023).

We found that descriptions of the selection criteria and processes used for these awards are generally short and vague. Often, no contact information is given should you wish to request more information. Around half the awards surveyed in our latest study had journal editors involved in nominating or selecting the winners, but 91% did not state how potential conflicts of interest would be handled.

Furthermore, award descriptions rarely mention concepts that align with open science — the movement to make science accessible to all. Only one award, a rare positive example, included ‘transparency of the methods’ among its evaluation criteria.

Of the 222 awards, 21 mentioned considering impact metrics — counts of citations or downloads — in their selection process. Concerningly, eight used such measures as the sole criterion for selecting the ‘best’ paper (there is a separate class of ‘impact’ awards, but we did not include those in our analysis). And although many scientific organizations and institutions publicly claim to be committed to equity, diversity and inclusivity, only two of the 222 awards mentioned related values or policies in their award descriptions and selection processes.

The lack of explicit standards for evaluating science allows assessors to vary their scores depending on the identity of the nominees. Such biases can be compounded when potential or actual conflicts of interest exist and are not managed. Awards that rely on simplistic metrics, such as citation counts, contribute to an academic ‘Matthew effect’ — ‘to those that have, more shall be given’. As with other indicators of scientific esteem, including the numbers of articles published and grants obtained, citations come more easily to some scientists than to others, helping them to secure promotions, jobs and further funding, and snowballing into more and bigger awards.

Our data show that between 2001 and 2022, 61% of individual winners were men. Although that finding might align with broader employment patterns in research, we found no discernible trend towards a greater representation of women. Some 48% of winners were affiliated with US institutions. Researchers based in low- and middle-income countries made up just 11% of winners, with more than half of these based in China. This imbalance was particularly marked in the earliest part of the study period, from 2001 to 2010.

Omnipresent prizes and awards reflect scientific communities’ values ‘in action’. We have concluded that they currently fail to match global calls for improving transparency and equity in science. Changing how awards operate and what they reward could incentivize better research practices and support the drive to open science. Given the slow progress in addressing the many biases prevalent in academia, historically under-represented and marginalized groups stand to benefit if award-giving institutions reduce ambiguity and explicitly foster equitable access and assessment practices.

So, next time you see another award announcement, maybe reflect on whether that prize contributes to the reproducibility crisis and to the biases rampant in academia. Is it transparent and equitable? Does it recognize robust and reproducible science? And if you are one of the many people who manage existing awards or are working to establish new ones, now is the time to act and to embrace the principles and values of a more inclusive science.