How many manuscripts is it reasonable for a scientist to peer review in a year? Many researchers would estimate two or three dozen; Malcolm Jobling, a fish biologist at the University of Tromsø in Norway, says that he has racked up more than 125 already this year. How do we know? A welcome movement is under way to publicly register and recognize the hitherto invisible efforts of referees.

Jobling’s staggering total is revealed at Publons, a New Zealand-based start-up firm that encourages researchers to post their peer-review histories online (for an interview, see Nature http://doi.org/wbp; 2014). Publons is not the only attempt to recognize and reward academics for their refereeing activity. As Nature noted last year (see Nature 493, 5; 2013), publishers are increasing their efforts to reward assiduous reviewers. The Nature journals give a free subscription to anyone who has refereed three or more papers in a year for them, and allow peer reviewers to download a statement of work. Similarly, science publisher Elsevier this year launched a system to formally recognize its peer reviewers, and to give rewards to ‘outstanding reviewers’ — those who have reviewed the most papers.

Unlike Publons, which hopes to establish cross-publisher profiles, individual publishers restrict their activities to their own platforms. But publishers are taking part in broader talks to establish standards to publicly record peer-review service in a researcher’s ORCID (Open Researcher and Contributor ID) profile. Those discussions, under the auspices of the Consortia Advancing Standards in Research Administration (CASRAI), an international non-profit group, are also looking at ways to record other types of peer review — including reviews of grant applications, conference abstracts, service as a journal editor and institutional benchmarking (for example, serving on the panel of a national research audit such as the UK Research Excellence Framework).

Researchers could use their reviewer records to highlight their expertise for employers and government agencies. If enough information can be publicly revealed, it could shed more light on the average number and types of reviews undertaken by scientists, who increasingly complain that they are overwhelmed with peer-review requests.

The final direction of the drive to publicly record and reward peer review is far from clear. Publons — among others — hopes that there will be more cases of open, signed reviews (which will make it easy to recognize a referee’s contribution). Yet the majority of pre-publication reviews remain private: many researchers are uncomfortable about being publicly revealed as the author of a critical review because of the fear of subtle reprisals in other areas of their career. Unless this culture shifts, efforts will stay focused on allotting credit for reviews whose text and author remain secret.

Recording the number of reviews is only the start. A well-considered review that substantially improves a paper can take days — whereas a sloppy reviewer could dash off assessments of many papers in a few hours. So the next challenge in publicly recognizing peer review will be to find a way to assess quality. Many journal editors already have an informal idea of their ‘good’ and ‘bad’ reviewers, which in some cases can be quantified by response time. But these judgements are not usually shared with colleagues, and may differ from one editor to another. Lutz Prechelt, an informatics researcher at the Free University of Berlin who is advising Elsevier on its programme, has suggested that both authors and editors could be asked to mark the helpfulness and timeliness of a review. But it will be important to ensure that the benefits of this system are not drowned by the bureaucracy involved.

Efforts to publicly recognize peer review are still in their infancy. But as attempts to acknowledge and reward a crucial role, they should be applauded.