Introduction

Track record evaluation acts as one of the most powerful gatekeepers in academia. From PhD and postdoc positions through to faculty appointments and the awarding of grants and prizes, at every step the researcher’s curriculum vitae (CV) is scrutinized to promote or stymie the next step in their career.

Over the last decades, the exponential proliferation of research and research outputs has come to threaten the quality of this fundamental selection process. The increasing burden on evaluators to review ever more track records has spawned the widespread use of evaluation “shortcuts”. Rather than having their work assessed on content and quality, researchers are instead judged on inadequate proxies such as their seniority and renown, the number of papers they have authored, the impact factor of the journals in which these papers were published, and their accumulated citations.

If evaluators rely too heavily on such shortcuts, the shortcuts themselves become targets for researchers to aspire to (Mueller & De Rijcke, 2017; Smaldino & McElreath, 2016). Accounting for seniority and fame propagates the Matthew effect (Merton, 1968), whereby the already established are more likely to receive further resources while young, up-and-coming researchers are disadvantaged. Counting publications prioritizes quantity over quality. Emphasizing high-impact journals favors storytelling and “en vogue” areas of research over equally substantive but less conspicuous work. Projects that do not result in publications receive less attention, despite contributing equally to the advancement, integration and application of knowledge. Combined, these behaviors thwart innovation, societal impact, responsible research practices, the diversity of research and research careers, and the very quality of research as a whole.

The San Francisco Declaration on Research Assessment (DORA, 2014), the Leiden Manifesto (Hicks et al., 2015), the Metric Tide (Wilsdon et al., 2015) and others (Moher et al., 2018; 2020) have turned a spotlight on these malpractices and presented pathways to improvement. However, many of these efforts focus on the evaluation of research publications and outputs. This policy comment aims to draw attention to the format in which such achievements are presented for evaluation in the first place: the academic CV.

The CV Harmonization Group (H-Group) was founded by the Swiss National Science Foundation and the Open Researcher and Contributor ID organization (ORCID) as a spin-off from ORCID’s Reducing Burden and Improving Transparency (ORBIT) project to foster exchange and pool expertise on academic CVs for more responsible and transparent track record evaluation. It also aims to promote a more standardized and harmonized CV structure so as to increase comparability across researchers and compatibility across organizations. CV harmonization also enables automation and data reuse, and reduces the burden on researchers of providing high-quality information in CVs.

Research funders arguably hold a privileged position in academia, as their requirements and actions often have the most substantial and far-reaching influence on research culture. The H-Group therefore focused on involving these stakeholders by including representatives from funding organizations in more than 10 countries (a list of H-Group members is provided in the supplementary information). Equally important, however, the group also made sure to include leading experts in research on research as well as representatives from major researcher data infrastructure organizations.

In a workshop held in May 2019 in Zurich, Switzerland, the H-Group met to pragmatically and proactively investigate improvements to academic CVs. The workshop inspired the development of ten recommendations on the structure and content of academic CVs to make research careers more diverse and their assessment fairer and more transparent. Most of these ideas are not new. However, we hope that, developed further together with funders, experts and infrastructure representatives and collated in one document, they will inspire those who wish to improve their CV requirements alongside their evaluation practices.

Ten recommendations for academic CVs

Provide clear instructions for researchers and evaluators

Define clearly what role CVs play in the overall evaluation and how they are weighted alongside other documents, such as project proposals. For each section of a CV, make sure the person filling it in and the person evaluating it understand exactly what it should contain and how it will be evaluated by providing both with the same guidelines.

Formulate your guidelines carefully so that they are transparent, set the right tone and lead by example. Use inclusive and non-gendered language (e.g., “they” as opposed to “he” or “she”). Encourage researchers to report alternative career paths and outputs, and ensure that evaluators recognize and evaluate them appropriately. Clearly state what is required, what is prohibited and what limitations are in place, such as word counts or the type of metrics or number of references that may be included, and enforce these regulations consistently.

Where possible, communicate and enforce a quantitative weighting of the evaluation of the CV, and even of its individual sub-sections, by transparently assigning each a percentage of the final score or evaluation decision.
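As a concrete illustration, such a weighting can be published alongside the call and applied mechanically. The minimal Python sketch below uses hypothetical section names, percentages and 0–10 sub-scores; none of these values are prescribed here.

```python
# Illustrative sketch of a transparent CV weighting scheme; the section
# names and percentages are hypothetical examples, not prescribed values.
WEIGHTS = {
    "achievements": 0.40,    # the researcher's own work
    "narrative": 0.30,       # contextualizing statement
    "other_outputs": 0.20,   # data, software, teaching, outreach
    "recognition": 0.10,     # prizes, awards, journal prestige
}
assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights must add up to 100%

def cv_score(subscores: dict[str, float]) -> float:
    """Combine per-section scores (0-10) into a single weighted CV score."""
    return sum(weight * subscores[section] for section, weight in WEIGHTS.items())

# Example: one evaluator's per-section scores for one applicant.
print(cv_score({"achievements": 8, "narrative": 9,
                "other_outputs": 7, "recognition": 5}))  # ≈ 7.8
```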

Prioritize actual achievements over the recognition endowed upon them

Distinguish between achievements realized by the researcher and those endowed upon them by others. An elegant study or an important discovery constitutes, within a certain context, the researcher’s own personal work and achievement. The resulting publication in a prestigious journal, or an award received for this work, often additionally depends on more circumstantial factors: the author’s renown or their association with established peers; their personal network; the prestige of their institution; their language, social and geographical context; and sometimes even just their prowess and storytelling skills.

Structure CVs such that these two aspects of every track record can be distinguished clearly, for example by dedicating individual sections to them. Give priority to what a researcher has achieved over what recognition others have endowed upon them. Conflating achievements with rewards, or lending too much weight to the latter, obscures evaluation and propagates the Matthew effect.

Focus on more recent achievements over historical information

Prioritize current and directly relevant information over historical achievements, even when the latter may be more prestigious. Focusing on the more recent past highlights researchers’ current state and activity on a level playing field, independent of how long they have been active. This prevents established researchers from being perpetually rewarded for the same achievements and gives up-and-coming researchers a fair starting point.

Focus on activities and outputs that are relevant

Less is more. Limit the number of achievements and outputs in CVs. This focuses the track record and its evaluation by removing “noise” and giving priority to the researcher’s best and most important work. DORA advises that evaluators read a researcher’s work rather than relying on proxies. An institution that has signed DORA should therefore ensure that evaluators can realistically read, with due diligence, what they are supposed to evaluate.

Focusing on only a few outputs saves researcher and evaluator resources, discourages salami slicing of results, improves comparison between early- and late-career researchers, and makes publication gaps resulting from career breaks less conspicuous.

Acknowledge and encourage a broad range of contributions

Recognize and reward a wide range of outputs to foster a more diverse and inclusive research culture. Singling out one type of output, such as publications, discounts the richness of research and the many forms through which it can impact academia and society. CVs should present all types of outputs equally and encourage researchers and evaluators to acknowledge a diverse range of contributions. At the same time, it is important to ensure that all outputs are given the same weight and are accounted for transparently and reliably in evaluation.

Balance and control incentives

Institutions should keep in mind the subtle ramifications of Goodhart’s law: when a measure becomes a target, it ceases to be a good measure (Klein et al., 2014). Many laudable policies and incentives bear hidden, unintended risks if they are not carefully balanced and implemented. For example, allowing only open access publications on CVs submitted to funding organizations may carry opportunity costs if equally valuable aspects, such as advancing teaching, diversity and equality, collaboration, sustainability, reproducibility or public engagement, are not also explicitly required to obtain funding. Prioritizing open access over other equally valuable behaviors at the scale and level of influence of large funders has the power to change academic behavior and culture in non-trivial ways.

Use the academic age, not the biological age, of researchers

Including a researcher’s biological age in a CV propagates biases and can discriminate against women and against younger and older researchers. It also contradicts the institution’s own age-related regulations: either a researcher of a certain age is eligible or not; beyond that point, their biological age should play no role in evaluation. For eligible researchers, the only relevant seniority is the amount of time they have spent in academia. Academic age can additionally provide some protection against discrimination toward unconventional career paths or researchers with childcare or other responsibilities. Measuring academic age in a precise number of years can be overly specific; using age ranges instead may be more useful.
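To make the idea concrete, the sketch below computes an academic age and bins it into coarse ranges. The definition used (years since the PhD was awarded, minus documented career breaks) and the bin edges are illustrative assumptions, not values endorsed by the H-Group.

```python
# Illustrative academic-age calculation; the definition and bin edges
# below are assumptions chosen for the example.
from __future__ import annotations
from datetime import date

AGE_BINS = [(5, "0-5"), (10, "6-10"), (15, "11-15")]  # (upper bound, label)

def academic_age_range(phd_awarded: date, career_breaks_years: float = 0.0,
                       as_of: date | None = None) -> str:
    """Return a coarse academic-age range instead of an exact figure."""
    as_of = as_of or date.today()
    years = (as_of - phd_awarded).days / 365.25 - career_breaks_years
    for upper, label in AGE_BINS:
        if years <= upper:
            return label
    return ">15"

# Example: PhD awarded mid-2012, two-year career break, evaluated in 2020.
print(academic_age_range(date(2012, 6, 1), career_breaks_years=2.0,
                         as_of=date(2020, 1, 1)))  # -> "6-10" (~5.6 years)
```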

Encourage narratives instead of lists

Incorporate a narrative section into the CV. Listed items can provide a quick and well-structured overview, but they also entice evaluators to count instead of read, which in turn can favor snap judgements over in-depth evaluation. In a free-form narrative, the researcher can contextualize their achievements and explain what they are working towards. Narratives also allow researchers to highlight public outreach and societal impact activities that defy quantification. This can help evaluators understand the context of, and connections within, a body of work. Crafting narratives takes time. Help researchers by keeping templates for narratives short, formulating questions to structure the narrative, and providing clear instructions; for instance, ask researchers to describe their roles using the Contributor Roles Taxonomy (CRediT), as in the sketch below.
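One way to operationalize such a template is as a short list of structured prompts with word limits. In this sketch the prompts and limits are hypothetical; only the role names are drawn from the CRediT taxonomy (https://credit.niso.org/).

```python
# Illustrative narrative template; prompts and word limits are hypothetical.
# The role names are taken from the CRediT taxonomy (a subset of 14 roles).
CREDIT_ROLES = {
    "Conceptualization", "Methodology", "Software", "Data curation",
    "Supervision", "Writing - original draft",
}

NARRATIVE_TEMPLATE = [
    {"prompt": "What question drives your research, and why does it matter?",
     "max_words": 150},
    {"prompt": "Pick up to three outputs and describe your role in each, "
               "using CRediT role names.", "max_words": 250},
    {"prompt": "Which contributions beyond publications (teaching, outreach, "
               "data, software) matter most, and why?", "max_words": 150},
]

def credit_roles_mentioned(answer: str) -> set[str]:
    """List the CRediT roles a narrative answer explicitly names."""
    return {role for role in CREDIT_ROLES if role.lower() in answer.lower()}
```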

Use metrics cautiously

Implement metrics carefully to avoid misuse, and make sure evaluators are well aware of their respective definitions and limitations. Used appropriately, metrics can be a valuable addition to CVs and their evaluation (see the Leiden Manifesto for an in-depth discussion of this topic; Hicks et al., 2015). Most reliable and transparent metrics, however, are based only on publications and their citations; unfortunately, hardly any useful metrics are available for other types of outputs (e.g., software, data, outreach). An institution that primarily or exclusively uses publication-based metrics in its academic CVs signals that publications take priority over other forms of output, incentivizing researchers accordingly. At the same time, evaluators will be biased toward publications simply because these are the only metrics presented to them.

Use established open and interoperable data standards and systems

CV data is valuable, as it allows for the analysis of an institution’s policies and performance as well as of researcher career statistics. However, non-standardized CVs in Portable Document Format (PDF, https://en.wikipedia.org/wiki/PDF) render this data mostly unusable. Harvesting the benefits of digitalization, now or in the future, requires that CV data adhere to established standards and be stored in an open and interoperable format (see Box 1).

Exchanging and re-using data elements in CVs is crucial. Drawing data from reliable, well-defined and established sources, without requiring extensive rekeying of basic information, reduces errors in the data provided and offers means of automated validation. It thereby reduces the burden on researchers and institutions alike of checking and correcting the content of CVs. Flexible and reliable data reuse is possible when data is made available under open licenses and via open technologies, eliminating barriers to integrating a range of data providers into the systems of institutions. Such licenses and technologies are vital to preserving a common pool of shared information that any organization with internet access can use, without costly licensing fees or subscriptions.
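As one example of such reuse, the sketch below pulls a researcher’s list of works from the public ORCID API rather than rekeying it by hand. The ORCID iD used is ORCID’s documented example record; field access follows the public v3.0 works-summary format, and paging and error handling are kept minimal.

```python
# Minimal sketch: reuse structured CV data from the public ORCID API
# instead of rekeying it. The iD below is ORCID's documented example record;
# a real integration would add paging, caching and fuller error handling.
import requests

ORCID_ID = "0000-0002-1825-0097"  # example ORCID iD (placeholder)
resp = requests.get(
    f"https://pub.orcid.org/v3.0/{ORCID_ID}/works",
    headers={"Accept": "application/json"},  # open, machine-readable format
    timeout=30,
)
resp.raise_for_status()

# Each "group" bundles duplicate records of one work; take the first summary.
for group in resp.json().get("group", []):
    summary = group["work-summary"][0]
    title = summary["title"]["title"]["value"]
    year = ((summary.get("publication-date") or {})
            .get("year") or {}).get("value", "n.d.")
    print(f"{year}: {title} [{summary.get('type', 'unknown')}]")
```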

Conclusion

Track record evaluation directly and significantly impacts academia and academic careers. CVs are the key documents in this process; their content and format therefore need to be considered with great care. Unfortunately, many organizations and their evaluation panels still rely on poorly structured CVs, which can propagate and even foster bias and lead to unsatisfactory outcomes.

The H-Group formed to present ideas that we hope will help improve and harmonize the content and structure of academic CVs. The ten recommendations presented here aim to inspire and promote best practices. We hope they encourage organizations to consciously and deliberately structure their CVs so that these appropriately reflect a broad and balanced overview of a researcher’s previous achievements. At the same time, we hope that the academic CV will become a more standardized and interoperable document. Combined, these changes can help improve evaluation, make academia fairer and more inclusive for all, and foster good quality research.