Ten ways to improve academic CVs for fairer research assessment

Academic CVs are ubiquitous and play an integral role in the assessment of researchers. They define and portray which activities and achievements are considered important in the scientific system. Developing their content and structure beyond the traditional, publication-focused CV has the potential to make research careers more diverse and their assessment fairer and more transparent. This comment presents ten ways to further develop the content and structure of academic CVs. The recommendations are inspired by a workshop of the CV Harmonization Group (H-Group), a joint initiative between research-on-research scholars, academic data infrastructure organizations, and representatives from more than 15 funding organizations. The proposed improvements aim to inspire development and innovation in academic CVs for funding agencies and hiring committees.


Introduction
Track record evaluation acts as one of the most powerful gatekeepers in academia. From PhD and postdoc positions through to faculty appointments and the awarding of grants and prizes, at every step the researcher's curriculum vitae (CV) is scrutinized to promote or stymie the next step in their career.
Over the last decades, the exponential proliferation of research and research outputs has come to threaten the quality of this fundamental selection process. The increasing burden on evaluators to review ever more track records has spawned the widespread misuse of evaluation "shortcuts". Rather than having their work assessed on content and quality, researchers are instead judged on inadequate proxies such as their seniority and renown, the number of papers they have authored, the impact factor of the journals in which these papers were published, and their accumulated citations.
If evaluators rely too heavily on such shortcuts, the shortcuts become targets for researchers to aspire to (Mueller & De Rijcke, 2017; Smaldino & McElreath, 2016). Accounting for seniority and fame propagates the Matthew effect (Merton, 1968), whereby the already established are more likely to receive further resources while young and up-and-coming researchers are disadvantaged. Counting publications prioritizes quantity over quality. Emphasizing high-impact journals favors storytelling and "en vogue" areas of research over equally substantive but less conspicuous work. Projects that do not rely on publication end up receiving less attention despite equally impacting the advancement, integration and application of knowledge. Combined, these behaviors thwart innovation, societal impact, responsible research practices, the diversity of research and research careers, and the very quality of research as a whole.
The San Francisco Declaration on Research Assessment (DORA, 2014), the Leiden Manifesto (Hicks et al., 2015), the Metric Tide (Wilsdon et al., 2015) and others (Moher et al., 2018; 2020) have turned a spotlight on these malpractices and also presented pathways to improvement. However, many of these efforts focus on the evaluation of research publications and outputs. This policy comment aims to draw attention to the format in which such achievements are presented for evaluation in the first place: the academic CV.
The CV Harmonization Group (H-Group) was founded by the Swiss National Science Foundation and the Open Researcher and Contributor ID organization (ORCID) as a spin-off from ORCID's Reducing Burden and Improving Transparency (ORBIT) project, to foster exchange and pool expertise on academic CVs for more responsible and transparent track record evaluation. It also aims to promote a more standardized and harmonized CV structure, so as to increase comparability across researchers and compatibility across organizations. CV harmonization also enables automation and data reuse, and reduces the burden on researchers of providing high-quality information in CVs.
Research funders arguably hold a privileged position in academia, as their requirements and actions often have the most substantial and far-reaching influence on research culture. The H-Group therefore focused on the involvement of these stakeholders by including representatives from funding organizations across more than 10 countries (a list of H-Group members is provided in the supplementary information). Equally important, however, the group also made sure to include leading experts in research on research as well as representatives from major researcher data infrastructure organizations.
In a workshop held in May 2019 in Zurich, Switzerland, the H-Group met to pragmatically and proactively investigate improvements to academic CVs. The workshop inspired the development of ten recommendations on the structure and content of academic CVs to make research careers more diverse and their assessment fairer and more transparent. Most of these ideas are not new. However, developed further, together with funders, experts and infrastructure representatives, and collated in one document, we hope they will help inspire those who wish to improve their CV requirements alongside their evaluation practices.

Ten recommendations for academic CVs

Provide clear instructions for researchers and evaluators.
Define clearly what role CVs play in the overall evaluation and how they are weighted alongside other documents such as project proposals, for example. For each section of a CV, make sure the person filling it in and the one evaluating it understand exactly what it should contain and how it should be evaluated by providing both with the same guidelines.
Formulate your guidelines carefully, such that they are transparent, set the right tone and lead by example. Use inclusive and non-gendered language (e.g., "they" as opposed to "he" or "she"). Encourage researchers to report alternative career paths and outputs, and ensure that evaluators recognize and evaluate them appropriately. Clearly state what is required, what is prohibited and what limitations are in place, such as word counts or the type of metrics or number of references that may be included, and enforce these regulations consistently.
Where possible, communicate and enforce a quantitative weighting of the evaluation of the CV and even its individual sub-sections by transparently assigning to them a percentage of the final score or evaluation decision.
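For instance, a pre-announced weighting scheme can be applied mechanically once each CV section has been scored. The sketch below assumes three hypothetical sections and weights, purely for illustration; no funder prescribes these values.

```python
# Hypothetical section weights, published to applicants and evaluators
# alike. The names and percentages are illustrative assumptions.
SECTION_WEIGHTS = {
    "narrative": 0.40,
    "key_outputs": 0.35,
    "other_contributions": 0.25,
}

def weighted_cv_score(section_scores):
    """Combine per-section scores (each on a 0-10 scale) into one
    final score using the transparent, pre-announced weights."""
    # Weights must sum to 1 so the final score stays on the 0-10 scale.
    assert abs(sum(SECTION_WEIGHTS.values()) - 1.0) < 1e-9
    return sum(w * section_scores[name] for name, w in SECTION_WEIGHTS.items())

score = weighted_cv_score(
    {"narrative": 8.0, "key_outputs": 6.0, "other_contributions": 9.0}
)
print(score)  # 7.55
```

Publishing the weights alongside the guidelines lets applicants see exactly how much each section counts and lets panels audit their own decisions against the announced scheme.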
Prioritize actual achievements over the recognition endowed upon them. Distinguish between achievements realized by the researcher and those endowed upon them by others. An elegant study or an important discovery constitutes, within a certain context, the researcher's own personal work and achievement. The resulting publication in a prestigious journal, or an award received for this work, often additionally depends on more circumstantial factors, such as the author's renown or their association with established peers; their personal network; the prestige of their institution; their language, social and geographical context; and sometimes even just their prowess and storytelling skills.
Structure CVs such that these two aspects of every track record can be distinguished clearly by dedicating individual sections to them, for example. Give priority to what a researcher has achieved over what recognition others have endowed upon them. Confounding achievements with rewards or lending too much weight to the latter obscures evaluation and propagates the Matthew effect.
Focus on more recent achievements over historical information. Prioritize current and directly relevant information over historical achievements, even when those may be more prestigious. Focusing on the more recent past of researchers highlights their current state and activity on a level playing field, independent of how long they have been active. This prevents established researchers from being perpetually rewarded for the same achievements and gives up-and-coming researchers a fair starting point.
Focus on activities and outputs that are relevant. Less is more. Limit the number of achievements and outputs in CVs. This focuses the track record and its evaluation by removing "noise" and giving priority to the researcher's best and most important work. DORA advises that evaluators read a researcher's work, rather than relying on proxies. If an institution has signed DORA, it should therefore ensure that evaluators will realistically be able to read what they are supposed to evaluate with due diligence.
Focusing on only a few outputs saves researcher and evaluator resources, discourages salami slicing of results, improves comparison between early- and late-career researchers, and renders publication hiatuses resulting from career breaks less apparent.
Acknowledge and encourage a broad range of contributions. Recognize and reward a wide range of outputs to foster a more diverse and inclusive research culture. Singling out one type of output, such as publications, discounts the richness of research and the many forms through which it can impact academia and society. CVs should present all types of outputs equally and encourage researchers and evaluators to acknowledge a diverse range of contributions. At the same time, it is important to ensure that all outputs are given the same weight and are accounted for transparently and reliably in evaluation.
Balance and control incentives. It is important that institutions keep the subtle ramifications of Goodhart's law in mind: when a measure becomes a target, it ceases to be a good measure. Many laudable policies and incentives bear unintended hidden risks if they are not carefully balanced and implemented. For example, only allowing open access publications on CVs submitted to funding organizations may have opportunity costs if equally valuable aspects such as advancing teaching, diversity and equality, collaboration, sustainability, reproducibility or public engagement are not explicitly required to obtain funding. Prioritizing open access over other equally valuable behaviors, at the scale and level of influence of large funders, has the power to change academic behavior and culture in a non-trivial way.
Use the academic age, not the biological age, of researchers. Including a researcher's biological age in a CV propagates biases and can discriminate against women and against younger and older researchers. It also contradicts the respective institution's age-related regulations: either a researcher of a certain age is eligible or not; thereafter, their biological age should not play any role in evaluation. For eligible researchers, the only seniority that is relevant is the amount of time they have spent in academia. The academic age can additionally provide some protection against discrimination toward unconventional career paths or researchers with childcare or other responsibilities. Measuring the academic age in a precise number of years can be overly specific; using age ranges instead may be more useful.
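As a sketch of how a submission system might derive such an age range, the function below subtracts documented career breaks from the years since the PhD and maps the result to a coarse band. The band boundaries and function name are illustrative assumptions, not a standard used by any funder.

```python
from datetime import date

def academic_age_band(phd_date, career_breaks_years=0.0, today=None):
    """Map years since PhD (minus documented career breaks) to a
    coarse academic-age band instead of a precise figure.
    Band boundaries are illustrative, not prescribed by any funder."""
    today = today or date.today()
    years = (today - phd_date).days / 365.25 - career_breaks_years
    if years < 5:
        return "0-5 years"
    if years < 10:
        return "5-10 years"
    if years < 20:
        return "10-20 years"
    return "20+ years"

# A researcher who finished their PhD in mid-2015, assessed in mid-2024:
print(academic_age_band(date(2015, 6, 1), today=date(2024, 6, 1)))  # 5-10 years
```

Subtracting career breaks before banding is what makes the measure fairer than calendar seniority: a break for childcare moves a researcher back into an earlier band rather than silently penalizing their output count.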
Encourage narratives instead of lists. Incorporate a narrative section into a CV. Listed items can provide a quick and well-structured overview, but they also entice evaluators to count instead of read, which in turn can favor snap judgements over in-depth evaluation. In free-form narratives, researchers can contextualize their achievements and explain what they are working towards. Narratives also allow researchers to highlight public outreach and societal impact activities that defy quantification. This can help evaluators understand the context of and connections in a body of work. Crafting narratives takes time. Help researchers by keeping templates for narratives short, formulating questions to structure the narrative, and providing clear instructions. For instance, ask researchers to adhere to the Contributor Roles Taxonomy (CRediT) in describing their roles.

Box 1 | Some key open and interoperable data standards and systems

DOIs (Digital Object Identifiers, https://www.doi.org/) are a type of Persistent Identifier (PID). These alphanumeric strings are an established standard to uniquely and permanently identify specific objects independent of their location. DOIs are commonly used by the research community to identify and differentiate scholarly output items (e.g., publications, datasets, software). By forming persistent links to a resource, DOIs mitigate against "reference rot", which is caused by a combination of link rot and content drift (Klein et al., 2014). The DOI system is maintained by the International DOI Foundation (IDF), which oversees its registration agency members, such as Crossref or DataCite.

ISSN (International Standard Serial Number, https://www.issn.org) is an international, standardized PID. The 8-digit code uniquely identifies serial publications (i.e., newspapers, journals, collections, periodicals, etc.) irrespective of their medium (print or electronic). The ISSN is especially useful for disambiguating serials with the same title.

ISBN (International Standard Book Number, https://www.isbn-international.org) is an international standard used to identify monographic publications (i.e., books or book chapters, if separately available) and to record their metadata. The 13-digit code can also be resolved into a DOI.

ORCID (Open Researcher and Contributor ID, https://orcid.org/) is a global and persistent identifier registry for individual researchers. The ORCID iD disambiguates and identifies individual researchers throughout their career and connects them with their research outputs (e.g., publications) and records of employment or previous funding. The ORCID system has become a de facto standard.

DataCite (https://datacite.org/) is a not-for-profit, international organization and official DOI registration agency for scientific datasets and other research objects. DataCite's members (i.e., academic organizations) can assign DOIs to their research outputs, enhancing the discoverability of their data and its associated metadata and helping to preserve long-term access to that data.

Crossref (https://www.crossref.org/) is a not-for-profit membership organization and official DOI registration agency. Alongside operating a DOI registry for scholarly outputs and a funder registry, it currently runs the Grant Identifier initiative (https://www.crossref.org/community/funders/) in partnership with several funders, including the Wellcome Trust, the ERC and the NIH. In this scheme, DOIs will be used to persistently identify awarded grants and their corresponding metadata. This makes funding information more transparent and can facilitate effective tracking of the outcomes and impact of funding.

ROR (Research Organization Registry Community, https://ror.org/) is an initiative led by Crossref, DataCite, the California Digital Library and Digital Science, aiming to develop "an open, sustainable, usable, and unique identifier for every research organization in the world". ROR is designed to identify affiliations on researcher outputs (e.g., publications) by assigning unique ROR IDs to research organizations. ROR thereby supports more efficient discovery and tracking of research activities and outputs by institution. The system is interoperable with other organization identifiers, including GRID (https://www.grid.ac/institutes), ISNI (https://isni.org/), the Crossref Funder ID (https://www.crossref.org/services/funder-registry/), and Wikidata (https://www.wikidata.org/wiki/Wikidata:Main_Page). ORCID plans to include ROR IDs in its list of supported organization IDs by 2020/21.

CASRAI (Consortia Advancing Standards in Research Administration Information, https://casrai.org/) is an international non-profit organization providing the forum and means needed to standardize research data. In conjunction with funders, universities and other stakeholders, it develops and maintains an open dictionary of terminology for the semantics and structures of scholarly information, including the definition of CV items. One such standard is the CRediT system (Contributor Roles Taxonomy, https://credit.niso.org/), a taxonomy that defines the contribution of individual authors to a publication. CRediT has now been adopted by NISO (National Information Standards Organization). Adherence to the CASRAI standards facilitates international interoperability and access to research data.
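Several of the identifiers described in Box 1 carry check digits that machines can verify when CV data is ingested, catching transcription errors before they pollute a database. The sketch below validates the ISSN (ISO 3297) and ISBN-13 check digits; the example numbers are commonly used published test values, not real CV data.

```python
def valid_issn(issn):
    """Verify an ISSN check digit (ISO 3297): weights 8 down to 2 on
    the first seven digits; check digit is (11 - sum mod 11) mod 11,
    with the value 10 written as 'X'."""
    digits = issn.replace("-", "").upper()
    if len(digits) != 8 or not digits[:7].isdigit():
        return False
    total = sum(int(d) * w for d, w in zip(digits[:7], range(8, 1, -1)))
    check = (11 - total % 11) % 11
    return digits[7] == ("X" if check == 10 else str(check))

def valid_isbn13(isbn):
    """Verify an ISBN-13 check digit: alternating weights 1 and 3
    over the first twelve digits."""
    digits = isbn.replace("-", "")
    if len(digits) != 13 or not digits.isdigit():
        return False
    total = sum(int(d) * (1 if i % 2 == 0 else 3)
                for i, d in enumerate(digits[:12]))
    return int(digits[12]) == (10 - total % 10) % 10

print(valid_issn("0317-8471"))           # True
print(valid_isbn13("978-0-306-40615-7")) # True
```

Validation of this kind is exactly what non-standardized PDF CVs make impossible: once identifiers arrive as structured fields, a single mistyped digit is flagged automatically instead of silently breaking a link.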
Use metrics cautiously. Implement metrics carefully to avoid misuse, and make sure evaluators are well aware of their respective definitions and limitations. Metrics can be a valuable addition to CVs and their evaluation when used appropriately (see the Leiden Manifesto for an in-depth discussion of this topic; Hicks et al., 2015). The most reliable and transparent metrics are, however, based on publications and their citations only; there are unfortunately hardly any useful metrics available for other types of outputs (e.g., software, data, outreach). If an institution primarily or exclusively uses publication-based metrics in its academic CVs, it signals that publications take priority over other forms of output, incentivizing researchers accordingly. At the same time, evaluators will be biased toward publications, simply because these are the only metrics being presented to them.
Use established open and interoperable data standards and systems. CV data is valuable, as it allows for the analysis of an institution's policy and performance as well as of researcher career statistics. However, non-standardized CV Portable Document Files (PDFs, https://en.wikipedia.org/wiki/PDF) render this data mostly unusable. Harvesting the benefits of digitalization, now or in the future, requires that CV data adhere to established standards and be stored in an open and interoperable format (see Box 1). Exchanging and re-using data elements in CVs is crucial. Using data from reliable, well-defined and established sources, without requiring extensive rekeying of basic information, reduces the number of errors in the data provided and offers means of automated validation. This reduces the burden on researchers and institutions alike of checking and correcting the content of CVs. Flexible and reliable data reuse is possible when data is made available under open licenses and using open technologies, eliminating barriers to integrating a range of data providers into the systems of institutions. Such licenses and open technologies are vital to preserve a common pool of shared information, which can be used by any organization in the world with internet access, without costly licensing fees or subscriptions.
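As an illustration of what an open, interoperable CV data element might look like, the sketch below serializes a single output record that links several of the standards from Box 1 (DOI, ORCID iD, CRediT role, ROR ID). The field names are illustrative assumptions, not an official schema; the identifier values are well-known published examples.

```python
import json

# Hypothetical machine-readable CV entry. Field names are illustrative,
# not an official schema; identifier values are published examples.
entry = {
    "output": {
        "doi": "10.1000/182",            # example DOI (the DOI Handbook's own DOI)
        "type": "dataset",
    },
    "contributor": {
        "orcid": "0000-0002-1825-0097",  # ORCID's published example iD
        "credit_role": "Data curation",  # a CRediT contributor role
    },
    "affiliation": {
        "ror": "https://ror.org/05a28rw58",  # illustrative ROR ID
    },
}

print(json.dumps(entry, indent=2))
```

Because every field resolves to a registry entry rather than free text, a funder's system could fetch titles, affiliations and contributor names automatically instead of asking the researcher to rekey them, which is precisely the burden reduction the recommendation describes.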

Conclusion
Track record evaluation directly and significantly impacts academia and academic careers. CVs are the key documents in this process; therefore, their content and format need to be considered with great care. Unfortunately, today many organizations and their evaluation panels still rely on poorly structured CVs, which can propagate and even foster bias and unsatisfactory outcomes.
The H-Group formed to present ideas that can hopefully help improve and harmonize the content and structure of academic CVs. The ten guidelines presented here aim to inspire and promote best practices. We hope that they encourage organizations to consciously and deliberately structure their CVs so that they appropriately reflect a broad and balanced overview of a researcher's previous achievements. At the same time, we hope that the academic CV will become a more standardized and interoperable document. Combined, these changes can help improve evaluation, make academia fairer and more inclusive for all, and foster good quality research.