A US$1.2 million grant will fund an effort to identify and publicize the criteria that universities around the world use to hire and promote researchers. The Declaration on Research Assessment (DORA), a global initiative to reform the evaluation of researchers, will use part of the funds to create an interactive dashboard that will shine much-needed light on a process that is often opaque and controversial, says programme director Anna Hatch, who is based in Washington DC. “When criteria are visible and transparent, universities can be held accountable,” she says. “Researchers will know how their contributions will be measured, so they can make a better case for themselves.”
DORA, conceived in 2012 at the annual meeting of the American Society for Cell Biology, calls for improvements to the evaluation of researchers and the outputs of scholarly research. The declaration specifically calls for doing away with impact factors as a way to judge the merit of academics. So far, it has been signed by more than 20,000 individuals and institutions around the world.
The grant is from the Arcadia Fund, a UK-based charity that has supported many academic initiatives since its founding in 2001.
The dashboard, which is expected to go live in mid- to late 2022, will be part of Tools to Advance Research Assessment (TARA), a multi-pronged effort to gain a clearer picture of researcher evaluation around the world. DORA will run TARA in collaboration with project leaders Sarah de Rijcke, director of the Centre for Science and Technology Studies at Leiden University in the Netherlands, and Ruth Schmidt, a design researcher at the Illinois Institute of Technology in Chicago.
Blueprint for change
De Rijcke says that the dashboard will track important issues in hiring, such as work to promote diversity, inclusiveness and equity. It will also identify universities that have moved beyond controversial metrics such as impact factors and h-indexes, which are widely criticized as imperfect measures of researcher effectiveness and productivity. Providing such examples could offer a blueprint for other institutions that want to make such changes, de Rijcke says.
TARA will also include a survey of attitudes and practices in US universities. The survey will focus on the United States partly because universities there have been relatively slow to embrace change, Hatch says. “In other parts of the world, there’s a strong momentum towards researcher-evaluation reform,” she says.
Earlier this year, Utrecht University in the Netherlands, which signed DORA in 2019, announced plans to abandon the impact factor in its hiring and promotion decisions. On 19 July, more than 170 Dutch academics signed a letter opposing the policy on the grounds that it could remove clarity from the hiring and promotion process and make researchers less competitive when applying for international jobs.
But, as Hatch notes, gathering signatures isn’t enough to ensure change. “The declaration itself was really good at raising awareness,” she says. “The next big step is fostering and supporting institutional actions.”
De Rijcke acknowledges that stated policies don’t always reflect the reality of hiring or promotion committees, but she says that knowing the official stance of a university should provide researchers who are seeking a new post, or a promotion, with some extra leverage. “It can make a real difference to individual researchers who, for instance, want to address problematic hiring and promotion practices in their department,” she says.
Kamden Strunk, a higher-education researcher at Auburn University in Alabama who has championed reform of hiring and promotion in academia, says that, from the outside, TARA looks to be a worthwhile project. “Collecting and publishing a database of how research is evaluated would be a huge benefit to researchers, early-career faculty and committees,” he says.
Strunk notes, however, that universities might be reluctant to disclose their actual processes for evaluating talent. Besides, he says, decisions are largely personal. “When it comes to things like promotion and tenure, individuals can vote however they choose,” he says. “So in a certain sense, the criteria turn out to be largely illusory anyway.”
Erin McKiernan, a physicist at the National Autonomous University of Mexico in Mexico City and a member of the DORA steering committee, says that TARA is a crucial tool. “There is an urgent need to reform academic evaluations and incentives,” she says. “Projects such as this could help the wider academic community to better understand current systems, identify good practices, and propose specific interventions to improve evaluations. It’s a vital step forward in making sure our evaluation systems emphasize the right things, like quality over quantity, public impact, and output sharing.”
Hatch says that the team behind TARA plans to hold a ‘scoping’ session in September to start fine-tuning the plans for developing the dashboard as well as a resource toolkit for universities that want to rethink their evaluation policies. “We want to make sure the tools we’re creating are as useful as possible to the academic community,” she says.