Richard Nakamura, director of the Center for Scientific Review at the US National Institutes of Health (NIH), does not consider himself to be racially biased. Yet a test of his speed at associating certain words with faces of different races revealed a slight unconscious prejudice against minorities. If the director of the centre that oversees the NIH’s grant-review process harbours these inclinations, he wonders, are grant reviewers affected as well?


To answer that question, the NIH will launch ambitious analyses beginning in September to determine whether bias hampers minority scientists who seek agency funding. A 2011 study in Science found that white researchers receive NIH grants at nearly twice the rate that African American researchers do (see ‘Grant gap’). Even when factors such as publication record and training are considered, an African American scientist is still only two-thirds as likely as a white scientist to be funded (D. K. Ginther et al. Science 333, 1015–1019; 2011). The disparity seems to arise early during the review process, when grants are first rated.

The findings spurred the NIH to launch a ten-year, US$500-million effort in 2012 to train and mentor minority scientists. But officials acknowledge that the racial gap among grantees does not arise simply because there are fewer qualified applications from minority researchers. Now the agency will look inward to determine where its grant process may be failing — and what to do about it.

One basic issue that the NIH will address is whether grant reviewers are thinking about an applicant’s race at all, even unconsciously. A team will strip names, racial identification and other identifying information from some proposals before reviewers see them, and look at what happens to grant scores. (Such identity stripping is surprisingly difficult: even citations might reveal who the applicant is, and reviewers need some information about an applicant to make a fair appraisal.) The results could be telling. “If the disparity drops with anonymization, that’s clear evidence of bias,” says Nakamura.
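The comparison Nakamura describes boils down to asking whether scores shift once identifying details are removed. The sketch below is a minimal, hypothetical illustration of that kind of check — the scores, sample sizes and field names are invented placeholders, not the NIH’s data or method — using a simple permutation test on review scores for the same pool of proposals with and without identifying information.

```python
# Minimal sketch (hypothetical data) of the comparison the anonymization
# experiment implies: do review scores change when identifying information
# is stripped from proposals?
import random
import statistics

# Hypothetical impact scores (lower is better on the NIH 1-9 scale) for
# proposals reviewed with and without identifying information.
scores_identified = [5.2, 4.8, 6.1, 5.5, 4.9, 5.8, 6.0, 5.3]
scores_anonymized = [4.9, 4.5, 5.6, 5.1, 4.7, 5.2, 5.5, 5.0]

observed_gap = statistics.mean(scores_identified) - statistics.mean(scores_anonymized)

# Permutation test: shuffle the pooled scores many times and see how often
# a gap at least as large as the observed one arises by chance.
pooled = scores_identified + scores_anonymized
n = len(scores_identified)
trials = 10_000
count = 0
for _ in range(trials):
    random.shuffle(pooled)
    gap = statistics.mean(pooled[:n]) - statistics.mean(pooled[n:])
    if gap >= observed_gap:
        count += 1

print(f"Observed gap: {observed_gap:.2f}, permutation p ~ {count / trials:.3f}")
```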

Grant gap (chart). Source: D. K. Ginther et al. Science 333, 1015–1019 (2011)

Such a finding would be in line with other results in this area. A study published this year found that faculty members in US universities are less likely to respond to interview requests from prospective students whose names are associated with minority groups than they are to identical requests from students with ‘white’ names (K. L. Milkman et al. Soc. Sci. Res. Network http://doi.org/t9h; 2014).

The NIH will also study reviewers’ work in finer detail, by analysing successful applications for R01 grants, the NIH’s largest funding programme for individual investigators. The goal is to see whether researchers can spot trends in the language used by reviewers to describe proposals put forward by applicants of different races. There is precedent for detectable differences: in a paper to be published in Academic Medicine, a team led by Molly Carnes, a physician at the University of Wisconsin-Madison, used automated text analysis to show that reviewers’ critiques of R01 grant applications by women tended to include more words denoting praise, as though the writers were surprised at the quality of the work. And numerous other studies show that different standards exist for men and women in a variety of fields. “Women do, indeed, have to be twice as good to get the same competence rating as a man,” says Carnes.
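In broad terms, lexicon-based text analysis of this kind counts how often words from a predefined category (here, praise) appear in each critique and compares the rates across groups. The snippet below is only an illustrative sketch: the word list, group labels and critique text are invented placeholders, not the lexicon or data used by Carnes’s team.

```python
# Sketch of lexicon-based text analysis: count "praise" words per token in
# reviewer critiques and compare the rate across applicant groups.
import re
from collections import defaultdict

PRAISE_WORDS = {"outstanding", "excellent", "impressive", "superb", "remarkable"}

# (group label, critique text) pairs -- invented examples for illustration.
critiques = [
    ("group_a", "An outstanding and impressive proposal with a superb team."),
    ("group_b", "A solid proposal with a feasible, well-designed approach."),
]

rates = defaultdict(list)
for group, text in critiques:
    tokens = re.findall(r"[a-z']+", text.lower())
    praise_count = sum(1 for t in tokens if t in PRAISE_WORDS)
    rates[group].append(praise_count / len(tokens))  # praise words per token

for group, values in rates.items():
    print(group, sum(values) / len(values))
```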

The NIH will also analyse text in samples of reviewers’ unedited critiques. The Center for Scientific Review typically edits the wording and grammar of these reviews before grant proposals are returned to applicants, but even the subtlest details of such raw comments might hold clues about bias. Nakamura says that reviewers will not be told whether their comments will be analysed, because that in itself would bias the sample. “We want them to be sloppy,” he says.

The NIH’s Study Sections, in which review groups discuss the top 50% of grant applications, might also harbour bias: the 2011 Science paper found that submissions authored by African Americans are less likely to be discussed in the meetings. But when they are, a negative comment arising from even one person’s unconscious bias could have a major impact in such a group setting, says John Dovidio, a psychologist at Yale University in New Haven, Connecticut, and a member of the NIH’s Diversity Working Group. “That one person can poison the environment,” he says.

Even if the NIH investigation does not turn up evidence of bias, it may still reveal some of the causes of the racial disparity in the NIH’s grant-making process. Perhaps grant applications from minority researchers are more likely to be written in a way that does not appeal to reviewers, says Monica Basco, executive secretary of the Diversity Working Group’s peer-review subcommittee. That would suggest fixes such as grant-writing help. Evidence of bias would be harder to address, and any interventions would need to be tailored to the point in the review process at which the bias occurs, says Basco.

Nakamura expects that the NIH’s effort to identify and root out prejudice, which he says could cost up to $5 million over three years, might prove controversial. “People resent the implication they might be biased,” he says — an idea borne out by some responses to his 29 May blogpost on the initiative. One commenter wrote, “It is absolutely insulting to be accused of review bigotry. Please tell me why I should continue to give up my time to perform peer review?”

But Nakamura believes that the NIH — and reviewers — need to keep open minds. After all, he says, “we are human beings with emotions and feelings we’re not in control of”.