Published online 16 November 2010 | Nature 468, 356-357 (2010) | doi:10.1038/468356a

News

Study says middle-sized labs do best

A comparison of funding level and output has captured attention at the US National Institutes of Health.

Jeremy Berg: “It’s important not to forget that the average behaviour is not the behaviour of everybody.” Credit: NIH

The director of one of the biggest institutes at the US National Institutes of Health (NIH) posted a blog entry that got tongues wagging this autumn. Jeremy Berg, who heads the National Institute of General Medical Sciences (NIGMS) in Bethesda, Maryland, had analysed the scientific productivity of nearly 3,000 researchers who were funded by grants from his institute in 2006. With the help of NIH data-mining experts, who have developed powerful tools for such studies, Berg was able to show, in hard numbers, what scientists could once only speculate about: the relationship between grant size and scientific productivity.

"Everything had come together so that it seemed possible to ask the questions I asked without it being a two-year project," says Berg.

His analysis plots the median number of publications between 2007 and mid-2010, and the median of investigators' average publication impact factors, against total direct NIH funding in 2006. It covers 2,938 investigators, who were divided into 14 groups on the basis of their funding level.

The resulting plot (see chart) shows that both measures peaked at around US$750,000 in annual funding; at higher funding levels, the median publication number and average impact factor were both discernibly lower.
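For readers curious about the mechanics, the sketch below shows, in Python with pandas, roughly how investigators might be binned by funding level and summarized in the way the article describes. It is not Berg's actual code: the file name, column names and the use of equal-size quantile bins are all assumptions made purely for illustration.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical columns, for illustration only: 2006 direct funding (US$),
# publication count for 2007 to mid-2010, and each investigator's average
# impact factor over those publications.
investigators = pd.read_csv("nigms_investigators_2006.csv")

# Split the ~3,000 investigators into 14 funding groups. Berg's exact bin
# boundaries are not given in the article, so equal-size quantile bins are
# used here as a stand-in.
investigators["funding_bin"] = pd.qcut(investigators["direct_funding_2006"], q=14)

# For each funding group, take the median publication count and the median
# of the per-investigator average impact factors, plus a funding midpoint
# to use as the x-axis.
summary = investigators.groupby("funding_bin", observed=True).agg(
    funding_midpoint=("direct_funding_2006", "median"),
    median_pubs=("n_pubs_2007_2010", "median"),
    median_avg_impact=("avg_impact_factor", "median"),
)

# Plot both productivity measures against funding, one panel each.
summary.plot(x="funding_midpoint", y=["median_pubs", "median_avg_impact"],
             subplots=True, marker="o", legend=False)
plt.tight_layout()
plt.show()
```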


Berg says conventional wisdom has long held that, once a lab reaches a certain size, it becomes harder to manage and the average number of publications per dollar falls. But until now, he says, "no one actually had the data to put that in more quantitative terms". He hastens to add that the variation within funding levels is large. "Some people with $800,000 or $900,000 are publishing 40 or 50 papers over this time. It's important not to forget that the average behaviour is not the behaviour of everybody."

Berg's analysis comes at a time of increasing austerity for the US government, driven by a struggling economy and ballooning deficits. The push to trim costs is likely to gain strength come January, when spending-conscious Republicans will take control of the US House of Representatives, where funding bills are born. And political cost-cutters may increasingly turn to analyses such as Berg's to inform their decisions.

"Science is not an obvious first choice for the public. It could be regarded as a luxury during a time of recession. So there is a call for greater accountability and greater documentation of the impact and expenditure of public funds," says John Marburger, vice-president for research at the State University of New York, Stony Brook. As director of the White House Office of Science and Technology Policy under former president George W. Bush, Marburger pushed for more rational systems of developing and evaluating science policy. "Congress and the administration want to see something more than just our anecdotal success stories," adds John McGowan, deputy director for science management at the NIH's National Institute of Allergy and Infectious Diseases in Bethesda.

Analyses similar to Berg's are under way, but on a larger scale. The STAR METRICS (Science and Technology in America's Reinvestment — Measuring the Effects of Research on Innovation, Competitiveness and Science) project, launched in May and led by the NIH and the US National Science Foundation, aims to develop measurements of the economic and social impacts of US research spending by linking data on federal grant recipients to outcomes such as publications, patents, citations and employment (see Nature 464, 488–489; 2010). Meanwhile, McGowan and his team have developed e-SPA (electronic Scientific Portfolio Assistant), a computer tool for gauging productivity by linking NIH-funded investigators to measures including impact factor, citation number and patents applied for and published. e-SPA is now in use by about 1,000 NIH staff as they plan and evaluate their research portfolios and make close-call funding decisions on individual grants. And in 2006, the National Institute of Environmental Health Sciences in Research Triangle Park, North Carolina, launched SPIRES (Scientific Publication Information Retrieval and Evaluation System), an NIH-wide system that matches 275,000 NIH grants with publications going back to 1980.
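As a rough illustration of the kind of grant-to-publication linkage these systems perform, the following sketch joins a table of grants to a table of publications on a shared grant-number field and tallies simple output measures per grant. The table layouts, field names and metrics are assumptions for illustration, not the actual e-SPA or SPIRES schemas.

```python
import pandas as pd

# Hypothetical inputs: one row per funded grant, and one row per publication
# that acknowledges a grant number (for example, harvested from PubMed metadata).
grants = pd.read_csv("nih_grants.csv")          # grant_id, pi_name, institute, fiscal_year
publications = pd.read_csv("publications.csv")  # pmid, grant_id, journal_impact_factor, citations

# Link each grant to its acknowledged publications, then summarize simple
# output measures per grant for portfolio review.
linked = grants.merge(publications, on="grant_id", how="left")
portfolio = linked.groupby(["institute", "grant_id"]).agg(
    n_pubs=("pmid", "count"),
    median_impact_factor=("journal_impact_factor", "median"),
    total_citations=("citations", "sum"),
)
print(portfolio.sort_values("n_pubs", ascending=False).head())
```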

Some are sceptical of such efforts. "There's no reason to think that just because there is productivity in an area of science it would be a predictor of social value," says Daniel Sarewitz, Washington DC-based co-director of the Consortium for Science, Policy and Outcomes at Arizona State University. "You can be productive on a question that's of great interest to scientists, but of no particular value in terms of application."

Nonetheless, such analyses focus the attention of scientists competing for increasingly scarce dollars. For Dorothy Erie, an NIGMS-funded biochemist at the University of North Carolina in Chapel Hill, Berg's analysis tells an important story. "There's a very clear difference in productivity between those who are above $225,000 and those who are below it," she says. "If you can only afford to hire two people, it's hard to be productive."

Berg stresses that the analysis is a conversation-starter, not a judgement to be applied mechanically. "If you just say, 'Based on your funding level, you should be publishing seven papers and you are only publishing four,' and one of those four is the discovery of RNA interference, that clearly would be the wrong way to think about things," he says.

Raphael Kopan, a developmental biologist and NIGMS grantee who this year ran his lab at Washington University in St Louis on $800,000, says that Berg should be applauded for trying to scientifically analyse what his institute gets for its investment. But without segregating the data — comparing, for instance, investigator-initiated grants with projects instigated by the NIGMS, or intramural with extramural investigators — "it may lead to the wrong conclusion — that scientists do best if their funds are limited and their labs are small. I don't think this is necessarily correct," says Kopan.

Still, Berg's analysis has served a purpose: validating a 20-year-old NIGMS policy of generally denying new grants to well-funded labs. Since 1999, that has meant labs with more than $750,000 in direct support from all sources, including the award being applied for.

Marburger says that Berg's analysis provides a "reality check" of that policy. The results, he says, are "an indication that they aren't making a big mistake".


Berg's next project will be to tackle the impact of the abbreviated grant-application forms that came into effect at the NIH in January. Among other things, he will be asking whether and how the slimmed-down form for the agency's mainstay grants is affecting the scores that applicants receive.

Whatever happens, the future is likely to bring more austerity, making it important for defenders of science agencies to arm themselves with the best quantitative ammunition they can generate. In this environment, questions such as Berg's "are very good to ask", says Kopan, who argues that Congress is already effectively cutting the NIH by failing to keep its budget growing as quickly as the costs of doing biomedical research. If cuts have to be made, he says, "we might as well go ahead and do it correctly". 
