Careers are made and broken by grant-funding committees. So how are the key decisions really made?
There are six outstanding grant applications listed on the flip chart. There is money for two, or maybe three. And the decision as to which will be funded rests in the hands of the 15 members of the peer-review panel who are meeting inside the glassy rectangle of the American Cancer Society (ACS) building in central Atlanta, Georgia. It is only 45 minutes into the committee's two-day meeting in June, and the conversation is already tense.
"It seems pretty pedestrian," says the committee chairman, referring to the first application on the list. The applicant wants to investigate the molecular signals that could shut down runaway cell division in a particularly deadly cancer — but much of this pathway has already been worked out in other cell types. "This is good solid work," argues another reviewer, slightly exasperated. "Not everything has to be a bright, shiny idea. Valuable information will come out of it. The innovation is less than in other grants, but I think the other aspects make up for that."
The real question is whether good, solid work is enough when as much as US$800,000 is at stake — the cost of supporting a cancer investigator and his or her lab over four years. The competition is extreme. At the ACS, the largest private non-profit funder of cancer research in the United States, the average success rate for grant applications has slipped by a few percentage points in the past two years to roughly 15%, owing largely to fewer donations — the organization's sole source of income — in the economic downturn. At the National Institutes of Health (NIH) in Bethesda, Maryland, which funds the majority of biomedical research in the United States, several years of flat federal funding combined with a rise in the number of applications means that 21% of research-project grant applications were funded in 2009, down from 32% ten years earlier (see graph). The situation in many other countries is just as tough.
All of this puts immense pressure on the grant-review panels. Senior reviewers say that when the top one-third of proposals can be funded, the review process works well at identifying the best science. But when the success rate drops, they see the process start to fall apart. Conversations turn nit-picky and negative, with reviewers looking for any excuse not to fund a project, rather than focusing on its merits. Reviewers say that they feel forced into making impossible choices between equally worthy proposals, especially when success rates are less than 20%. "That's in a range where you have lost discrimination," says Dick McIntosh, professor emeritus of cell biology at the University of Colorado in Boulder. "That's a situation where you are grading exam papers by throwing them down the stairs." The chairman of the ACS panel agrees. "Deciding between the top grants, I don't want to say it's arbitrary, but it's not really based on strong criteria," he says. "It's subtle things."
To find out how subtle, Nature secured access to an ACS review-panel meeting. The organization spends a total of $120 million a year on research grants. That is a drop in the funding bucket compared with a behemoth such as the NIH, which awarded $16 billion in research-project grants alone last year — but federal law prohibits members of the public from attending meetings of NIH 'study sections'. The ACS allowed a reporter to sit in with the stipulation that the identities of the reviewers and the grant applicants were to be protected. And when it comes to deciding who should get a share of the pot, the tensions, agonies and battles are the same everywhere.
At the start of the day, as the reviewers — 13 biomedical scientists, a cancer survivor and an oncology nurse — are taking their seats, the ACS programme officer, who shepherds the proposals through review, announces the results of the round six months earlier: "Two out of 23 Research Scholar Grants were funded."
Eyebrows furrow as everyone quickly does the mental maths — that is a success rate of just below 9%. The rate for this session will be decided by the ACS in September, on the basis of the available budget and the total pool of proposals recommended for funding from all of its 20 peer-review panels. But it is likely to be just as low as, if not lower than, the previous one. This means that only the exceptional applications will even have a chance at securing one of the grants, which are similar in scope to the NIH's R01 Research Project Grants — a mainstay of funding for many US labs — but are awarded only to investigators who are in the first six years of their independent careers.
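The mental maths the panel is doing can be checked in a couple of lines (a sketch for illustration; only the two-out-of-23 figure comes from the programme officer's announcement):

```python
# Previous round: 2 Research Scholar Grants funded out of 23 applications.
funded, applications = 2, 23
success_rate = funded / applications
print(f"{success_rate:.1%}")  # prints "8.7%" — just below 9%
```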
The panel members have already done their homework. Each of them has been assigned half-a-dozen 25-page grant applications to review in detail, producing a written critique of each one and a preliminary score of outstanding (1.0–1.5), excellent (1.5–2.0), good (2.0–2.5), fair (2.5–3.0) or poor (3.0–5.0). Panel members serve as 'primary reviewer' for half of their stack, and secondary reviewer for the rest. The vice-chairman lists the six proposals with 'outstanding' scores on the flip chart at the front of the room, and the panel discusses these first.
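The preliminary-score bands can be sketched as a small lookup table (a hypothetical helper, not ACS software; the band edges are as stated above, and because the published ranges share their endpoints, a boundary score such as 1.5 is assigned to the better category here):

```python
def score_label(score):
    """Map a preliminary review score to the panel's verbal category.

    Bands follow the article: outstanding (1.0-1.5), excellent (1.5-2.0),
    good (2.0-2.5), fair (2.5-3.0), poor (3.0-5.0). Lower scores are
    better; shared endpoints resolve to the better category.
    """
    if not 1.0 <= score <= 5.0:
        raise ValueError("scores run from 1.0 (best) to 5.0 (worst)")
    bands = [
        (1.5, "outstanding"),
        (2.0, "excellent"),
        (2.5, "good"),
        (3.0, "fair"),
        (5.0, "poor"),
    ]
    for upper, label in bands:
        if score <= upper:
            return label

print(score_label(1.1))  # prints "outstanding"
print(score_label(1.9))  # prints "excellent"
```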
Two of the applications stand head-and-shoulders above the rest. One, with a preliminary score of 1.1, uses an unusual animal model to investigate the human genes that drive an aggressive blood cell cancer. "Of the cancer approaches [in this animal], this is the most innovative system I've seen," says one reviewer. "If this works with this tumour, he can apply it to any cancer."
The second high-scoring proposal — also 1.1 — is aimed at answering fundamental questions about a rare but highly malignant childhood tumour. It, too, wins praise for its ambition — impressive but not over-reaching. "This would be huge, it's incredibly innovative and it addresses key questions," says the primary reviewer. "Apparently, he was even more ambitious the first time around [on his first submission]. He tried his best to curb his enthusiasm, but he couldn't do it."
The next proposal up for discussion also gets high marks — 1.4 — but one reviewer notices something that doesn't sit well. The applicant has focused the proposal on a handful of genes from a longer list that are important to metastasis, the spreading of the cancer. But from the application, the reviewer can see that the investigator has submitted a nearly identical proposal to the NIH. "Splitting the two to get two grants doesn't seem right," the reviewer says. "I like it, but I don't want to support it if she's not going to put all of her best things in there." After some discussion, the same reviewer lowers his score to 1.9, which pushes the proposal out of the top 'competitive' range of projects that stand a chance of getting funding.
Another outstanding application — this one on stem cells — runs into trouble because of a lack of scientific details. It is already borderline, with the primary reviewer giving it a score of 1.8 and the secondary reviewer giving it 1.5. The primary reviewer praises the applicant's productivity and thinking. "So, why didn't I give it a 1.1?" she says. "I think a lot of this grant is open-ended." She can't see how the applicant will filter the genes that are pulled from the proposed screen. The problem with this particular fishing expedition, says the second reviewer, is that "he didn't explain how he would sort through all the fish". This proposal, too, is knocked out of the competitive range.
By 11 a.m., every outstanding proposal on the flip chart has been discussed and its adjusted scores have been marked beside it. Four remain in the outstanding range. Now the room turns to the remaining proposals, which will be scored and critiqued to help the investigators revise their applications and try again. The reviewers get irritated by applicants who don't follow the rules, or who leave out essential data. "What annoyed me to no end," a panel member says of one applicant, "is that he put the most important figures in the appendix, where there is not supposed to be any data."
"The entire proposal is based on the success of aim one," says another reviewer, referring to a proposal to isolate a cancer-cell population through cycles of specialized cell culture. "If she doesn't achieve that, she doesn't say what she will do next." Another application is criticized because the investigator does not seem truly 'independent' — he seems to be continuing his postdoctoral work at the same institution. And testiness sets in when the reviewers think that the wrong model system has been chosen, or budgets seem extravagant. "My main problem is, why isn't she just doing this in a mouse?" says a panel member of an experiment with an unusually expensive approach. "She could [do the experiment] much easier and cheaper."
By the end of the morning, the panel has distilled the list of 17 applications into the four outstanding proposals and six 'non-competitive' ones which can be resubmitted in the next funding cycle. They have rejected seven proposals with scores higher than 2.0 — a range unlikely to win funding even on resubmission. "I reviewed this last time and I don't want to review it again," says one panel member shortly, as lunchtime nears. "I spent three hours reading the grant and it is just hard to follow. I read sentences out loud and had a hard time deciding what's an adjective and what's a verb."
At lunch, the committee members sit together at two tables, wary of a reporter listening in. But one member, a physician-scientist, discusses what he has learned about negotiating the review process for himself. A senior investigator at his own institution had explained to him that a well-written proposal can transform the two or three main reviewers, who will read your proposal in depth, into your cheerleaders. "If you wow those two people in the room of 20, the other 18 will vote similarly," he says.
A look around the room suggests that only three of the scientist reviewers are older than 50 — reflecting a wider concern about participation in peer review. "Sometimes, very good scientists are not willing to serve on study sections" because of the time commitment, says Pietro De Camilli, a neuroscientist at Yale School of Medicine in New Haven, Connecticut, who has taken part in NIH study sections. "You might be respected for serving, but there are no tangible rewards for it." Each ACS reviewer is paid a small honorarium of $250 for their in-depth critiques and attendance at the meeting, but it is a task that can easily take up to two weeks twice a year, for four years — a typical term as a reviewer for the ACS. Gregory Petsko, a structural biologist at Brandeis University in Waltham, Massachusetts, who has also participated in numerous review panels, says that having senior, well established investigators on the panel is important because they can push back against the play-it-safe arguments that creep into the discussion these days. "Isn't ambition the reason why we are here?" he says.
With lunch over, the reviewers have a difficult conversation ahead: they have to rank the four most competitive proposals. The unique animal-model and childhood-tumour proposals will win the top two slots and almost certainly get funding: these two "are clearly above the rest", says the chairman.
It is the two proposals jockeying for the third and fourth spots — the 'deadly' cancer application with a score of 1.3 and another scoring 1.4 — that are sticky. Slot three has a reasonable chance of getting funded. But slot four will almost certainly land in what is called the 'pay-if' category, meaning that it will be funded only if there is an unexpected budget surplus, a top-slot recipient turns down their grant or a big donor asks to fund that proposal's specific field. In 2009, 151 proposals for ACS Research Project Grants and postdoctoral fellowships landed in this range — twice as many as usual — and 45 were eventually funded.
The deadly cancer application, which has been submitted twice before, is on its last chance according to ACS rules. "Every time, [this proposal] has been ranked as 'outstanding'. It's now or never for this one," says one panel member. "The science is strong, but there is this issue of novelty that seems to be dogging this grant," says another.
Its rival is a first-time submission, with the aim of studying the downstream events in a signalling network important in numerous cancers. It is technically superb, the reviewers agree, but the committee chairman, who reviewed it, has concerns about the applicant's productivity. "This is well conceived, nicely written and, by the end of it, it's really great science," he says. "But this investigator had an extended postdoc and she had very few first-author publications."
"The publication rate out of her postdoctoral laboratory is slower than most other labs," points out another panel member. "The stories that come out [of that lab] are very big. That rate of publication is not unusual."
The debate reaches a deadlock. Both sets of reviewers feel strongly that their grant deserves a shot.
The chairman breaks the impasse, arguing that the field will learn more from the signalling-network grant. Heads nod around the room, and the chairman adjusts the scores on the flip chart to move the signalling proposal ahead of its rival. Everyone seems satisfied — yet a silent pulse of regret can be felt for the losing application. Finally, the full panel is instructed to 'vote their conscience' and, by secret ballot, rank each of the top four proposals on the basis of the recommended scores. The ultimate order will be decided by a tally of the votes, which only the programme officer will see, but it is very likely to be the same as the order on which the panel has already agreed.
Later, back at his office, a panel member who had reviewed the deadly cancer proposal says he was frustrated at not being able to back strong science. "If I hadn't got my own ACS grant when I did, I would not be here," he says. "This could be a bad omen for this person. Stuff does fall through the cracks. I just hope other funding agencies will pick up this grant." He says the current strategy for most researchers is to apply to as many funding agencies, with as many proposals as possible. "It's brutal, the funding situation has ended lots of careers," he says.
The vice-chairwoman, who had also promoted the deadly cancer proposal, expresses similar regret. The grant might not have gained "amazing new information about a particular [molecular] pathway, but it might have been really important for this cancer system, which doesn't have much else out there". She worries that, in the current economy, this type of science loses out. "There is routine stuff that has to be done in the [research] system, so how does that get funded?" she asks. "That's the thing all of the funding agencies have to think about."
And that is what the ACS applicants have to think hard about, too. In the last week of July, they were all furiously scanning e-mails notifying them of the panel's conclusions. Just two of them saw, "I am pleased to inform you that the committee recommended your application be considered for funding". The other two 'outstanding' applicants had disappointing news. "Unfortunately, due to budgetary constraints", the message began, before going on to tell them that they were being considered for the pay-if programme.
So how can applicants gain an edge in this environment? The vice-chairwoman advises investigators to test-run their applications through a peer-review process of their own making, by showing their proposals to colleagues with varied perspectives. She also says that applicants should use their contacts to sniff out the personality of the panel and the nature of the competition. "Will 10 senior people in the field also be applying? Does the panel like X versus Y types of approaches? The bottom line is to ask for help, don't try to do it on your own," she says.
"It makes me sad that people who are really strong are struggling to get funded," says the committee chairman. "Ultimately, peer review is going to be an imperfect process. But we're not doing a bad job."
Powell, K. Research funding: Making the cut. Nature 467, 383–385 (2010). https://doi.org/10.1038/467383a