Grant applications often fail — it's a fact of life. But the deliberations that lead to rejection are not made public, much to the relief of the applicants. The value of breaking this vow of secrecy, however, was made clear at the end of last month at a meeting held in Toronto, Canada, by the American Society of Human Genetics.

To give junior scientists an insight into the grant-reviewing process, the meeting featured a mock assessment of applications. Decisions are made by weighing up five factors: significance, approach, innovation, investigator and environment. The panel, in which actual grant reviewers graded fictional proposals, aimed to show how these factors come into play.

First up was a proposal to analyse genetic variation based on small deletions. All of the panellists agreed that the approach was technologically innovative, scientifically sound and would produce quality data. But Vishwajit Nimgaonkar, a geneticist from the University of Pittsburgh, Pennsylvania, scored the proposal low on the basis of significance. “It's not clear to me why we need another set of polymorphisms,” he said. “It looks like an application in search of a use.” Another panellist had some questions about the ‘environment’ — in this case whether a key collaborator was really committed to the project.

At the end of the session, the panellists debriefed the audience. As in the real world, they said, many of the applicants could have covered all five factors adequately in their proposals, but hadn't, assuming instead that the reviewers would be able to read their minds. In fact, the panellists had repeatedly made allowances for new scientists, saying that failing to spell out how a proposal meets the five criteria is a typical ‘young investigator error’. One senses that the young scientists present all hoped that real reviews are undertaken in a similar atmosphere of understanding.