The research funding system is broken: scientists don't have time for science any more. Because they are judged on the amount of money they bring to their institutions, writing, reviewing and administering grants absorb their efforts [1]. The requirement that they promise taxpayers specific results to justify research tends to invite either exaggeration or boringly predictable projects. Yet the research behind 30% of the pivotal papers from Nobel laureates in medicine, physics and chemistry was done without direct funding [2].

Every scientist recognizes this problem and hopes for a solution. Although detailed proposals may be indispensable for some projects, such as rigorous clinical trials and large-scale collaborative research, ideas abound for more efficient ways to fund general research. Some organizations are already experimenting. Multiple options could co-exist, with portions of the budget earmarked for different schemes.


Here are some of the most promising proposals to reduce the amount of time scientists spend trying to fund their research, and the pros and cons of each (see table). Definitive fixes would require major system overhauls, which are likely to make some scientists justifiably nervous. But smaller, pilot efforts that enable us to evaluate what works could begin right away.

Table 1 | Options for revamping the funding system

Fund everybody (or a lucky few)

Some — or all — of the research budget could be allocated to eligible scientists in equal shares, or given to a few lucky ones at random. With egalitarian sharing, each scientist would receive only a small amount, which could quickly evaporate without returns when research costs are high. But scientists in some fields — mathematics, say — could achieve much on a small share; and in some settings the shares could be substantial. For instance, if half of the US$31.2 billion that the US National Institutes of Health spends on research each year was shared equally among 300,000 researchers, each would get more than $50,000 a year.
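
As a quick back-of-the-envelope check of that figure, the arithmetic can be written out explicitly; the sketch below uses only the numbers quoted above.

```python
# Back-of-the-envelope arithmetic for the egalitarian scheme described above.
# All figures are those quoted in the text.

nih_research_budget = 31.2e9   # annual NIH research spending, USD
shared_fraction = 0.5          # half the budget goes into equal shares
researchers = 300_000          # eligible scientists

share = nih_research_budget * shared_fraction / researchers
print(f"Equal share per researcher: ${share:,.0f} per year")  # $52,000 per year
```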

Lottery distribution, too, flies in the face of the principle that science funding should be meritocratic. Still, some agencies are trying it: the Foundational Questions Institute in New York, which tackles key questions in physics and cosmology, uses a lottery to award its mini-grants, which range in value from $1,000 to $15,000. Such an approach may not be as radical as it sounds: the imperfections of peer review mean that as many as one-third of current grants are effectively being awarded at random [3]. This situation will only worsen as falling acceptance rates encourage investigators to bombard agencies with proposals, leaving fewer qualified reviewers to judge each one. The downside of random allocation is that not every deserving scientist will be funded.
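
To make the mechanics concrete, here is a toy sketch of lottery allocation, assuming a hypothetical applicant pool and award sizes in the mini-grant range quoted above; it is an illustration, not FQXi's actual procedure.

```python
# A toy lottery: eligible applicants are drawn at random and paired with the
# available awards. Pool and award sizes are hypothetical.

import random

def run_lottery(applicants: list, awards: list, seed: int = 1) -> dict:
    """Randomly select as many winners as there are awards."""
    rng = random.Random(seed)
    winners = rng.sample(applicants, k=min(len(awards), len(applicants)))
    return dict(zip(winners, awards))

pool = [f"applicant_{i}" for i in range(50)]
print(run_lottery(pool, awards=[1_000, 5_000, 15_000]))
```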

Fund according to merit

Leading thinkers and experimenters worthy of unconditional support could be identified through peer assessment of their work and credentials. Appraisals of project-based proposals already take a scientist's merit into account, but they typically give less weight to it than to the project plan. Peer assessment does not work well for early-career scientists, who have a short track record. But for those more established in their field, a career trajectory offers a wealth of information. By contrast, an isolated project is only a snapshot.

In the MacArthur Fellows Program, for example, meticulous peer assessment is used to select 20–30 individuals a year on the basis of exceptional creativity and promise for important advances. Recipients do not have to justify what they do with the $500,000 award, which is spread over 5 years. However, close scrutiny of an individual's career may become prohibitively time-consuming for systems that award thousands of grants: it might save grant recipients time, but it adds to the administrative load of reviewers. The approach is also vulnerable to favouritism, in which only elite individuals and lines of research are selected and thousands of scientists doing high-quality, smaller-scale science are left out.

To avoid the subjectivity and burden of evaluating thousands of careers, an automated system for gauging relative merit would have to be devised. Such a system would depend on objective indices: the share of the annual funding budget that scientists receive would be based on a merit score calculated with a pre-specified formula.
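
As an illustration only, the sketch below allocates a budget in proportion to a hypothetical composite score; the indices and weights are assumptions made for the sake of the example, not a formula proposed here.

```python
# A minimal sketch of formula-based allocation. The composite score and its
# weights are hypothetical choices, purely for illustration.

def merit_score(papers: int, citations: int, h_index: int) -> float:
    # Citations per paper rewards quality over quantity; the h-index
    # anchors sustained output.
    citations_per_paper = citations / papers if papers else 0.0
    return 0.6 * citations_per_paper + 0.4 * h_index

def allocate(budget: float, records: dict) -> dict:
    # Each scientist's share of the budget is proportional to their score.
    scores = {name: merit_score(*rec) for name, rec in records.items()}
    total = sum(scores.values())
    return {name: budget * s / total for name, s in scores.items()}

shares = allocate(1_000_000, {
    "A": (20, 800, 15),   # (papers, citations, h-index)
    "B": (60, 900, 18),
})
print(shares)  # A's higher citations-per-paper earns the larger share
```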

Metric-based appraisals are already familiar to many scientists, particularly those in European countries. The UK Research Assessment Exercise, for example, relies on them. It is a much-hated and much-debated system for evaluating departments, but its replacement, the Research Excellence Framework, will rely even more on indices when it comes into effect in 2014. Metrics also underlie many hiring decisions in Italy, a country that is struggling to remedy widespread nepotism, and in Germany's Max Planck institutes. However, most of these assessments are either simplistic, focusing on the number of peer-reviewed publications, or inappropriate, judging the impact factor of the journal rather than that of the individual article. More sophisticated formulae are needed if a scientist's merit is to be captured.

Furthermore, indices are open to gaming, although some are more difficult to influence than others. To counter this, the system could use indices that exclude self-citations and capture quality rather than quantity (such as average citations per paper instead of number of papers), discourage gift authorship by adjusting for co-authors and penalize quantity that is not accompanied by quality. Several metrics could be combined.
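
For concreteness, here is a rough sketch of one such gaming-resistant index; the paper fields and the equal split of credit among co-authors are illustrative assumptions, not a prescription from the text.

```python
# One possible index along the lines described above: self-citations are
# excluded and credit is divided among co-authors, so neither self-citation
# nor gift authorship inflates the score.

from dataclasses import dataclass

@dataclass
class Paper:
    citations: int        # total citations received
    self_citations: int   # citations from the author's own papers
    n_authors: int        # number of co-authors, including this scientist

def adjusted_citation_index(papers: list) -> float:
    """Average external citations per paper, with credit shared equally
    among co-authors, so quantity alone does not raise the score."""
    if not papers:
        return 0.0
    credit = sum((p.citations - p.self_citations) / p.n_authors for p in papers)
    return credit / len(papers)

print(adjusted_citation_index([Paper(100, 20, 4), Paper(30, 0, 1)]))
# (100-20)/4 = 20.0 and 30/1 = 30.0 -> average 25.0
```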

Funding systems could reward good scientific citizenship practices, such as data sharing [4], high-quality methods, careful study design and meticulous reporting of scientific work [5]. Openness to collaboration, non-selective publication of 'negative' findings, balanced discussion of limitations in articles and high-quality contributions to peer review, mentoring, blogging or database curation could also be encouraged. Researchers might be rewarded for publishing reproducible data, protocols and algorithms. However, some citizenship practices are difficult to capture in automated databases, and so would be subject to the disadvantages of peer assessment.

State broad goals

Another way to save time would be to simplify the application. Researchers could be asked, for example, to submit short summaries of their intended research, describing broad goals only. Such applications require less effort to write, review and administer, and would allow flexibility in carrying out the work, if funded. Examples include the NIH Director's Pioneer Awards and the Howard Hughes Medical Institute (HHMI) grants. The HHMI selects 300 established investigators and 50 young investigators through peer assessment of their credentials and of proposals for high-risk research with uncertain prospects of innovation. Its awards are usually renewed after 5 years on simple documentation of effort rather than demonstrated results, although results are needed for renewal after 10 years. Nevertheless, any system that demands high-risk, innovative goals and then requires results generates potential for exaggeration.

Ignore grant portfolios in promotions

Many institutions use the size of a scientist's grant portfolio as a basis for tenure and promotion. This practice may prompt scientists to prepare multiple grant applications for expensive, even if dull, projects and to abandon brilliant ideas that need only limited funding to test. But pursuing many expensive grants costs institutions money, because both scientists and administrators spend an inordinate amount of time handling proposals, supplements, timesheets and justifications, as well as progress and final reports.

Many large projects never result in a scientific achievement, so even if the strategy brings in short-term grant funding, it may not pay off in the long term. The size of a portfolio should therefore not be a criterion for promotion; committees should focus instead on real work and achievements. Judging scientists by the size of their portfolio is equivalent to judging art by how much money was spent on paint and brushes, rather than by the quality of the paintings.

What we can do now

All of the options above could be put into practice, either through small, progressive steps or through more extensive changes to the system. A major overhaul is likely to take years to implement, and will meet with wide resistance and debate. Smaller steps, such as changing promotion and tenure criteria, are easier to make. Pilot programmes of proposal-free or broad-goal-based funding can be incorporated into existing funding structures.

There are issues still to be resolved. All funding options face a tension over how many scientists should receive awards, and there is no good evidence on whether it is better to give fewer scientists more money or to distribute smaller amounts among more researchers. Some funding schemes are well suited to funding numerous scientists; others favour elitism (see table).

We need rigorous ways to determine which approach works best. It is a scandal that billions of dollars are spent on research without knowing the best way to distribute that money. Retrospective assessments are easy, but subject to confounding when comparing groups that were funded through different schemes. For example, one study found that HHMI-funded investigators publish more high-impact papers and get more recognition than matched NIH-funded peers [6]. But it's impossible to match scientists perfectly: the prestige of the HHMI name alone may lead to more peer-based recognition. Prospective comparisons are more reliable, but require long follow-up. For example, controlled trials could randomize consenting scientists to different funding schemes, then compare surrogate metrics and long-term successes.
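
A minimal sketch of such a trial, assuming hypothetical scheme names and a consenting cohort, might randomize scientists as follows; the follow-up comparison itself is left as a comment.

```python
# Randomizing consenting scientists to funding schemes, as suggested above.
# Scheme names and the cohort are hypothetical.

import random

SCHEMES = ["egalitarian", "lottery", "merit-formula", "broad-goals"]

def randomize(scientists: list, seed: int = 42) -> dict:
    """Shuffle the cohort and deal scientists to schemes in round-robin
    order, so the arms end up the same size (to within one)."""
    rng = random.Random(seed)
    shuffled = scientists[:]
    rng.shuffle(shuffled)
    return {name: SCHEMES[i % len(SCHEMES)] for i, name in enumerate(shuffled)}

arms = randomize([f"scientist_{i}" for i in range(12)])
# After years of follow-up, compare surrogate metrics (publications,
# citations, data sharing) and long-term successes across the arms.
```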

Ultimately, funding schemes should support the long-term goals of science. Few isolated research efforts have an immediate, substantial and durable impact; successful translation of basic research to practical applications occurs rarely, and with average delays of almost three decades [7]. The aim of science is to expand our knowledge base, which eventually yields useful applications. This is what scientists entered their profession to do, so requiring them to spend most of their time writing grants is irrational. It's time to seriously consider another approach.