Economic and social experiments will assess the processes for receiving, reviewing and funding research and innovation. Credit: sanjeri/Getty

The US National Science Foundation (NSF) has announced plans to explore creative ways of funding research, emphasizing the need to make decisions more quickly and to support more innovative proposals.

The agency has partnered with the Institute for Progress (IFP), a think tank based in Washington DC, which will conduct a series of economic and social experiments based on data provided by the NSF. The aim is to evaluate how efficiently the agency is funding and supporting research, and how its processes could be improved.

The initiative was prompted by the CHIPS and Science Act, which was passed by the US government in 2022 to authorize funding for exploratory and curiosity-driven research. The NSF says that the act covers the processes that the agency uses to receive, review and fund grant proposals.

Nature Index spoke to IFP co-founder and joint chief executive Caleb Watney, who manages the organization’s metascience working group, which will conduct the experiments.

Can you tell me more about this initiative?

I think there’s widespread recognition that scientific progress is one of the main forces pushing humanity forward in the long run. But we don’t actually know that much about the funding, organizational structures and incentives of science. The NSF is trying to be proactive about building these kinds of research feedback loops into its systems.

NSF administrative data have often been difficult to share with outside academics. This initiative gives us the opportunity to create infrastructure to confidentially interview NSF programme managers and collaborate with them to identify interesting research questions.

It’ll be an opportunity for us to see how some past changes have affected the scientific ecosystem, work with the NSF on prospective experimental design and try to scope out some pilot programmes.

What areas do you plan to study?

There are some areas that I think the NSF has already flagged that it’s particularly interested in. There was a really cool and unique study that the NSF did last year, for example, in partnership with some academics around the launch of its Regional Innovation Engines programme. The agency and researchers dived into existing academic literature investigating whether regional innovation actually works. They spoke to academics to work out the types of research question they’d like to ask and what kinds of measurements they’d need for the outcomes they’d be looking for.

Caleb Watney is working with the US National Science Foundation to explore creative ways of funding research. Credit: Caleb Watney

The NSF is also interested in testing out a ‘no deadline’ policy. I think the agency has suggested that if you eliminate strict proposal deadlines, that could reduce the administrative burdens on reviewers and agency staff members. It might also increase the quality of submitted proposals if people aren’t submitting ‘under the gun’, and it could empower scientists to submit proposals whenever they feel it is most scientifically relevant, or as soon as they have the idea.

The NSF has already implemented the no-deadline policy across several directorates, so that allows us to look back and evaluate what effects this has had.

How might more innovative, ‘outside the box’ research benefit from the initiative?

The NSF has expressed interest in trying to find better ways to support unusual or high-risk, high-reward science. The agency’s director is excited to explore mechanisms that allow reviewers to champion specific proposals, often referred to as golden tickets, or to use priority flags.

One criticism of the current process is that it can be a bit too consensus-oriented. NSF programme officers can use some discretion to find proposals that seem particularly promising, but I think it would be worthwhile to give reviewers the ability to exercise that discretion independently.

What funding or organizational models are you most interested in testing?

Where possible, we’re interested in running a diagnostic stage before introducing the intervention. For example, with golden tickets, without actually changing the distribution of who gets funded, we could ask reviewers: “If you had a golden ticket during this funding round, who would you have given it to and why?”

Then, we can find out how this changes the distribution of who would have received funding. That’s a useful first diagnostic step, because I think some people wonder whether this system is actually going to change anything.

For instance, do the scientists who get funded end up being younger, on average? Do they come from less conventional institutional backgrounds? Do their proposals have attributes that make them more unique or maybe more innovative? I think a lot of those early diagnostics are valuable to look at, even before running the formal pilot.

It’s right to be slightly cautious and to make sure that we do as much as we can before we change the system. If you run pre-diagnostic tests and find that an intervention doesn’t seem to be making any impact, you avoid wasting the time and resources of running the whole pilot. The NSF seems excited about this approach, because it could be a lower-risk way of getting information.

Will you publish the results?

The main objective will be producing internal reports for the NSF, and it will ultimately be at the agency’s discretion whether the outputs are published. If we can identify interventions that empirically seem to work, or at least have the potential to work, I would guess that the NSF is going to be excited to publish those data or open the pilot up to the wider scientific community.

Will you be launching similar partnerships with other funders?

We’re working most closely with the agency, and this specific agreement is only with the NSF. But we are having a pretty open dialogue with folks at the US National Institutes of Health.

I’ve also been in touch with our friends in the United Kingdom who are involved with some of the science agencies there, but those conversations are at a much earlier stage. We’d love to see whether we can play a similar part in evaluating UK research processes as well.

The underlying structure of science has changed a lot in the past 60 years, and yet the way that we fund and structure research has been pretty stable over that period. I would be pretty surprised if we’re at the absolute frontier; I think the current system has a lot of merit, but the NSF will be the first to say that there’s always room for improvement.