
Support staff in research institutions are looking at ways they can harness the technology behind tools such as ChatGPT to better aid scientists in tasks such as writing grant applications. Credit: Peter Kováč/Alamy

“Almost magic.” That’s how scientist colleagues of Mads Lykke Berggreen used to describe his ability to put their complex research ideas into compelling prose. But over the past year, he has felt his star begin to wane as generative artificial intelligence (AI) tools such as ChatGPT have shown that they have similar abilities, yet are faster — and perhaps even better. “All of a sudden, I was replaceable,” says the research adviser, who is based at VIA University College in Aarhus, Denmark.

Since coming to this realization, Lykke Berggreen has thought hard about how generative AI will influence research-management practices — both his own, and those of the broader profession. He has decided to embrace the technology.

For instance, he uses ChatGPT to help researchers write a first draft of their research proposals. “I prepare headlines and the structure of the application in advance, then I interview the researcher about their proposal. Everything that the researchers would have put into the first draft anyway, I will just draw out in a conversation.” He takes manual notes using a word processor, and inputs these into ChatGPT. “Then ChatGPT will give us the prose.” The approach has cut a task that once took several working days down to a couple of hours.
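In outline, the drafting step amounts to a few calls to OpenAI's API: one prompt per prepared headline, each fed the relevant interview notes. The Python sketch below is illustrative only — the model name, section headings and prompt wording are assumptions, not details of Lykke Berggreen's actual workflow.

```python
# Minimal sketch of a notes-to-prose drafting step using OpenAI's Python
# library. Model name, prompts and section headings are illustrative
# assumptions, not Lykke Berggreen's actual setup.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

SECTIONS = ["Background", "Objectives", "Methods", "Expected impact"]

def draft_section(section: str, notes: str) -> str:
    """Turn raw interview notes for one section into grant-style prose."""
    response = client.chat.completions.create(
        model="gpt-4",  # assumed; any chat-completion model would do
        messages=[
            {"role": "system",
             "content": ("You are a research adviser drafting a grant "
                         "proposal. Write clear, compelling prose from the "
                         "interview notes provided.")},
            {"role": "user",
             "content": f"Section: {section}\n\nInterview notes:\n{notes}"},
        ],
    )
    return response.choices[0].message.content

# Usage: one call per prepared headline, then assemble the first draft.
# notes = {"Background": "...", "Objectives": "...", ...}
# draft = "\n\n".join(draft_section(s, notes[s]) for s in SECTIONS)
```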

Lykke Berggreen is not alone. Around the world, research managers are exploring how they can use generative AI to help with their daily tasks.

Yolanda Davids, deputy director of research development at the University of the Witwatersrand, in Johannesburg, South Africa, says she uses ChatGPT to draft letters and reports. For example, she might give the tool a short description of a research project, along with other information she wants to highlight, and ask the tool to write a letter of support for the funder, highlighting the potential impact of the study and its importance in the context of South Africa. “After ChatGPT gives me the results, I review them and make amendments,” she says. That includes ensuring that the English sounds South African, rather than having a US flavour, and removing the “elaborate adjectives and descriptors” that tend to pepper ChatGPT’s prose.

Kelly Basinger, a senior proposal manager at the Advanced Environmental Research Institute at the University of North Texas in Denton, says she uses ChatGPT to show researchers how they can improve the readability of their writing. The tool can take complex, jargon-filled text and reword it to suit the literacy level of a late secondary-school or early college student, demonstrating to faculty members how they can make their writing more accessible. “Obviously, faculty want their ideas to be funded,” Basinger says. “The first step is to help others understand those ideas.”

Many research managers, such as Nik Claesen, managing director of the European Association of Research Managers and Administrators in Brussels, see AI as an opportunity for the profession. But using AI for grant writing is not without risk, says Ellen Schenk, a research-funding consultant based in Rotterdam in the Netherlands. She says one insidious aspect of ChatGPT is its tendency to want to please its user, to the point that it invents material — a phenomenon known as hallucination.

Schenk experienced this at first hand when working on a proposal for a European funding call on inequity and access to health care. She asked ChatGPT whether the proposed project was a good fit for the call. The answer was a resounding yes. But when Schenk asked ChatGPT to back up its claims, it gave references that did not exist. She says she is now “very, very reluctant” to ask ChatGPT to design projects. “If you are not critical of the output, you will have a beautiful proposal, and probably the reviewers will buy it, as well. But the project won’t be realistic or feasible.”

Some users are put off when the results ChatGPT comes up with look impressive at first glance, but prove to be incorrect or too wordy on closer inspection. Lykke Berggreen says the best way of getting past this “word salad” stage is to learn what information ChatGPT needs to generate good output. There are many AI influencers who are sharing prompts and dishing out advice, he says, but he has found that the best way to learn is through trial and error.

Lykke Berggreen and Schenk both use the subscription version of ChatGPT. Schenk says that it has several advantages over the free one: guaranteed access (the free service being overwhelmed with requests was particularly problematic in the tool’s early days); a much higher word limit, which results in better answers and better reasoning; and access to AI plug-ins — tools written with specific tasks in mind, such as searching databases of academic literature.

Scaling up

Regardless of AI tools’ limitations, many research managers think the technology will have profound labour implications. James Shelley, who works on knowledge mobilization and science communication at Western University in Ontario, Canada, says he has become interested in developing AI applications for research administration partly because he wants to have a job in the future. His work doesn’t use ChatGPT itself; instead, he uses the technology behind the tool.

Shelley and his colleagues pay a few dollars a month to access this back-end technology from OpenAI, the California-based company behind ChatGPT, and use it to develop automated workflows that aid research management. He thinks that this type of bespoke tool represents the way the profession will incorporate AI in the future, rather than individual managers copying and pasting text into ChatGPT.

One such workflow, which his university is now using internally, generates plain-language summaries of new journal articles published by researchers in the institution’s Faculty of Health Sciences. These feed into a regular e-mail for the department’s research administration and communications teams. This is something that wasn’t done before, Shelley adds, because it would not have made sense to hire a person just to summarize every research paper the department produced. So far, he says, the feedback has been great.
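A workflow of this kind could, in outline, be a short script that loops over newly published abstracts and assembles the digest. The sketch below is hypothetical, not Shelley's code: it assumes the articles arrive as title–abstract pairs, and the prompt and model name are stand-ins.

```python
# Hypothetical sketch of a plain-language-summary digest; not Shelley's
# actual code. Assumes new articles arrive as (title, abstract) pairs.
from openai import OpenAI

client = OpenAI()

def plain_language_summary(title: str, abstract: str) -> str:
    """Summarize one abstract for a non-specialist audience."""
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model name
        messages=[
            {"role": "system",
             "content": ("Summarize research abstracts in plain language "
                         "for a university communications team.")},
            {"role": "user",
             "content": f"Title: {title}\n\nAbstract:\n{abstract}"},
        ],
    )
    return response.choices[0].message.content

def build_digest(articles: list[tuple[str, str]]) -> str:
    """Assemble the body of the regular e-mail from the new articles."""
    return "\n\n".join(
        f"{title}\n{plain_language_summary(title, abstract)}"
        for title, abstract in articles
    )
```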

Another piece of low-hanging fruit for the technology, Shelley says, is systems that generate a first-pass review of funding proposals, checking that they comply with basic submission guidelines before they are passed to a member of staff. “I imagine this will most likely be where most institutions deploy AI at scale in research administration,” he says.
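Such a pre-check could amount to putting a fixed checklist of questions to the model for each submission and routing the answers to a staff member. The checklist and prompt below are invented for illustration; real criteria would come from the funder's guidelines.

```python
# Illustrative sketch of a first-pass compliance check. The checklist items
# and prompt are invented; real criteria would come from the funder.
from openai import OpenAI

client = OpenAI()

CHECKLIST = [
    "Is the abstract 250 words or fewer?",
    "Is a budget section present?",
    "Is every named investigator listed with an affiliation?",
]

def precheck(proposal_text: str) -> str:
    """Return a pass/fail report for a staff member to review."""
    response = client.chat.completions.create(
        model="gpt-4",  # assumed
        messages=[
            {"role": "system",
             "content": ("You check funding proposals against submission "
                         "guidelines. For each question, answer pass or "
                         "fail with a one-line reason.")},
            {"role": "user",
             "content": ("Questions:\n" + "\n".join(CHECKLIST)
                         + "\n\nProposal:\n" + proposal_text)},
        ],
    )
    return response.choices[0].message.content
```

The model's verdict here is only a triage step: in the arrangement Shelley describes, a member of staff still takes over after the first pass.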

Appropriate guidance

Several of the research managers interviewed for this article raised concerns about the lack of guidance on what constitutes appropriate use of the technology in research administration. Tse-Hsiang Chen, a funding adviser and grant writer in the research office of the University Medical Center Utrecht in the Netherlands, says he expects that this will become clearer soon. The European Union is developing AI legislation that sets out rules and guidelines on how to use AI systems safely and legally, with respect for fundamental human rights. His institution, in collaboration with Utrecht University, also in the Netherlands, is developing guidelines, particularly for the use of generative AI in the context of research support. Where taking on this kind of work is concerned, he says, “I’m fairly certain that we’re not alone”.

Scalability is also a preoccupation for Lykke Berggreen, who has created an AI assistant to write applications for the Danish national research council, Danmarks Frie Forskningsfond (DFF). The assistant uses the same interview-based system Lykke Berggreen developed to draft grant proposals using ChatGPT. The questions are tailored to extract the detailed information the council requires in the application, with the researcher typing in their responses. The tool then produces a first draft tailored to the DFF’s specifications.
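In outline, the interview step is simply a fixed list of funder-specific questions whose typed answers are then fed into a drafting call like the one sketched earlier. The questions below are placeholders, not the DFF's actual application requirements.

```python
# Sketch of the interview step; the questions are invented placeholders,
# not the DFF's actual application requirements.
QUESTIONS = [
    "What is the core research question?",
    "Which methods will you use, and why do they suit the question?",
    "What contribution do you expect the project to make to the field?",
]

def run_interview() -> dict[str, str]:
    """Collect the researcher's typed answers to each tailored question."""
    return {q: input(q + "\n> ") for q in QUESTIONS}

# The answers would then be passed to a drafting call (see the earlier
# sketch), with a system prompt describing the DFF's formatting rules.
```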

Lykke Berggreen is sanguine about the threat AI tools might pose to his own employment. “AI will definitely replace a lot of research-management tasks and probably some research managers,” he says. But he thinks there are key parts of his work that a machine would not be able to do. He hopes that AI will take over the menial tasks, giving him more one-to-one time to spend coaching researchers. “I do a lot of confidence building when I talk to researchers, telling them that their ideas are good enough. That they can and should apply for this and that grant. I think that is hard to replace with a machine,” he says.