As professional editors, we initially reacted to the use of ChatGPT-like tools for writing with scepticism: not because we feared that our profession would become obsolete, but because we know that one of the trickiest parts of our job is understanding, and helping authors understand, what they are trying to say. To check whether our initial scepticism was justified, we decided to consult some experts. We teamed up with Nature Human Behaviour and asked researchers and science communicators to discuss the opportunities, limitations and risks of using generative AI (GenAI) in science communication across the entire spectrum of research, from the physical to the social sciences. You can read their thoughts in a Viewpoint in this issue and in a Feature1 in Nature Human Behaviour.

GenAI has great potential to democratize science communication by helping to create engaging content and reach various audiences, but it currently has technical limitations such as ‘hallucinations’ and ‘knowledge cutoffs’. Even if these are solved, there is concern that the widespread use of these tools “could eliminate diversity from the pool of science communicators”1 and reduce the amount of science communication targeted to specific cultural and regional audiences, leading to a “monoculture”2 of predominantly English-language, western-world-rooted discourse. Furthermore, in the wrong hands, GenAI can be a dangerous weapon for creating and spreading science disinformation.

To complement these points and give an editorial perspective, we stress an obvious yet underappreciated fact. As mathematician Paul Halmos put it, “to say something well you must have something to say […] Much bad writing […] is caused by a violation of that first principle”3. Surely every writer knows what they want to say? Not always: ideas can be fuzzy, with too many details (all seemingly essential) to include. A good editor will challenge the writer and help distil one clear idea and just enough information to convey it well. It’s also possible to have a clear idea but be reluctant to spell it out, for fear that it will be criticized as too strong, too weak, not original, and so on. A good editor will gauge whether that is indeed the case and advise accordingly, helping the writer build confidence in the piece. Can GenAI also help here? Not so much in developmental editing, because AI systems can’t (yet) provide critical thinking: they lack the broader context and are unable to detect subtle flaws in the logic or to challenge the ideas.

Can GenAI be of help in other ways? An oft-mentioned use of AI tools in writing is brainstorming or generating “story seeds”4: a starting point to help the human writer overcome blank-page anxiety. GenAI can help writers who are nervous about expressing themselves in a foreign language by smoothing their writing or suggesting better synonyms and idioms. It can also propose punchier titles, translate text into other languages or adapt it for different audiences, and summarize longer pieces. These are all helpful features that can enhance writing, but they are add-ons: one still needs something to say.

If you’re unconvinced, try this experiment: ask a GenAI tool to write an essay or an opinion piece on a certain topic. It will do so by stitching together a few more-or-less logically connected platitudes. This clichéd nothingness is unsurprising, because the AI system has no agency in writing or otherwise: it simply doesn’t have ‘opinions’ or a “narrative agenda of its own”4. You might try to be more prescriptive in the prompt to push towards a certain perspective. Perhaps there is such a thing as ‘the prompt’: the right way of describing the piece you had in mind so that the GenAI outputs it exactly. Have you just given the AI system agency to write? No: that prompt is precisely the clear articulation of your idea, and that’s your agency. Writing that perfect prompt is the hard part; the rest is dressing, whether you do it with or without help from GenAI.

As pointed out in a Perspective in Nature, the proliferation of AI tools in science “carries epistemic risks when scientists trust them as knowledge production partners”2. AI systems lack agency in doing science, or in writing about it, and scientists need to be mindful of this fundamental limitation.