Hello Nature readers, would you like to get this Briefing in your inbox free every week? Sign up here.

An animated sequence from a video generated by OpenAI’s Sora of a young man reading a book while sitting on a cloud.

Sora is one of several AI tools that generate video from text prompts. (Credit: OpenAI)

How Sora & co could change research

The release of OpenAI’s sophisticated video-generating tool Sora has been met with a mix of trepidation and excitement. Some observers worry that the technology could lead to a barrage of realistic-looking misinformation. “We’re going to have to learn to evaluate the content we see in ways we haven’t in the past,” says digital-culture researcher Tracy Harwood. Others see positive potential: such systems could help to simplify and communicate complex scientific findings, and speed up the illustration of papers, conference posters and presentations. But in some cases, such as reconstructions of extinct lifeforms, AI-generated illustrations could mislead both scientists and the public. For now, many scientific journals prohibit AI-generated imagery in papers.

Nature | 5 min read & Nature | 6 min read

Could AI-designed proteins be weaponized?

Researchers have laid out safety guidelines for AI-powered protein design to head off the possibility of the technology being used to develop bioweapons. The voluntary effort calls for the biodesign community to police itself and to improve the screening of DNA synthesis, a key step in turning designed proteins into physical molecules. “It’s a good start,” says global health policy specialist Mark Dybul. But he also thinks that “we need government action and rules, and not just voluntary guidance”.

Nature | 5 min read

Algorithm learns better by forgetting

Occasionally erasing part of an AI model’s ‘memory’ seems to make it better at adapting to new languages, particularly those for which not much data is available or that are linguistically distant from English. Researchers periodically reset a neural network’s embedding layer, the part that stores word representations, during its initial training in English. When the periodic-forgetting model was then retrained on a language with a small dataset, its accuracy score dropped by only 22 points, compared with almost 33 points for a standard model. “An apple is something sweet and juicy, instead of just a word,” says AI researcher and study co-author Yihong Chen, who suggests that forgetting pushes the network towards this kind of high-level, concept-like representation rather than mere word memorization.
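The underlying recipe is straightforward to sketch. Below is a minimal, hypothetical PyTorch illustration of periodically resetting the embedding layer during training; the toy model, the reset interval and the random stand-in data are assumptions for illustration, not the authors’ actual setup (see the arXiv preprint for details).

```python
import torch
import torch.nn as nn

# Toy language model: only the embedding layer is ever "forgotten".
class TinyLM(nn.Module):
    def __init__(self, vocab_size=1000, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)  # token embeddings
        self.body = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, tokens):
        return self.head(self.body(self.embed(tokens)))

model = TinyLM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
RESET_EVERY = 500  # hypothetical interval; a real run would tune this

for step in range(3_000):
    tokens = torch.randint(0, 1000, (8, 16))  # stand-in for real text batches
    logits = model(tokens[:, :-1])            # predict each next token
    loss = loss_fn(logits.reshape(-1, 1000), tokens[:, 1:].reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
    # Periodic "forgetting": re-initialize the embeddings while the rest
    # of the network keeps its learned weights (optimizer state for the
    # embeddings is left untouched in this sketch).
    if step > 0 and step % RESET_EVERY == 0:
        nn.init.normal_(model.embed.weight, std=0.02)
```

The intuition, per the article: because the body of the network must repeatedly cope with fresh embeddings, it learns language-general structure rather than memorizing English-specific word vectors, so swapping in a new language later is cheaper.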

Quanta Magazine | 5 min read

Reference: arXiv preprint (not peer reviewed)

Image of the week

An animated gif of a snail robot. Its soft body is made from a white rubber-like material and is topped with a real snail shell. Scissors enter the frame and cut a strip of material along the robot’s side, which only briefly halts its crawling motion.

(Soft Machine Lab, Carnegie Mellon University)

This robot snail can heal itself when it’s damaged. The electrically conductive gel connecting the motor to the battery was designed with specific chemical bonds that knit the material back together after it is cut. (Nature | 12 min read)

This article is part of Nature Outlook: Robotics and artificial intelligence, an editorially independent supplement produced with financial support from FII Institute.

Features & opinion

Why researchers trust AI too much

Researchers should be careful about projecting ‘superhuman’ abilities onto AI systems, warn anthropologist Lisa Messeri and cognitive scientist Molly Crockett. After reviewing 100 papers, preprints, conference proceedings and books, they characterized four mindsets: AI as oracle, AI as arbiter, AI as quant and AI as surrogate. Scientists should consider these cognitive ‘traps’ before embedding AI tools in their research.

Nature | 44 min read

Read more: Why scientists trust AI too much — and what to do about it (Nature editorial | 6 min read)

How to write effective prompts

A well-structured prompt increases the likelihood of accurate text prediction in large language models and minimizes the compounding effect of errors, says psychologist Zhicheng Lin. Here are his tips for prompt engineering, with a toy example combining several of them after the list:

• Break down tasks into sequential components

• Provide examples and relevant context as input

• Be explicit in your instructions

• Ask for multiple options

• Instruct the model to roleplay, for example as a writing coach or a sentient cheesecake

• Specify the response format, such as reading level and tone

• Experiment a lot
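
As a toy illustration, here is a short Python sketch that assembles a structured prompt applying several of these tips at once; the role, task and example text are invented for illustration and are not from Lin’s paper.

```python
# Hypothetical sketch: one prompt that applies several of Lin's tips
# (role-play, sequential steps, an example, explicit instructions and
# a specified response format). The task itself is made up.
ROLE = "You are a writing coach for early-career researchers."

INSTRUCTIONS = (
    "Rewrite the abstract below for a general audience. "
    "Work in two sequential steps:\n"
    "1. List the jargon terms you will replace.\n"
    "2. Give three alternative rewritten versions."  # ask for options
)

EXAMPLE = (  # a few-shot example showing the expected transformation
    'Example input:  "We ablate the attention heads..."\n'
    'Example output: "We switched off parts of the model..."'
)

FORMAT_SPEC = "Respond at a high-school reading level, in a friendly tone."

def build_prompt(abstract: str) -> str:
    """Combine role, instructions, example, format spec and the input."""
    return "\n\n".join([ROLE, INSTRUCTIONS, EXAMPLE, FORMAT_SPEC,
                        "Abstract:\n" + abstract])

print(build_prompt("We ablate attention heads to probe circuit function."))
```

The assembled string can then be sent to whichever chat model you use; the point is the structure, not any particular API.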

Nature Human Behaviour | 13 min read

AI is squeezing the US power grid

The ageing US electricity grid is struggling to keep up with skyrocketing demand from green-technology factories and the data centres that crunch the numbers for crypto, cloud computing and AI. “How were the projections that far off?” asks Jason Shaw of Georgia’s electricity regulator. “This has created a challenge like we have never seen before.” Already, the power crunch is delaying the closure of coal-fired plants, and it remains unclear who should pay for building new power infrastructure. Some data-centre developers hope that off-grid small nuclear or fusion power plants will eventually solve the problem.

The Washington Post | 10 min read

Quote of the day

“A machine-learning algorithm walks into a bar. The bartender asks: ‘What’ll you have?’ The algorithm says: ‘What’s everyone else having?’”

Software engineer Chet Haase’s joke sums up the problem of algorithmic recommendations: by guiding what we watch, read and listen to, they influence what gets made in the first place — a self-reinforcing cycle. (MIT Technology Review | 11 min read)
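
The self-reinforcing cycle is easy to simulate. In the toy Python sketch below (an illustrative assumption, not from the article), a recommender that mostly promotes whatever is already most watched quickly locks in an early leader.

```python
import random

# Toy recommender loop: five items start almost equal, but recommending
# by popularity amplifies a tiny initial advantage into dominance.
counts = {item: 10 for item in "ABCDE"}
counts["C"] += 1  # a tiny head start

for _ in range(10_000):
    if random.random() < 0.9:
        pick = max(counts, key=counts.get)  # recommend the current leader
    else:
        pick = random.choice(list(counts))  # occasional exploration
    counts[pick] += 1                       # a view makes it more popular

print(counts)  # the early leader locks in; the cycle feeds itself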