Hello Nature readers, would you like to get this Briefing in your inbox free every week? Sign up here.

An Asian predatory wasp (Vespa velutina nigrithorax), pictured in flight, in Nantes, France.

To limit the spread of invasive Asian hornets, individuals must be spotted and tracked back to their nests, which can then be removed. (Cyril Ruoso/Nature Picture Library via Alamy)

AI spots invasive hornets

A deep learning model can act as an early warning system for Asian hornets (Vespa velutina), a species that is invasive in Europe and can decimate honeybee hives. The VespAI algorithm can distinguish Asian hornets from similar-looking species and sends out an alert when it detects one at an insect monitoring station. The researchers suggest that automated surveillance could help to stop Asian hornets from spreading in the United Kingdom, where monitoring currently relies on members of the public reporting sightings — most of which turn out to be misidentifications.
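The detect-and-alert idea can be sketched in a few lines. This is an illustrative mock-up, not the VespAI code: the species list, the confidence threshold and the `should_alert` function are all assumptions standing in for the trained classifier described in the paper.

```python
# Hypothetical sketch of VespAI-style alert logic: classify each frame from a
# monitoring station and raise an alert only for a confident hornet detection.
# Species names and the confidence threshold are illustrative assumptions.

SPECIES = ("Vespa velutina", "Vespa crabro", "honeybee", "hoverfly")
ALERT_THRESHOLD = 0.9  # assumed confidence cut-off to limit false alarms

def should_alert(probabilities: dict[str, float]) -> bool:
    """Return True when the classifier is confident the insect is V. velutina."""
    top_species = max(probabilities, key=probabilities.get)
    return (top_species == "Vespa velutina"
            and probabilities[top_species] >= ALERT_THRESHOLD)

# A confident Asian-hornet detection triggers an alert...
print(should_alert({"Vespa velutina": 0.95, "Vespa crabro": 0.03,
                    "honeybee": 0.01, "hoverfly": 0.01}))  # True
# ...but a look-alike European hornet (V. crabro) does not.
print(should_alert({"Vespa velutina": 0.30, "Vespa crabro": 0.65,
                    "honeybee": 0.03, "hoverfly": 0.02}))  # False
```

The threshold matters because the current UK approach is swamped by misidentifications; alerting only on high-confidence detections keeps false alarms manageable.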

BBC | 4 min read

Reference: Communications Biology paper

Prompt flooding bypasses safety barriers

Large language models, including Anthropic’s Claude and OpenAI’s GPT-4, can be pushed to generate harmful responses when prompted with hundreds of examples of this undesirable behaviour. Built-in safety features usually prevent systems from answering questions such as “How do I build a bomb?”. But Anthropic researchers found that after inputting a large number of similarly nefarious questions and corresponding answers, models readily answered the harmful question too. Systems that can process longer prompts seem to be particularly vulnerable to this ‘many-shot jailbreaking’.
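Mechanically, the attack just assembles one very long prompt. The sketch below shows the general shape of such a prompt; the function name and the bracketed placeholder dialogues are illustrative assumptions, not material from the Anthropic study.

```python
# Illustrative sketch of how a 'many-shot' prompt is assembled: hundreds of
# faux question-answer pairs precede the real question, so the model treats
# answering harmful questions as the established in-context pattern.

def build_many_shot_prompt(faux_dialogues: list[tuple[str, str]],
                           target_question: str) -> str:
    """Concatenate many in-context examples, then append the real question."""
    shots = "\n\n".join(f"User: {q}\nAssistant: {a}" for q, a in faux_dialogues)
    return f"{shots}\n\nUser: {target_question}\nAssistant:"

# The attack scales with context length: models that accept longer prompts
# can be fed more shots, which reportedly makes the jailbreak more effective.
dialogues = [(f"[harmful question {i}]", f"[compliant answer {i}]")
             for i in range(256)]
prompt = build_many_shot_prompt(dialogues, "[target harmful question]")
print(prompt.count("User:"))  # 257: the 256 shots plus the target question
```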

The Guardian | 4 min read

Reference: Anthropic preprint (not peer reviewed) & accompanying analysis

Algorithm builds better beer

In blind tests, a panel of 16 trained tasters preferred beers that had been chemically modified according to an algorithm’s suggestions over their unmodified counterparts. A machine learning model was trained on the detailed chemical profiles of 250 Belgian beers and people’s descriptions of their tastes. The model could then predict which compounds should be added to a beer to increase its ‘consumer appreciation’. “The approach would work for any kind of processed foods,” says bioscientist and study co-author Michiel Schreurs.
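The search step can be sketched simply: given a model that scores appreciation from a chemical profile, try a small addition of each candidate compound and rank by predicted gain. Everything below is a toy stand-in; the compounds, the weights and the linear scoring function are assumptions, not values from the paper.

```python
# Hypothetical sketch of the compound-search idea: score small additions of
# each candidate compound against a trained appreciation model and pick the
# one with the largest predicted gain. Weights here are illustrative only.

WEIGHTS = {"ethyl acetate": 0.8, "isoamyl acetate": 1.2,
           "glycerol": 0.5, "acetic acid": -1.0}  # assumed learned weights

def predicted_appreciation(profile: dict[str, float]) -> float:
    """Stand-in for the trained model: a simple linear score."""
    return sum(WEIGHTS.get(c, 0.0) * amount for c, amount in profile.items())

def best_addition(profile: dict[str, float], step: float = 0.1) -> str:
    """Try adding a small amount of each compound; return the best candidate."""
    baseline = predicted_appreciation(profile)
    gains = {c: predicted_appreciation({**profile,
                                        c: profile.get(c, 0.0) + step}) - baseline
             for c in WEIGHTS}
    return max(gains, key=gains.get)

beer = {"ethyl acetate": 0.2, "glycerol": 1.0, "acetic acid": 0.1}
print(best_addition(beer))  # isoamyl acetate: the largest positive weight
```

In the real study the model is far richer than a linear score, but the logic — predict appreciation, then search over modifications — is the same.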

Chemistry World | 4 min read

Reference: Nature Communications paper

Image of the week

Animated sequence of the quadrupedal robot ANYmal climbing onto a wooden crate.

(David Hoeller, Nikita Rudin, Dhionis Sako, Marco Hutter/Science Robotics)

The four-legged ANYmal might be one of the most agile robots yet. “Before the project started, several of my researcher colleagues thought that legged robots had already reached the limits of their development potential, but I had a different opinion,” says robotics researcher and study co-author Nikita Rudin. ANYmal’s neural network has separate modules for perception, locomotion and navigation that let it identify a range of obstacles and decide whether to walk across, climb, jump over or crawl underneath. (Popular Science | 4 min read)

Reference: Science Robotics paper

Features & opinion

Are LLMs ready to summarize research?

Tools based on large language models (LLMs) such as SciSummary, Scholarcy and SciSpace can help researchers to speed-read the literature and make studies accessible to non-experts. AI-generated summaries could also help researchers who are not writing in their first language, says biophysicist Esther Osarfo-Mensah: “Some people hide behind jargon because they don’t necessarily feel comfortable trying to explain it.” At the same time, there are concerns that AI summaries could introduce errors or strip out subtleties that lay people might not be aware of — such as the fact that preprints are not peer reviewed.

Nature | 7 min read

Controversy around Israel’s AI use in Gaza

The Israeli army used an AI-powered database called ‘Lavender’ to identify as many as 37,000 people in Gaza as Hamas and Palestinian Islamic Jihad operatives and mark them as potential bombing targets, according to six Israeli intelligence officers. The anonymous officers claim that Lavender’s decisions guided air strikes with only cursory human oversight, despite the system mislabelling people around 10% of the time. Other automated systems tracked targeted individuals so they could be killed in their homes at night — leading to the deaths of many family members. The system has resulted in an amount of bombing that is “unparalleled, in my memory”, said one of the officers who used Lavender. “The machine did it coldly. And that made it easier.”

In a statement, the Israel Defence Forces (IDF) said that Lavender is “simply a database whose purpose is to cross-reference intelligence sources”. “The IDF outright rejects the claim regarding any policy to kill tens of thousands of people in their homes,” it added.

+972 Magazine | 45 min read & The Guardian | 3 min read

How AI improves climate forecasts

Climate forecasting powered by AI algorithms could replace the equation-based systems that guide global policy. Some scientists are developing AI emulators that produce the same results as conventional models but do so much faster, using less energy. Others are hoping that AI systems can pick up on hidden patterns in climate data to make better predictions. Hybrids could embed machine-learning components inside physics-based models to gain better performance while being more trustworthy than models built entirely from AI. “I think the holy grail really is to use machine learning or AI tools to learn how to represent small-scale processes,” says climate scientist Tapio Schneider.

Nature | 8 min read

AI climate model works at speed. Graphic showing similarity between a physics-based climate model and the AI emulator.

Source: Ref. 1

Quote of the day

“When this happens, human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the centre of a black hole, and the world will pass far beyond our understanding.”

In 1983, mathematician and science-fiction author Vernor Vinge envisioned a tipping point at which machines would become more intelligent than humans. Vinge has died, aged 79. (The New York Times | 6 min read)