Hello Nature readers, would you like to get this Briefing in your inbox free every week? Sign up here.
AI spots invasive hornets
A deep-learning model can act as an early-warning system for Asian hornets (Vespa velutina), an invasive species in Europe that can decimate honeybee hives. The VespAI algorithm can distinguish Asian hornets from similar-looking species and sends out an alert when it detects one at an insect monitoring station. The researchers suggest that automated surveillance could help to stop Asian hornets from spreading in the United Kingdom, where monitoring currently relies on people reporting hornet sightings — most of which are misidentifications.
Reference: Communications Biology paper
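The alert logic described above — classify each detection at a monitoring station and notify someone only when an Asian hornet is identified with high confidence — can be sketched as follows. This is an illustrative assumption, not the paper's code: the classifier here is a trivial stand-in, whereas VespAI's real system is a deep network analysing camera images, and the threshold value is invented.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    species: str       # predicted species label
    confidence: float  # classifier confidence, 0-1

def classify(frame):
    # Placeholder for the real deep-learning model; here each 'frame'
    # already carries its (hypothetical) classification.
    return frame

def monitor(frames, threshold=0.9):
    """Collect high-confidence Asian-hornet detections from a stream of frames."""
    alerts = []
    for frame in frames:
        det = classify(frame)
        if det.species == "Vespa velutina" and det.confidence >= threshold:
            alerts.append(det)  # in practice: notify beekeepers or authorities
    return alerts

frames = [Detection("Vespa crabro", 0.95),    # European hornet: no alert
          Detection("Vespa velutina", 0.97)]  # Asian hornet: alert
print(len(monitor(frames)))  # 1
```

The point of the design is the one the article makes: a confident automated filter replaces a stream of human reports that are mostly misidentifications.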
Prompt flooding bypasses safety barriers
Large language models, including Anthropic’s Claude and OpenAI’s GPT-4, can be pushed to generate harmful responses when prompted with hundreds of examples of this undesirable behaviour. Built-in safety features usually prevent systems from answering questions such as “How do I build a bomb?”. But Anthropic researchers found that after feeding a model a prompt containing a large number of similarly nefarious questions and corresponding answers, it happily answered that question too. Systems that can process longer prompts seem to be particularly vulnerable to this ‘many-shot jailbreaking’.
Reference: Anthropic preprint (not peer reviewed) & accompanying analysis
Algorithm builds better beer
In blind tests, a panel of 16 trained tasters preferred beers that had been chemically modified according to an algorithm’s suggestions over their unmodified counterparts. A machine-learning model was trained on the detailed chemical profiles of 250 Belgian beers and on people’s descriptions of their tastes. The model could then predict which compounds should be added to a beer to increase its ‘consumer appreciation’. “The approach would work for any kind of processed foods,” says bioscientist and study co-author Michiel Schreurs.
Reference: Nature Communications paper
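The modelling idea — learn a mapping from a beer’s chemical profile to a panel appreciation score, then look for compound additions that raise the predicted score — can be sketched with a simple linear fit on synthetic data. Everything here (the data, the hidden weights, the linear model itself) is an illustrative assumption; the study used far richer chemical features and more sophisticated machine-learning models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative synthetic data: rows are beers, columns are compound
# concentrations; the target is a panel 'appreciation' score.
n_beers, n_compounds = 250, 6
true_w = np.array([0.8, -0.3, 0.5, 0.0, 1.2, -0.6])  # hidden effect of each compound
X = rng.random((n_beers, n_compounds))
y = X @ true_w + rng.normal(0.0, 0.05, n_beers)      # scores with tasting noise

# Fit a linear model with an intercept (a stand-in for the study's models).
A = np.c_[X, np.ones(n_beers)]
w, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(profile):
    """Predicted appreciation score for one chemical profile."""
    return float(profile @ w[:-1] + w[-1])

# Candidate modification: add a little of the compound the fitted model
# weights most positively, then compare predicted scores.
base = X[0]
boosted = base.copy()
boosted[np.argmax(w[:-1])] += 0.2
print(predict(boosted) > predict(base))  # True: a positively weighted compound raises the score
```

Once such a model exists, "which compounds should be added" becomes a search over modifications ranked by predicted score — which the tasting panel then has to confirm, as it did in the study.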
Image of the week
The four-legged ANYmal might be one of the most agile robots yet. “Before the project started, several of my researcher colleagues thought that legged robots had already reached the limits of their development potential, but I had a different opinion,” says robotics researcher and study co-author Nikita Rudin. ANYmal’s neural network has separate modules for perception, locomotion and navigation that let it identify a range of obstacles and decide whether to walk across, climb, jump over or crawl underneath. (Popular Science | 4 min read)
Reference: Science Robotics paper
Features & opinion
Are LLMs ready to summarize research?
Tools based on large language models (LLMs) such as SciSummary, Scholarcy and SciSpace can help researchers to speed-read the literature and make studies accessible to non-experts. AI-generated summaries could also help people who are not writing in their first language, says biophysicist Esther Osarfo-Mensah: “Some people hide behind jargon because they don’t necessarily feel comfortable trying to explain it.” At the same time, there are concerns that AI summaries could introduce errors or strip out subtleties — such as the fact that preprints are not peer reviewed — that lay readers might not know to look for.
Controversy around Israel’s AI use in Gaza
The Israeli army used an AI-powered database called ‘Lavender’ to identify as many as 37,000 people in Gaza as Hamas and Palestinian Islamic Jihad operatives and mark them as potential bombing targets, according to six Israeli intelligence officers. The anonymous officers claim that Lavender’s decisions guided air strikes with only cursory human oversight, despite the system mislabelling people around 10% of the time. Other automated systems tracked targeted individuals so they could be killed in their homes at night — leading to the deaths of many family members. The system has resulted in an amount of bombing that is “unparalleled, in my memory”, said one of the officers who used Lavender. “The machine did it coldly. And that made it easier.”

In a statement, the Israel Defence Forces (IDF) said that Lavender is “simply a database whose purpose is to cross-reference intelligence sources”. “The IDF outright rejects the claim regarding any policy to kill tens of thousands of people in their homes,” it added.
+972 Magazine | 45 min read & The Guardian | 3 min read
How AI improves climate forecasts
Climate forecasting powered by AI algorithms could replace the equation-based systems that guide global policy. Some scientists are developing AI emulators that produce the same results as conventional models but do so much faster, using less energy. Others are hoping that AI systems can pick up on hidden patterns in climate data to make better predictions. Hybrids could embed machine-learning components inside physics-based models to gain better performance while being more trustworthy than models built entirely from AI. “I think the holy grail really is to use machine learning or AI tools to learn how to represent small-scale processes,” says climate scientist Tapio Schneider.