Hello Nature readers, would you like to get this Briefing in your inbox free every week? Sign up here.

Some researchers think that AI could eventually achieve general intelligence, matching and even exceeding humans on most tasks. Credit: Charles Taylor/Alamy

Superintelligent AI won’t sneak up on us

Sudden jumps in large language models’ apparent intelligence don’t mean that they will soon match or even exceed humans on most tasks. Signs that had been interpreted as emerging artificial general intelligence disappear when the systems are tested in different ways, reported scientists at the NeurIPS machine-learning conference in December. “Scientific study to date strongly suggests most aspects of language models are indeed predictable,” says computer scientist and study co-author Sanmi Koyejo.

Nature | 4 min read

Reference: NeurIPS 2023 Conference paper
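
The study’s central argument can be illustrated with a toy calculation. In the sketch below (invented numbers, not the paper’s code or data), per-token accuracy improves smoothly with model scale, yet a benchmark scored by all-or-nothing exact match over a ten-token answer seems to leap from failure to success.

```python
import numpy as np

# Toy numbers, not the paper's data: per-token accuracy improves
# smoothly with model scale, following an invented power law.
scales = np.logspace(6, 11, 11)        # hypothetical parameter counts
per_token = 1 - 25 * scales ** -0.3    # smooth, gradual improvement

# A task scored by exact match over a 10-token answer: every token
# must be right at once, so the score is per_token ** 10.
exact_match = per_token ** 10

for n, p, em in zip(scales, per_token, exact_match):
    print(f"{n:10.0e}   per-token {p:.3f}   exact-match {em:.3f}")

# The per-token column creeps up steadily, while the exact-match
# column sits near zero and then appears to 'emerge' abruptly: the
# jump is a property of the discontinuous metric, not of the model.
```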

GPT-powered chemist aces experiments

A robotic chemist might be the ideal laboratory partner: it scours the literature for instructions, designs experiments and then carries them out to make compounds including paracetamol and aspirin. The system, called Coscientist, is powered by several large language models, including GPT-4 and Claude. It “can do most of the things that really well-trained chemists can do”, says Coscientist co-developer Gabe Gomes. The team hasn’t yet made Coscientist’s full code freely available, because some applications are likely to be dangerous.

Nature | 4 min read

Reference: Nature paper
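
The article describes a search–plan–execute pipeline driven by language models. Below is a minimal sketch of that generic agent pattern; every function is a hypothetical stand-in, since Coscientist’s full code has not been released.

```python
# Hypothetical sketch of an LLM-driven lab-agent loop in the style the
# article describes. None of these functions are Coscientist's real API.

def search_literature(goal: str) -> str:
    """Stand-in for a literature-search tool returning relevant protocols."""
    return f"Published procedures relevant to: {goal}"

def ask_llm(prompt: str) -> str:
    """Stand-in for a call to a large language model such as GPT-4 or Claude."""
    return f"Stepwise protocol derived from: {prompt[:50]}..."

def run_on_robot(plan: str) -> str:
    """Stand-in for dispatching the plan to automated lab hardware."""
    return f"Executed: {plan[:50]}..."

def lab_agent(goal: str) -> str:
    context = search_literature(goal)    # 1. scour the literature
    plan = ask_llm(f"Goal: {goal}. Context: {context}. Write a protocol.")
    return run_on_robot(plan)            # 2. design, then 3. execute

print(lab_agent("synthesize paracetamol"))
```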

Algorithm predicts people’s life stories

A large language model can predict people’s health, earnings and likelihood of a premature death. The system was trained on the equivalent of ‘sentences’ that were generated from the work and health records of around 6 million people in Denmark. For example, write the researchers, a sentence “can capture information along the lines of ‘In September 2012, Francisco received twenty thousand Danish kroner as a guard at a castle in Elsinore’”. When asked to predict whether a person in the database had died by 2020, it was accurate almost 80% of the time, outperforming other state-of-the-art models by a wide margin. Some scientists caution that the model might not work for other populations and that biases in the data could confound predictions.

Science | 4 min read

Reference: Nature Computational Science paper
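
To make the ‘sentences’ idea concrete, here is a toy version of the data representation: life events written as token strings and fed to a simple classifier. The real model is a transformer trained on millions of records; the five fabricated people and the bag-of-events logistic regression below illustrate only the encoding.

```python
# Toy illustration of the data representation: life events written as
# 'sentences' of tokens, then used to predict an outcome. All records
# and labels here are fabricated.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

life_sentences = [
    "job_guard salary_20k city_elsinore diagnosis_none",
    "job_nurse salary_35k city_copenhagen diagnosis_hypertension",
    "job_fisher salary_18k city_skagen diagnosis_copd",
    "job_teacher salary_40k city_aarhus diagnosis_none",
    "job_miner salary_30k city_esbjerg diagnosis_copd",
]
died_by_2020 = [0, 0, 1, 0, 1]   # fabricated labels

X = CountVectorizer(token_pattern=r"\S+").fit_transform(life_sentences)
model = LogisticRegression().fit(X, died_by_2020)
print(model.predict(X))          # in-sample sanity check only
```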

We need answers on artificial consciousness

Research into the boundaries between conscious and unconscious systems is urgently needed, a trio of scientists say. In comments to the United Nations, theoretical computer scientist Lenore Blum and mathematicians Jonathan Mason and Johannes Kleiner — all of the Association for Mathematical Consciousness Science — call for more funding for the effort. Some researchers predict that AI with human-like intelligence is 5–20 years away, yet there is no standard method to assess whether machines are conscious and whether they share human values. We should also consider the possible needs of conscious systems, the researchers say.

Nature | 6 min read

An animation shows the robot’s snake-like hose, held aloft by water jets, extinguishing a fire. (Y. Yamauchi et al./Front. Robot. AI (CC-BY-4.0))

This ‘flying dragon’ spits water to fight fires. The robot uses water jets to keep its hose up to two metres above the ground and tackles blazes with high-pressure nozzles in its ‘head’. The first version of this Dragon Firefighter needed to be pushed by hand to get close to a fire; the new version can be operated remotely to keep people out of harm’s way. (Interesting Engineering | 4 min read)

Reference: Frontiers in Robotics and AI paper
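
The hovering trick is basic rocketry: the downward water jets must produce thrust at least equal to the weight of the airborne hose. A back-of-envelope estimate, with every number assumed rather than taken from the paper:

```python
# Back-of-envelope estimate (all numbers assumed, not from the paper):
# how much water flow do the jets need to hold the hose aloft?
# A jet expelling mass flow m_dot at speed v produces thrust T = m_dot * v.
rho = 1000.0      # water density, kg/m^3
g = 9.81          # gravity, m/s^2
hose_mass = 10.0  # assumed mass of the airborne hose section, kg
v_jet = 30.0      # assumed water-jet exit speed, m/s

weight = hose_mass * g                    # N the jets must counteract
m_dot = weight / v_jet                    # required mass flow, kg/s
flow_l_per_min = m_dot / rho * 1000 * 60  # convert to litres per minute

print(f"weight ~ {weight:.0f} N, required flow ~ {flow_l_per_min:.0f} L/min")
```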

Features & opinion

The AI–quantum computing mash-up

Whether machine-learning algorithms run on quantum computers can be faster or better than those run on classical computers remains an unanswered question. Some scientists hope that quantum AI could spot patterns in data that classical varieties miss — even if it isn’t faster. This could particularly be the case for data that are already quantum, for example those coming from particle colliders or superconductivity experiments. “Our world inherently is quantum-mechanical. If you want to have a quantum machine that can learn, it could be much more powerful,” says physicist Hsin-Yuan Huang.

Nature | 9 min read
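
One concrete point of contact between the two fields is the ‘quantum kernel’: data are encoded into quantum states, and the overlap between those states serves as a similarity measure for an otherwise classical learner. The sketch below simulates the simplest one-qubit case classically, purely to show the mechanics; nothing here runs on quantum hardware.

```python
import numpy as np

# Toy quantum-kernel demo, simulated classically with one qubit.
# Encode a scalar x as the state RY(x)|0> = [cos(x/2), sin(x/2)];
# the kernel is the squared overlap between two encoded states.

def encode(x: float) -> np.ndarray:
    return np.array([np.cos(x / 2), np.sin(x / 2)])

def quantum_kernel(x1: float, x2: float) -> float:
    return abs(encode(x1) @ encode(x2)) ** 2   # fidelity |<psi1|psi2>|^2

xs = np.array([0.0, 0.5, 3.0])
K = np.array([[quantum_kernel(a, b) for b in xs] for a in xs])
print(np.round(K, 3))   # Gram matrix; could feed an SVM's kernel argument

# Analytically this kernel equals cos^2((x1 - x2) / 2); the hope for
# richer, many-qubit feature maps is overlaps that are hard to compute
# classically yet expose structure in inherently quantum data.
```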

What’s in store for AI in 2024

This year could see the decline of the term ‘large language model’ as systems increasingly deal in images, audio, video, molecular structures or mathematics. There might even be entirely new types of AI that go beyond the transformer architecture used by almost all generative models so far. At the same time, proprietary AI models will probably continue to outperform open-source approaches. And generating synthetic content has become so easy that some experts are expecting more misinformation, deepfakes and other malicious material. “What I most hope for 2024 — though it seems slow in coming — is stronger AI regulation,” says computer scientist Kentaro Toyama.

Forbes | 25 min read & The Conversation | 7 min read

Podcast: How to open the black box

“We’ve never before built machines where even the creators don’t know how they will behave, or why,” says Jessica Newman, director of the AI Security Initiative. That’s particularly worrying when AI is involved in high-stakes decisions, such as in healthcare and policing. Researchers and policymakers agree that algorithms need to become more explainable, though it’s still unclear what this means in practice. For AI to be fair, reliable and safe, we need to go beyond opening the black box, says Newman, “to ensure there is accountability for any harm that’s caused”.

Nature Podcast | 38 min listen
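
The episode stays at the policy level, but one widely used, model-agnostic example of what ‘opening the black box’ can mean in practice is permutation importance: shuffle one input feature and measure how much the model’s accuracy drops. A generic sketch on synthetic data (an illustration, not a technique the episode necessarily covers):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Permutation importance: shuffle one feature at a time and measure how
# much the model's accuracy falls. A big drop means the model relied on
# that feature. Generic illustration on synthetic data.
rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=5,
                           n_informative=2, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)
base = model.score(X, y)

for j in range(X.shape[1]):
    X_shuffled = X.copy()
    rng.shuffle(X_shuffled[:, j])   # break feature j's relationship to y
    drop = base - model.score(X_shuffled, y)
    print(f"feature {j}: accuracy drop {drop:.3f}")
```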

Subscribe to the Nature Podcast on Apple Podcasts, Google Podcasts or Spotify, or use the RSS feed.


Quote of the day

“Why am I even a researcher if I don’t write my own research?”

Psychologist Ada Kaluzna says that using AI in her scientific writing could disrupt her ability to learn and think creatively. (Nature | 5 min read)