Hello Nature readers, would you like to get this Briefing in your inbox free every week? Sign up here.

A composite artwork of a laptop displaying the OpenAI ChatGPT website on a stool surrounded by professional lighting equipment in a photo studio.

Credit: Olga Yastremska/Alamy, Gabby Jones/Bloomberg via Getty

How ChatGPT shaped science in 2023

For more than a decade, Nature’s 10 has highlighted ten of the people behind the biggest science stories. This year, for the first time, the list has an 11th, non-human entry: ChatGPT. It’s an acknowledgement of the profound and wide-ranging effect that the chatbot has had on science. The tool has rekindled the debate about the benefits and risks of AI — and the nature of human intelligence.

Ilya Sutskever, OpenAI’s chief scientist, ChatGPT’s co-creator and another of Nature’s 10, recognized early that large language models (LLMs) need huge amounts of computing power to become smarter. He thinks that systems with human-level intelligence could be developed within years or decades, and is now focusing his work on how “to steer and control AI systems much smarter than us”.

Nature | 5 min read

Read all Nature’s 10 profiles here

AI outdoes humans on maths problems

An LLM has helped mathematicians to break new ground on combinatorics problems inspired by the card game Set, in which players must spot groups of three cards whose symbols are, feature by feature, either all the same or all different. It’s the first time that an LLM-based system has gone beyond solving mathematics problems with known solutions. One important feature is that the AI system isn’t a black box: people can inspect and learn from the successful programs it writes. “What’s most exciting to me is modelling new modes of human–machine collaboration,” says mathematician and study co-author Jordan Ellenberg.

Nature | 5 min read

Reference: Nature paper
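For the curious, the combinatorial rule behind these problems is simple to state. The sketch below is plain illustrative Python, not the LLM-based system described in the paper: it encodes Set cards as tuples of attribute values and checks the property at stake, namely whether a collection of cards is a “cap set” containing no valid set.

```python
# Minimal sketch of the combinatorics behind Set (illustration only, not the
# paper's method). Each card has four attributes, each taking one of three
# values; three cards form a valid "set" when every attribute is all-same or
# all-different, which is equivalent to the values summing to 0 modulo 3.
from itertools import combinations, product

def is_valid_set(a, b, c):
    """True if three cards (tuples of values in {0, 1, 2}) form a valid set."""
    return all((x + y + z) % 3 == 0 for x, y, z in zip(a, b, c))

def is_cap_set(cards):
    """True if no three cards in the collection form a valid set."""
    return not any(is_valid_set(a, b, c) for a, b, c in combinations(cards, 3))

# The full deck has 3**4 = 81 cards; larger versions of the problem use more
# attributes per card. This tiny example collection contains no valid set.
deck = list(product(range(3), repeat=4))
example = [(0, 0, 0, 0), (0, 0, 0, 1), (0, 0, 1, 0), (0, 1, 0, 0)]
print(len(deck), is_cap_set(example))  # 81 True
```

The open question the LLM helped to push forward is how large such set-free collections can grow as the number of attributes per card increases.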

Chatbot keeps online discussions cool

A chatbot can help to keep political discussions civil. Around 1,500 participants with differing opinions on gun control were invited to have online conversations on the topic. In some conversation pairs, one person was offered optional support from an AI tool on how to make their reply friendlier — suggestions that were accepted two-thirds of the time. People who had the chatbot’s help were rated as more understanding and respectful, an improvement of four percentage points over those who didn’t. And both conversation partners in the chatbot group “felt like the experience was better, which is something valuable in and of itself”, says psychologist and study co-author Ethan Busby.

Nautilus | 5 min read

Reference: PNAS paper
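The mechanics of the intervention are easy to picture. Below is a rough, hypothetical sketch of a rephrasing assistant of this kind, written with the OpenAI Python client; the model name and prompt are placeholders rather than the tool the researchers actually built, and, as in the study, the suggestion is only offered to the user, who decides whether to accept it.

```python
# Hypothetical sketch of a "friendlier reply" suggester (not the study's tool).
# The user writes a draft reply; the assistant proposes a more civil rewording,
# which the user is free to accept, edit or ignore.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def suggest_friendlier_reply(draft: str) -> str:
    """Ask a language model for a more respectful rewording of a draft reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Rephrase the user's message so it is more respectful "
                        "and understanding, without changing its substance."},
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content

draft = "That's a ridiculous take and you clearly haven't thought it through."
print("Suggested rewording:", suggest_friendlier_reply(draft))
```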

Features & opinion

What you need to know about the AI Act

European Union member states and the European Parliament have hammered out the world’s first comprehensive effort to legislate AI: the AI Act. When voted through, the law will impose tiered rules that scale with how much risk an AI system poses.

• ‘Unacceptable risk’ systems that are a threat to people — for example, aspects of mass surveillance such as real-time facial recognition — will be banned (although there are exceptions for some police and military applications).

• AI systems must respect EU copyright rules, and developers must publish summaries of the content used to train generative models.

• General-purpose tools, such as ChatGPT, that could be used for good or ill are assessed by how powerful they are. Those that were trained using computing power above a certain threshold face more obligations for transparency and reporting.

The legally binding rules put the EU at the forefront of AI regulation and could influence how similar laws develop elsewhere. But critics worry that the rules will stifle innovation. And tech companies do not have to implement the rules for two years — so they could be obsolete by the time they’re in force.

Euronews | 6 min read

A crocodile, which only on second glance reveals itself to be a lifelike robotic replica, lying on a grassy riverbank under the shade of a black umbrella.

This robotic crocodile, nicknamed Krock, was developed for the 2017 BBC documentary Spy in the Wild. During filming in Uganda, the robot often overheated and the team had to get creative about how to cool it down, including by letting it rest in the shade, opening up the latex skin along its belly, covering it in wet towels and even driving it around on a boat to increase air flow. The lessons that the researchers learnt from testing Krock under harsh conditions helped them to create a more robust version, with the aim of eventually creating robots that could help with search and rescue. For example, Krock-2 sports a bright yellow waterproof suit to replace leaky plastic sleeves and collars. (Reference: Science Robotics paper) (Tomislav Horvat and Kamilo Melo, 2016 (CC-BY 4.0))

Is a toddler a stochastic parrot?

ChatGPT is a stochastic parrot: it is incredibly efficient at stitching together words according to probability and generating convincing language, without any understanding of its meaning. Yet the way in which LLMs learn is unnervingly similar to the way her toddler son does, writes illustrator and cartoonist Angie Wang. “Aren’t we, after all, just a wetware neural network, a complex electrochemical machine?” In her beautifully illustrated essay, Wang explores the feeling of vertigo that comes with the ever-evolving flood of AI-produced content, and what it means to be human.

The New Yorker | 5 min read