

Audio long-read: Rise of the robo-writers

Robo-writers: the rise and risks of language-generating AI, read by Benjamin Thompson

In 2020, the artificial intelligence (AI) GPT-3 wowed the world with its ability to write fluent streams of text. Trained on billions of words from books, articles and websites, GPT-3 was the latest in a series of ‘large language model’ AIs that are used by companies around the world to improve search results, answer questions, or propose computer code.

However, these large language models are not without their issues. Because their training captures only the statistical relationships between words and phrases, they can generate toxic or dangerous outputs.
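The "statistical relationships" these models learn can be illustrated with a minimal sketch: a toy bigram model that predicts the next word purely from co-occurrence counts in a tiny training text. This is an assumption-laden simplification for illustration only, not how GPT-3 actually works (GPT-3 uses a neural network, not raw counts), and the corpus and function names here are invented.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the billions of words a large model is trained on.
corpus = ("the model writes text and the model answers questions "
          "and the model writes code").split()

# Count how often each word follows each other word (bigram statistics).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most common next word, or None if unseen."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("model"))  # "writes" follows "model" most often here
```

Because such a model only echoes the statistics of its training data, biased or toxic text in the corpus is reproduced in its predictions, which is the core problem the article describes.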

Preventing such responses is a huge challenge for researchers, who are attempting to do so by addressing biases in training data, or by instilling these AIs with common sense and moral judgement.

This is an audio version of our feature: Robo-writers: the rise and risks of language-generating AI

Never miss an episode: Subscribe to the Nature Podcast on Apple Podcasts, Google Podcasts, Spotify or your favourite podcast app. Head here for the Nature Podcast RSS feed.
