Science and the new age of AI

Across disciplines as varied as biology, physics, mathematics and social science, artificial intelligence (AI) is transforming the scientific enterprise. From machine-learning techniques that hunt for patterns in data, to the latest general-purpose algorithms that can generate realistic synthetic outputs from vast corpora of text and code, AI tools are accelerating the pace of research and providing fresh directions for scientific exploration.
This special website looks at how these changes are affecting different areas of science — and how the research community should respond to the challenges these tools present. It includes selected articles from journalists as well as editorials and comment from Nature, including subscriber-only content. The site will be updated with more content as it is published.
Editorial: AI will transform science — now researchers must tame it
Latest articles

FEATURE
Is AI leading to a reproducibility crisis in science?
Scientists worry that ill-informed use of artificial intelligence is driving a deluge of unreliable or useless research.

FEATURE
ChatGPT has entered the classroom: how LLMs could transform education
Researchers, educators and companies are experimenting with ways to turn flawed but famous large language models into trustworthy, accurate ‘thought partners’ for learning.

COMMENT
Garbage in, garbage out: mitigating risks and maximizing benefits of AI in research
Artificial-intelligence tools are transforming data-driven science — better ethical standards and more robust data curation are needed to fuel the boom and prevent a bust.

NEWS FEATURE
How ChatGPT and other AI tools could disrupt scientific publishing
Scientists who regularly use LLMs are still in the minority, but many expect that generative AI tools will become more prevalent. Here's how a world of AI-assisted writing and reviewing might transform the nature of the scientific paper.

NEWS FEATURE
AI and science: what 1,600 researchers think
A Nature survey finds that scientists are concerned, as well as excited, by the increasing use of artificial-intelligence tools in research.

NEWS FEATURE
How to stop AI deepfakes from sinking society — and science
Deceptive videos and images created using generative AI could sway elections, crash stock markets and ruin reputations. Researchers are developing methods to limit their harm.
Background to the AI revolution
Whereas the 2010s saw the creation of machine-learning algorithms that can help to discern patterns in giant, complex sets of scientific data, the 2020s are ushering in a new age with the widespread adoption of generative AI tools. These algorithms are based on neural networks and produce convincing synthetic outputs by sampling from the statistical distribution of the data they have been trained on.
The sheer pace of innovation is breathtaking and, for many, bewildering — requiring a level-headed assessment of what the tools have already achieved, and of what they can reasonably be expected to do in the future.

REVIEW
Scientific discovery in the age of artificial intelligence
Breakthroughs over the past decade in self-supervised learning, geometric deep learning and generative AI methods can help scientists throughout the scientific process — but also require a deeper understanding across scientific disciplines of the techniques’ pitfalls and limitations.

NEWS FEATURE
What ChatGPT and generative AI mean for science
The advent of generative AI based on large language models (LLMs) that can generate realistic synthetic outputs from vast corpora of text and code is accelerating discovery and providing fresh directions for scientific exploration. That’s a reason for excitement, but also apprehension.

NEWS FEATURE
ChatGPT broke the Turing test — the race is on for new ways to assess AI
Large language models mimic human chatter, but scientists disagree on their ability to reason. Finding out where their limitations lie, and how their intelligence differs from that of humans, is crucial to assessing how best to use them.

NEWS FEATURE
In AI, is bigger always better?
Recent advances in the capabilities of AI seem to be based on ever-larger models fed with increasing amounts of data. That suggests many tasks could be conquered by AIs simply by continuing those trends — but some experts beg to differ.
AI in scientific life
From designing proteins and formulating mathematical theories to speeding up literature synthesis and helping to write research papers, AI tools are revolutionizing how scientists conduct their research and what they are able to achieve.
But these developments are playing out differently across the scientific enterprise. Diving into the trends in different disciplines provides a guide to the potential of AI-fuelled research and its possible pitfalls.

CAREER COLUMN
What’s the best chatbot for me? Researchers put LLMs through their paces
Large language models are becoming indispensable aids for coding, writing, teaching and more. But different research tasks call for different chatbots — here’s how to find the most appropriate match.

COMMENT
AI can help to speed up drug discovery — but only if we give it the right data
Drug development is labour-intensive and time-consuming. Used in the right way, AI tools that enable companies to share data about drug candidates while protecting sensitive information could help to short-circuit the process for the common good.

TECHNOLOGY FEATURE
Artificial-intelligence search engines wrangle academic literature
A new generation of search engines, powered by machine learning and large language models, is moving beyond keyword searches to pull connections from the tangled web of scientific literature. But can the results be trusted?
Challenges of AI — and how to deal with them
Although there is little doubt about the potential of AI to supercharge certain aspects of scientific discovery, there is also widespread disquiet. Many of the concerns surrounding the use of AI tools in science mirror those in wider society — transparency, accountability, reproducibility, and the reliability and biases of the data used to train them.

COMMENT
Living guidelines for generative AI — why scientists must oversee its use
Establish an independent scientific body to test and certify generative artificial intelligence, before the technology damages science and public trust.

NATURE PODCAST
This isn’t the Nature Podcast — how deepfakes are distorting reality
It has long been possible to create deceptive images, videos and audio to entertain or mislead audiences. Now, with the rise of AI technologies, such manipulations have become easier than ever.

COMMENT
AI tools as science policy advisers? The potential and the pitfalls
Synthesizing scientific evidence for policymakers is a data-intensive task, often carried out under significant time pressure. Large language models and other AI systems could excel at it — but only with appropriate safeguards and humans in the loop.

NEWS FEATURE
Rules to keep AI in check: nations carve different paths for tech regulation
The clamour for legal guardrails surrounding the use of AI is growing — but in practice, people still dispute how risky AI is and precisely what needs to be restricted. China, the European Union and the United States are each approaching the issues in different ways.

COMMENT
ChatGPT: five priorities for research
Regardless of wider regulatory issues, the rise of conversational AI requires researchers to develop sensible guidelines for its use in science. What such guidance might look like is still up for debate, but it is clear where the focus for further research should lie.

CAREER FEATURE
Why AI’s diversity crisis matters, and how to tackle it
The real-world performance of AIs relies on how they are trained and which data are used. The field desperately needs more people from under-represented groups to ensure that the technologies deliver for all.