Borrowing the format of public competitions from engineering and computer science, a new type of challenge in 2023 tested real-world AI applications with legal assessments based on the EU AI Act.
Further progress in AI may require learning algorithms to generate their own data rather than assimilate static datasets. A Perspective in this issue proposes that they could do so by interacting with other learning agents in a socially structured way.
The rise of artificial intelligence (AI) has relied on an increasing demand for energy, which threatens to outweigh its promised positive effects. To steer AI onto a more sustainable path, quantifying and comparing its energy consumption is key.
AI-generated media are on the rise and are here to stay. Regulation is urgently needed, but in the meantime creators, users and content distributors must adopt practices and tools for the responsible generation, sharing and detection of AI-generated content.
Advances in DNA nanoengineering promise the development of new computing devices within biological systems, with applications in nanoscale sensing, diagnostics and therapeutics.
Machine learning and quantum computing approaches are converging, fuelling considerable excitement over quantum devices and their capabilities. However, given the current hardware limitations, it is important to push the technology forward while being realistic about what quantum computers can do, now and in the near future.
Medical artificial intelligence needs governance to ensure safety and effectiveness, not just centrally (for example, by the US Food and Drug Administration) but also locally to account for differences in care, patients and system performance. Practical collaborative governance will enable health systems to carry out these challenging governance tasks, supported by central regulators.
To protect the integrity of knowledge production, the training procedures of foundation models such as GPT-4 need to be made accessible to regulators and researchers. Foundation models must become both open and public, and those are not the same thing.
The development of large language models is mainly a feat of engineering and so far has been largely disconnected from the field of linguistics. Exploring links between the two fields is reopening longstanding debates in the study of language.
Virtual worlds are typically encountered through simulated visual and auditory perceptions. Incorporating touch can create more immersive experiences with a sense of agency.
There are repeated calls in the AI community to prioritize data work — collecting, curating, analysing and otherwise considering the quality of data. But this is not practised as much as advocates would like, often because of a lack of institutional and cultural incentives. One way to encourage data work would be to reframe it as more technically rigorous, and thereby integrate it into more-valued lines of research such as model innovation.
We show that large language models (LLMs), such as ChatGPT, can guide the robot design process at both the conceptual and the technical level, and we propose new human–AI co-design strategies and discuss their societal implications.