Machine learning and quantum computing approaches are converging, fuelling considerable excitement over quantum devices and their capabilities. However, given the current hardware limitations, it is important to push the technology forward while being realistic about what quantum computers can do, now and in the near future.
Medical artificial intelligence needs governance to ensure safety and effectiveness, not just centrally (for example, by the US Food and Drug Administration) but also locally to account for differences in care, patients and system performance. Practical collaborative governance will enable health systems to carry out these challenging governance tasks, supported by central regulators.
To protect the integrity of knowledge production, the training procedures of foundation models such as GPT-4 need to be made accessible to regulators and researchers. Foundation models must become both open and public, and the two are not the same thing.
The development of large language models is mainly a feat of engineering and so far has been largely disconnected from the field of linguistics. Exploring links between the two directions is reopening longstanding debates in the study of language.
Virtual worlds are typically encountered through simulated visual and auditory perceptions. Incorporating touch can create more immersive experiences with a sense of agency.
There are repeated calls in the AI community to prioritize data work — collecting, curating, analysing and otherwise considering the quality of data. But this is not practised as much as advocates would like, often because of a lack of institutional and cultural incentives. One way to encourage data work would be to reframe it as more technically rigorous, and thereby integrate it into more-valued lines of research such as model innovation.
We show that large language models (LLMs), such as ChatGPT, can guide the robot design process at both the conceptual and technical levels, and we propose new human–AI co-design strategies and discuss their societal implications.
As many authors are experimenting with using large language models in writing articles, some guidelines are becoming clear, but these will need to evolve as the capabilities and integration of such tools develop further.
Metaverse-enabled healthcare is no longer hypothetical. Developers must now contend with ethical, legal and social hazards if they are to overcome the systemic inefficiencies and inequities that exist for patients who seek care in the real world.
Generative AI programs can produce high-quality written and visual content that may be used for good or ill. We argue that a credit–blame asymmetry arises for assigning responsibility for these outputs and discuss urgent ethical and policy implications focused on large-scale language models.
Fairness approaches in machine learning should involve more than an assessment of performance metrics across groups. Shifting the focus away from model metrics, we reframe fairness through the lens of intersectionality, a Black feminist theoretical framework that contextualizes individuals in interacting systems of power and oppression.