Current AI policy recommendations differ on what the risks to human autonomy are. To systematically address risks to autonomy, we need to confront the complexity of the concept itself and adapt governance solutions accordingly.
Growing criticisms of datasets that were built from user-generated data scraped from the web have led to the retirement or redaction of many popular benchmarks. Their afterlife, as copies or subsets that continue to be used, is a cause for concern.
For the third year in a row, we followed up with authors of several recent Comments and Perspectives in Nature Machine Intelligence about what happened after their article was published: how did the topic they wrote about develop, did they gain new insights, and what are their hopes and expectations for AI in 2022?
Providing information about product attributes in e-commerce environments is currently left entirely to the owners of online platforms. Product transparency in online stores can be increased by client-side enrichment of retailer web pages.
A well-known internet truth is that if the product is free, you are the product being sold. But with a growing range of regulations and web content tools, users can gain more control over the data they interact with.
Although the initial inspiration of neural networks came from biology, insights from physics have helped neural networks to become usable. New connections between physics and machine learning produce powerful computational methods.
As service and industrial robots enter our lives, new types of cybersecurity issues emerge that involve the manipulation of a robot’s behaviour. Now is the time to develop countermeasures.
In the AlphaPilot Challenge, teams compete to fly autonomous drones through an obstacle course as fast as possible. The 2019 winning team MAVLab reflects on the challenge of beating human pilots.
Very large neural network models such as GPT-3, which have many billions of parameters, are on the rise, but so far only big tech has the resources to train, deploy and study such models. This needs to change, say Stanford AI researchers, who call for an investment in academic collaborations to build and study large neural networks.
The regulatory landscape for artificial intelligence (AI) is taking shape on both sides of the Atlantic, urgently awaited by the scientific and industrial community. Commonalities and differences are starting to crystallize in the approaches to AI in medicine.
Health disparities need to be addressed so that the benefits of medical progress are not limited to selected groups. Big data and machine learning approaches are transformative tools for public and population health, but they need continued input from research on algorithmic fairness.
The COVID-19 pandemic is not over and the future is uncertain, but there has lately been a semblance of what life was like before. As thoughts turn to the possibility of a summer holiday, we offer suggestions for books and podcasts on AI to refresh the mind.
A new international competition aims to speed up the development of AI models that can assist radiologists in detecting suspicious lesions from hundreds of millions of pixels in 3D mammograms. The top three winning teams compare notes.
Accurate and fair medical machine learning requires large amounts of diverse data to train on. Privacy-preserving methods such as federated learning can help improve machine learning models by making use of datasets at different hospitals and institutes while the data stays where it was collected.
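The core idea of federated learning mentioned above — each institution trains on its own data and only model parameters are shared and averaged — can be sketched as follows. This is a minimal, illustrative federated-averaging (FedAvg-style) loop on a toy linear model; the "clients", data, and hyperparameters are all hypothetical, not from any specific system described in the articles.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: gradient steps on its own data only."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

# Three simulated "hospitals", each holding its own private dataset.
# Raw data never leaves a client; only model weights are exchanged.
true_w = np.array([1.5, -2.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.1 * rng.normal(size=50)
    clients.append((X, y))

global_w = np.zeros(2)
for _ in range(20):
    # Each client trains locally from the current global model,
    # then the server averages the returned weights.
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    global_w = np.mean(local_ws, axis=0)

print(global_w)  # approaches true_w without pooling any raw data
```

Real deployments add secure aggregation, weighting by client dataset size, and handling of non-identically-distributed data across sites, but the train-locally/average-centrally loop is the essential pattern.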
Large language models, which are increasingly used in AI applications, display undesirable stereotypes such as persistent associations between Muslims and violence. New approaches are needed to systematically reduce the harmful bias of language models in deployment.