Synthesizing robots via physical artificial intelligence is a multidisciplinary challenge for future robotics research. An education methodology is needed for researchers to develop a combination of skills in physical artificial intelligence.
Addressing the problems caused by AI applications in society with ethics frameworks is futile until we confront the political structure of such applications.
For machine learning developers, the use of prediction tools in real-world clinical settings can be a distant goal. Recently published guidelines for reporting clinical research that involves machine learning will help connect clinical and computer science communities, and realize the full potential of machine learning tools.
There is a need to consider how AI developers can be practically assisted in identifying and addressing ethical issues. In this Comment, a group of AI engineers, ethicists and social scientists suggest embedding ethicists into the development team as one way of improving the consideration of ethical issues during AI development.
Artificial intelligence tools can help save lives in a pandemic. However, the need to implement technological solutions rapidly raises challenging ethical issues. We urgently need new approaches to ethics to ensure AI can be safely and beneficially used in the COVID-19 response and beyond.
The COVID-19 pandemic poses a historical challenge to society. The profusion of data requires machine learning to improve and accelerate COVID-19 diagnosis, prognosis and treatment. However, a global and open approach is necessary to avoid pitfalls in these applications.
In an unprecedented effort of scientific collaboration, researchers across fields are racing to support the response to COVID-19. Making a global impact with AI tools will require scalable approaches for data, model and code sharing; adapting applications to local contexts; and cooperation across borders.
The attention and resources of AI researchers have been captured by COVID-19. However, successful adoption of AI models in the fight against the pandemic faces various challenges, including clinical needs that shift as the epidemic progresses and the necessity of translating models to local healthcare settings.
The Catholic Church is challenged by scientific and technological innovation but can help to integrate multiple voices in the ongoing dialogue regarding AI and machine ethics. In this context, a multidisciplinary working group brought together by the Church reflected on roboethics, exploring the themes of embodiment, agency and intelligence.
As robotic systems become more autonomous, it becomes less straightforward to determine liability when humans are harmed. This is an emerging challenge, with legal implications, in the field of surgical robotic systems. The iRobotSurgeon Survey explores public opinions about responsibility and liability in the area of surgical robotics.
As artificial intelligence becomes prevalent in society, a framework is needed to connect interpretability and trust in algorithm-assisted decisions, for a range of stakeholders.
Machine learning models have great potential in biomedical applications. A new platform called GradioHub offers an interactive and intuitive way for clinicians and biomedical researchers to try out models and test their reliability on real-world, out-of-training data.
A valid machine learning model is predictive, but a predictive model may not be valid. The gap between the two can be larger than many practitioners expect.
Many high-level ethics guidelines for AI have been produced in the past few years. It is time to work towards concrete policies within the context of existing moral, legal and cultural values, say Andreas Theodorou and Virginia Dignum.