Artificial intelligence tools can help save lives in a pandemic. However, the need to implement technological solutions rapidly raises challenging ethical issues. New approaches to ethics are urgently needed to ensure that AI can be used safely and beneficially in the COVID-19 response and beyond.
The COVID-19 pandemic poses a historic challenge to society. The profusion of data requires machine learning to improve and accelerate COVID-19 diagnosis, prognosis and treatment. However, a global and open approach is necessary to avoid pitfalls in these applications.
In an unprecedented effort of scientific collaboration, researchers across fields are racing to support the response to COVID-19. Making a global impact with AI tools will require scalable approaches for data, model and code sharing; adapting applications to local contexts; and cooperation across borders.
The attention and resources of AI researchers have been captured by COVID-19. However, successful adoption of AI models in the fight against the pandemic faces various challenges, including clinical needs that shift as the epidemic progresses and the necessity of translating models to local healthcare settings.
The Catholic Church is challenged by scientific and technological innovation but can help to integrate multiple voices in the ongoing dialogue regarding AI and machine ethics. In this context, a multidisciplinary working group brought together by the Church reflected on roboethics, exploring the themes of embodiment, agency and intelligence.
As robotic systems become more autonomous, it gets less straightforward to determine liability when humans are harmed. This is an emerging challenge, with legal implications, in the field of surgical robotic systems. The iRobotSurgeon Survey explores public opinions about responsibility and liability in the area of surgical robotics.
As artificial intelligence becomes prevalent in society, a framework is needed to connect interpretability and trust in algorithm-assisted decisions, for a range of stakeholders.
Machine learning models have great potential in biomedical applications. A new platform called GradioHub offers an interactive and intuitive way for clinicians and biomedical researchers to try out models and test their reliability on real-world, out-of-training data.
A valid machine learning model is predictive, but a predictive model need not be valid. The gap between the two can be larger than many practitioners expect.
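The gap between predictivity and validity can be made concrete with a toy example of shortcut learning. The sketch below is hypothetical and not drawn from the article: a model that latches onto a spurious artefact (say, a scanner watermark that happens to track the label in one hospital's data) looks highly predictive on data from that source, yet is not a valid model of the underlying pathology and collapses once the artefact no longer correlates with the label.

```python
import random

random.seed(0)

def make_data(n, artefact_tracks_label):
    """Each sample: (true_signal, artefact, label).
    The true signal is noisy but causally related to the label;
    the artefact tracks the label only in the source data."""
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        true_signal = label if random.random() < 0.8 else 1 - label
        artefact = label if artefact_tracks_label else random.randint(0, 1)
        data.append((true_signal, artefact, label))
    return data

def shortcut_model(sample):
    # A "predictive" model that learned the artefact, not the pathology.
    return sample[1]

def accuracy(model, data):
    return sum(model(s) == s[2] for s in data) / len(data)

source = make_data(1000, artefact_tracks_label=True)    # training-like data
target = make_data(1000, artefact_tracks_label=False)   # new site, no shortcut

print(accuracy(shortcut_model, source))  # near-perfect: looks predictive
print(accuracy(shortcut_model, target))  # near chance: not valid out of context
```

The model's in-distribution accuracy gives no hint of the failure; only evaluation on data where the spurious correlation is broken reveals that the model was never valid.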
Many high-level ethics guidelines for AI have been produced in the past few years. It is time to work towards concrete policies within the context of existing moral, legal and cultural values, say Andreas Theodorou and Virginia Dignum.
Artificial intelligence and machine learning are increasingly seen as key technologies for building more decentralized and resilient energy grids. However, researchers must consider the ethical and social implications of these developments.
Artificial intelligence systems copy and amplify existing societal biases, a problem that by now is widely acknowledged and studied. But is current research of gender bias in natural language processing actually moving towards a resolution, asks Marta R. Costa-jussà.
For the neuromorphic research field to advance into the mainstream of computing, it needs to start quantifying gains, standardize benchmarks and focus on feasible application challenges.
To create less harmful technologies and ignite positive social change, AI engineers need to enlist ideas and expertise from a broad range of social science disciplines, including those embracing qualitative methods, say Mona Sloane and Emanuel Moss.
Deepfakes are a new dimension of the fake news problem. The criminal misuse of this technology poses far-reaching challenges and can threaten national security. Technological and governance solutions are needed to address this.
To develop scientific methods for evaluation in robotics, the field requires a more stringent definition of the subject of study, says Signe Redfield, focusing on capabilities instead of physical systems.
The European Commission’s report ‘Ethics guidelines for trustworthy AI’ provides a clear benchmark to evaluate the responsible development of AI systems, and facilitates international support for AI solutions that are good for humanity and the environment, says Luciano Floridi.
There is much to be gained from interdisciplinary efforts to tackle complex psychological notions such as ‘theory of mind’. However, careful and consistent communication is essential when comparing artificial and biological intelligence, say Henry Shevlin and Marta Halina.
If we are to realize the potential of self-driving cars, we need to recognize the limits of machine learning. We should not pretend self-driving cars are around the corner: it will still take substantial time and effort to integrate the technology safely and fairly into our societies.