Artificial intelligence and machine learning are increasingly seen as key technologies for building more decentralized and resilient energy grids. However, researchers must consider the ethical and social implications of these developments.
Artificial intelligence systems copy and amplify existing societal biases, a problem that by now is widely acknowledged and studied. But is current research on gender bias in natural language processing actually moving towards a resolution, asks Marta R. Costa-jussà.
For the neuromorphic research field to advance into the mainstream of computing, it needs to start quantifying gains, standardizing benchmarks and focusing on feasible application challenges.
To create less harmful technologies and ignite positive social change, AI engineers need to enlist ideas and expertise from a broad range of social science disciplines, including those embracing qualitative methods, say Mona Sloane and Emanuel Moss.
Deepfakes are a new dimension of the fake news problem. The criminal misuse of this technology poses far-reaching challenges and can threaten national security. Technological and governance solutions are needed to address this.
To develop scientific methods for evaluation in robotics, the field requires a more stringent definition of its subject of study, one that focuses on capabilities rather than physical systems, says Signe Redfield.
The European Commission’s report ‘Ethics guidelines for trustworthy AI’ provides a clear benchmark to evaluate the responsible development of AI systems, and facilitates international support for AI solutions that are good for humanity and the environment, says Luciano Floridi.
There is much to be gained from interdisciplinary efforts to tackle complex psychological notions such as ‘theory of mind’. However, careful and consistent communication is essential when comparing artificial and biological intelligence, say Henry Shevlin and Marta Halina.
If we are to realize the potential of self-driving cars, we need to recognize the limits of machine learning. We should not pretend that self-driving cars are around the corner: it will still take substantial time and effort to integrate the technology safely and fairly into our societies.
Technology companies have quickly become powerful with their access to large amounts of data and machine learning technologies, but consumers could be empowered too with automated tools to protect their rights.
Artificial intelligence (AI) promises to be an invaluable tool for nature conservation, but its misuse could have severe real-world consequences for people and wildlife. Conservation scientists discuss how improved metrics and ethical oversight can mitigate these risks.
To attract and retain talent from all backgrounds, new educational models and mentorship programmes are needed in machine intelligence, says Shannon Wongvibulsin.
Debate about the impacts of AI is often split into two camps, one associated with the near term and the other with the long term. This divide is a mistake — the connections between the two perspectives deserve more attention, say Stephen Cave and Seán S. ÓhÉigeartaigh.
Ken Goldberg reflects on how four exciting sub-fields of robotics — co-robotics, human–robot interaction, deep learning and cloud robotics — accelerate a renewed trend toward robots working safely and constructively with humans.