Perspectives

  • Many researchers have become interested in implementing artificial intelligence methods in applications with socially beneficial outcomes. To provide a way to study and benchmark such ‘AI for social good’ applications, Josh Cowls et al. use the United Nations’ Sustainable Development Goals to systematically analyse them.

    • Josh Cowls
    • Andreas Tsamados
    • Luciano Floridi
    Perspective
  • The Conference on Neural Information Processing Systems (NeurIPS) introduced a new requirement in 2020 that submitting authors must include a statement on the broader impacts of their research. Prunkl and colleagues discuss the challenges and benefits of this requirement and suggest ways to address the challenges.

    • Carina E. A. Prunkl
    • Carolyn Ashurst
    • Allan Dafoe
    Perspective
  • Evolutionary computation is inspired by biological evolution and exhibits characteristics familiar from biology such as open-endedness, multi-objectivity and co-evolution. This Perspective highlights where major differences still exist, and where the field of evolutionary computation could attempt to approach features from biological evolution more closely, namely neutrality and random drift, complex genotype-to-phenotype mappings with rich environmental interactions, and major organizational transitions.

    • Risto Miikkulainen
    • Stephanie Forrest
    Perspective
  • Deep neural network (DNN) classifiers are vulnerable to small, specific perturbations in an input that seem benign to humans. To understand this phenomenon, Buckner argues that it may be necessary to treat the patterns that DNNs detect in these adversarial examples as artefacts, which may contain predictive information.

    • Cameron Buckner
    Perspective
  • Deep learning has resulted in impressive achievements, but under what circumstances does it fail, and why? The authors propose that its failures are a consequence of shortcut learning, a common characteristic across biological and artificial systems in which strategies that appear to have solved a problem fail unexpectedly under different circumstances.

    • Robert Geirhos
    • Jörn-Henrik Jacobsen
    • Felix A. Wichmann
    Perspective
  • Robots could play an important part in transforming healthcare to cope with the COVID-19 pandemic. This Perspective highlights how robotic technology integrated in a range of tasks in the surgical environment could help to ensure a continuation of medical services while reducing the risk of infection.

    • Ajmal Zemmar
    • Andres M. Lozano
    • Bradley J. Nelson
    Perspective
  • Evidence syntheses produced from the scientific literature are important tools for policymakers. Producing such syntheses can be highly time- and labour-intensive, but machine learning models can help, as already demonstrated in the health and medical sciences. This Perspective describes a machine learning-based framework specifically designed to support evidence syntheses in agricultural research, for tackling UN Sustainable Development Goal 2: zero hunger by 2030.

    • Jaron Porciello
    • Maryia Ivanina
    • Haym Hirsh
    Perspective
  • Developing swarm robots for a specific application is a time-consuming process that can be alleviated by automated optimization of robot behaviour. Birattari and colleagues discuss two fundamentally different design approaches: a semi-autonomous one, which allows for situation-specific tuning by human engineers, and one that needs to be entirely autonomous.

    • Mauro Birattari
    • Antoine Ligot
    • Ken Hasselmann
    Perspective
  • Machine learning models are commonly used to predict risks and outcomes in biomedical research. But healthcare often requires information about cause–effect relations and alternative scenarios, that is, counterfactuals. Prosperi et al. discuss the importance of interventional and counterfactual models, as opposed to purely predictive models, in the context of precision medicine.

    • Mattia Prosperi
    • Yi Guo
    • Jiang Bian
    Perspective
  • China’s New Generation Artificial Intelligence Development Plan, launched in 2017, lays out an ambitious strategy that intends to make China one of the world’s premier AI innovation centres by 2030. This Perspective presents the views of a group of Chinese AI experts from academia and industry on the origins of the plan, its motivations, and the main areas of focus for research and industry.

    • Fei Wu
    • Cewu Lu
    • Yunhe Pan
    Perspective
  • Medical imaging data is often subject to privacy and intellectual property restrictions. AI techniques such as federated learning can help bridge the gap between personal data protection and data utilisation for research and clinical routine, but these tools need to be secure.

    • Georgios A. Kaissis
    • Marcus R. Makowski
    • Rickmer F. Braren
    Perspective
  • As artists are beginning to employ deep learning techniques to create new and interesting art, questions arise about how copyright and ownership apply to those works. This Perspective discusses how artists, programmers and users can ensure clarity about the ownership of their creations.

    • Jason K. Eshraghian
    Perspective
  • Applications of machine learning in the life sciences and medicine require expertise in computational methods and in scientific subject matter. The authors surveyed articles in the life sciences that included machine learning applications, and found that interdisciplinary collaborations increased the scientific validity of published research.

    • Maria Littmann
    • Katharina Selig
    • Burkhard Rost
    Perspective
  • Current national cybersecurity and defence strategies of several governments explicitly mention the use of AI. However, it will be important to develop standards and certification procedures, which involve continuous monitoring and assessment of threats. The focus should be on the reliability of AI-based systems, rather than on eliciting users’ trust in AI.

    • Mariarosaria Taddeo
    • Tom McCutcheon
    • Luciano Floridi
    Perspective
  • AI ethics initiatives have seemingly converged on a set of principles that closely resemble the four classic principles of medical ethics. Despite this, Brent Mittelstadt highlights important differences between medical practice and AI development that suggest a principled approach may not work in the case of AI.

    • Brent Mittelstadt
    Perspective
  • Robots and machines are generally designed to perform specific tasks. Unlike humans, they lack the ability to generate feelings based on interactions with the world. The authors propose a new class of machines with evaluation processes akin to feelings, based on the principles of homeostasis and developments in soft robotics and multisensory integration.

    • Kingson Man
    • Antonio Damasio
    Perspective
  • As AI technology develops rapidly, it is widely recognized that ethical guidelines are required for safe and fair implementation in society. But is it possible to agree on what is ‘ethical AI’? A detailed analysis of 84 AI ethics reports around the world, from national and international organizations, companies and institutes, explores this question, finding a convergence around core principles but substantial divergence on practical implementation.

    • Anna Jobin
    • Marcello Ienca
    • Effy Vayena
    Perspective
  • Traditional robotic grasping focuses on manipulating an object, often without considering the goal or task involved in the movement. The authors propose a new metric for success in manipulation that is based on the task itself.

    • V. Ortenzi
    • M. Controzzi
    • P. Corke
    Perspective
  • There has been a recent rise of interest in developing methods for ‘explainable AI’, where a second model is created to explain how a ‘black box’ machine learning model arrives at a specific decision. It can be argued that efforts should instead be directed at building inherently interpretable models in the first place, particularly for applications that directly affect human lives, such as healthcare and criminal justice.

    • Cynthia Rudin
    Perspective
  • Artificial intelligence and machine learning systems may reproduce or amplify biases. The authors discuss the literature on biases in human learning and decision-making, and propose that researchers, policymakers and the public should be aware of such biases when evaluating the output and decisions made by machines.

    • Alexander S. Rich
    • Todd M. Gureckis
    Perspective