• Perspective |

    There has been a recent rise in interest in developing methods for ‘explainable AI’, where models are created to explain how a first ‘black box’ machine learning model arrives at a specific decision. It can be argued that efforts should instead be directed at building inherently interpretable models in the first place, particularly in applications that directly affect human lives, such as healthcare and criminal justice.

    • Cynthia Rudin
  • Perspective |

    Artificial intelligence and machine learning systems may reproduce or amplify biases. The authors discuss the literature on biases in human learning and decision-making, and propose that researchers, policymakers and the public should be aware of such biases when evaluating the output and decisions made by machines.

    • Alexander S. Rich
    •  & Todd M. Gureckis
  • Perspective |

    A bibliometric analysis of the past and present of AI research suggests a consolidation of research influence. This may present challenges for the exchange of ideas between AI and the social sciences.

    • Morgan R. Frank
    • , Dashun Wang
    • , Manuel Cebrian
    •  & Iyad Rahwan
  • Perspective |

    A survey of 300 fictional and non-fictional works featuring artificial intelligence reveals that imaginings of intelligent machines may be grouped into four categories, each comprising a hope and a parallel fear. These perceptions are decoupled from what is realistically possible with current technology, yet they influence scientific goals, public understanding and regulation of AI.

    • Stephen Cave
    •  & Kanta Dihal
  • Perspective |

    A new vision for robot engineering, building on advances in computational materials techniques, additive and subtractive manufacturing, and evolutionary computing, describes how to design a range of specialized robots uniquely suited to specific tasks and environmental conditions.

    • David Howard
    • , Agoston E. Eiben
    • , Danielle Frances Kennedy
    • , Jean-Baptiste Mouret
    • , Philip Valencia
    •  & Dave Winkler
  • Perspective |

    Arguably one of the most promising, as well as most critical, applications of deep learning is in supporting medical science and decision-making. It is time to develop methods for systematically quantifying the uncertainty underlying deep learning processes, which would lead to increased confidence in the practical applicability of these approaches.

    • Edmon Begoli
    • , Tanmoy Bhattacharya
    •  & Dimitri Kusnezov