Volume 2 Issue 1, January 2020

Tree explainer.

Tree-based machine learning models are popular non-linear models, but are they easy to interpret? In real-world settings, models must be both accurate and interpretable, so that humans can understand how a model uses input features to make predictions. In a paper in this issue, Scott Lundberg and colleagues propose TreeExplainer, a general method for explaining the individual decisions of a tree-based model in terms of input contributions. The utility of tree-based machine learning models for explainable artificial intelligence is further explored in a News & Views by Wojciech Samek.

See Lundberg et al.

Image: Scott Lundberg. Cover Design: Karen Moore.

Features

  • Feature

    There is no shortage of opinions on the impact of artificial intelligence and deep learning. We invited the authors of Comment and Perspective articles published in roughly the first half of 2019 to look back at the year and reflect on how the issues they wrote about have developed.

    Alexander S. Rich, Cynthia Rudin, David M. P. Jacoby, Robin Freeman, Oliver R. Wearn, Henry Shevlin, Kanta Dihal, Seán S. ÓhÉigeartaigh, James Butcher, Marco Lippi, Przemyslaw Palka, Paolo Torroni, Shannon Wongvibulsin, Edmon Begoli, Gisbert Schneider, Stephen Cave, Mona Sloane, Emmanuel Moss, Iyad Rahwan, Ken Goldberg, David Howard, Luciano Floridi & Jack Stilgoe

Comment & Opinion

  • Comment

    Many high-level ethics guidelines for AI have been produced in the past few years. It is time to work towards concrete policies within the context of existing moral, legal and cultural values, say Andreas Theodorou and Virginia Dignum.

    Andreas Theodorou & Virginia Dignum

News & Views

  • News & Views

    Tree-based models are among the most popular and successful machine learning algorithms in practice. New tools allow us to explain the predictions of these models and gain insight into their global behaviour.

    Wojciech Samek

Reviews

  • Perspective

    Applications of machine learning in the life sciences and medicine require expertise in computational methods and in scientific subject matter. The authors surveyed articles in the life sciences that included machine learning applications, and found that interdisciplinary collaborations increased the scientific validity of published research.

    Maria Littmann, Katharina Selig, Liel Cohen-Lavi, Yotam Frank, Peter Hönigschmid, Evans Kataka, Anja Mösch, Kun Qian, Avihai Ron, Sebastian Schmid, Adam Sorbie, Liran Szlak, Ayana Dagan-Wiener, Nir Ben-Tal, Masha Y. Niv, Daniel Razansky, Björn W. Schuller, Donna Ankerst, Tomer Hertz & Burkhard Rost

Research

  • Article

    Predicting the structure of a protein from its amino acid sequence is a hard problem. Convolutional neural networks can learn to predict a map of distances between amino acid residues, which can then be turned into a three-dimensional structure. Combining several approaches, including an evolutionary technique to find the best neural network architecture and a tool to find the atom coordinates of the folded structure, the authors demonstrate a pipeline for rapid prediction of three-dimensional protein structures. A minimal sketch of the distance-map idea appears below.

    Wenzhi Mao, Wenze Ding, Yaoguang Xing & Haipeng Gong
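
    The following is a minimal sketch of the core idea, assuming a PyTorch-style setup; it is not the authors' pipeline, and the feature construction, channel counts and coordinate-reconstruction step are all placeholders. A 2D convolutional network maps pairwise sequence features to a symmetric inter-residue distance map.

```python
# Minimal sketch (not the authors' pipeline): a 2D CNN mapping pairwise
# sequence features to an inter-residue distance map. Channel counts and
# depth are illustrative placeholders.
import torch
import torch.nn as nn

class DistanceMapCNN(nn.Module):
    def __init__(self, in_channels=64, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(hidden, 1, kernel_size=3, padding=1),  # one value per residue pair
        )

    def forward(self, pairwise_features):
        # pairwise_features: (batch, channels, L, L) for a sequence of length L
        d = self.net(pairwise_features).squeeze(1)
        return 0.5 * (d + d.transpose(1, 2))  # distance maps are symmetric

# Toy usage: 64 pairwise feature channels for a 100-residue protein
model = DistanceMapCNN()
x = torch.randn(1, 64, 100, 100)
print(model(x).shape)  # torch.Size([1, 100, 100])
```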
  • Article

    Neural networks are often implemented with reduced precision to meet the tight energy and memory budgets of edge computing devices. Chakraborty et al. develop a technique for assessing which layers can be quantized, and by how much, without sacrificing too much performance. A simple sensitivity-probing sketch appears below.

    Indranil Chakraborty, Deboleena Roy, Isha Garg, Aayush Ankit & Kaushik Roy
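
    One simple way to probe layer-wise quantization sensitivity, sketched below under the assumption of a PyTorch model, is to quantize one layer's weights at a time and measure the resulting accuracy drop. This is a generic proxy, not the specific technique of Chakraborty et al.; `evaluate` is a hypothetical user-supplied accuracy function.

```python
# Hedged sketch of layer-wise quantization sensitivity (a generic proxy, not
# the authors' method): quantize one layer's weights at a time and record the
# drop in a user-supplied accuracy function.
import copy
import torch

def quantize_tensor(w, bits):
    """Uniform symmetric quantization of a weight tensor to `bits` bits."""
    qmax = 2 ** (bits - 1) - 1
    scale = w.abs().max() / qmax
    if scale == 0:
        return w
    return torch.round(w / scale).clamp(-qmax, qmax) * scale

def layer_sensitivity(model, evaluate, bits=4):
    """Return {layer name: accuracy drop} when only that layer is quantized."""
    baseline = evaluate(model)
    drops = {}
    for name, module in model.named_modules():
        if isinstance(module, (torch.nn.Conv2d, torch.nn.Linear)):
            probe = copy.deepcopy(model)           # leave the original intact
            layer = dict(probe.named_modules())[name]
            with torch.no_grad():
                layer.weight.copy_(quantize_tensor(layer.weight, bits))
            drops[name] = baseline - evaluate(probe)
    return drops  # layers with small drops tolerate aggressive quantization
```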
  • Article

    Tree-based machine learning models are widely used in domains such as healthcare, finance and public services. The authors present an explanation method for trees that enables the computation of optimal local explanations for individual predictions, and demonstrate the method on three medical datasets. A brief usage sketch appears below.

    Scott M. Lundberg, Gabriel Erion, Hugh Chen, Alex DeGrave, Jordan M. Prutkin, Bala Nair, Ronit Katz, Jonathan Himmelfarb, Nisha Bansal & Su-In Lee
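
    The method is available in the open-source `shap` package maintained by the first author; a minimal usage sketch with a synthetic stand-in model and dataset might look as follows.

```python
# Minimal usage sketch of TreeExplainer via the open-source `shap` package.
# The random forest and data below are synthetic stand-ins.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = X[:, 0] + 2 * X[:, 1] * X[:, 2] + rng.normal(scale=0.1, size=500)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # one additive contribution per feature

# The contributions plus the expected value recover each individual prediction
reconstructed = explainer.expected_value + shap_values.sum(axis=1)
```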
  • Article

    By assembling conceptual systems from real-world datasets of text, images and audio, Roads and Love propose that objects embedded within a conceptual system have a unique signature that allows conceptual systems to be aligned in an unsupervised fashion. A toy illustration of the idea appears below.

    Brett D. Roads & Bradley C. Love
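
    A toy illustration of the signature idea, assuming nothing beyond NumPy and SciPy, is sketched below: each object's sorted row of within-system similarities serves as an order-invariant signature, and two noisy, shuffled copies of the same system can then be aligned without supervision. This is illustrative only, not Roads and Love's algorithm.

```python
# Toy illustration (not Roads and Love's algorithm): sorted rows of a
# within-system similarity matrix act as order-invariant signatures that
# let two shuffled copies of one system be aligned without supervision.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 8))                        # embeddings in system A
perm = rng.permutation(20)
B = A[perm] + rng.normal(scale=0.01, size=(20, 8))  # system B: shuffled, noisy copy

def signatures(X):
    S = np.corrcoef(X)                 # object-by-object similarity within a system
    return np.sort(S, axis=1)[:, :-1]  # sort rows for order invariance, drop self-similarity

# Match signatures across systems by maximizing correlation (minimizing -corr)
cost = -np.corrcoef(signatures(A), signatures(B))[:20, 20:]
row, col = linear_sum_assignment(cost)
print((col == np.argsort(perm)).mean())  # fraction of objects correctly aligned
```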