
Volume 2 Issue 1, January 2020

Tree explainer.

Machine learning models based on trees are popular non-linear models. But are trees easy to interpret? In real-world situations, models must be accurate and interpretable, so that humans can understand how the model uses input features to make predictions. In a paper in this issue, Scott Lundberg and colleagues propose TreeExplainer, a general method to explain the individual decisions of a tree-based model in terms of input contributions. The utility of tree-based machine learning models for explainable artificial intelligence is further explored in a News & Views by Wojciech Samek.

See Lundberg et al.

Image: Scott Lundberg. Cover Design: Karen Moore.


Features

  • There is no shortage of opinions on the impact of artificial intelligence and deep learning. We invited authors of Comment and Perspective articles published in roughly the first half of 2019 to look back at the year and reflect on how the issues they wrote about have developed.

    • Alexander S. Rich
    • Cynthia Rudin
    • Jack Stilgoe
    Feature

Comment & Opinion

  • Many high-level ethics guidelines for AI have been produced in the past few years. It is time to work towards concrete policies within the context of existing moral, legal and cultural values, say Andreas Theodorou and Virginia Dignum.

    • Andreas Theodorou
    • Virginia Dignum
    Comment

News & Views

  • Tree-based models are among the most popular and successful machine learning algorithms in practice. New tools allow us to explain the predictions and gain insight into the global behaviour of these models.

    • Wojciech Samek
    News & Views


Research

  • Predicting the structure of proteins from amino acid sequences is a hard problem. Convolutional neural networks can learn to predict a map of distances between amino acid residues, which can then be turned into a three-dimensional structure. Combining several approaches, including an evolutionary technique to find the best neural network architecture and a tool to recover the atom coordinates of the folded structure, the authors demonstrate a pipeline for rapid prediction of three-dimensional protein structures.

    • Wenzhi Mao
    • Wenze Ding
    • Haipeng Gong
    Article
  • Tree-based machine learning models are widely used in domains such as healthcare, finance and public services. The authors present an explanation method for trees that enables the computation of optimal local explanations for individual predictions, and demonstrate their method on three medical datasets.

    • Scott M. Lundberg
    • Gabriel Erion
    • Su-In Lee
    Article
  • By assembling conceptual systems from real-world datasets of text, images and audio, Roads and Love propose that objects embedded within a conceptual system have a unique signature that allows conceptual systems to be aligned in an unsupervised fashion.

    • Brett D. Roads
    • Bradley C. Love
    Article
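The residue distance map mentioned in the protein-structure summary above is straightforward to illustrate: for a chain of residues with known 3-D coordinates, it is the matrix of pairwise Euclidean distances. A toy sketch (the coordinates are random stand-ins, not real protein data, and this is not the paper's pipeline):

```python
# Toy illustration of a residue "distance map": the matrix of pairwise
# Euclidean distances between amino acid positions. Random coordinates
# stand in for real protein data.
import numpy as np

rng = np.random.default_rng(1)
coords = rng.normal(size=(5, 3))  # 5 residues, each with (x, y, z) coordinates

# Pairwise distances via broadcasting: D[i, j] = ||coords[i] - coords[j]||
diff = coords[:, None, :] - coords[None, :, :]
dist_map = np.sqrt((diff ** 2).sum(axis=-1))

print(dist_map.shape)  # (5, 5): one distance per residue pair
```

A network that predicts this matrix from sequence alone constrains the geometry enough that a separate optimization step can recover plausible 3-D coordinates.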
