
Volume 2 Issue 9, September 2020

Finding ground states with reinforcement learning

Finding the ground state of a complex material or quantum system is an important task in condensed-matter physics. Mills et al. develop a reinforcement learning method, ‘controlled online optimization learning’ (COOL), which finds a temperature schedule for annealing and then slowly cooling a strongly interacting atomic system to its ground state.

See Mills et al.

Image: Kyle Mills, University of Ontario Institute of Technology. Cover design: Karen Moore.

Editorial

  • The challenge of practically integrating ethical and social considerations into the development and deployment of AI needs to be addressed urgently, to help restore public trust in technology.

    Editorial


Comment & Opinion

  • AI developers need practical assistance in identifying and addressing ethical issues. In this Comment, a group of AI engineers, ethicists and social scientists suggest embedding ethicists in the development team as one way to improve the consideration of ethical issues during AI development.

    • Stuart McLennan
    • Amelia Fiske
    • Alena Buyx
    Comment

Books & Arts


News & Views

  • The proper response to an ever-changing environment depends on the ability to quantify elapsed time, memorize short intervals and forecast when an upcoming experience may occur. A recent study describes the encoding principles of these three types of time using computational modelling.

    • Hugo Merchant
    • Oswaldo Pérez
    News & Views

Reviews

  • Developing swarm robots for a specific application is a time-consuming process that can be eased by automated optimization of the robots’ behaviour. Birattari and colleagues discuss two fundamentally different design approaches: a semi-autonomous one, which allows situation-specific tuning by human engineers, and one that is entirely autonomous (a toy sketch of the latter follows this list).

    • Mauro Birattari
    • Antoine Ligot
    • Ken Hasselmann
    Perspective
  • Recent developments in machine learning have seen the merging of ensemble and deep learning techniques. The authors review advances in ensemble deep learning methods and their applications in bioinformatics, and discuss the challenges and opportunities going forward (a minimal soft-voting sketch follows this list).

    • Yue Cao
    • Thomas Andrew Geddes
    • Pengyi Yang
    Review Article
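The fully automatic design route that Birattari and colleagues describe can be pictured with a toy example: a random search over two controller gains in a simulated aggregation task, with no human tuning in the loop. Everything here (the point-robot dynamics, the two gains, the aggregation fitness) is an illustrative assumption, not the authors’ framework.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(gains, steps=100, n_robots=20):
    # Toy swarm: each robot moves toward the centroid of the group with an
    # attraction gain, plus random motion scaled by a noise gain.
    attract, noise = gains
    pos = rng.normal(0, 5, size=(n_robots, 2))
    for _ in range(steps):
        centroid = pos.mean(axis=0)
        pos += attract * (centroid - pos) * 0.05
        pos += noise * rng.normal(0, 0.1, size=pos.shape)
    # Fitness: how tightly the swarm aggregates (smaller spread is better).
    return -np.mean(np.linalg.norm(pos - pos.mean(axis=0), axis=1))

# Fully automatic design: random search over the controller gains,
# with no situation-specific tuning by a human engineer.
candidates = rng.uniform([0.0, 0.0], [2.0, 2.0], size=(50, 2))
best = max(candidates, key=simulate)
print("best gains (attraction, noise):", best)
```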
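For the ensemble deep learning review, a minimal soft-voting sketch, assuming scikit-learn is available: several small neural networks are trained on bootstrap resamples and their predicted class probabilities are averaged. The synthetic dataset and network sizes are illustrative stand-ins for a real bioinformatics task.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Toy stand-in for a bioinformatics task: binary classification.
X, y = make_classification(n_samples=600, n_features=40, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Bagging-style ensemble: each network sees a bootstrap resample
# and starts from a different random initialization.
members = []
for seed in range(5):
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(X_tr), size=len(X_tr))   # bootstrap sample
    net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                        random_state=seed).fit(X_tr[idx], y_tr[idx])
    members.append(net)

# Soft voting: average the predicted class probabilities across members.
proba = np.mean([m.predict_proba(X_te) for m in members], axis=0)
print("ensemble accuracy:", np.mean(proba.argmax(axis=1) == y_te))
```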

Research

  • Reinforcement learning has become popular in many domains for problems in which an agent must learn which actions to take to reach a particular goal. An interesting application is simulated annealing in condensed-matter physics, where a schedule is sought for slowly cooling a complex system to its ground state. A reinforcement learning approach is developed that can learn a temperature scheduling protocol to find the ground state of spin glasses, magnetic systems with strong spin–spin interactions between neighbouring atoms (a toy sketch of the idea follows this list).

    • Kyle Mills
    • Pooya Ronagh
    • Isaac Tamblyn
    Article
  • Recent accidents with autonomous test vehicles have eroded trust in self-driving cars, and a shift in approach is required to ensure that autonomous vehicles are never the cause of an accident. An online verification technique is presented that guarantees provably safe motions, including fallback solutions in safety-critical situations, for any intended trajectory calculated by the underlying motion planner (a simplified sketch follows this list).

    • Christian Pek
    • Stefanie Manzinger
    • Matthias Althoff
    Article
  • When a company provides automated decisions without disclosing the full model, users and lawmakers may demand a ‘right to an explanation’. Le Merrer and Trédan show that malicious manipulation of these explanations is hard to detect, even for simple strategies that obscure the model’s decisions (a toy illustration follows this list).

    • Erwan Le Merrer
    • Gilles Trédan
    Article
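The reinforcement-learned annealing schedule of Mills et al. can be illustrated, in spirit only, with a toy sketch: tabular Q-learning chooses the temperature of each Metropolis sweep for a small one-dimensional spin glass and is rewarded for reaching low final energy. The chain size, temperature grid and learning constants are all assumptions; the paper’s COOL method operates at a very different scale.

```python
import numpy as np

rng = np.random.default_rng(0)

N_SPINS, N_STEPS = 16, 10                   # chain length, steps per episode
TEMPS = np.array([2.0, 1.0, 0.5, 0.1])      # discrete temperature actions
J = rng.choice([-1.0, 1.0], size=N_SPINS)   # random nearest-neighbour couplings

def energy(s):
    # 1D spin-glass energy with periodic boundary: E = -sum_i J_i s_i s_{i+1}
    return -np.sum(J * s * np.roll(s, -1))

def metropolis_sweep(s, T):
    # One Metropolis sweep at temperature T.
    for i in rng.permutation(N_SPINS):
        flipped = s.copy()
        flipped[i] = -flipped[i]
        dE = energy(flipped) - energy(s)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s = flipped
    return s

# Tabular Q-learning over (annealing step, temperature choice); gamma = 1,
# with reward = -final energy given only at the end of an episode.
Q = np.zeros((N_STEPS, len(TEMPS)))
alpha, eps = 0.1, 0.2
for episode in range(300):
    s = rng.choice([-1.0, 1.0], size=N_SPINS)
    for t in range(N_STEPS):
        a = rng.integers(len(TEMPS)) if rng.random() < eps else int(Q[t].argmax())
        s = metropolis_sweep(s, TEMPS[a])
        target = -energy(s) if t == N_STEPS - 1 else Q[t + 1].max()
        Q[t, a] += alpha * (target - Q[t, a])

print("learned temperature schedule:", TEMPS[Q.argmax(axis=1)])
```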
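For Pek et al.’s online verification, a simplified stand-in: before execution, every sampled state of the planned trajectory is checked against a predicted obstacle occupancy inflated by a safety margin, and an emergency fallback is used if any state is unsafe. The disc-shaped occupancy, the margin and the hold-position fallback are toy assumptions, not the paper’s set-based reachability analysis.

```python
import numpy as np

DT, MARGIN = 0.1, 0.5          # time step [s], safety margin [m]

def obstacle_position(t):
    # Assumed predicted obstacle motion: constant velocity along x.
    return np.array([2.0 + 1.0 * t, 0.0])

def is_safe(traj):
    # Check every sampled state against the predicted occupancy.
    for k, state in enumerate(traj):
        if np.linalg.norm(state - obstacle_position(k * DT)) < MARGIN:
            return False
    return True

def emergency_stop(start):
    # Trivial fallback: hold position (a real system would brake along a
    # precomputed provably safe trajectory).
    return np.tile(start, (20, 1))

planned = np.column_stack([np.linspace(0, 4, 20), np.zeros(20)])
safe = is_safe(planned)
executed = planned if safe else emergency_stop(planned[0])
print("executing", "planned" if safe else "fallback", "trajectory")
```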
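Le Merrer and Trédan’s point can be made concrete with a toy ‘fairwashing’ illustration: decisions are actually driven by a hidden sensitive feature, while the reported explanation cites a correlated innocuous one. Because the two agree on most queries, a remote auditor who only sees decisions and claimed explanations struggles to falsify the bogus explanation. The features and threshold below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 1000
sensitive = rng.integers(0, 2, size=n)                         # hidden driver
innocuous = np.clip(sensitive + rng.normal(0, 0.4, n), 0, 1)   # correlated proxy
decision = sensitive == 1                                      # true hidden rule

# Reported explanation: "approved because innocuous feature > 0.5".
claimed = innocuous > 0.5

# A remote auditor can only compare decisions with claimed explanations.
agreement = np.mean(decision == claimed)
print(f"explanation matches decision on {agreement:.0%} of queries")
# High agreement makes the bogus explanation hard to falsify from the
# auditor's limited, query-based view of the model.
```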

Amendments & Corrections
