
Volume 4 Issue 12, December 2022

Learning the hidden dynamics of data

Many systems found in nature display complex, even chaotic, dynamics, yet there is often structure within the apparent complexity. Daniel Floryan and Michael Graham devised a method, combining the mathematical theory of manifolds with neural networks, that can discover a system’s intrinsic structure, a low-dimensional manifold formed by intrinsic state variables, and learn to predict its dynamics from observed time-series data. The cover image represents the low-dimensional manifolds hidden in data and the dynamics on them.

See Floryan and Graham

Image: Daniel Floryan, University of Houston. Cover design: Thomas Phillips

Editorial

  • 2022 has seen eye-catching developments in AI applications. Work is needed to ensure that ethical reflection and responsible publication practices are keeping pace.

    Editorial

Comment & Opinion

  • Artificial intelligence systems are used for an increasing range of intellectual tasks, but can they invent, or will they be able to do so soon? A recent series of patent applications for two inventions claimed to have been made by an artificial intelligence program is bringing these questions to the fore.

    • Alexandra George
    • Toby Walsh
    Comment
  • The implementation of ethics review processes is an important first step for anticipating and mitigating the potential harms of AI research. Its long-term success, however, requires a coordinated community effort to support experimentation with different ethics review processes, to study their effects, and to provide opportunities for diverse voices from the community to share insights and foster norms.

    • Madhulika Srikumar
    • Rebecca Finlay
    • Joelle Pineau
    Comment
  • The notion of ‘interpretability’ of artificial neural networks (ANNs) is of growing importance in neuroscience and artificial intelligence (AI). But interpretability means different things to neuroscientists than to AI researchers. In this article, we discuss the potential synergies and tensions between these two communities in interpreting ANNs.

    • Kohitij Kar
    • Simon Kornblith
    • Evelina Fedorenko
    Comment

Research

  • Recent developments in deep learning have allowed for a leap in computational analysis of epigenomic data, but a fair comparison of different architectures is challenging. Toneyan et al. use GOPHER, their new framework for model evaluation and comparison, to perform a comprehensive analysis of modelling choices in deep learning for epigenomic profiles.

    • Shushan Toneyan
    • Ziqi Tang
    • Peter K. Koo
    Analysis
  • In recent years, deep learning techniques have made it possible to extract useful, high-resolution physical information from electron and scanning probe microscopy images. AtomAI, an open-source software package, can accelerate this process by bringing deep learning and simulation tools into a single framework for a range of instruments.

    • Maxim Ziatdinov
    • Ayana Ghosh
    • Sergei V. Kalinin
    Article
  • Learning minimal representations of dynamical systems is essential for mathematical modelling and prediction in science and engineering. Floryan and Graham propose a deep learning framework able to estimate accurate global dynamical models by stitching together multiple local representations learnt from high-dimensional time-series data. A generic sketch of latent-dynamics learning follows this entry.

    • Daniel Floryan
    • Michael D. Graham
    Article
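
The entry above describes recovering a low-dimensional manifold from high-dimensional time series and learning the dynamics on it. The sketch below is a generic illustration of that idea, not the method of Floryan and Graham: an autoencoder (the class name AutoencoderDynamics and all hyperparameters are hypothetical) compresses snapshots to a few latent coordinates, while a small MLP learns to step the latent state forward in time.

```python
# Generic sketch: learn low-dimensional coordinates from high-dimensional
# time-series snapshots with an autoencoder, then learn the dynamics in
# that latent space with a small MLP. Illustrative only.
import torch
import torch.nn as nn

class AutoencoderDynamics(nn.Module):
    def __init__(self, ambient_dim: int, latent_dim: int, hidden: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(ambient_dim, hidden), nn.GELU(),
            nn.Linear(hidden, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.GELU(),
            nn.Linear(hidden, ambient_dim),
        )
        # Discrete-time map in latent coordinates: z_{t+1} = f(z_t)
        self.latent_map = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.GELU(),
            nn.Linear(hidden, latent_dim),
        )

    def forward(self, x_t: torch.Tensor, x_next: torch.Tensor) -> torch.Tensor:
        z_t = self.encoder(x_t)
        recon_loss = nn.functional.mse_loss(self.decoder(z_t), x_t)
        z_pred = self.latent_map(z_t)
        dyn_loss = nn.functional.mse_loss(self.decoder(z_pred), x_next)
        return recon_loss + dyn_loss

# Toy usage on synthetic data: pairs of consecutive snapshots.
model = AutoencoderDynamics(ambient_dim=64, latent_dim=3)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(1000, 64)   # stand-in for a measured time series
for _ in range(5):
    loss = model(x[:-1], x[1:])
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In practice one would train on real measured snapshots and choose the latent dimension from the reconstruction error; the random data above only keeps the example self-contained.
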
  • The lack of generalizability and reproducibility of machine learning models in medical applications is increasingly recognized as a substantial barrier to implementing such approaches in real-world clinical settings. Highlighting this issue, Cao et al. aim to reproduce a recent acute kidney injury prediction model and find persistent discrepancies in model performance across different subgroups.

    • Jie Cao
    • Xiaosong Zhang
    • Karandeep Singh
    Article
  • Advances in ultra-widefield retinal imaging have created a need for automated disease detection. Engelmann and colleagues develop a deep learning model for the detection of retinal diseases. They evaluate it under more realistic conditions than previously considered and investigate which regions of ultra-widefield images are important for the performance of such a model.

    • Justin Engelmann
    • Alice D. McTrusty
    • Miguel O. Bernabeu
    Article
  • A promising area for deep learning is the modelling of complex physical processes described by partial differential equations (PDEs), which is computationally expensive with conventional approaches. An operator learning approach called DeepONet was recently introduced to tackle PDE-related problems; in new work, this approach is extended with transfer learning, which reuses knowledge learned on one task for a related but different task. A minimal DeepONet-style sketch follows this entry.

    • Somdatta Goswami
    • Katiana Kontolati
    • George Em Karniadakis
    Article
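
The entry above refers to DeepONet-style operator learning. Below is a minimal sketch of the branch/trunk construction that underlies DeepONet, with hypothetical names (DeepONetSketch, num_sensors) and no claim to match the architecture or transfer-learning protocol of Goswami et al.: a branch network embeds the input function sampled at fixed sensor locations, a trunk network embeds a query coordinate, and their inner product gives the predicted operator output at that coordinate.

```python
# Minimal DeepONet-style sketch (illustrative only): the branch net encodes
# an input function u sampled at m fixed sensors, the trunk net encodes a
# query coordinate y, and G(u)(y) is the inner product of the two embeddings.
import torch
import torch.nn as nn

class DeepONetSketch(nn.Module):
    def __init__(self, num_sensors: int, coord_dim: int = 1, p: int = 64):
        super().__init__()
        self.branch = nn.Sequential(
            nn.Linear(num_sensors, 128), nn.Tanh(), nn.Linear(128, p),
        )
        self.trunk = nn.Sequential(
            nn.Linear(coord_dim, 128), nn.Tanh(), nn.Linear(128, p),
        )
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, u_sensors: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        # u_sensors: (batch, num_sensors), y: (batch, coord_dim)
        b = self.branch(u_sensors)
        t = self.trunk(y)
        return (b * t).sum(dim=-1, keepdim=True) + self.bias

# Toy usage: predict the operator output at random query coordinates.
net = DeepONetSketch(num_sensors=100)
u = torch.randn(32, 100)   # input functions sampled at 100 sensors
y = torch.rand(32, 1)      # query coordinates in [0, 1]
out = net(u, y)            # shape (32, 1)
```

Transfer learning in this setting would typically mean reusing trained branch and trunk weights as the starting point for a related PDE task.
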
  • The problem of reconstructing full-field quantities from incomplete observations arises in various real-world applications. Güemes and colleagues propose a super-resolution algorithm based on a generative adversarial network that can reconstruct the underlying field from random sparse measurements without requiring full-field high-resolution training data.

    • Alejandro Güemes
    • Carlos Sanmiguel Vila
    • Stefano Discetti
    Article
  • Predicting RNA degradation is a fundamental task in designing RNA-based therapeutics. Two crowdsourcing platforms, Kaggle and Eterna, joined forces to develop accurate deep learning models for RNA degradation on a timescale of six months.

    • Hannah K. Wayment-Steele
    • Wipapat Kladwang
    • Rhiju Das
    Article Open Access
  • A challenge for any machine learning system is to continually adapt to new data. While methods to address this issue are being developed, their performance is hard to compare. A new framework to facilitate benchmarking divides approaches into three categories, defined by whether models need to adapt to new tasks, domains or classes; the sketch after this entry illustrates how the three scenarios differ.

    • Gido M. van de Ven
    • Tinne Tuytelaars
    • Andreas S. Tolias
    Article Open Access
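
The three categories mentioned above correspond to the commonly used task-, domain- and class-incremental scenarios. The snippet below is a small illustrative sketch (the helper make_targets is hypothetical, not part of the published framework) showing how the same labelled stream is interpreted differently in each scenario.

```python
# Sketch of the three continual-learning scenarios (task-, domain- and
# class-incremental) as they are commonly defined; placeholder code only.
import torch

def make_targets(y: torch.Tensor, task_id: int, classes_per_task: int, scenario: str):
    """Map raw labels to the prediction target used in each scenario."""
    if scenario == "task":
        # Task identity is given at test time; predict within this task's classes.
        return y % classes_per_task, task_id
    if scenario == "domain":
        # Task identity is not given; the output space is shared across tasks.
        return y % classes_per_task, None
    if scenario == "class":
        # Task identity is not given; discriminate among all classes seen so far.
        return y, None
    raise ValueError(f"unknown scenario: {scenario}")

# Example: task 2 of a split benchmark with 2 classes per task (classes 4 and 5).
y = torch.tensor([4, 5, 4])
for scenario in ("task", "domain", "class"):
    targets, given_task = make_targets(y, task_id=2, classes_per_task=2, scenario=scenario)
    print(scenario, targets.tolist(), "task id given:", given_task)
```
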
  • Liquid chromatography–tandem mass spectrometry (LC-MS2) provides high-throughput screening of molecules with a large number of features, but these features are difficult to associate with the specific molecular structure of each sample. To improve structure prediction, Bach et al. propose a machine learning model that combines the different kinds of features provided by LC-MS2 and is also trained to take stereochemistry into account.

    • Eric Bach
    • Emma L. Schymanski
    • Juho Rousu
    Article Open Access
  • Evolutionary computation is a very active field of research, with an ever-growing number of metaheuristic optimization algorithms being published. A serious problem plaguing the field is the use of inadequate benchmarks. Kudela exposes the issue and provides recommendations that can help to fairly evaluate and compare new methods.

    • Jakub Kudela
    Article
  • There is growing interest in using sophisticated machine learning models for the prediction of molecular properties, such as the potency of novel drugs. However, Janela and Bajorath show that simple nearest-neighbour analysis meets or exceeds the accuracy of state-of-the-art complex machine learning methods and that randomized prediction models still reproduce compound potency values within an order of magnitude. A generic nearest-neighbour baseline sketch follows this entry.

    • Tiago Janela
    • Jürgen Bajorath
    Article
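
The following is a generic k-nearest-neighbour regression baseline of the kind referred to above, not the authors' exact protocol: real experiments would use molecular fingerprints and a similarity measure such as Tanimoto, whereas random feature vectors and Euclidean distance are used here only to keep the sketch self-contained.

```python
# Generic k-nearest-neighbour baseline for compound potency regression.
# Random feature vectors stand in for molecular fingerprints so the
# example runs without any cheminformatics dependency.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.random((500, 1024))                    # stand-in for fingerprints
y = rng.normal(loc=7.0, scale=1.0, size=500)   # stand-in for potency values (log units)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

knn = KNeighborsRegressor(n_neighbors=3, metric="euclidean")
knn.fit(X_train, y_train)
pred = knn.predict(X_test)
print("MAE (log units):", mean_absolute_error(y_test, pred))
```
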
  • Large language models have recently emerged with extraordinary capabilities, and these methods can be applied to model other kinds of sequences, such as string representations of molecules. Ross and colleagues have created a transformer-based model, trained on a large dataset of molecules, which gives good results on property prediction tasks. A generic sketch of a transformer over molecular strings follows this entry.

    • Jerret Ross
    • Brian Belgodere
    • Payel Das
    Article
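
The sketch below shows a generic transformer encoder over character-tokenized molecular strings with a regression head for property prediction. All names and hyperparameters are illustrative, and it makes no attempt to reproduce the architecture or training of Ross and colleagues.

```python
# Generic sketch: a small transformer encoder over character-tokenized
# molecular strings (e.g., SMILES) with a regression head. Illustrative only.
import torch
import torch.nn as nn

class StringPropertyModel(nn.Module):
    def __init__(self, vocab_size: int, d_model: int = 64, max_len: int = 128):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model, padding_idx=0)
        self.pos_emb = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, dim_feedforward=128, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, 1)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, seq_len) integer ids, 0 = padding
        pos = torch.arange(tokens.size(1), device=tokens.device)
        h = self.tok_emb(tokens) + self.pos_emb(pos)
        h = self.encoder(h, src_key_padding_mask=(tokens == 0))
        # Mean-pool over non-padding positions, then predict one property.
        mask = (tokens != 0).unsqueeze(-1).float()
        pooled = (h * mask).sum(1) / mask.sum(1).clamp(min=1.0)
        return self.head(pooled).squeeze(-1)

# Toy usage: a tiny character vocabulary and two padded molecule strings.
vocab = {ch: i + 1 for i, ch in enumerate("CNO()=c1#")}   # 0 reserved for padding
def encode(smiles: str, max_len: int = 16):
    ids = [vocab.get(ch, 0) for ch in smiles][:max_len]
    return ids + [0] * (max_len - len(ids))

batch = torch.tensor([encode("CCO"), encode("c1ccccc1")])
model = StringPropertyModel(vocab_size=len(vocab) + 1)
print(model(batch).shape)   # torch.Size([2])
```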