Articles in 2023

  • Virtual protein docking requires an accurate scoring function to evaluate how likely a candidate binding conformation is. Stebliankin and colleagues present a vision-transformer-based method that scores poses more accurately by evaluating individual binding interfaces as multi-channel images (see the minimal sketch after this list).

    • Vitalii Stebliankin
    • Azam Shirali
    • Giri Narasimhan
    Article
  • Achieving sequential robotic actions that involve different manipulation skills is an open challenge, and solving it is critical to enabling robots to interact meaningfully with their physical environment. Triantafyllidis and colleagues present a hierarchical learning framework based on an ensemble of specialized neural networks to solve complex long-horizon manipulation tasks.

    • Eleftherios Triantafyllidis
    • Fernando Acero
    • Zhibin Li
    Article | Open Access
  • Machine learning and quantum computing approaches are converging, fuelling considerable excitement over quantum devices and their capabilities. However, given the current hardware limitations, it is important to push the technology forward while being realistic about what quantum computers can do, now and in the near future.

    Editorial
  • Traditional feedback-state selection in robot learning is empirical and requires substantial engineering effort. Yu et al. develop a quantitative and systematic state-importance analysis, revealing which feedback signals are crucial for learning locomotion skills.

    • Wanming Yu
    • Chuanyu Yang
    • Zhibin Li
    Article | Open Access
  • Tandem mass spectrometry is a useful tool for identifying metabolites, but it is limited by the ability of computational methods to annotate peaks with chemical structures when spectra are dissimilar to previously observed ones. Goldman and colleagues use a transformer-based method that incorporates domain insights into its architecture to annotate chemical structure fragments, simultaneously predicting the structure of the metabolite and of its fragments from the spectrum.

    • Samuel Goldman
    • Jeremy Wohlwend
    • Connor W. Coley
    Article
  • The heterogeneous and compartmentalized environments within living cells make it difficult to deploy theranostic agents with spatiotemporal precision. Zhao et al. demonstrate a DNA framework state machine that can switch among multiple structural states according to the temporal sequence of molecular cues, enabling temporally controlled CRISPR–Cas9 targeting in living mammalian cells.

    • Yan Zhao
    • Shuting Cao
    • Chunhai Fan
    Article
  • A challenging problem in deep learning is developing theoretical frameworks suitable for studying generalization. Feng and colleagues uncover a duality relation between neuron activities and weights in deep neural networks, and use it to show that the sharpness of the loss landscape and the norm of the solution jointly determine generalization performance.

    • Yu Feng
    • Wei Zhang
    • Yuhai Tu
    Article
  • Limited interpretability and understanding of machine learning methods in healthcare hinder their clinical impact. Imrie et al. discuss five types of machine learning interpretability, examine the needs of medical stakeholders, highlight how interpretability can meet those needs, and emphasize the role of tailored interpretability in linking machine learning advances to clinical impact.

    • Fergus Imrie
    • Robert Davis
    • Mihaela van der Schaar
    Perspective
  • Deep learning applied to live-cell images of patient-derived neurons can help predict underlying mechanisms and yield insights into neurodegenerative diseases, facilitating the understanding of mechanistic heterogeneity. D’Sa and colleagues use patient-derived stem cell models, high-throughput imaging and machine learning to investigate Parkinson’s disease subtyping.

    • Karishma D’Sa
    • James R. Evans
    • Sonia Gandhi
    Article | Open Access
  • Medical artificial intelligence needs governance to ensure safety and effectiveness, not just centrally (for example, by the US Food and Drug Administration) but also locally to account for differences in care, patients and system performance. Practical collaborative governance will enable health systems to carry out these challenging governance tasks, supported by central regulators.

    • W. Nicholson Price II
    • Mark Sendak
    • Karandeep Singh
    Comment
  • Deep learning methods for microscopic imaging and holography aim to decrease reliance on labelled experimental training data, which can introduce biases, is time-consuming and costly to prepare, and may lack real-world diversity. Huang et al. develop a physics-driven self-supervised model that eliminates the need for labelled or experimental training data, demonstrating superior generalization when reconstructing experimental holograms of various samples.

    • Luzhe Huang
    • Hanlong Chen
    • Aydogan Ozcan
    Article | Open Access
  • The tendency of machine learning algorithms to learn biases from training data calls for methods that mitigate unfairness before deployment in healthcare and other applications. Yang et al. propose a reinforcement-learning-based method for algorithmic bias mitigation and demonstrate it on COVID-19 screening and patient-discharge prediction tasks (a toy sketch of the underlying fairness signal appears after this list).

    • Jenny Yang
    • Andrew A. S. Soltan
    • David A. Clifton
    Article | Open Access
  • To protect the integrity of knowledge production, the training procedures of foundation models such as GPT-4 need to be made accessible to regulators and researchers. Foundation models must become open and public, and those are not the same thing.

    • Fabian Ferrari
    • José van Dijck
    • Antal van den Bosch
    Comment
  • Algorithmic super-resolution for fluorescence microscopy is challenging because biological nanostructures are difficult to represent reliably in synthetically generated images. Bouchard and colleagues propose a deep learning model for live-cell imaging that leverages auxiliary microscopy imaging tasks to guide and enhance reconstruction while preserving the biological features of interest.

    • Catherine Bouchard
    • Theresa Wiesner
    • Flavie Lavoie-Cardinal
    Article | Open Access
  • To ensure that a machine learning model has learned the intended features, it can be useful to have an explanation of why a specific output was given. Slack et al. have created a conversational environment, based on language models and feature importance, that can interactively explore explanations through questions asked in natural language (see the sketch after this list).

    • Dylan Slack
    • Satyapriya Krishna
    • Sameer Singh
    Article | Open Access
  • The development of large language models is mainly a feat of engineering and so far has been largely disconnected from the field of linguistics. Exploring links between the two fields is reopening longstanding debates in the study of language.

    Editorial
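
As referenced in the docking item above, here is a minimal sketch of scoring a protein binding interface rendered as a multi-channel 2D image with a vision transformer. The channel count, image size, and all hyperparameters (the `InterfaceViT` class, `in_channels=8`, and so on) are illustrative assumptions, not the architecture from the paper.

```python
# Sketch: score a docking pose from a multi-channel interface image.
# All shapes and channel semantics are assumptions for illustration.
import torch
import torch.nn as nn

class InterfaceViT(nn.Module):
    def __init__(self, in_channels=8, img_size=32, patch_size=4,
                 dim=128, depth=4, heads=4):
        super().__init__()
        n_patches = (img_size // patch_size) ** 2
        # Patch embedding: each patch of the interface map becomes one token.
        self.patch_embed = nn.Conv2d(in_channels, dim,
                                     kernel_size=patch_size, stride=patch_size)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, n_patches + 1, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.score_head = nn.Linear(dim, 1)  # higher = more native-like pose

    def forward(self, x):  # x: (batch, channels, height, width)
        tokens = self.patch_embed(x).flatten(2).transpose(1, 2)  # (B, N, dim)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        tokens = torch.cat([cls, tokens], dim=1) + self.pos_embed
        encoded = self.encoder(tokens)
        return self.score_head(encoded[:, 0]).squeeze(-1)  # one score per pose

# Toy usage: channels might encode distance, electrostatics, hydropathy, etc.
model = InterfaceViT()
poses = torch.randn(2, 8, 32, 32)  # two candidate docking poses
print(model(poses))                # scalar score per pose
```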
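For the bias-mitigation item above, a toy sketch of the kind of fairness signal such a method could optimize: the demographic parity gap between two groups, folded into a scalar reward. The synthetic data, decision threshold, and trade-off weight are assumptions, and the paper's reinforcement learning machinery is not reproduced here.

```python
# Sketch: a scalarized fairness-aware training signal.
import numpy as np

rng = np.random.default_rng(0)
scores = rng.random(1000)               # model's predicted risk scores
group = rng.integers(0, 2, size=1000)   # sensitive attribute (0 or 1)
positive = scores > 0.5                 # screening decision per patient

rate_0 = positive[group == 0].mean()
rate_1 = positive[group == 1].mean()
parity_gap = abs(rate_0 - rate_1)       # 0 = perfectly balanced decisions

accuracy = 0.9                          # placeholder for a validation metric
lam = 2.0                               # assumed fairness trade-off weight
reward = accuracy - lam * parity_gap    # reward accuracy, penalize disparity
print(f"parity gap={parity_gap:.3f}, shaped reward={reward:.3f}")
```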
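For the conversational-explanations item above, a minimal sketch of one building block such a system might use: feature importances rendered as a templated natural-language answer. The dataset, model, and answer template are assumptions; the published system's language-model-driven question parsing is not shown.

```python
# Sketch: turn feature importances into a templated natural-language reply.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# Global permutation importance as a stand-in for the explainer backend.
result = permutation_importance(model, data.data, data.target,
                                n_repeats=5, random_state=0)
ranked = sorted(zip(data.feature_names, result.importances_mean),
                key=lambda t: -t[1])[:3]

def answer_why(top_features):
    """Render a templated reply to 'Why does the model predict this?'"""
    parts = [f"'{name}' (importance {imp:.3f})" for name, imp in top_features]
    return "The model relies most on " + ", ".join(parts) + "."

print(answer_why(ranked))
```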