The cover image shows superficial upper extremity vein patterns in the human hand and forearm, extracted by a deep neural network under near-infrared light. Such patterns encode rich anatomical information that may be used to develop clinical decision-support systems or to guide the supervised autonomous delivery of medical procedures such as vascular access and blood drawing.
In a recent workshop at the Conference on Neural Information Processing Systems (NeurIPS), future directions at the intersection of neuroscience and AI were considered. A panel discussion at the end of the day started with a provocative question: do we need AI to understand the brain?
Machine learning models have great potential in biomedical applications. A new platform called GradioHub offers an interactive and intuitive way for clinicians and biomedical researchers to try out models and test their reliability on real-world, out-of-training data.
Our understanding of a concept can differ depending on the modality — such as vision, text or speech — through which we learn it. A recent study uses computational modelling to demonstrate how conceptual understanding aligns across modalities.
This Review surveys machine learning techniques currently being developed for a range of research topics in biological and artificial active matter, and discusses challenges and exciting opportunities. This research direction promises to help disentangle the complexity of active matter and to yield fundamental insights into, for instance, the collective behaviour of systems at many length scales, from bacterial colonies to animal flocks.
Safe and fast access to blood vessels is vital to many diagnostic and therapeutic procedures in medicine. Robot-assisted or even fully autonomous methods could perform the task more reliably than humans, especially when veins are hard to detect. In this work, a method is tested that uses deep learning to locate blood vessels and track the movement of a patient’s arm.
Persistent homology provides an efficient approach to simplifying the complexity of protein structure. Wang et al. combine this approach with convolutional neural networks and gradient-boosting trees to improve predictions of protein–protein interactions.
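To give a flavour of what persistent homology computes (a minimal illustrative sketch, not the pipeline of Wang et al.): in dimension zero, the persistence of a point cloud tracks how connected components merge as a distance threshold grows, and the component death times coincide with the edge lengths of a minimum spanning tree. The function name and toy points below are made up for illustration.

```python
import math

def zero_dim_persistence(points):
    """0-dimensional persistence of a point cloud (illustrative sketch).

    As the distance threshold grows, connected components merge; each
    merge kills one component. The death times are exactly the edge
    lengths of a minimum spanning tree, found here with Prim's algorithm.
    """
    n = len(points)
    # best[i] = shortest distance from point i to the growing tree
    best = {i: math.dist(points[0], points[i]) for i in range(1, n)}
    deaths = []
    while best:
        i = min(best, key=best.get)   # closest point joins the tree...
        deaths.append(best.pop(i))    # ...killing one component
        for j in best:
            best[j] = min(best[j], math.dist(points[i], points[j]))
    return sorted(deaths)             # every component is born at 0

# Two well-separated clusters: three short within-cluster death times,
# and one long death time for the merge between the clusters.
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11)]
print(zero_dim_persistence(pts))
```

A descriptor like this sorted list of death times is stable under small perturbations of the points, which is what makes persistence features attractive as inputs to downstream models such as convolutional networks or gradient-boosting trees.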
Counting different types of circulating tumour cells can give valuable information on the severity of the disease and on whether treatments are effective for a specific patient. In this work, the authors show that their autoencoder-based method can identify and count cells more accurately and more quickly than human experts.
When predicting the interaction of proteins with potential drugs, a protein can be encoded as its one-dimensional sequence or as a three-dimensional structure; the latter captures more relevant features of the protein but also makes the prediction task harder. A new method predicts these interactions using a two-dimensional distance matrix representation of a protein, which can be processed like a two-dimensional image, striking a balance between data that are simple to process and rich in relevant structure.
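The representation can be sketched as follows (an illustrative toy with made-up residue coordinates, not the method from the paper): given the 3D coordinates of a protein's residues, the pairwise distance matrix is a 2D array that is invariant to rotation and translation of the structure, so it can be fed to image-style models.

```python
import math

def distance_matrix(coords):
    """Pairwise residue-residue distances: a 2D, image-like encoding
    of 3D structure, invariant to rotation and translation."""
    return [[math.dist(a, b) for b in coords] for a in coords]

# Toy 'backbone' of four residues (hypothetical coordinates, in angstroms)
coords = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0), (1.5, 1.5, 0.0), (0.0, 1.5, 0.0)]
D = distance_matrix(coords)
for row in D:
    print(["%.2f" % v for v in row])
```

Because only relative distances enter the matrix, translating or rotating the whole structure leaves it unchanged, which spares the downstream model from having to learn that symmetry itself.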
Age-related macular degeneration is a serious eye disease that should be detected as early as possible. Using both fundus images and genetic information, a deep neural network can determine the severity of the disease and predict its progression seven years into the future.