Effy Vayena runs a lab at ETH Zürich that studies the ethical, legal and social implications of precision medicine and digital health. We asked for her views on the code of conduct for using artificial intelligence (AI) systems in healthcare, recently published by the UK’s National Health Service (NHS).
Deep learning has revolutionized the technology industry, but beyond eye-catching applications such as virtual assistants, recommender systems and self-driving cars, deep learning is also transforming many scientific fields.
Deep neural networks are a powerful tool for predicting protein function, but identifying the specific parts of a protein sequence that are relevant to its functions remains a challenge. An occlusion-based sensitivity technique helps interpret these deep neural networks, and can guide protein engineering by locating functionally relevant protein positions.
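The occlusion idea can be illustrated with a minimal sketch: mask one position of a sequence at a time and measure how much the model's prediction changes. This is a generic, hypothetical illustration of the principle, not the published method; the `predict` function and mask token below are stand-ins.

```python
import numpy as np

def occlusion_sensitivity(predict, seq, mask_token="X"):
    """Score each position by how much occluding it shifts the prediction.

    `predict` is any function mapping a sequence string to a scalar score
    (here a stand-in for a trained deep neural network)."""
    base = predict(seq)
    scores = []
    for i in range(len(seq)):
        occluded = seq[:i] + mask_token + seq[i + 1:]
        scores.append(base - predict(occluded))  # large drop => position matters
    return np.array(scores)

# Toy "model": fraction of alanine ('A') residues in the sequence.
toy_predict = lambda s: s.count("A") / len(s)
sens = occlusion_sensitivity(toy_predict, "MAAGV")
```

Positions whose occlusion changes the output most (here, the `A` residues) are flagged as functionally relevant; with a real protein-function model, the same loop would highlight candidate positions for protein engineering.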
There has been a recent rise of interest in developing methods for ‘explainable AI’, in which a second model is created to explain how a first ‘black box’ machine learning model arrives at a specific decision. It can be argued that efforts should instead be directed at building inherently interpretable models in the first place, particularly in applications that directly affect human lives, such as healthcare and criminal justice.
Many functions of RNA strands that do not code for proteins are still to be deciphered. Methods to classify different groups of non-coding RNA increasingly use deep learning, but the landscape is diverse and methods need to be categorized and benchmarked to move forward. The authors take a close look at six state-of-the-art deep learning non-coding RNA classifiers and compare their performance and architecture.
Diagnostic pathology currently requires substantial human expertise, often with high inter-observer variability. A whole-slide pathology method automates the prediction process and provides computer-aided diagnosis using artificial intelligence.
The European Commission’s report ‘Ethics guidelines for trustworthy AI’ provides a clear benchmark to evaluate the responsible development of AI systems, and facilitates international support for AI solutions that are good for humanity and the environment, says Luciano Floridi.
Classic theories of reinforcement learning and neuromodulation rely on reward prediction errors. A new machine learning technique relies on neuromodulatory signals that are optimized for specific tasks, which may lead to better AI and better explanations of neuroscience data.
Accurate manoeuvring of autonomous aerial and aquatic robots requires detailed knowledge of the fluid forces, which can be challenging to obtain, especially in turbulent water or air. A control method for autonomous underwater vehicles (AUVs) uses intelligent distributed sensing inspired by the fish ‘lateral line’, which many species of fish use to feel the flow around them and respond instantly, before they are displaced by disturbances. An AUV designed with such a sensory shell similarly compensates for disturbances and has improved position tracking.
To accelerate the development of energy-efficient and intelligent machines, Yung-Hsiang Lu and fellow organizers launched a challenge for low-power approaches to image recognition.
Clustering cells into groups in single-cell RNA sequencing datasets can produce high-resolution information for complex biological questions. However, clustering is statistically and computationally challenging owing to the low RNA capture rate, which results in a high number of false zero count observations. A deep learning approach called scDeepCluster, which efficiently combines a model that explicitly characterizes missing values with clustering, shows high performance and improved scalability, with computing time that increases linearly with sample size.
Artificial intelligence and machine learning systems may reproduce or amplify biases. The authors discuss the literature on biases in human learning and decision-making, and propose that researchers, policymakers and the public should be aware of such biases when evaluating the output and decisions made by machines.
Biomedical publications provide a rich and largely untapped source of knowledge. INtERAcT exploits word embeddings trained on a corpus of cancer-specific articles to estimate molecular interactions. The algorithm is able to reconstruct molecular pathways associated with ten cancer types, even in corpora of limited size.
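The core intuition behind embedding-based interaction estimation can be sketched in a few lines: entities that co-occur in similar textual contexts end up with similar vectors, so vector similarity serves as a proxy for interaction. This is a generic, hypothetical illustration of that idea, not INtERAcT's actual scoring function; the gene names and three-dimensional vectors below are made up for demonstration.

```python
import numpy as np

# Toy 3-d vectors standing in for word embeddings trained on a
# cancer-specific literature corpus (values are illustrative only).
embeddings = {
    "TP53": np.array([0.9, 0.1, 0.0]),
    "MDM2": np.array([0.8, 0.2, 0.1]),
    "ACTB": np.array([0.0, 0.1, 0.9]),
}

def interaction_score(a, b):
    """Cosine similarity between entity vectors as an interaction proxy."""
    va, vb = embeddings[a], embeddings[b]
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))
```

Under this sketch, TP53 and MDM2 (frequently discussed together) score higher than an unrelated pair, and ranking all pairwise scores yields a candidate interaction network that can be compared against known molecular pathways.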
There is much to be gained from interdisciplinary efforts to tackle complex psychological notions such as ‘theory of mind’. However, careful and consistent communication is essential when comparing artificial and biological intelligence, say Henry Shevlin and Marta Halina.
The online availability of large amounts of publicly posted images and other data is fuelling machine learning research and applications. However, it is time to take privacy concerns seriously.
If we are to realize the potential of self-driving cars, we need to recognize the limits of machine learning. We should not pretend self-driving cars are around the corner: it will still take substantial time and effort to integrate the technology safely and fairly into our societies.