The field of bioimage analysis is poised for a major transformation, owing to advancements in imaging technologies and artificial intelligence. The emergence of multimodal foundation models — which are akin to large language models (such as ChatGPT) but are capable of comprehending and processing biological images — holds great potential to usher in a new era of bioimage analysis.
Here we discuss the prospects of bioimage analysis in the context of the African research landscape as well as challenges faced in the development of bioimage analysis in countries on the continent. We also speculate about potential approaches and areas of focus to overcome these challenges and thus build the communities, infrastructure and initiatives that are required to grow image analysis in African research.
The language used by microscopists who wish to find and measure objects in an image often differs in critical ways from that used by the computer scientists who build tools to help them, making communication across disciplines hard. This work proposes a set of standardized questions to guide analyses and shows how they can improve bioimage analysis as a whole by making image analysis workflows and tools more FAIR (findable, accessible, interoperable and reusable).
We dream of a future where light microscopes have new capabilities: language-guided image acquisition, automatic image analysis based on extensive prior training from expert biologists, and language-guided image analysis for custom analyses. Most of these capabilities have reached the proof-of-principle stage, but their implementation would be accelerated by efforts to gather appropriate training sets and to build user-friendly interfaces.
The future of bioimage analysis is increasingly defined by the development and use of tools that rely on deep learning and artificial intelligence (AI). For this trend to continue in a way most useful for stimulating scientific progress, it will require our multidisciplinary community to work together, establish FAIR (findable, accessible, interoperable and reusable) data sharing and deliver usable and reproducible analytical tools.
Concurrent advances in imaging technologies and deep learning have transformed the nature and scale of data that can now be collected with imaging. Here we discuss the progress that has been made and outline potential research directions at the intersection of deep learning and imaging-based measurements of living systems.
The bridging of domains such as deep learning-driven image analysis and biology brings exciting promises of previously impossible discoveries, as well as perils of misinterpretation and misapplication. We encourage continual communication between method developers and application scientists that emphasizes likely pitfalls and provides validation tools alongside new techniques.
In the ever-evolving landscape of biological imaging technology, it is crucial to develop foundation models capable of adapting to various imaging modalities and tackling complex segmentation tasks.
I share my opinions on the benefits of and bottlenecks for hyperspectral and time-resolved imaging. I also discuss current and future perspectives for analyzing these types of data using the phasor approach.
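To make the phasor approach concrete: each pixel's decay (or spectrum) is mapped to a pair of Fourier coefficients and plotted as a point in a two-dimensional phasor space, with no curve fitting required. A minimal sketch in Python, assuming a single-pixel time-resolved decay sampled over one laser period (the function name, bin count and lifetime below are illustrative, not from the article):

```python
import numpy as np

def phasor_coordinates(decay, harmonic=1):
    """Map a time-resolved decay to phasor (g, s) coordinates.

    decay: 1D array of photon counts per time bin over one period.
    harmonic: Fourier harmonic (1 = fundamental frequency).
    """
    n = len(decay)
    phase = 2 * np.pi * harmonic * np.arange(n) / n
    total = decay.sum()
    g = np.sum(decay * np.cos(phase)) / total  # real Fourier component
    s = np.sum(decay * np.sin(phase)) / total  # imaginary Fourier component
    return g, s

# An ideal mono-exponential decay lands on the universal semicircle;
# mixtures fall inside it, which is what makes phasor analysis fit-free.
decay = np.exp(-np.arange(256) / 40.0)  # assumed lifetime of 40 time bins
print(phasor_coordinates(decay))
```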
A key step toward biologically interpretable analysis of microscopy image-based assays is rigorous quantitative validation with metrics appropriate for the particular application in use. Here we describe this challenge for both classical and modern deep learning-based image analysis approaches and discuss possible solutions for automating and streamlining the validation process in the next five to ten years.
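As one concrete example of such a metric, intersection over union (IoU, the Jaccard index) is widely used to score segmentation masks, yet its behavior on edge cases such as empty masks must be chosen deliberately, which is exactly why application-appropriate validation matters. A minimal sketch (the empty-mask convention below is one possible choice, not a standard):

```python
import numpy as np

def intersection_over_union(pred, gt):
    """Jaccard index between two binary segmentation masks.

    Returns 1.0 for perfect overlap and 0.0 for disjoint masks.
    """
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty; a convention, not universal
    return float(np.logical_and(pred, gt).sum() / union)
```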
Advanced imaging techniques provide holistic observations of complicated biological phenomena across multiple scales while posing great challenges to data analysis. We summarize recent advances and trends in bioimage analysis, discuss current challenges toward better applicability, and envisage new possibilities.
Volume electron microscopy (vEM) is a group of techniques that reveal the 3D ultrastructure of cells and tissues through continuous depths of at least 1 micrometer. A burgeoning grassroots community effort is fast building the profile and revealing the impact of vEM technology in the life sciences and clinical research.
The nanopore community is stepping toward a new frontier of single-molecule protein sequencing. Here, we offer our opinions on the unique potential for this emerging technology, with a focus on single-cell proteomics, and some challenges that must be overcome to realize it.
The development of mass spectrometry-based single-cell proteomics technologies opens unique opportunities to understand the functional crosstalk between cells that drive tumor development.
Recent technological advances in mass spectrometry promise to add single-cell proteomics to the biologist’s toolbox. Here we discuss the current status and what is needed for this exciting technology to lead to biological insight — alone or as a complement to other omics technologies.
Increasing evidence suggests that the spatial distribution of biomolecules within cells is a critical component in deciphering single-cell molecular heterogeneity. State-of-the-art single-cell MS imaging is uniquely capable of localizing biomolecules within cells, providing a dimension of information beyond what is currently available through in-depth omics investigations.
We argue that the study of single-cell subcellular organelle omics is needed to understand and regulate cell function. This both requires and is enabled by new technology development.
Mammalian cells have about 30,000 times as many protein molecules as mRNA molecules, which has major implications for the development of proteomics technologies. We discuss strategies that have proved helpful for counting billions of protein molecules by liquid chromatography–tandem mass spectrometry and suggest that these strategies can also benefit single-molecule methods, especially in mitigating the challenges posed by the wide dynamic range of the proteome.
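A back-of-the-envelope illustration of why this ratio leads to "billions" of molecules (the mRNA count below is an assumed order of magnitude for a typical mammalian cell, not a figure from the article):

```python
mrna_per_cell = 2e5            # assumed: ~10^5 mRNA molecules per mammalian cell
protein_to_mrna = 3e4          # the ~30,000-fold excess stated above
proteins_per_cell = mrna_per_cell * protein_to_mrna
print(f"~{proteins_per_cell:.0e} protein molecules per cell")  # ~6e+09: billions
```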
Human neuroscience is enjoying burgeoning population data resources: large-scale cohorts with thousands of participant profiles of gene expression, brain scanning and sociodemographic measures. The depth of phenotyping puts us in a better position than ever to fully embrace major sources of population diversity as effects of interest to illuminate mechanisms underlying brain health.
Dramatic advances in protein structure prediction have sparked debate as to whether the problem of predicting structure from sequence is solved or not. Here, I argue that AlphaFold2 and its peers are currently limited by the fact that they predict only a single structure, instead of a structural distribution, and that this realization is crucial for the next generation of structure prediction algorithms.