Imagine receiving a text during breakfast. You glance at your phone and see that the text is not from your mom but from your microscope, informing you of the status of your overnight acquisition. Perhaps it tells you that everything went well and that it captured ten events of interest during the recording period. Perhaps it reports signs of phototoxicity at hour seven, or asks whether to keep the experiment running after sharing some sample data. Maybe it even performs a preliminary quantification of desired phenotypes and points to regions or time points of interest for further investigation. You sip your coffee and decide what you will text back as you plan the rest of your day.

This is just one version of a near-future scenario for bioimaging and bioimage analysis, in which microscopes are increasingly ‘smart’ and in which automated, computer-driven decisions become the norm. This sea change towards automation has long been in the works but has now come within reach, owing to the growing uptake of ideas from computer vision in the bioimaging community.

An artistic representation of an image of mammalian mitochondria transforming from widefield acquisition (upper left) to super-resolution with deep learning. Credit: Qionghai Dai & Chang Qiao

From an editorial perspective, we have observed an enormous, sustained effort from microscopists, computational biologists and software developers to push microscope control and image analysis into a new era, with notable advances in the areas of event-driven microscopy, automated and versatile image analysis, and image augmentation. We wanted to capture some of this momentum and begin thinking beyond what is happening now to what could be. We invited a range of experts working around the globe to tell us what excites them about the future of the field and challenged them to opine on their vision for bioimage analysis over the next five to twenty years. The resulting Comments cover a wide range of topics, including exciting next-generation tools, the need for more open science and the importance of using advanced tools appropriately.

One dominant theme that runs through many of the pieces is the importance of advances in artificial intelligence for the future of bioimaging. Loïc Royer muses on how image analysis will rise to the occasion to handle the increasingly complex multimodal image data acquired by cutting-edge microscopes, and asks whether inspiration can be drawn from large language models (like those behind ChatGPT) to generate a ‘large vision model’, and whether and how such a tool could be disruptive to image analysis.

Royer is not alone in his excitement about large foundation models for shaping the future of the field. Jun Ma and Bo Wang discuss how the state of the art in bioimage segmentation has become stagnant, with top-performing tools being useful only for very specific tasks. They make the case that a large foundation model is much needed to make universal, generalizable tools for segmentation, and that such models are within immediate reach with state-of-the-art technologies and data from existing image databases.

David Van Valen and colleagues on the DeepCell team note that advances in deep learning have already provided effective solutions to many image analysis problems in the life sciences, so much so that even when specific new problems emerge, an established template exists for solving them. As they look to the future, they envision artificial intelligence (AI)-empowered solutions to problems in multimodal imaging across scales, universal models for image analysis, large language models as a means to enhance data exploration and applications of AI in experimental design. They also propose a national laboratory for AI to ensure broad access to the AI revolution.

Qionghai Dai and colleagues share areas within computer vision that they envision will be important for the immediate and more distant future. They discuss different types of models, including self-supervised and unsupervised models and vision transformer architectures, as being important for the future of bioimage analysis. They note that physics-informed models may be important for vision tasks that are aimed at improving image quality. They also see large language models as potentially being important for the future of bioimage analysis, although they caution that these can give fictitious answers. They end by highlighting several bottlenecks that must be addressed to improve the applicability of these approaches and move the field forward.

Anne Carpenter and colleagues switch gears somewhat to imagine the future of smart microscopy. In this future, as in the scenario described above, one can have plain-language conversations with microscopes, which offer language-guided image acquisition and both automated and language-guided image analyses depending on the task. The authors share their vision for how this might look and the steps that it will take to get there.

Switching gears again, Leonel Malacrida discusses what it will take to bring methods such as spectral and fluorescence lifetime imaging — both of which are beloved by the biophysics community — into the twenty-first century and into the hands of more biologists. He asserts that phasor analysis and associated user-friendly tools for analyzing these complex data will be important for democratizing these techniques.
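Phasor analysis itself is conceptually simple: each pixel’s decay (or spectrum) is reduced to the real and imaginary parts of a Fourier harmonic, so that complex lifetime or spectral data can be explored as points on a two-dimensional plot. As a rough, hypothetical illustration only (not Malacrida’s implementation, and with made-up acquisition parameters), a minimal NumPy sketch for fluorescence lifetime data might look like this:

```python
import numpy as np

def flim_phasor(decay, dt, harmonic=1):
    """Phasor coordinates (G, S) of a fluorescence decay recorded over one laser period.

    decay    : 1-D array of photon counts per time bin
    dt       : width of each time bin (same units as the laser period)
    harmonic : harmonic of the laser repetition frequency to use (1 = fundamental)
    """
    t = (np.arange(decay.size) + 0.5) * dt         # bin centres
    period = decay.size * dt                       # bins are assumed to span one period
    omega = 2 * np.pi * harmonic / period          # angular frequency
    total = decay.sum()
    g = np.sum(decay * np.cos(omega * t)) / total  # cosine (real) component
    s = np.sum(decay * np.sin(omega * t)) / total  # sine (imaginary) component
    return g, s

# Synthetic check: a mono-exponential decay with lifetime tau should land close to the
# 'universal semicircle', G = 1/(1 + (omega*tau)**2), S = omega*tau/(1 + (omega*tau)**2).
tau, period, nbins = 2.5, 12.5, 256                # ns, ns, number of time bins (illustrative)
t = (np.arange(nbins) + 0.5) * (period / nbins)
print(flim_phasor(np.exp(-t / tau), period / nbins))
```

Real analysis pipelines additionally calibrate against a reference of known lifetime and map every pixel of an image onto the phasor plot, but the core transform is essentially this compact, which is part of what makes it attractive for democratizing these techniques.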

Separate pieces from Susanne Rafelski and colleagues and from Talley Lambert and Jennifer Waters focus less on what can technically be achieved in the future and more on ensuring that these tools are applied correctly. Rafelski and colleagues focus on quantitative image analysis and the necessity of validating any image analysis tool specifically for the application at hand. They also raise action items to help ensure that this critical step in applying such tools (especially advanced AI tools) is not glossed over by biologists or developers. Lambert and Waters are excited about the ongoing development of advanced image analysis tools yet cautious about their appropriate implementation. They stress the importance of matching the imaging modality and analysis tools to the biological question at hand and of putting appropriate validation metrics in place to assess the performance of any image analysis task. They further stress that developers and publishers should take steps to promote rigor and reproducibility, caution against releasing turnkey applications before they are ready, and urge data and software sharing.

Florian Jug and colleagues stress the importance of the bioimaging community to the future of bioimage analysis. They share their opinion on two great, intertwined challenges that face the community. The first is that developers require a large amount of FAIR data (meeting findability, accessibility, interoperability and reusability principles) to develop improved tools. The second is that it remains challenging for users to find, implement and validate appropriate models for their experiments. They discuss possible solutions to these challenges, emphasizing the synergies that arise when life scientists and developers work together.

Along similar lines, Kevin Eliceiri and Beth Cimini seek to connect biologists with appropriate image analysis tools. They describe an approach that is currently under development in which users can play a game of Twenty Questions with specific queries whose answers steer users toward proper tools and algorithms. Such an approach could help to create a common language between biologists and developers, and help to prevent analysts from starting from scratch when a useful tool already exists.
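To make the idea concrete, such a tool finder could, at its simplest, be a decision tree of questions whose answers progressively narrow the candidate tools. The sketch below is purely illustrative; the questions, answer options and tool suggestions are our own placeholders, not the actual queries or recommendations of the system Eliceiri and Cimini describe.

```python
# A purely illustrative 'Twenty Questions'-style tool finder: a small decision tree in
# which each answer narrows the path toward a candidate tool. Questions, answers and
# tool names are placeholders, not those of the system under development.
DECISION_TREE = {
    "question": "What is your primary task?",
    "answers": {
        "segmentation": {
            "question": "Are you segmenting nuclei or whole cells?",
            "answers": {
                "nuclei": "Consider a pretrained nuclear segmentation model such as StarDist.",
                "whole cells": "Consider a generalist cell segmenter such as Cellpose.",
            },
        },
        "tracking": "Consider a tracking tool such as TrackMate.",
        "image restoration": "Consider a content-aware restoration tool such as CARE.",
    },
}

def ask(node):
    """Walk the tree interactively until a recommendation (a string) is reached."""
    while isinstance(node, dict):
        print(node["question"])
        for option in node["answers"]:
            print(f"  - {option}")
        choice = input("> ").strip().lower()
        node = node["answers"].get(choice, node)  # re-ask if the answer is not recognized
    print(node)

if __name__ == "__main__":
    ask(DECISION_TREE)
```

A shared, community-curated version of such a tree, with questions phrased in biological rather than computational terms, is one way the common language described above could take shape.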

Finally, Michael Reiche and colleagues describe the future of bioimage analysis in Africa. They note that, despite Africa’s strength in data science and computational biology, there are relatively few scientists working in bioimage analysis. They describe specific ways to bridge this gap and to promote quantitative bioimaging on the African continent and globally. They further emphasize that for bioimage analysis tools to be truly global, there need to be strong international collaborations and open and accessible tools and data.

From these diverse perspectives, many common threads emerge. Clearly, experts think that large, general-purpose models are likely to be implemented, possibly in the very near future, to offer general solutions for problems such as segmentation and object detection. Experts also seem to agree that multimodal data analysis will offer both challenges and fodder for future development. According to these researchers, the future also holds tools that bridge gaps between biologists and developers, making it easier for users to find the best tools for their specific questions rather than starting from scratch. And once users have found those tools, best practices need to be in place to make sure they are used appropriately. Another clear theme is that open data are absolutely critical, a topic that is discussed in this month’s Technology Feature, which focuses on data sharing across the life sciences and covers the unique pain points and needs of the bioimaging community.

So how do we, as editors, foresee bioimage analysis over the next five to twenty years? We raise a few questions that may serve as food for thought. How far can we push image reconstruction tasks? Will AI be the hammer that breaks the age-old interdependencies among resolution, speed, contrast and sample health? Will deep learning help to tackle fundamental limits associated with deep tissue imaging and enable clearer views into living systems? Is it possible to make a general model for image augmentation that still produces accurate, quantitative results? For more conventional image analysis, will we look back and see segmentation as essentially a solved problem? Will there be a comparable breakthrough for cell tracking in complex systems? For smart microscopy, how quickly will these tools interface with existing tools such as ChatGPT to make our science fiction scenario a reality? How close are we to implementing all of the necessary tools in real time? And for all of the above, how will biologists learn to verify and trust these myriad tools? What will be required in terms of community standards, demonstrations and benchmarking?

It is a privilege to be seeing and publishing these groundbreaking tools as they arise. We take this responsibility seriously and are always updating our standards to ensure that we are sharing robust and reproducible tools. We strongly support FAIR data sharing and open-source software and hardware development, and we cannot wait to see what the bioimaging community has in store over the next twenty-plus years.