In early February, more than 20,000 attendees packed San Francisco’s Moscone Center for arguably one of the biggest photonics conferences in the world, SPIE Photonics West (PW2020). The conference, normally composed of three symposia (BiOS, LASE and OPTO), was this year co-located with the inaugural Augmented Reality/Virtual Reality/Mixed Reality (AR/VR/MR) conference. With 5,300 presentations and nearly 60 technical courses, PW2020 attracted 1,300 exhibitors and hosted multiple industry-related special events.

Attended by more than 20,000 people, Photonics West 2020 took place at its usual venue, the Moscone Center, in the heart of San Francisco, California, USA.

Many would probably agree that this was yet another successful year for the BiOS symposium. The boom in the use of artificial intelligence (AI) technologies was a clear highlight at BiOS, with many presentations discussing the beneficial roles that AI can play in various forms of biomedical imaging.

“Imaging, in general, relies on optimization to arrive at solutions that simultaneously satisfy physical constraints, which are the laws of light propagation and scattering from matter, and prior knowledge, which are the types of objects and object shapes that are likely to be encountered in any particular situation,” said George Barbastathis from the Massachusetts Institute of Technology, USA. “Artificial intelligence and its subset, machine learning, are a class of optimization algorithms that happen to be particularly well suited to this problem. It is very powerful and has enabled many breakthroughs, for example, non-direct line-of-sight imaging, and imaging transparent objects under very strong scattering conditions or with a single photon per pixel.”
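
As an illustration of this optimization view, the toy PyTorch sketch below reconstructs an object from a blurred measurement by minimizing a data-fidelity term (the physical constraint) plus a prior term. The blur forward model and the total-variation prior here are hypothetical stand-ins chosen for brevity; in the learning-based approaches Barbastathis describes, a trained network effectively plays the role of the prior.

```python
# Imaging as optimization: satisfy the physics (forward model) and a prior.
import torch
import torch.nn.functional as F

def forward_model(x, psf):
    """Physical constraint: the measurement is the object blurred by a PSF."""
    return F.conv2d(x, psf, padding=psf.shape[-1] // 2)

def tv_prior(x):
    """Hand-crafted prior: penalize large gradients (piecewise-smooth objects)."""
    return (x[..., 1:, :] - x[..., :-1, :]).abs().mean() + \
           (x[..., :, 1:] - x[..., :, :-1]).abs().mean()

# Hypothetical data: a 64x64 measurement and a uniform 5x5 blur kernel.
psf = torch.ones(1, 1, 5, 5) / 25.0
y = torch.rand(1, 1, 64, 64)

# Reconstruct by gradient descent on data fidelity + weighted prior.
x = torch.zeros(1, 1, 64, 64, requires_grad=True)
opt = torch.optim.Adam([x], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    loss = F.mse_loss(forward_model(x, psf), y) + 0.01 * tv_prior(x)
    loss.backward()
    opt.step()
```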

Aydogan Ozcan from the University of California, Los Angeles, USA, could not agree more: “Microscopy provides unique opportunities for using deep learning to solve inverse problems in imaging.” He says that the potential to learn inferences from high-quality, experimentally generated data has many advantages over using simulations or models that often rely on simplifying assumptions.

At BiOS, Ozcan introduced his recent work on a deep-neural-network-based framework, termed Deep-Z, that can digitally refocus a two-dimensional (2D) fluorescence image onto user-defined 3D surfaces. Deep-Z can be especially useful for capturing 3D transient phenomena within live organisms, while also reducing the photon dose on the sample. Using Deep-Z, his group imaged the neuronal activity of Caenorhabditis elegans in three dimensions from a time sequence of fluorescence images acquired at a single focal plane, digitally increasing the depth of field by 20-fold without any additional imaging hardware or a sacrifice in resolution or imaging speed. Since the axial scanning is performed virtually, Deep-Z also reduces sample photobleaching and is likely to reduce phototoxicity.

“Another very interesting feature of Deep-Z is that it can work with spatially non-uniform propagation matrices even though the network is only trained using uniform propagation matrices,” commented Ozcan. “Non-uniform propagation can be used to virtually refocus an input image onto an arbitrary 3D surface within the sample, including for example curved or tilted surfaces.”

According to Ozcan, Deep-Z introduces a powerful methodology in which desired physical parameters are fed as inputs to deep neural networks to achieve new inference functionalities. He believes that the underlying principles of Deep-Z can be broadly applied to digital refocusing in other microscopy modalities, including bright-field microscopy and light-sheet microscopy.
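
The input convention at the heart of this methodology can be sketched in a few lines of PyTorch: the physical parameter, here a user-defined per-pixel map of target defocus distances (the propagation matrix Ozcan describes), is appended to the fluorescence image as an extra input channel. The tiny network below is a hypothetical placeholder, not the published Deep-Z architecture, and the layer sizes are illustrative only.

```python
# Physical parameters as network inputs: image + propagation map in, refocused image out.
import torch
import torch.nn as nn

class RefocusNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, image, dpm):
        # Channel 0: the 2D fluorescence image; channel 1: the user-defined
        # propagation map. A constant map encodes a single new focal plane;
        # a spatially varying map encodes a tilted or curved target surface.
        return self.net(torch.cat([image, dpm], dim=1))

net = RefocusNet()
image = torch.rand(1, 1, 128, 128)        # single acquired focal plane
dpm = torch.full((1, 1, 128, 128), 3.0)   # e.g. refocus +3 um everywhere
refocused = net(image, dpm)               # (1, 1, 128, 128)
```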

In addition to improving the quality of images, Ozcan says that deep learning can also be used as a “virtual stain” to replace labelling techniques that are commonly used to add contrast to tissue samples. He points out that his recent work has demonstrated the efficacy of deep-learning-based staining using images of a single autofluorescence channel of an unstained tissue. “This study was able to validate the technique using a panel of pathologists who determined that the quality of the virtual stain was equivalent to that of a histologically stained slide, and that diagnoses could be made accurately using a virtually stained slide,” added Ozcan.
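
Conceptually, the virtual stain is an image-to-image mapping from a single autofluorescence channel to the colour appearance of a stained slide. The sketch below shows that mapping shape in PyTorch; the simple convolutional stack is a hypothetical placeholder for the published model, which would be trained on co-registered pairs of unstained and histologically stained images of the same tissue.

```python
# Virtual staining as image-to-image translation: 1-channel autofluorescence
# in, 3-channel (RGB) stained-slide appearance out.
import torch
import torch.nn as nn

virtual_stainer = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),  # RGB values in [0, 1]
)

autofluorescence = torch.rand(1, 1, 256, 256)           # unstained tissue image
virtually_stained = virtual_stainer(autofluorescence)   # (1, 3, 256, 256)
```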

At BiOS, Barbastathis presented research on high-speed optical diffraction tomography (ODT) using deep learning. At present, the imaging speed of ODT is limited by the fact that roughly 50 different illumination angles are needed to reconstruct the 3D refractive-index map. With their 3D UNet architecture and a large training dataset spanning different species of cells, his group could reduce the number of illumination angles from 49 to 5 with similar reconstruction performance. This improves the imaging speed of ODT, making it possible to reveal high-speed biological dynamics.
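
One way to picture the computation, sketched below in PyTorch under assumed shapes, is a 3D network that takes a crude refractive-index volume reconstructed from only five angles and returns an estimate of full-angle quality. The toy encoder-decoder stands in for the group’s 3D UNet; the volume sizes and training target here are hypothetical.

```python
# Sparse-angle ODT: refine a few-angle refractive-index volume with a 3D CNN.
import torch
import torch.nn as nn

class SparseAngleODTNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # downsample
        )
        self.decode = nn.Sequential(
            nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 1, 3, padding=1),
        )

    def forward(self, ri_sparse):
        return self.decode(self.encode(ri_sparse))

net = SparseAngleODTNet()
ri_from_5_angles = torch.rand(1, 1, 32, 64, 64)  # crude 5-angle reconstruction
ri_refined = net(ri_from_5_angles)               # same shape, refined estimate
# Training (not shown) would regress against reconstructions from all ~49 angles.
```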

Gabriel Popescu from the University of Illinois at Urbana-Champaign, USA, gave a talk on multiscale quantitative phase imaging (QPI) at the BiOS Hot Topics session. He categorized the AI contributions to imaging into two classes: image analysis and image generation.

“Deep learning is a powerful tool for extracting features from a given dataset, often much better than the human eye can perform. This operation is ideal for problems of classification, such as in cell sorting and cancer diagnosis,” explained Popescu. “With AI, the physics is not apparent anymore, but the parameter space is much, much broader, as one can compute millions of features.” According to Popescu, AI can help generate new data as well. The machine can learn all the fine details of the imaging system and remove that system function from the data. This becomes very useful, especially because physical models typically only work well within certain approximations.

“To me, the most exciting opportunity for AI and QPI is to add molecular specificity to the QPI data,” commented Popescu. “The machine learns how to predict fluorescence maps, but then we take those maps back into the QPI images and measure quantitatively cell parameters with high specificity.”

He strongly believes that phase imaging with computational specificity will elevate the QPI field to new heights. “The lack of specificity has been the main drawback of QPI. Now, with AI, we can actually retrieve the specificity without staining the specimens. The fact that the user can push a button and train for a dye in a couple of hours means that one can multiplex a large number of digital markers very efficiently. Finally, the inference is in real time and integrated in the acquisition software, which makes it really practical for in-depth biology,” Popescu explained.

Outside the lab, there is now considerable activity in commercializing AI-enhanced imaging. During the BiOS Expo Industry Stage, Maryellen Giger from the University of Chicago, USA, presented QuantX, the first Food and Drug Administration (FDA)-cleared, machine-learning-driven computer-assisted diagnostic system to aid in cancer diagnosis.

“It is very natural to pair AI with imaging and it has been paired for multiple decades. However, AI is becoming more integrated in clinical medical imaging given advances in algorithm complexities, large datasets and computational power,” said Giger, who is a co-founder, equity holder and scientific advisor of Quantitative Insights (now Qlarity Imaging), the company that produced QuantX. She remarked that AI methods have been implemented in clinical medical imaging interpretation since the late 1990s, starting with computer-aided detection for screening mammography using algorithms based on convolutional neural networks. She added that AI now has a promising role in the radiology workflow, improving both the effectiveness and efficiency of medical imaging exams, and that it may serve as a second reader, a concurrent reader, a primary (triage) reader, and perhaps even an autonomous reader.

Although these are very exciting times for using AI in imaging and microscopy, limitations exist, just as in any other emerging field. Popescu says that imaging single molecules in a live cell is not likely to work, owing to the inhomogeneous background that is inherent to phase images. He also added that a likely bottleneck in many cases will be the generation of ground-truth data, which are generally produced by hand.

Ozcan also confirms that limitations exist. “Most notably, for deep-learning-based microscopy approaches, it can be challenging to create a matching dataset to train these networks if the sample is dynamic and rapidly changing, because a co-registered training dataset cannot be created easily. Even for static samples, dataset creation can be time consuming or expensive when a sample-specific dataset is required. The performance of these networks also depends on the quality and breadth of the data used during the training phase.”

Barbastathis held the same opinion, that the most significant limitation is the generation of examples for training. “It can be very costly, especially when dealing with complex imaging systems and sensitive samples with limited availability. However, there have also been some interesting cases where training was done with rigorous electromagnetic simulation, and the algorithms were then shown to have been trained well enough to handle physically generated data. There are also efforts to train such algorithms in unsupervised mode,” he continued.

When asked about the future outlook for radiology, Giger commented that AI is not likely to replace the radiologist but instead act as a complementary tool for aiding interpretation. It may also enhance workflow by handling routine, simpler tasks within radiology. Clinical predictive tasks may ultimately include diagnosis, prognosis, risk assessment and response to therapy. However, database sizes vary, as do the statistical tests used in evaluation. “More attention is needed to the concern of ‘garbage in, garbage out’,” commented Giger. “For example, when assessing a tumour in an image, how does one handle the clutter of normal structures surrounding the tumour? Do they add to or distract from the decision making?”

“One of the most interesting directions for the use of big data in computational imaging is to co-design the sensing system along with the inference algorithm. Deep-learning-inspired imaging instrumentation can be used to engineer new types of optics for specific tasks such as imaging or classification,” said Ozcan. “We believe that in the future, new optical architectures and deep networks might fully unlock the potential of task-specific microscopy, and help us design low-cost and high-throughput imaging and sensing modalities. Along these lines, one of the more interesting directions is to create a ‘thinking’ imaging system, which can decide what measurement should be taken next based on previous data rather than a priori design.”
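
The joint-training pattern behind such co-design can be sketched in PyTorch: a parameter of the sensing system (here, naively, a learnable point-spread function) sits in the same computational graph as the inference network, so a single loss optimizes both. A real end-to-end design would model a physical phase mask and wave propagation; everything below is a simplified, hypothetical illustration.

```python
# Co-designing optics and inference: one loss, gradients into both.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoDesignedImager(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        # "Optical" layer: a trainable 11x11 PSF applied to the scene.
        self.psf = nn.Parameter(torch.randn(1, 1, 11, 11) * 0.1)
        # Digital layer: a small classifier operating on the sensor image.
        self.classifier = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, n_classes),
        )

    def forward(self, scene):
        sensor_image = F.conv2d(scene, self.psf, padding=5)  # optics
        return self.classifier(sensor_image)                 # inference

model = CoDesignedImager()
scene = torch.rand(8, 1, 64, 64)
labels = torch.randint(0, 10, (8,))
loss = F.cross_entropy(model(scene), labels)
loss.backward()  # gradients flow into both the PSF and the classifier
```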

Most presentations from this year’s Photonics West, including the works by Barbastathis, Ozcan and Popescu mentioned above, have been recorded and are available in the SPIE Digital Library. Introduced by SPIE in 2017, this initiative allows attendees and non-attendees alike to engage with the technical talks in a post-conference digital experience. With the disruption and uncertainty caused by COVID-19, innovations such as this are timely and to be applauded.

Photonics West 2021 will take place from 23–28 January at the Moscone Center in San Francisco.