
Deeply learned broadband encoding stochastic hyperspectral imaging

Abstract

Many applications requiring both spectral and spatial information at high resolution benefit from spectral imaging. Although various techniques have been developed and commercialized, computational spectral cameras represent a compact, lightweight, and inexpensive solution. However, the tradeoff between spatial and spectral resolution, imposed by the limited data volume and environmental noise, limits the potential of these cameras. In this study, we developed a deeply learned broadband encoding stochastic hyperspectral camera. In particular, by using advanced artificial intelligence in both filter design and spectrum reconstruction, we achieved 7000–11,000 times faster signal processing and an ~10-fold improvement in noise tolerance. These improvements enabled us to precisely and dynamically reconstruct the spectra of the entire field of view, which was previously unreachable with compact computational spectral cameras.

Introduction

Spectral imaging provides a powerful sensing method for science wherever spectral and spatial detection are simultaneously required. Its applications include art conservation1, astronomy2, earth remote sensing3, biomedical engineering4, and atmospheric science5. However, traditional spectral imaging instruments rely on bulky mechanics for spectral dispersion6. Moreover, either spatial or spectral scanning is essential to generate a three-dimensional (3D) spectral-spatial data cube, and these scanning processes dramatically extend the acquisition time. To overcome these limitations, the so-called “snapshot” computational spectral imaging technique was proposed7,8,9,10. The basic idea was to place a coded mask in front of a dispersion prism9; however, this approach failed to reduce the device volume because of its complex optical scheme. A more effective solution10,11 was to apply random spectral filters, which were initially developed for spectrometers12,13,14. In contrast to classic optical filters with one or several passbands, random spectral filters refer to an array of filters featuring irregular and mutually uncorrelated transmission/reflection spectra12. Integrating these random spectral filters into cameras, together with advanced computational algorithms, allows the reconstruction of more spectral bands with fewer detection channels.
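The measurement model behind such filter-array cameras can be sketched in a few lines: each detection channel records the scene spectrum weighted by one filter's transmittance, and reconstruction then inverts this underdetermined 16-to-301 mapping. The sinusoidal "filters" below are hypothetical stand-ins for illustration, not the fabricated thin-film responses:

```python
import numpy as np

rng = np.random.default_rng(0)

n_bands = 301    # spectral channels: 400-700 nm at a 1 nm step
n_filters = 16   # number of random spectral filters

# Hypothetical smooth "random" transmittance curves, standing in for
# the fabricated thin-film responses (illustration only).
freq = rng.uniform(2.0, 8.0, size=(n_filters, 1))
phase = rng.uniform(0.0, 2.0 * np.pi, size=(n_filters, 1))
grid = np.linspace(0.0, 1.0, n_bands)
T = 0.5 + 0.5 * np.sin(2.0 * np.pi * freq * grid + phase)   # (16, 301)

# A test spectrum: Gaussian reflectance peak near 550 nm.
s = np.exp(-0.5 * ((np.arange(n_bands) - 150) / 20.0) ** 2)

# Forward model: each filter channel records the inner product of the
# scene spectrum with one filter's transmittance -> 16 encoded readings.
y = T @ s
```

Recovering the 301-channel spectrum from the 16 values in `y` is the ill-posed inverse problem that both the CS and the DNN algorithms address.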

Nevertheless, the efficiency of the algorithm becomes the key driver of the snapshot system’s performance. The algorithms currently applied in these spectral cameras are generally extensions of the compressive sensing (CS) concept15. These methods require a sufficient number of random spectral filters to guarantee spectral resolution; for example, the hyperspectral cameras developed in previous studies10,11 usually use 25 or more filters to cover the entire visible region. Furthermore, these CS-based methods reach sufficiently convergent results through an iterative process. Consequently, they are time-consuming, which limits the possible output data volume of the spectral camera. The snapshot methods therefore always have to sacrifice spatial resolution to maintain high spectral reconstruction precision, or vice versa. Moreover, the iterative derivation is fragile: it can fail even under low-level noise. Extra efforts to digitally suppress noise further extend the computing time and may induce artifacts if not appropriately employed.

Recently, deep learning methods have been adopted to improve reconstruction accuracy and speed16,17. However, the corresponding hardware still suffers from a large volume because of the coded mask and prism. For compact random filter design, some deep learning approaches have been developed for both target response definition18,19 and inverse design20,21,22,23. These methods work to some extent; however, none of them comprehensively considers both procedures. As such, their results are compromised either by the producibility of the designed spectral responses or by the sensitivity to fabrication error24.

To address the problems of computational speed, noise tolerance, and filter design, we developed a new hyperspectral camera, namely the broadband encoding stochastic (BEST) camera, based on a deep neural network (DNN)25 for both filter design and spectrum reconstruction. This study is organized as follows. First, we introduce the schematic of the BEST camera and address the filter design problem. Subsequently, we analyze the reconstruction speed and noise tolerance advantages of the proposed DNN. Then, we verify our proposal on two hyperspectral imaging architectures: passive mode (measuring the spectral radiance) and active mode (measuring the spectral reflectance). Finally, we discuss the reliability of our systems and summarize our work.

Results

Schematic of BEST camera

A simplified schematic of the proposed device is shown in Fig. 1a. The proposed BEST camera is built using 16 random spectral filters and has both active and passive modes. In the active mode, the filtered illumination spectrum encodes the sample; thus, the device detects the spectral reflectance of the sample \(S(\lambda ,x,y)\). In the passive mode, the spectrum reflected by the sample is encoded by the filters; thus, the device detects the spectral radiance of the reflected light \(L(\lambda )S(\lambda ,x,y)\). These filters were designed using the parameter-constrained spectral encoder and decoder (PCSED) method24 and fabricated by an electron-beam evaporator. As an extension of DNNs to the design of random spectral filters, PCSED focuses on obtaining the optimal spectral responsivity for each random spectral filter in the group while ensuring their producibility. In brief, PCSED learns from large volumes of data to model coating behaviors. Without manual intervention, it provides coating design combinations that maximize the spectral resolution and are not apparent to humans. Using this strategy, we designed the proposed BEST camera with only 16 filters, 40–60% fewer than in previous studies10,11. The respective spectra and design details are available in Supplementary Information S1.

Fig. 1: Principle and performance of BEST camera.
figure1

a Simplified schematic. Depending on where the light spectrum is encoded, the camera can work in either the active (upper) or passive (lower) mode. b Principle of the DNN-based spectral reconstruction algorithm. The initial data captured by the monochrome camera are fed into the DNN, which outputs the reconstructed 3D hyperspectral data cube. c, d Spectral profiles of laser beams with narrow bandwidth. In c, the DNN is trained on the “precise” dataset, whereas d shows the results from the DNN trained on the “general” dataset. e Spectral profile of two peaks at 598.0 nm and 603.2 nm. The peak-to-peak distance is highlighted in black. In c–e, the ground truths and the DNN-reconstructed results are represented by dashed and solid curves, respectively. The graphs are normalized to their peak intensity

Advantages of applying DNN

To extract the exact spectrum from these random spectral filters with rich features, we used a DNN for data processing (“Methods” section and Fig. 1b), which provides two major improvements over the original CS algorithms. First, the spectrum reconstruction speed can be orders of magnitude faster. This substantial acceleration arises because the iterative algorithm solves an optimization problem for each pixel, whereas the DNN performs matrix multiplications, which are especially suitable for parallel computing on a graphics processing unit (GPU). Therefore, for the iterative algorithm, the reconstruction time of a spectral image increases with the pixel number, whereas the reconstruction speed of the DNN is almost pixel-independent. Although the two are still comparable in speed when the image contains only several thousand pixels or fewer, the DNN is advantageous for images with higher resolution. To quantitatively compare the two algorithms, we applied the proposed DNN to reconstruct a hyperspectral image (480 × 640 pixels, 1 nm spectral step size, 400–700 nm spectral range) on an Intel Core i9-10900x CPU, Nvidia GeForce 2080Ti GPU platform. The DNN could generate such an image in 0.48 s. Although further optimization is possible, the current speed is sufficient for real-time data reconstruction, as the proposed recording frame rate is below 2 fps. In contrast, reconstructing the same image using the basis pursuit denoising algorithm (BPDN, a classic iterative CS algorithm)26 requires 3307.3 s on the same computer platform, nearly 7000 times slower than the proposed DNN. Doubling the image size to 1280 × 480 pixels further widens the gap to 11,000 times (0.65 s for the DNN versus 7219.2 s for the gradient projection solver), which is consistent with our prediction (see Supplementary Information S2 for details).
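The pixel-independence of the DNN path can be illustrated with a minimal NumPy sketch: with a trained feed-forward decoder, every pixel's spectrum is recovered by the same matrix pipeline, so a whole frame reduces to one batched matrix product. Here a single random dense layer stands in for the trained multi-layer network; the weights are illustrative only:

```python
import numpy as np
import time

rng = np.random.default_rng(1)
H, W, n_filters, n_bands = 480, 640, 16, 301

# Encoded camera frames: one intensity per filter per pixel.
measurements = rng.random((H * W, n_filters)).astype(np.float32)

# Stand-in for a trained decoder: one dense layer (the real DNN stacks
# several such layers with leaky-ReLU activations between them).
W_dec = rng.standard_normal((n_filters, n_bands)).astype(np.float32)

# All 307,200 pixel spectra are recovered in a single batched matrix
# multiplication: the cost grows with pixel count only through one
# GEMM, not through per-pixel iterative optimization solves.
t0 = time.perf_counter()
cube = (measurements @ W_dec).reshape(H, W, n_bands)
dt = time.perf_counter() - t0
```

An iterative CS solver, by contrast, would run a separate optimization loop for each of the 307,200 pixels, which is the source of the timing gap reported above.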

Second, a well-trained DNN has better denoising ability, which can be readily obtained by adding samples with noise at different levels during training. Moreover, adding noise has a regularization effect and, in turn, further improves the robustness of the model. In CS algorithms, however, denoising is largely experience-dependent: in general, a variable parameter is manually set during the iteration to cancel out the noise-induced bias. This operation works to some extent but fails to provide convincing results once the noise level changes. On average, the DNN enhances noise tolerance by 8.14 times across different noise levels (see Supplementary Information S3 for details).
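Such noise-aware training can be implemented as a simple data augmentation step. The sketch below adds Gaussian noise at a randomly drawn signal-to-noise ratio to each encoded training sample; the SNR range and the function name are hypothetical choices, not the paper's exact recipe:

```python
import numpy as np

rng = np.random.default_rng(2)

def augment_with_noise(y_clean, snr_db_range=(20.0, 50.0)):
    """Add Gaussian noise at a random SNR to each encoded sample.

    Training the decoder on many such noisy copies (a hypothetical
    augmentation scheme) teaches it to tolerate a range of detector
    noise levels instead of a single hand-tuned one.
    """
    snr_db = rng.uniform(*snr_db_range, size=(y_clean.shape[0], 1))
    signal_power = np.mean(y_clean ** 2, axis=1, keepdims=True)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    noise = rng.standard_normal(y_clean.shape) * np.sqrt(noise_power)
    return y_clean + noise

batch = rng.random((64, 16))        # 64 encoded 16-channel samples
noisy = augment_with_noise(batch)
```

Because each sample sees a different noise level, the network cannot rely on one fixed noise model, which is the regularization effect mentioned above.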

BEST camera in passive mode

We conducted a passive detection experiment to discern the target spectrum. Before illuminating the monochrome camera, the incident light from the target was modulated using random spectral filters (“Methods” and Supplementary Information S4). To quantify the spectral resolution of the proposed device, we measured the wavelength of a laser beam selected by an acousto-optic tunable filter (AOTF) from a white-light laser source. The bandwidth of the single-wavelength laser beam after the AOTF was ~3.2 nm (full width at half maximum). Figure 1c–e compares the reconstructed results from the proposed device (solid lines) with the ground truth curves from a commercial spectrometer (Ocean Optics FLAME-S, dashed lines). To guarantee the objectivity of the measurements, two DNNs with identical frameworks were trained separately on two datasets (see Supplementary Information S5 for details). The first dataset contained only spectra from the monochromatic light source, i.e., a laser beam, and the output spectrum of the trained DNN was strictly confined to a narrowband envelope. In this “precise” mode, the average localization precision of the center wavelength in the reconstructed spectrum is 0.55 nm (Fig. 1c). In the “general” mode, we expanded the training dataset to include more broadband spectral data and removed all boundary conditions to support arbitrary spectrum shapes. This strategy shows comparable spectral localization precision, averaging 0.63 nm (Fig. 1d), while giving our camera the generality to satisfy different operational needs. The DNN trained in this manner achieves a spectral resolution of ~5.2 nm (Fig. 1e), sufficient for most practical hyperspectral imaging applications. Accordingly, we used the DNN working in “general” mode in the following experiments.

First, we used the BEST camera to acquire a hyperspectral image of a standard color calibration card (X-Rite ColorChecker Classic Mini, Fig. 2a, b, e, f). Across the different colors, the mean square error (MSE) between the measured (solid) and ground truth (dashed) spectra ranged from 5 × 10−6 to 0.0032, with an average of 0.0008. The spectra of all 16 color patches shown in Fig. 2a were considered (see Supplementary Information S8). Compared with such artificial standard samples, natural objects, such as plants, represent a challenge for hyperspectral cameras: they usually contain rich structural and color details, requiring both high spatial and high spectral resolution to recover. The proposed hyperspectral camera overcomes this challenge by introducing the DNN. Figure 2c, d, g–j show the results. The average reconstruction MSE is 0.0016, in good agreement with the previous experiment, even though the results are now obtained from a much more complex sample.

Fig. 2: Applications of BEST camera in passive mode.
figure2

a Photo of the color patches of the standard color calibration board, taken with an RGB camera. b Reconstructed spectral image (visualized in RGB form) of the color calibration card. Photo (c) and reconstructed spectral image (d) of a plant sample. e–j Spectral profiles at the positions denoted by the white squares in a–d. The ground truths (measured by a commercial spectroradiometer, Konica Minolta CS-2000) and the DNN-reconstructed results are represented by dashed and solid curves, respectively

BEST camera in active mode

In addition to the passive mode described above, the DNN can be applied to active hyperspectral imaging. Compared with passive cameras, active BEST cameras excel at measuring spectral reflectance regardless of the ambient illumination. To show the compatibility of the proposed DNN, we set up an active hyperspectral camera by placing a random spectral filter array in front of white-light light-emitting diodes (LEDs) to modulate the illumination light (“Methods” and Supplementary Information S6). During the experiment, we sequentially turned on each LED and collected the light reflected from the target using a monochrome camera. We selected a watercolor painting (Fig. 3a) as the test target. By feeding all captured images (16 frames in total) into the pre-trained DNN, we acquired its hyperspectral image, a 640 × 480 × 301 3D matrix covering the entire visible range (400–700 nm) at a step size of 1 nm. To improve visualization, we converted the spectral information into color space (Fig. 3b and “Methods”). The retrieved image was highly consistent with the original target. Example spectral profiles from the labeled targets in Fig. 3b are presented in Fig. 3c–f. The MSE between the measured (solid) and ground truth (dashed) spectra averaged 0.0010. Moreover, in good agreement with the simulations, the spectra reconstructed by the CS algorithm (gray lines in Fig. 3c–f) have an average MSE of 0.0061, 6.1 times worse than the DNN results.

Fig. 3: Applications of BEST camera in active mode.
figure3

RGB photo (a) and reconstructed spectral image (b) of the watercolor painting sample. c–f Spectral profiles at the positions denoted by the white squares in a, b. The ground truths (measured by a commercial spectrophotometer, Olympus USPM-RU) and the DNN-reconstructed results are represented by dashed and solid curves, respectively

The major limitation of the active BEST camera is its size: because the light source requires extra space, it is challenging to design and fabricate a compact active device. Nevertheless, benefitting from advances in electronics, in particular surface-mounted devices, we compressed the dimensions of the proposed active BEST camera to 28.5 × 13.6 × 7.15 mm3 by miniaturizing the modulated illumination array and switching the monochrome charge-coupled device (CCD) camera to a cell-phone RGB complementary metal-oxide-semiconductor sensor (see Supplementary Information S7 for details). Such devices could be integrated into wearables or cell phones for portable applications in consumer electronics and health care.

Discussion

The reliability of the DNN output is mainly determined by the spectral response of each random spectral filter; any chromatic dispersion can severely worsen the reconstruction precision. The proposed random spectral filters were deposited by optical thin-film coating. A metasurface could also be considered to reduce the overlay steps during production and pursue higher integration with the spectral camera. However, maintaining the optical properties of metasurfaces, such as their transmittance/reflectance spectra, is challenging, especially when the polarization or the incident angle of the light changes. These sensitivities to polarization and incident angle, if not properly addressed, would narrow the application scope of hyperspectral cameras. Although polarization effects are negligible for the thin-film random spectral filters used in the proposed system, angular dispersion is still a factor to be considered. To compensate for it, we calibrated the spectral transmittance of each random spectral filter as a function of the incident angle of the light and substituted these calibrations into the spectrum retrieval to improve the reconstruction precision (see Supplementary Information S9 for details). This strategy is beneficial for wide field-of-view or off-axis imaging, where much of the light arrives at large incident angles.

In summary, the DNN enables us to capture hyperspectral images with a high spatial resolution (>105 pixels) for complex samples and provides an improved algorithm for spectral detection techniques. It dramatically accelerates reconstruction, enabling real-time spectral reconstruction and noise reduction. Compared with previous cameras using CS algorithms, the BEST camera excels in several aspects (see Supplementary Information S10 for details). Using our DNN to decode the spectrum from random spectral filters is only one of several mechanisms in the computational spectroscopy family; in general, the algorithm presented in this study can be applied to techniques with similar purposes. This underscores the potential of applying our method to other computational spectroscopy techniques, such as linear X-ray spectroscopy and photoelectron spectroscopy27. This study expands the application of DNNs in optics; however, the functions discussed here, such as filter design, calculation acceleration, and denoising, are just a few of the possibilities. The full potential of the technique is still to be realized.

Finally, the proposed system detects the spectrum at nanometer-scale resolution by computational spectroscopy and represents an important step toward artificial intelligence for optics.

Methods

DNN framework

The architecture of the DNN for spectral reconstruction can be described as “16-(LR-FC-500)5-LR-FC-301-LR.” Each number represents the number of units in the corresponding layer, LR indicates leaky ReLU units, FC denotes a fully connected layer, and the superscript “5” indicates five repetitions of the bracketed layers. The input unit number corresponds to the 16 random spectral filters, and the output unit number 301 corresponds to the reconstructed spectral channels (400–700 nm, 1 nm step).
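One plausible reading of this layer string, sketched as a NumPy forward pass with random (untrained) weights for shape-checking; the real network is trained on the measured filter responses as described in the Results:

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    """Leaky ReLU: identity for x >= 0, small slope for x < 0."""
    return np.where(x >= 0, x, alpha * x)

class SpectralDecoder:
    """Forward pass of the "16-(LR-FC-500)^5-LR-FC-301-LR" MLP.

    A NumPy sketch with random weights; layer sizes follow the paper's
    notation, but the weight initialization is an arbitrary choice.
    """
    def __init__(self, seed=0):
        rng = np.random.default_rng(seed)
        sizes = [16] + [500] * 5 + [301]   # six fully connected layers
        self.layers = [
            (rng.standard_normal((m, n)) * np.sqrt(2.0 / m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])
        ]

    def __call__(self, y):
        h = y
        for W, b in self.layers:
            h = leaky_relu(h @ W + b)      # leaky ReLU around every FC
        return h                           # 301-channel spectrum

net = SpectralDecoder()
spectrum = net(np.ones(16))                # 16 filter readings in
```

The 16-dimensional input is one pixel's encoded intensities; the 301-dimensional output is that pixel's reconstructed spectrum.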

Setup of passive BEST camera

In the passive BEST camera, the sample was imaged at random spectral filters through the relay lens. The encoded image was captured by a monochrome CCD camera (Vieworks VH-310G2) immediately behind the filter array. During the data acquisition process, 16 encoded images of the samples were captured. Each image corresponded to a specific filter of a random spectral filter array. Finally, the spectral data cube was reconstructed using DNN. A photograph of the experimental setup is shown in Supplementary Fig. S3.

Setup of active BEST camera

In contrast to the passive one, the random spectral filters in the active BEST camera were placed in front of the white-light LEDs (Luminus MP-3030-1100-56-95) to tune the illumination light. During the data acquisition process, each LED was sequentially turned on. A monochrome CCD camera (Vieworks VH-310G2) was used to collect the reflected light from the target. As the encoding manner changes, the DNN is retrained using the data collected by the active mode. A photograph of the experimental setup is shown in Supplementary Fig. S5.

PCSED and filter design

During the filter design, PCSED focused on solving the following problem:

$$(\hat P,\hat \theta ) = \mathop {{{\mathrm{argmin}}}}\limits_{P,\theta } ||{\mathbf{s}} - {\cal{D}}_\theta [{\cal{F}}{\cal{M}}\left( P \right)^T \cdot {\mathbf{s}}]||_2^2 + R(P)$$

where s represents the spectrum in discrete form, \({\cal{F}}{\cal{M}}\) denotes the forward modeling network (FMN), and P is the filter structure parameter. \({\cal{D}}_\theta\) represents the DNN for spectral reconstruction with network parameters θ, and R(P) is the regularization term constraining the structural parameters, which, in this case, include the thin-film thicknesses and coating angles. The initial parameters of P comprise 16 angles and 30 thicknesses. The FMN maps the parameters P to the corresponding spectral responses theoretically calculated from 30 layers of SiO2 and TiO2 at 16 coating angles. By solving the PCSED problem with a dataset of 1,000,000 spectra from the Computer Vision Laboratory (CAVE) and the Interdisciplinary Computational Vision Laboratory (ICVL), we obtained a set of structural parameters directly usable by the coating machine, and the DNN for spectral reconstruction was trained at the same time. For faster fabrication, all 16 filters were coated in a single deposition run using an electron-beam evaporator (Optorun OTFC-1300). More details on the filter fabrication are available in Supplementary Information S11.
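The structure of the PCSED objective can be made concrete with a toy NumPy evaluation: a differentiable map from "structure parameters" to filter responses (here an analytic sinusoid standing in for the trained FMN), an encoding step, a decoding step (here a linear decoder standing in for \({\cal{D}}_\theta\)), and a reconstruction loss. Everything except the objective's shape is a hypothetical stand-in:

```python
import numpy as np

rng = np.random.default_rng(3)
n_bands, n_filters = 301, 16
grid = np.linspace(0.0, 1.0, n_bands)

def toy_fmn(P):
    """Toy forward model: maps 'structure parameters' P (here just
    sinusoid frequencies and phases) to filter transmittances.
    Stands in for the trained thin-film forward modeling network."""
    freq, phase = P[:, :1], P[:, 1:2]
    return 0.5 + 0.5 * np.sin(2.0 * np.pi * freq * grid + phase)

def pcsed_loss(P, D, spectra):
    """Mean of ||s - D^T (FMN(P)^T s)||^2 over a batch of spectra,
    mirroring the data term of the PCSED objective (R(P) omitted)."""
    T = toy_fmn(P)                 # (16, 301) filter responses
    y = spectra @ T.T              # encode: (batch, 16) readings
    s_hat = y @ D                  # decode: (batch, 301) spectra
    return np.mean((spectra - s_hat) ** 2)

P = np.column_stack([rng.uniform(2.0, 8.0, n_filters),
                     rng.uniform(0.0, 2.0 * np.pi, n_filters)])
D = rng.standard_normal((n_filters, n_bands)) * 0.01
batch = rng.random((32, n_bands))
loss = pcsed_loss(P, D, batch)
```

In the actual method, both P and the decoder parameters are updated jointly by gradient descent through the FMN, so the designed filters stay within what the coating process can produce.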

Visualization of hyperspectral image

The hyperspectral image was visualized directly by converting the spectra into the CIE XYZ color space, calculating the X, Y, and Z tristimulus values according to:

$$\left\{ \begin{array}{l} X = \int_{\lambda_1}^{\lambda_2} S\left( \lambda \right)\bar x\left( \lambda \right)d\lambda \\ Y = \int_{\lambda_1}^{\lambda_2} S\left( \lambda \right)\bar y\left( \lambda \right)d\lambda \\ Z = \int_{\lambda_1}^{\lambda_2} S\left( \lambda \right)\bar z\left( \lambda \right)d\lambda \end{array} \right.$$

where \(\lambda _1\) and \(\lambda _2\) delimit the wavelength range, \(S\left( \lambda \right)\) is the spectrum of each pixel, and \(\bar x\), \(\bar y\), and \(\bar z\) are the standard observer color-matching functions. The calculated (X, Y, Z) values were further converted into the sRGB color space for display using the MATLAB function “xyz2rgb.”
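With a 1 nm sampling step, the integrals reduce to Riemann sums. The sketch below shows this numerically; the single-Gaussian color-matching curves are crude stand-ins for the tabulated CIE 1931 2° observer data, which real use should load instead:

```python
import numpy as np

wl = np.arange(400, 701, dtype=float)  # nm, 1 nm step, 301 bands

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Rough Gaussian approximations of the CIE 1931 color-matching
# functions (illustrative constants, not the official tables).
xbar = 1.056 * gauss(wl, 599.8, 38.0) + 0.362 * gauss(wl, 442.0, 16.0)
ybar = 1.014 * gauss(wl, 556.3, 46.0)
zbar = 1.839 * gauss(wl, 446.8, 19.0)

def spectrum_to_xyz(S):
    """Riemann-sum version of the X, Y, Z integrals (d(lambda) = 1 nm)."""
    return np.array([np.sum(S * xbar), np.sum(S * ybar), np.sum(S * zbar)])

# Example: a flat spectrum yields a roughly neutral XYZ triple.
xyz = spectrum_to_xyz(np.ones_like(wl))
```

Applying `spectrum_to_xyz` to every pixel of the 640 × 480 × 301 cube, followed by an XYZ-to-sRGB transform, reproduces the visualization pipeline described above.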

References

1. Gabrieli, F. et al. Near-UV to mid-IR reflectance imaging spectroscopy of paintings on the macroscale. Sci. Adv. 5, eaaw7794 (2019).
2. Bahauddin, S. M., Bradshaw, S. J. & Winebarger, A. R. The origin of reconnection-mediated transient brightenings in the solar transition region. Nat. Astron. 5, 237–245 (2021).
3. Williams, L. J. et al. Remote spectral detection of biodiversity effects on forest biomass. Nat. Ecol. Evol. 5, 46–54 (2021).
4. Zhang, D. L. et al. Depth-resolved mid-infrared photothermal imaging of living cells and organisms with submicrometer spatial resolution. Sci. Adv. 2, e1600521 (2016).
5. Greaves, J. S. et al. Phosphine gas in the cloud decks of Venus. Nat. Astron. https://doi.org/10.1038/s41550-020-1174-4 (2020).
6. Pavia, D. L. et al. Introduction to Spectroscopy (Cengage Learning, 2014).
7. Hagen, N. A. & Kudenov, M. W. Review of snapshot spectral imaging technologies. Opt. Eng. 52, 090901 (2013).
8. Cao, X. et al. Computational snapshot multispectral cameras: toward dynamic capture of the spectral world. IEEE Signal Process. Mag. 33, 95–108 (2016).
9. Gehm, M. E. et al. Single-shot compressive spectral imaging with a dual-disperser architecture. Opt. Express 15, 14013–14027 (2007).
10. Cai, X. S. et al. One-shot ultraspectral imaging with reconfigurable metasurfaces. Preprint at https://arxiv.org/abs/2005.02689 (2020).
11. Wang, Z. et al. Single-shot on-chip spectral sensors based on photonic crystal slabs. Nat. Commun. 10, 1020 (2019).
12. Oliver, J., Lee, W. B. & Lee, H. N. Filters with random transmittance for improving resolution in filter-array-based spectrometers. Opt. Express 21, 3969–3989 (2013).
13. Bao, J. & Bawendi, M. G. A colloidal quantum dot spectrometer. Nature 523, 67–70 (2015).
14. Yang, Z. Y. et al. Single-nanowire spectrometers. Science 365, 1017–1020 (2019).
15. Baraniuk, R. G. Compressive sensing. IEEE Signal Process. Mag. 24, 118–121 (2007).
16. Choi, I. et al. High-quality hyperspectral reconstruction using a spectral prior. ACM Trans. Graph. 36, 218 (2017).
17. Wang, L. et al. Hyperspectral image reconstruction using a deep spatial-spectral prior. In Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition (IEEE, Long Beach, CA, USA, 2019).
18. Gewali, U. B., Monteiro, S. T. & Saber, E. Spectral super-resolution with optimized bands. Remote Sens. 11, 1648 (2019).
19. Nie, S. J. et al. Deeply learned filter response functions for hyperspectral reconstruction. In Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition (IEEE, Salt Lake City, UT, USA, 2018).
20. So, S. et al. Deep learning enabled inverse design in nanophotonics. Nanophotonics 9, 1041–1057 (2020).
21. Voznyy, O. et al. Machine learning accelerates discovery of optimal colloidal quantum dot synthesis. ACS Nano 13, 11122–11128 (2019).
22. Gao, L. et al. A bidirectional deep neural network for accurate silicon color design. Adv. Mater. 31, 1905467 (2019).
23. Liu, D. J. et al. Training deep neural networks for the inverse design of nanophotonic structures. ACS Photonics 5, 1365–1369 (2018).
24. Song, H. Y. et al. Deep-learned broadband encoding stochastic filters for computational spectroscopic instruments. Adv. Theory Simul. 4, 2000299 (2021).
25. Goodfellow, I., Bengio, Y. & Courville, A. Deep Learning (MIT Press, 2016).
26. Figueiredo, M. A. T., Nowak, R. D. & Wright, S. J. Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems. IEEE J. Sel. Top. Signal Process. 1, 586–597 (2007).
27. Alford, T. L., Feldman, L. C. & Mayer, J. W. Fundamentals of Nanoscale Film Analysis (Springer, 2007).


Acknowledgements

We thank Sunny Optical Technology (Group) Company for the technical support on the device integration. We thank Professor M. R. Luo for the help on experimental samples and instruments. We thank the three anonymous reviewers for their insightful comments on the initial version of our manuscript. This work was financially supported by the Major Research Plan of the National Natural Science Foundation of China (92050115), Zhejiang Provincial Natural Science Foundation of China (LZ21F050003), and ZJU-Sunny Innovation Center (2019-01).

Author information


Contributions

X. Hao and X.L. conceived the idea. H.S. and W.Z. designed and constructed the setup. H.S. and W.Z. developed the software. W.S., H.S., and X.L. optimized the protocols and prepared the filters. W.Z., L.H., X.Z., and J.Z. conducted the experiments. W.Z. and H.S. analyzed and visualized the data. X.H., H.S., W.Z., and X. He drafted the manuscript.

Corresponding authors

Correspondence to Xiang Hao or Xu Liu.

Ethics declarations

Conflict of interest

The authors declare no competing interests.


Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.



Cite this article

Zhang, W., Song, H., He, X. et al. Deeply learned broadband encoding stochastic hyperspectral imaging. Light Sci Appl 10, 108 (2021). https://doi.org/10.1038/s41377-021-00545-2
