Introduction

With the ability to acquire distinctive information in both the spatial and spectral domains, spectral imaging technology has vast applications in remote sensing1, medical diagnosis2, biomedical engineering3, archeology and art conservation4, and food inspection5. Traditional methods of spectral imaging include whiskbroom scanning, pushbroom scanning, and wavelength scanning. Whiskbroom systems scan the scene pixel by pixel; a widely acknowledged example is the Airborne Visible/Infrared Imaging Spectrometer6,7, which implemented the whiskbroom approach on aircraft for Earth remote sensing. Pushbroom systems use an entrance slit and build the image line by line; the Hyperspectral Digital Imagery Collection Experiment instrument8,9 implemented pushbroom imaging optics with a prism spectrometer, offering a good capability for remote sensing. Wavelength scanning methods capture spectral image cubes by swapping narrow bandpass filters in front of the camera lens or by using electronically tunable filters10,11. These typical scanning spectral imaging approaches are illustrated in Fig. 1.

Fig. 1: Typical scanning spectral imaging approaches.

a Whiskbroom. b Pushbroom. PGP prism-grating-prism. c Wavelength scan

However, traditional scanning methods suffer from slow spectral image acquisition because of the time-consuming scanning mechanism. As a consequence, they are not suitable for large scenes or dynamic recording. To solve this problem, researchers started to explore snapshot spectral imaging methods12. Early endeavors include integral field spectrometry, multispectral beam splitting, and the image-replicating imaging spectrometer, as mentioned in ref. 12. Although these methods achieve multispectral imaging by splitting light, they cannot provide a large number of spectral channels and require bulky optical systems.

With the development of compressed-sensing (CS) theory13,14, compressive spectral imaging has received growing attention from researchers because of its elegant combination of optics, mathematics, and optimization theory. It can perform spectral imaging from fewer measurements, which is essential in resource-constrained environments. Compressive spectral imaging techniques often use a coded aperture to block or filter the input light field, which constitutes the encoding process in the compressive sensing pipeline. As the name indicates, this process compresses information; it is flexible in design and provides the prior knowledge needed for later reconstruction. In contrast to the hardware-based encoding, the decoding process requires computation with designed algorithms. The traditional reconstruction approach is iterative, exploiting the known measurement model of the encoding process and other prior knowledge. As a consequence, the decoding procedure is computationally expensive and can take minutes or even hours for spectral reconstruction. Furthermore, quality degradation with fewer measurements also limits its application in resource-constrained environments.

While using a coded aperture for amplitude encoding has demonstrated spectral imaging from fewer measurements, the reduced light throughput and large system volume make it unsuitable for many practical applications. To overcome these drawbacks, phase-coded spectral imaging15,16 was developed to improve light throughput and reduce system volume. Its main idea is to use a carefully designed thin diffractive optical element to manipulate the phase of the input light, which affects the spectra during the diffraction process. Then, to recover the spectra modeled by the complex diffraction process, powerful deep-learning techniques are required.

Researchers in computer graphics are also seeking to optimize spectral reconstruction, because spectra are preferable to RGB triplets when rendering scene illumination or displaying a virtual object on a monitor. Early works17,18,19 obtain a spectrum from an RGB triplet, but this is an ill-posed problem with non-unique solutions and possibly negative spectral values. Later works involved more effective methods such as basis function fitting20 and dictionary learning21. The latter is based on a hyperspectral dataset, yet still suffers from a time-consuming weight-fitting procedure. As demonstrated by a statistical study on hyperspectral images22, spectra within an image patch are correlated. Nevertheless, pixel-wise methods fail to exploit this correlation in the spectral data cube, so effective patch-feature extraction algorithms are needed. The pursuit of accurate and fast RGB-to-spectrum approaches has pushed the development of wavelength-coded methods: researchers extended the RGB filters to multiple self-designed broadband filters for refined wavelength encoding, which in turn demands a reliable decoding algorithm. Completing such complex computing tasks is the mission of deep learning.

To alleviate the high computational costs of the aforementioned methods, deep-learning algorithms have been proposed as an alternative for learning spatial–spectral priors and performing spectral reconstruction. Deep-learning techniques can perform faster and more accurate reconstruction than iterative approaches, and are therefore well suited to spectral recovery tasks. In recent years, many works have employed deep-learning models (such as convolutional neural networks, CNNs) in their spectral imaging frameworks and have shown improved reconstruction speed and quality15,16,23,24,25.

In this review, we look back at the development of spectral imaging with deep-learning tools and look forward to future directions for computational spectral imaging systems with deep-learning technology. In the following sections, we first discuss the deep-learning-empowered compressive spectral imaging methods that perform amplitude encoding using coded apertures in “Amplitude-coded Spectral Imaging”. We then introduce phase-coded methods that use a diffractive optical element (DOE, or diffuser) in “Phase-coded Spectral Imaging”. In “Wavelength-coded Spectral Imaging”, we introduce wavelength-coded methods that use RGB or broadband optical filters for wavelength encoding and adopt deep neural networks for spectral reconstruction. To support future research on learned spectral imaging, we have organized existing spectral datasets and the evaluation metrics (in “Spectral Imaging Datasets”). Finally, we summarize the deep-learning-empowered spectral imaging methods in “Conclusions and Future Directions” and share our thoughts on the future.

Amplitude-coded spectral imaging

Amplitude-coded methods use a coded aperture and dispersive elements for compressive spectral imaging. The classical system is the coded aperture snapshot spectral imager (CASSI). To date, there are four CASSI architectures based on different spatial–spectral modulation styles, as shown in Fig. 2. The first proposed architecture is the dual-disperser CASSI (DD-CASSI)26, which consists of two dispersive elements for spectral shearing with a coded aperture in between. The single-disperser CASSI (SD-CASSI)27 is a later design that uses one dispersive element placed behind the coded aperture. The snapshot colored compressive spectral imager (SCCSI)28 also uses a coded aperture and a dispersive element, but places the coded aperture behind the dispersive element. In contrast to SCCSI, which attaches the colored coded aperture (i.e., a color filter array) to the camera sensor, the spatial–spectral CASSI (SS-CASSI) architecture29 allows the coded aperture to be placed flexibly between the spectral plane and the sensor plane. This increases the complexity of the coded-aperture model, which may help improve system performance. Some deep-learning-based compressive spectral imaging methods have obtained better results with SS-CASSI30,31.

Fig. 2: Illustration of four CASSI architectures.

In DD-CASSI architecture, the spectral scene experiences a shearing-coding-unshearing procedure. In SD-CASSI, the diffraction grating before the coded aperture is removed, therefore it becomes a coding-shearing process. In SCCSI, the coded aperture is placed behind the dispersive element and the spectral data will experience a shearing-coding procedure. In SS-CASSI, coded aperture position becomes flexible between spectral plane and camera sensor, where the ratio (d1 + d2)/d2 determines the extent of spectral encoding

Coded-aperture model

Since most works are based on the SD-CASSI system, we give a detailed derivation of its image formation process. The image formation procedure differs for the other CASSI architectures in Fig. 2, but the key processes (vectorization, discretization, etc.) are the same. We refer readers to refs. 26,28,29 for detailed models of DD-CASSI, SCCSI, and SS-CASSI, respectively.

When SD-CASSI was first proposed, the coded aperture had a block–unblock pattern, which was later extended to a colored pattern in ref. 32. We use a colored coded aperture in the derivation for generality. Consider a target scene with spectral density f(x, y, λ) and track its route through an SD-CASSI system: it first encounters a coded aperture with transmittance T(x, y, λ), is then sheared by a dispersive element (assumed to act along the x-axis), and finally impinges on the detector array. Figure 3 illustrates the whole process.

Fig. 3: Spectral imaging process within SD-CASSI architecture.

The spectral data cube first passes through a coded aperture for spatial encoding, then its spectral arrangement is shifted by a dispersive element. Finally, a detector captures the spatially and spectrally encoded image

The spectral density before the detector is formulated as

$$\begin{array}{lll}g(x,y,\lambda )&=&\iint \delta \left(x^{\prime} \,-\,[x\,+\,\alpha (\lambda \,-\,{\lambda }_{c})]\right)\delta (y^{\prime} \,-\,y)\,\cdot\, f(x^{\prime} ,y^{\prime} ,\lambda )\,T(x^{\prime} ,y^{\prime} ,\lambda )\,\,{{\mbox{d}}}x^{\prime} {{\mbox{d}}}\,y^{\prime} \\ &=&f(x\,+\,\alpha (\lambda \,-\,{\lambda }_{c}),y,\lambda )\,T(x\,+\,\alpha (\lambda \,-\,{\lambda }_{c}),y,\lambda )\end{array}$$
(1)

where the delta function represents the spectral dispersion introduced by the dispersive element, such as a prism or grating. α is a calibration factor, and λc is the center wavelength of the dispersion. Since we can only measure the intensity on the detector, the measurement is the integral along the wavelength axis:

$$\begin{array}{lll}g(x,y)&=&\int _{{{\Lambda }}}g(x,y,\lambda )\,\,{{\mbox{d}}}\,\lambda \\ &=&\int _{{{\Lambda }}}f(x\,+\,\alpha (\lambda \,-\,{\lambda }_{c}),y,\lambda )\,T(x\,+\,\alpha (\lambda -{\lambda }_{c}),y,\lambda )\,\,{{\mbox{d}}}\,\lambda \end{array}$$
(2)

where Λ is the spectrum range.

Next, we discretize Eq. (2). Denote by Δ the pixel size (in the x and y dimensions) of the detector, and assume the coded aperture has a square pixel size Δcode = qΔ, q ≥ 1. The code pattern is then represented as a spatial array of its pixels:

$$T(x,y,\lambda )\,=\,\mathop{\sum}\limits_{m,n}T(m,n,\lambda ){{{\rm{rect}}}}\left(\frac{x}{q{{\Delta }}}\,-\,m,\frac{y}{q{{\Delta }}}\,-\,n\right)$$
(3)

Finally, signals within the region of a pixel will be accumulated in the sampling process:

$$\begin{array}{ll}g(m,n)&=\iint g(x,y){{{\rm{rect}}}}\left(\frac{x}{{{\Delta }}}\,-\,m,\frac{y}{{{\Delta }}}\,-\,n\right)\,{{{\rm{d}}}}x{{{\rm{d}}}}y\\ &=\iint {{{\rm{d}}}}x{{{\rm{d}}}}y\,{{{\rm{rect}}}}\left(\frac{x}{{{\Delta }}}\,-\,m,\frac{y}{{{\Delta }}}\,-\,n\right)\int _{{{\Lambda }}}{{{\rm{d}}}}\lambda \,f\left(x\,+\,\alpha (\lambda \,-\,{\lambda }_{C}),y,\lambda \right)\\ &\qquad\times \left[\mathop{\sum}\limits_{m^{\prime} ,n^{\prime} }T(m^{\prime} ,n^{\prime} ,\lambda ){{{\rm{rect}}}}\left(\frac{x\,+\,\alpha (\lambda \,-\,{\lambda }_{C})}{q{{\Delta }}}\,-\,m^{\prime} ,\frac{y}{q{{\Delta }}}\,-\,n^{\prime} \right)\right]\end{array}$$
(4)

To further simplify Eq. (4), we discretize f and T using their central pixel intensities. Take the spectral resolution Δλ as the spectral interval. We use the intensity f(m, n, l) (\(m,n,l\in {\mathbb{N}}\)) to represent a pixel of the spectral density f(x, y, λ), where x\(\in\) [mΔ − Δ/2, mΔ + Δ/2], y\(\in\) [nΔ − Δ/2, nΔ + Δ/2], λ\(\in\) [λC + lΔλ − Δλ/2, λC + lΔλ + Δλ/2]. Adjust the calibration factor α so that the dispersion distance satisfies \(\alpha {{{\Delta }}}_{\lambda }\,=\,k{{\Delta }},k\,\in\, {\mathbb{N}}\). Then Eq. (4) becomes

$$g(m,n)\,=\,\mathop{\sum}\limits_{l}f(m\,+\,lk,n,l)\,T\left(\lfloor \frac{m\,+\,lk}{q}\,+\,\frac{1}{2}\rfloor ,\lfloor \frac{n}{q}\,+\,\frac{1}{2}\rfloor \right)$$
(5)

To adopt reconstruction algorithms, we need to rewrite Eq. (5) in a matrix form. This procedure is illustrated in Fig. 4.

Fig. 4: Vectorization and coded-aperture-related sensing matrix generation procedure.

a Illustration of the vectorization process. For a matrix A, vectorization means stacking the columns of A on top of one another; for a spectral cube of the input scene f(m, n, l), vectorization means stacking the vectorized 2D slices on top of one another. b Illustration of generating the sensing matrix from a colored coded aperture in the SD-CASSI architecture. It consists of a set of diagonal patterns that repeat in the horizontal direction, each with a downward shift of M entries that accounts for dispersion. Each diagonal pattern is generated from the vectorized coded aperture pattern of one band. The block–unblock coded aperture case is similar, with the color bands turned into black and white

First, we vectorize the measurement and spectral cube as Fig. 4a:

$$\begin{array}{lll}{{{\bf{y}}}}&=&{{{\rm{vect}}}}\left[g(m,n)\right],\\ {{{\bf{x}}}}&=&{{{\rm{vect}}}}\left[f(m,n,l)\right]\end{array}$$
(6)

where the measurement term \(g\in {{\mathbb{R}}}^{M\,\times\, N}\) and spectral cube \(f\in {{\mathbb{R}}}^{M\,\times\, N\,\times\, L}\), with spatial dimension M × N and spectral dimension L. After vectorization, we have the vectorized terms \({{{\bf{y}}}}\,\in\, {{\mathbb{R}}}^{MN},{{{\bf{x}}}}\,\in\, {{\mathbb{R}}}^{MNL}\).

Next, the coded aperture and the dispersion shift are modeled by a sensing matrix \({{\Phi }}\,\in\, {{\mathbb{R}}}^{MV\,\times\, MNL},\) where V = N + k(L − 1) accounts for the dispersion shift (the shift distance is \(\alpha {{{\Delta }}}_{\lambda }\,=\,k{{\Delta }},k\,\in\, {\mathbb{N}}\)). A sensing matrix (for k = 1) produced from a colored coded aperture is shown in Fig. 4b.
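To make this construction concrete, the following minimal NumPy sketch (illustrative sizes, q = 1, k = 1, random stand-in data) builds Φ from a colored coded aperture as in Fig. 4b and checks it against a direct loop over Eq. (5); the dispersion is taken along the column index, consistent with V = N + k(L − 1).

```python
import numpy as np

# Minimal SD-CASSI sketch: illustrative sizes, q = 1, k = 1, random stand-in data
M, N, L, k = 32, 32, 6, 1
V = N + k * (L - 1)                          # detector extent along the dispersion axis
f = np.random.rand(M, N, L)                  # spectral scene cube f(m, n, l)
T = (np.random.rand(M, N, L) > 0.5) * 1.0    # colored coded aperture T(m, n, l)

# Direct forward model of Eq. (5): encode each band, shift it by l*k pixels, accumulate
g = np.zeros((M, V))
for l in range(L):
    g[:, l * k:l * k + N] += f[:, :, l] * T[:, :, l]

# Matrix form of Fig. 4b: one diagonal block per band from the vectorized code,
# shifted down by l*k*M entries in the column-stacked vectorization
x = f.reshape(M * N * L, order='F')          # vect[f], bands stacked on top of one another
Phi = np.zeros((M * V, M * N * L))
for l in range(L):
    rows = np.arange(M * N) + l * k * M
    cols = np.arange(M * N) + l * M * N
    Phi[rows, cols] = T[:, :, l].reshape(M * N, order='F')

assert np.allclose(Phi @ x, g.reshape(M * V, order='F'))   # both forms agree
```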

Finally, the reconstruction problem is formulated as

$$\mathop{\min }\limits_{{{{\bf{x}}}}}\parallel {{{\bf{y}}}}\,-\,{{\Phi }}{{{\bf{x}}}}\parallel \,+\,\eta R({{{\bf{x}}}})$$
(7)

where Φ is the sensing matrix and y is the measurement. The term R is a regularizer determined by prior knowledge of the input scene x (e.g., sparsity), and η is a weight for the prior term.
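As a deliberately simplified illustration of Eq. (7), the sketch below solves it with a proximal-gradient (ISTA-style) iteration under a sparsity prior (the l1 norm of x); the random Φ and y stand in for a calibrated sensing matrix and a real measurement.

```python
import numpy as np

rng = np.random.default_rng(0)
Phi = rng.standard_normal((256, 1024))            # stand-in sensing matrix
x_true = np.where(rng.random(1024) < 0.05, rng.standard_normal(1024), 0.0)
y = Phi @ x_true                                  # stand-in measurement

eta = 0.05                                        # prior weight eta in Eq. (7)
step = 1.0 / np.linalg.norm(Phi, 2) ** 2          # step size from the Lipschitz constant
x = np.zeros(1024)
for _ in range(200):
    grad = Phi.T @ (Phi @ x - y)                  # gradient of the data-fidelity term
    z = x - step * grad
    x = np.sign(z) * np.maximum(np.abs(z) - step * eta, 0.0)   # prox of eta * ||x||_1
```

Iterative solvers of this kind are exactly what the deep-learning approaches discussed next accelerate or replace.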

Deep compressive reconstruction

Traditional methods for spectral image reconstruction usually utilize iterative optimization algorithms, such as GAP-TV33, ADMM34, etc. These methods suffer from long reconstruction times due to the many iterations. Moreover, the spatial and spectral reconstruction accuracy is limited when only hand-crafted priors are used. For example, the total variation (TV) prior is widely used in reconstruction algorithms, but it sometimes over-smooths the result.

Deep-learning techniques can be applied to every step of amplitude-coded spectral imaging, from the design of the amplitude encoding strategy (coded aperture optimization) to finding a representative regularizer (the term R in Eq. (7)), and the whole reconstruction process can be substituted with a neural network. Adopting deep-learning methods can improve the reconstruction speed by hundreds of times. Moreover, learning priors from large amounts of spectral data with neural networks can improve reconstruction accuracy in both the spatial and spectral domains. We summarize recent deep-learning-based coded-aperture spectral imaging works in Table 1 for comparison.

Table 1 Comparison of different amplitude-coded compressive spectral imaging methods

Based on where deep learning is used, we divide deep-learning-based compressive reconstruction methods into four categories: (i) end-to-end reconstruction, which uses deep neural networks for direct reconstruction; (ii) joint mask learning, which simultaneously learns the coded aperture pattern and the subsequent reconstruction network; (iii) unrolled networks, which unfold the iterative optimization procedure into a deep network with many stage blocks; (iv) untrained networks, which use the expressive range of the neural network as a prior and perform iterative reconstruction. The main ideas of these four categories are illustrated in Fig. 5.

Fig. 5: Main ideas of the four deep compressive reconstruction approaches.

a End-to-end reconstruction. b Joint mask learning. c Unrolled network. d Untrained network

E2E reconstruction

End-to-end (E2E) reconstruction feeds the measurement into a deep neural network that directly outputs the reconstruction result. Among E2E methods, deep external–internal learning35 proposed a novel learning strategy: external learning from a large dataset is first performed to improve the general capability of the network, and then, for a specific application, internal learning from a single spectral image is used for further refinement. In addition, fusion with a panchromatic image was shown to improve spatial resolution. λ-Net36 is an alternative architecture based on a conditional generative adversarial network (cGAN); it also adopted a self-attention mechanism and a hierarchical reconstruction strategy to improve performance.

Dataset, network design, and loss function are three key factors of E2E methods. For future improvement, various techniques from RGB patch-wise spectral reconstruction can be employed (see section “RGB Patch-wise Spectral Reconstruction”); for example, residual blocks, dense structures, and attention modules could be adopted. Regarding loss functions, a back-projection pixel loss is recommended, as it promotes data fidelity: it simulates the measurement using the known coded aperture pattern and the reconstructed spectral image, and compares this simulated measurement with the captured ground truth. Novel losses such as feature loss and style loss can also be explored.
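A hedged PyTorch sketch of such a back-projection pixel loss for an SD-CASSI-style system is given below; the tensor shapes and the helper name are assumptions, not a reference implementation.

```python
import torch

def backprojection_loss(x_hat, y, mask, k=1):
    # x_hat: reconstructed cube (B, L, H, W); mask: known coded aperture (L, H, W);
    # y: captured measurement (B, H, W + k*(L-1))
    B, L, H, W = x_hat.shape
    coded = x_hat * mask                               # re-apply the known code band by band
    y_sim = x_hat.new_zeros(B, H, W + k * (L - 1))
    for l in range(L):
        y_sim[:, :, l * k:l * k + W] += coded[:, l]    # re-disperse and accumulate
    return torch.nn.functional.l1_loss(y_sim, y)       # compare with the real measurement
```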

Joint mask learning

The coded aperture determines the sensing matrix Φ involved in the spectral image acquisition process. Conventional CASSI-based methods often adopt random coded apertures, since a random code preserves the properties needed for reconstruction (e.g., the restricted isometry property, RIP37) with high probability. As demonstrated in ref. 38, there are approaches for optimizing coded apertures using the RIP as the criterion. However, such optimization does not deliver a significant improvement over random coded masks.

In a deep compressive reconstruction architecture, the coded aperture is viewed as an encoder that embeds the spectral signatures. Therefore, it should be optimized together with the decoder, i.e., the reconstruction network. HyperReconNet39 jointly learns the coded aperture and the corresponding CNN for reconstruction. The coded aperture was appended to the network as a layer, and the BinaryConnect method40 was adopted to map floating-point weights to binary coded-aperture entries. However, most works that used deep learning did not carefully optimize the coded aperture, so this direction deserves further research.
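A minimal sketch of such a jointly learned binary mask (BinaryConnect-style binarization with a straight-through gradient) might look as follows; the layer and variable names are illustrative.

```python
import torch

class BinaryMask(torch.nn.Module):
    """Learnable coded-aperture layer: binary in the forward pass, real-valued in the update."""
    def __init__(self, height, width):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.rand(height, width))

    def forward(self, x):                                   # x: spectral cube (B, L, H, W)
        hard = (self.weight > 0.5).float()                  # block/unblock pattern
        mask = hard + self.weight - self.weight.detach()    # straight-through estimator
        return x * mask                                      # amplitude encoding of every band
```

The mask layer is then trained end to end together with the reconstruction network, so the learned pattern adapts to the decoder.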

Unrolled network

An unrolled network unfolds the iterative optimization-based reconstruction procedure into a neural network. Specifically, each block of the unrolled network learns the solution of one iteration of the optimization algorithm.

Wang et al.24 proposed a hyperspectral image prior network adapted from the iterative reconstruction problem. Based on half quadratic splitting (HQS)41, they obtained an iterative optimization formula; by using network layers to learn its solution, they unfolded the K-iteration reconstruction procedure into a K-stage neural network. As a later work, Deep Non-local Unrolling (DNU)42 further simplified the formula derived in ref. 24 and rearranged its sequential structure into a parallel one. Sogabe et al. proposed an ADMM-inspired network for compressive spectral imaging43: they unrolled the adaptive ADMM process into a multi-stage neural network and showed a performance improvement over the HQS-inspired method24.

Unrolled networks boost reconstruction speed by freezing the iteration parameters into neural network layers. Each stage is responsible for solving one iteration equation, which makes the network interpretable.
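The sketch below shows what one such unrolled stage could look like under an HQS-style splitting: a data-fidelity update with a learned step size followed by a small CNN acting as the learned prior; forward_op and adjoint_op are placeholders for the CASSI operator and its adjoint.

```python
import torch

class HQSStage(torch.nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.step = torch.nn.Parameter(torch.tensor(0.1))    # learned step size
        self.prior = torch.nn.Sequential(                    # learned proximal/prior block
            torch.nn.Conv2d(channels, 64, 3, padding=1), torch.nn.ReLU(),
            torch.nn.Conv2d(64, channels, 3, padding=1))

    def forward(self, x, y, forward_op, adjoint_op):
        x = x - self.step * adjoint_op(forward_op(x) - y)    # data-fidelity update
        return x + self.prior(x)                             # residual prior update

# A K-stage unrolled network chains K such stages, each with its own learned weights.
```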

Untrained network

The deep image prior, as proposed in ref. 44, states that the structure of a generative network is itself sufficient to capture image priors for reconstruction. More specifically, the range of a deep neural network can be large enough to include all common spectral images that we aim to recover. Therefore, a carefully designed untrained network is capable of performing spectral image reconstruction. Although the iterative gradient-descent procedure takes time, such an approach is free from pre-training and generalizes well.

The methods labeled as untrained in Table 1 adopt untrained networks for compressive spectral reconstruction. HCS2-Net31 took the random code of the coded aperture and the snapshot measurement as the network input, and used unsupervised learning of the network for spectral reconstruction; it adopted several deep-learning techniques, such as residual blocks and attention modules, to enhance the network capability. In ref. 45, the spectral data cube was treated as a 3D tensor and a Tucker decomposition46 was performed in a learned way: they designed network layers based on the Tucker decomposition and used a low-rank prior on the core tensor, which may better capture the spectral data structure.
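The core loop of an untrained (deep-image-prior-style) reconstruction can be sketched as below; G, forward_op, and y are placeholders for an untrained generator network, the calibrated sensing operator, and the snapshot measurement.

```python
import torch

def untrained_reconstruction(G, forward_op, y, iters=2000, lr=1e-3):
    z = torch.randn(1, 32, y.shape[-2], y.shape[-1])     # fixed random input code
    opt = torch.optim.Adam(G.parameters(), lr=lr)
    for _ in range(iters):
        x_hat = G(z)                                      # candidate spectral cube
        loss = torch.nn.functional.mse_loss(forward_op(x_hat), y)  # data fidelity only
        opt.zero_grad()
        loss.backward()
        opt.step()
    return G(z).detach()                                  # the network structure acts as the prior
```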

Phase-coded spectral imaging

Phase-coded spectral imaging formulates image formation as a convolution between the wavelength-dependent point spread function (PSF) and the monochromatic object image at each wavelength. The phase encoding manipulates the phase term of the PSF, which distinguishes spectral signatures as the light propagates. Compared with amplitude-coded spectral imaging, the phase-coded approach can greatly increase the light throughput (and hence the signal-to-noise ratio). Since the phase encoding is mainly implemented by a thin DOE, which is easy to attach to a camera, a phase-coded spectral imaging system can be very compact.

One can recover the spectral signature by designing algorithms matched to the corresponding DOE (also called a diffuser in some works16,47,48,49). With the aid of deep learning, these methods have demonstrated competitive performance. Furthermore, benefiting from the depth dependence of the diffraction model, they can also obtain depth information in addition to the spectral signature of a scene50.

The phase-coded approach to spectral imaging consists of two parts: (i) the phase encoding strategy, typically realized through DOE design; (ii) the reconstruction algorithm. In this section, we first describe the phase-encoding diffraction model, then introduce deep-learning-empowered works using different phase encoding strategies and systems.

Diffraction model

The phase-coded spectral imaging system builds on previous works in diffractive imaging51,52. The system often consists of a DOE (transmissive or reflective) and a bare camera sensor, separated by a distance z. As illustrated in Fig. 6, there are two kinds of phase-coded spectral imaging systems, namely the DOE-Fresnel diffraction system (left) and the DOE-lens system (right), distinguished by whether a lens is present.

Fig. 6: Schematic diagram of diffractive spectral imaging via a diffractive optical element (DOE).

The left system uses a transmissive DOE and a sensor: the incident wave passes the DOE and then propagates a distance z before hitting the sensor, and the propagation can be modeled by Fresnel diffraction. The right system uses an imaging lens just behind the DOE: after passing the DOE, the incident wave converges on the sensor through the lens. The DOE has a height profile that introduces the phase shift

PSF construction

We use the transmissive DOE for the model derivation. The PSF pλ(x, y) is the system response at the image plane to a point source. Suppose the incident wave field at position \((x^{\prime} ,y^{\prime} )\) in the DOE coordinates at wavelength λ is

$${u}_{0\lambda }(x^{\prime} ,y^{\prime} )\,=\,{A}_{\lambda }(x^{\prime} ,y^{\prime} ){e}^{i{\phi }_{0\lambda }(x^{\prime} ,y^{\prime} )}$$
(8)

The wave field first experiences a phase shift ϕh determined by the height profile of the DOE:

$$\begin{array}{lll}{u}_{1\lambda }(x^{\prime} ,y^{\prime} )&=&{A}_{\lambda }(x^{\prime} ,y^{\prime} ){e}^{i\left[{\phi }_{0\lambda }(x^{\prime} ,y^{\prime} )\,+\,{\phi }_{h}(x^{\prime} ,y^{\prime} )\right]},\\ {\phi }_{h}(x^{\prime} ,y^{\prime} )&=&k{{\Delta }}{n}_{\lambda }h(x^{\prime} ,y^{\prime} )\end{array}$$
(9)

where Δnλ is the refractive index difference between the DOE material (n(λ)) and air, and k = 2π/λ is the wave number.

For the DOE-lens system, the PSF is16:

$${p}_{\lambda }(x,y)\,=\,\left|{{{{\mathcal{F}}}}}^{-1}\left[{u}_{1\lambda }(x^{\prime} ,y^{\prime} )\right]\right|$$
(10)

where \({{{{\mathcal{F}}}}}^{-1}\) is the inverse 2D Fourier transform due to the Fourier characteristics of the lens.

For the DOE-Fresnel diffraction system, the wave field propagates over a distance z, which can be modeled by the Fresnel diffraction law:

$$\begin{array}{lll}{u}_{2\lambda }(x,y)&=&\frac{{e}^{ikz}}{i\lambda z}\iint {u}_{1\lambda }(x^{\prime} ,y^{\prime} ){e}^{\frac{ik}{2z}\left[{(x\,-\,x^{\prime} )}^{2}\,+\,{(y\,-\,y^{\prime} )}^{2}\right]}\,\,{{\mbox{d}}}x^{\prime} {{\mbox{d}}}\,y^{\prime} \\ &=&\frac{{e}^{ikz}}{i\lambda z}\iint {A}_{\lambda }(x^{\prime} ,y^{\prime} ){e}^{i\left[{\phi }_{0\lambda }(x^{\prime} ,y^{\prime} )\,+\,{\phi }_{h}(x^{\prime} ,y^{\prime} )\right]}{e}^{\frac{ik}{2z}\left[{(x\,-\,x^{\prime} )}^{2}\,+\,{(y\,-\,y^{\prime} )}^{2}\right]}\,\,{{\mbox{d}}}x^{\prime} {{\mbox{d}}}\,y^{\prime} \end{array}$$
(11)

Finally, for computational convenience, we expand Eq. (11) and express it with a Fourier transform \({{{\mathcal{F}}}}\). The final PSF is formulated as

$${p}_{\lambda }(x,y)\,\propto\, {\left|{{{\mathcal{F}}}}\left[{A}_{\lambda }(x^{\prime} ,y^{\prime} ){e}^{i\left[{\phi }_{0\lambda }(x^{\prime} ,y^{\prime} )\,+\,{\phi }_{h}(x^{\prime} ,y^{\prime} )\right]}{e}^{i\frac{\pi }{\lambda z}(x{^{\prime} }^{2}\,+\,y{^{\prime} }^{2})}\right]\right|}^{2}$$
(12)
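As a hedged numerical sketch of Eq. (12), the snippet below computes the PSF of a DOE-Fresnel system for a unit-amplitude plane wave (Aλ = 1, ϕ0λ = 0); the height map, pitch, distance, and refractive index are placeholders.

```python
import numpy as np

lam, z, pitch, n_lam, Npix = 550e-9, 5e-3, 2e-6, 1.46, 512   # illustrative parameters
h = np.random.rand(Npix, Npix) * 1e-6                        # placeholder DOE height profile h(x', y')

xs = (np.arange(Npix) - Npix / 2) * pitch
X, Y = np.meshgrid(xs, xs)
k = 2 * np.pi / lam
phi_h = k * (n_lam - 1.0) * h                                # phase shift from the height profile, Eq. (9)

# Field right after the DOE times the quadratic Fresnel phase, then a Fourier transform (Eq. (12))
field = np.exp(1j * phi_h) * np.exp(1j * np.pi / (lam * z) * (X**2 + Y**2))
psf = np.abs(np.fft.fftshift(np.fft.fft2(field)))**2
psf /= psf.sum()                                             # normalize to unit energy
```

Repeating this for every wavelength yields the spectrally varying PSF stack pλ(x, y) used below.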

Image formation

Considering an incident object distribution \({o}_{\lambda }(x^{\prime} ,y^{\prime} )\) at the DOE, we can decompose it into an integral over object points:

$${o}_{\lambda }(x^{\prime} ,y^{\prime} )\,=\,\iint {o}_{\lambda }(\xi ,\eta )\,\cdot\, \delta (x^{\prime} \,-\,\xi ,y^{\prime} \,-\,\eta )\,\,{{\mbox{d}}}\xi {{\mbox{d}}}\,\eta$$
(13)

Before hitting the sensor, the spectral distribution is

$$\begin{array}{lll}{I}_{\lambda }(x,y)&=&\iint {o}_{\lambda }(\xi ,\eta )\cdot {{{\rm{PSF}}}}\{\delta (x^{\prime} \,-\,\xi ,y^{\prime} \,-\,\eta )\}\,\,{{\mbox{d}}}\xi {{\mbox{d}}}\,\eta \\ &=&\iint {o}_{\lambda }(\xi ,\eta )\,\cdot\, {p}_{\lambda }(x\,-\,\xi ,y-\eta )\,\,{{\mbox{d}}}\xi {{\mbox{d}}}\,\eta \\ &=&{o}_{\lambda }(x,y)\,*\, {p}_{\lambda }(x,y)\end{array}$$
(14)

where PSF{⋅} denotes the system response to a point source, and pλ is shifted by ξ and η along the x and y axes because of the corresponding shift of the point source.

Finally, on the sensor plane (with sensor spectral response D), the intensity is

$$I(x,y)\,=\,\int _{{{\Lambda }}}\,D(\lambda )\,\cdot\, \left[{o}_{\lambda }(x,y)\,*\, {p}_{\lambda }(x,y)\right]\,\,{{\mbox{d}}}\,\lambda$$
(15)

Similar to Fig. 4, by vectorizing oλ into x and expressing the convolution with the PSF as a matrix Φ, we can discretize Eq. (15) and formulate the reconstruction problem as in Eq. (7). Researchers can then use similar optimization algorithms or deep-learning tools for DOE design and spectral image recovery.
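A minimal sketch of the discretized Eq. (15) is given below: each band of the scene is convolved with its PSF (a circular FFT convolution is used here for brevity) and the bands are summed with the sensor response; array shapes are assumptions.

```python
import numpy as np

def render_measurement(cube, psfs, response):
    # cube: scene (L, H, W); psfs: spectrally varying PSFs (L, H, W), centered in the array;
    # response: sensor spectral response D(lambda), shape (L,)
    L, H, W = cube.shape
    img = np.zeros((H, W))
    for l in range(L):
        conv = np.real(np.fft.ifft2(np.fft.fft2(cube[l]) * np.fft.fft2(np.fft.ifftshift(psfs[l]))))
        img += response[l] * conv                 # wavelength-integrated, PSF-blurred intensity
    return img
```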

Phase encoding strategies

A good PSF design contributes to effective phase encoding, which in turn yields more precise spectral reconstruction results. Based on slight differences in the imaging system, we categorize the phase encoding strategies below.

DOE with Fresnel diffraction

Many phase-coded spectral imaging methods developed from diffractive computational color imaging. Peng et al.53 proposed an optimization-based DOE design approach to obtain a PSF whose shape is invariant to wavelength. Together with a deconvolution method, they reconstructed high-fidelity color images.

Although the shape-invariant PSF53 is beneficial for high-quality achromatic imaging, the overlap of the PSFs at different wavelengths makes spectral reconstruction difficult, which hinders its application to spectral imaging. Jeon et al.15 instead designed a spectrally varying PSF that rotates regularly with wavelength, thereby encoding the spectral information. The rotational design makes the PSF distinct at each wavelength, which is well suited to spectral imaging. By feeding the resulting intensity image into an optimization-based unrolled network, they achieved high peak signal-to-noise ratio (PSNR) and spectral accuracy in the visible wavelength range with a very compact system.

DOE/diffuser with lens

A similar architecture uses a DOE (or diffuser) with an imaging lens placed closely behind it, as shown in Fig. 6 (right). In 2016, Golub et al.49 proposed a simple diffuser-lens optical system and used a compressed-sensing-based algorithm for spectral reconstruction. Hauser et al.16 extended the work to a 2D binary diffuser (for binary phase encoding) and employed a deep neural network (named DD-Net) for spectral reconstruction. They reported high-quality reconstructions in both simulation and lab experiments.

Combination with other encoding approach

Combining phase encoding with other encoding architectures is also feasible, and deep learning can handle such complicated combined models. For example, the compressive diffraction spectral imaging method combined a DOE for phase encoding with coded apertures for additional amplitude encoding54. However, the reconstruction is very challenging and the light efficiency is low. Another example is the combination with an optical filter array: based on previous works on lensless imaging47,55, Monakhova et al. proposed the spectral DiffuserCam48, using a diffuser to spread the point source and a tiled filter array for further wavelength encoding. Since the method has a similar mathematical spectral formation model, it is promising to apply deep learning to spectral DiffuserCam's complex reconstruction task.

Wavelength-coded spectral imaging

Wavelength-coded spectral imaging uses optical filters to encode the spectral signature along the wavelength axis. Among wavelength-coded methods, the RGB image, encoded by RGB narrowband filters, is the most commonly used. Reconstructing the spectral image from the RGB one is necessary because RGB images are ubiquitous and the corresponding spectral image is fundamental to rendering scenes on monitors. Over the years, researchers have pursued fast and accurate approaches to wavelength-coded spectral imaging. Having found that RGB filters may be suboptimal, they have explored both different narrowband filters and self-designed broadband filters.

Image formation model

We first introduce the image formation model in the wavelength encoding context. Consider the intensity Ik(x, y) at a pixel (x, y), where k is the channel index indicating the wavelength modulation. For an RGB image, k ∈ {1, 2, 3}, representing red, green, and blue. The encoded intensity is generated by the scene reflectance spectrum S under illumination E:

$${I}_{k}(x,y)\,=\,\int _{{{\Lambda }}}E(\lambda )S(x,y,\lambda ){Q}_{k}(\lambda )D(\lambda )\,\,{{\mbox{d}}}\,\lambda$$
(16)

where Qk is the kth filter transmittance curve, D is the camera sensitivity, and Λ is the wavelength range. The illumination distribution E and the scene spectral reflectance S can be combined into the scene spectral radiance R:

$${I}_{k}(x,y)\,=\,\int _{{{\Lambda }}}R(x,y,\lambda ){Q}_{k}(\lambda )D(\lambda )\,\,{{\mbox{d}}}\,\lambda$$
(17)

The imaging process is illustrated in Fig. 7. In practice, we have the encoded object intensities I and the filter curves Q, but the camera sensitivity is sometimes inconvenient to measure, so many methods assume it to be ideally flat. Under experimental conditions, the illumination E is also known. Then, after discretization, Eq. (17) (or Eq. (16)) becomes an (underdetermined) matrix inversion problem.

Fig. 7: Illustration of wavelength encoding spectral imaging process.

The scene S is illuminated by the light source E, and is wavelength-coded through filters Q. Then the encoded scene spectral radiance is captured by the imaging lens on a sensor with spectral response D
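After discretizing the wavelength axis, Eq. (17) reduces to a weighted sum over bands; a minimal sketch (with stand-in filter curves and a flat sensitivity) is shown below.

```python
import numpy as np

def encode_wavelength(radiance, Q, D):
    # radiance R: (H, W, L); filter bank Q: (L, K); camera sensitivity D: (L,)
    return np.einsum('hwl,lk,l->hwk', radiance, Q, D)

H, W, L, K = 64, 64, 31, 3
R = np.random.rand(H, W, L)            # stand-in scene spectral radiance
Q = np.random.rand(L, K)               # stand-in RGB-like filter transmittance curves
D = np.ones(L)                         # flat sensitivity, as assumed by many methods
I = encode_wavelength(R, Q, D)         # K encoded intensity images, here (64, 64, 3)
```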

RGB pixel-wise spectral reconstruction

Early works on wavelength-coded spectral reconstruction are pixel-wise and operate on RGB images. They consider the reduced problem of reconstructing a spectrum vector with many channels from a 3-channel RGB vector, without knowing the camera's RGB-filter response. In general, these pixel-wise approaches seek a representation of a single spectrum (either a manifold embedding or basis functions) and develop methods to reconstruct the spectrum from that representation.

There are two modes of spectrum representation: (i) spectrum manifold learning, which seeks a hidden manifold embedding space that expresses the spectrum effectively; (ii) basis function fitting, which expands the spectrum over a set of basis functions and fits a small number of coefficients.

Spectrum manifold learning

This approach assumes that a spectrum y is controlled by a vector x in the low-dimensional manifold \({{{\mathcal{M}}}}\) and tries to find the mapping f that relates y with x:

$${{{\bf{y}}}}\,=\,f({{{\bf{x}}}}),\quad {{{\bf{y}}}}\,\in\, {{{\mathcal{D}}}},f\,\in\, {{{\mathcal{F}}}},{{{\bf{x}}}}\,\in\, {{{\mathcal{M}}}}$$
(18)

where \({{{\mathcal{D}}}}\) is the high-dimensional data space (commonly, \({{{\mathcal{M}}}}\,=\,{{\mathbb{R}}}^{m}\) and \({{{\mathcal{D}}}}\,=\,{{\mathbb{R}}}^{n}\), where \(m,n\,\in\, {\mathbb{N}}\) are the space dimensions). \({{{\mathcal{F}}}}\) is a function space containing mappings from \({{{\mathcal{M}}}}\) to \({{{\mathcal{D}}}}\).

Manifold learning assumes a low-dimensional manifold \({{{\mathcal{M}}}}\) embedded in the high-dimensional data space \({{{\mathcal{D}}}}\), and attempts to recover \({{{\mathcal{M}}}}\) from the data drawn in \({{{\mathcal{D}}}}\). Reference56 proposed a three-step method: (i) find an appropriate dimension of the manifold space through Isometric Feature Mapping (Isomap57); (ii) train a radial basis function (RBF) network to embed the RGB vector in \({{{\mathcal{M}}}}\), which determines the inverse of f in Eq. (18); (iii) use dictionary learning to map the manifold representation in \({{{\mathcal{M}}}}\) back to the spectral space, which determines the function f in Eq. (18). The RBF network and the dictionary learning method can be replaced by deep neural networks (such as an AutoEncoder) to improve performance, so manifold-based reconstruction can be further improved.
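For intuition, the first step of this pipeline can be sketched with an off-the-shelf Isomap embedding; the random spectra stand in for a real training set, and in practice the subsequent mappings would be fitted with an RBF network or a deep AutoEncoder as discussed above.

```python
import numpy as np
from sklearn.manifold import Isomap

spectra = np.random.rand(1000, 31)                 # stand-in training spectra (samples x bands)
embedding = Isomap(n_components=3, n_neighbors=10).fit_transform(spectra)  # points on the manifold M
```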

Basis function fitting

This approach assumes that a spectrum y = y(λ) is expanded by a set of basis functions {ϕ1(λ), … , ϕN(λ)}:

$$y(\lambda )\,=\,\mathop{\sum }\limits_{i\,=\,1}^{N}{\alpha }_{i}{\phi }_{i}(\lambda )$$
(19)

where the αi are the coefficients to fit.

In a short note by Glassner17, a simple matrix inversion method was developed for RGB-to-spectrum conversion, but the resultant spectrum has only three nonzero components, which is unrealistic for real-world spectra. At the end of the note, the author reported a weighted basis-function fitting approach to construct a spectrum from an RGB triplet, using three basis functions: a constant, a sine, and a cosine. To render light interference, Sun et al.18 compared different basis functions for deriving spectra from colors and proposed an adaptive method that uses Gaussian functions. Nguyen et al.20 further developed the basis function approach, proposing a data-driven method that learns an RBF network to map an illumination-normalized RGB image to a spectral image.

In ref. 21, an over-complete hyperspectral dictionary was constructed with the K-SVD algorithm from the proposed dataset; it contains a set of nearly orthogonal vectors that can be seen as learned basis functions. Similar to the dictionary learning approach, deep-learning tools can be used for learning basis functions. In ref. 58, basis functions are generated during training, and the coefficients are predicted by a U-Net at test time. This is computationally efficient since only a small number of coefficients need to be fitted at test time. Although the spectral reconstruction accuracy is not as high as that of other CNN-based methods (which fully exploit spectral patch correlation), it was the fastest method in NTIRE 2020, with a reconstruction time of only 34 ms per image.
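A minimal sketch of Eq. (19) with a hand-chosen Gaussian basis is shown below; in the learned variants above, the basis itself comes from a dictionary or a network, and the coefficients come from a regressor rather than a least-squares fit.

```python
import numpy as np

lams = np.linspace(400, 700, 31)                   # wavelengths (nm)
centers = np.linspace(400, 700, 8)                 # N = 8 Gaussian basis centers
basis = np.exp(-((lams[:, None] - centers[None, :]) / 40.0) ** 2)   # (31, 8) basis matrix

y = np.random.rand(31)                             # a spectrum to represent (placeholder)
alpha, *_ = np.linalg.lstsq(basis, y, rcond=None)  # fit the coefficients alpha_i
y_approx = basis @ alpha                           # reconstructed spectrum per Eq. (19)
```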

RGB Patch-wise spectral reconstruction

As reported in ref. 22, spectra within an image patch have a certain correlation. However, pixel-wise approaches cannot exploit such correlation, which may lead to poorer reconstruction accuracy compared with patch-wise approaches. In ref. 59, a handcrafted patch feature based on a convolution operation was proposed, which extracts the neighborhood feature of an RGB pixel from the training spectral dataset. This work gave a practical idea of how to utilize such patch features in a spectral image, a task well suited to convolutional neural networks (CNNs).

CNNs can perform more complex feature extraction through multiple convolution operators. In 2017, Xiong et al. proposed HSCNN23, applying a CNN to up-sampled RGB and amplitude-coded measurements for spectral reconstruction. In the same year, Galliani et al. proposed learned spectral super-resolution60, using a CNN for end-to-end RGB-to-spectral image reconstruction. Their works obtained good spectral reconstruction accuracy on many open spectral datasets, encouraging later works on CNN-based spectral reconstruction. The number of similar works grew rapidly after the New Trends in Image Restoration and Enhancement (NTIRE) challenges were hosted in 201861 and 202025, where many deep-learning groups participated and contributed to exploring various network structures for spectral reconstruction.

Neural network-based methods take advantage of deep learning and can better capture the patch spectral correlation. Diverse network structures as well as advanced deep-learning techniques are exploited by different works, which are summarized in Table 2.

Table 2 Comparison of neural network-based works for end-to-end spectral reconstruction from RGB images

Leveraging advanced deep-learning techniques

We can gain some insights from Table 2. First, most works are CNN-based, perhaps because CNNs extract patch spectral information better than generative adversarial networks (GANs). One work was based on a conditional GAN (cGAN)62, taking the RGB image as conditional input. It also used an L1 distance loss (mean absolute error loss), as in ref. 63, to reduce blur, but the reconstruction accuracy was not better than that of HSCNN23 (ref. 62 reports a relative root-mean-square error (RMSE) of 0.0401 on the ICVL dataset, while HSCNN achieves 0.0388).

Moreover, many advanced deep-learning techniques have been introduced and shown to be effective. For instance, residual blocks64 and dense structures65 have become increasingly common, because residual connections broaden the network's receptive field and dense structures enhance feature propagation, resulting in better extraction of spectral patch correlation. The attention mechanism66 is another popular deep-learning technique that has been introduced into spectral imaging works. For spectral reconstruction, there are two kinds of attention: spatial attention (e.g., the self-attention layer67,68) and spectral attention (channel attention69). An attention module learns a spatial or spectral weight, helping the network focus on the informative parts of the spectral image. Feature fusion, the concatenation of multiple parallel layers, was studied in ref. 70; it was adopted in refs. 71,72,73 and showed a positive influence on spectral reconstruction. Finally, ensemble techniques are encouraged to further improve network performance. Model ensemble and self-ensemble are two such strategies: model ensemble averages networks retrained with different parameters, while self-ensemble averages the results of transformed inputs to the same network. A single network may fall into a local minimum, which leads to poor generalization; by applying ensemble techniques, one can fuse the knowledge of multiple networks or of different views of the same input. HRNet73 adopted model ensemble and showed improved reconstruction results.

Since spectral reconstruction is an image-to-image task, many works borrow effective deep-learning techniques from other image-to-image tasks, such as the U-Net architecture74 from segmentation, the sub-pixel convolution layer75 and channel attention69 from image super-resolution, and the feature loss and style loss from image style transfer76,77. This is another way to introduce advanced deep-learning techniques into spectral reconstruction.

Towards illumination invariance

The object reflectance spectrum, free of illumination, is a desired target for spectral reconstruction, since it faithfully reflects the scene components and properties. To recover the object reflectance, one needs to strip the environment illumination E from the scene spectral radiance R, but measuring the illumination spectrum is often inconvenient. Researchers therefore often exploit the illumination-invariant properties of object spectra to remove the illumination from the scene radiance.

Reference20 proposed an approach that exploits illumination invariance: RGB white-balancing is used to normalize the scene illumination. As a by-product, the environment illumination can be estimated by comparing the reconstructed scene with the original one. In ref. 78, a Denoising AutoEncoder (DAE) was used to obtain a robust spectrum from noised inputs that contain the original spectrum under different illumination conditions. Through this many-to-one mapping, the reconstructed spectrum becomes invariant to illumination.

Utilizing RGB-filter response

The RGB-filter response is the wavelength encoding function Q in Eq. (16). In many works79,80 it is termed the camera spectral sensitivity (CSS) prior. To avoid ambiguity between CSS and the camera response D in Eq. (16), we use the term RGB-filter response.

The RGB-filter response is not always accessible in practical applications, which is a notable problem. A common workaround is to use CIE color matching functions for simulation81. Reference79 proposed another solution: a classification neural network estimates a suitable RGB-filter response from a given set of camera sensitivities, and the estimated response function is then used with another network to recover the spectral signature. The two networks are trained together via a unified loss function.

When the RGB-filter response is known, the RGB image can be re-simulated from the reconstructed spectral image, so a back-projection (or perceptual) loss can be used. Experiments have shown the benefit of adding the filter-response prior to reconstruction. For example, AWAN80, which ranked first in the NTIRE 2020 Clean track, incorporated the filter response curves into its loss function and obtained a slight improvement in the MRAE metric.

In ref. 82, the RGB-filter response Q is carefully exploited. They demonstrated that the reconstructed spectrum should satisfy the color fidelity property \({Q}^{T}\psi (I)\,=\,I\), where ψ is the RGB-to-spectrum mapping and I is the RGB pixel intensity.

They defined the set of spectra that satisfy color fidelity as the plausible set:

$${{{\mathcal{P}}}}(I;Q)\,=\,\left\{{{{\bf{r}}}}| {Q}^{T}{{{\bf{r}}}}\,=\,I\right\}$$

where r is a spectrum. The concept of physical plausibility is illustrated in Fig. 8.

Fig. 8: Illustration of the physically implausible and plausible set.

The physically implausible set (left) contains the spectra that the RGB-filter response does not map back to the original RGB point, while the physically plausible set (right) contains those spectra that it does

They suggest that the reconstructed spectrum consists of two parts: one in the space spanned by the three column filter-response vectors of Q, and the other in the orthogonal complement of that space. Formally, there exists an orthogonal basis \(B\,\in\, {{\mathbb{R}}}^{n\times (n-3)}\) such that \({B}^{T}Q\,=\,0\). Therefore, the spectrum to be reconstructed can be expanded as

$${{{\bf{r}}}}\,=\,{{{{\rm{P}}}}}^{{{{\rm{Q}}}}}{{{\bf{r}}}}\,+\,{{{{\rm{P}}}}}^{{{{\rm{B}}}}}{{{\bf{r}}}}$$
(20)

where PQ and PB are the projection operators onto span(Q) and its orthogonal complement, respectively. Note that the component PQr can be computed exactly in advance from the RGB measurement, which reduces the reconstruction problem by three dimensions. The remaining task is estimating the spectrum component in the orthogonal complement of the filter-response vectors, which can be done by training a deep neural network.
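A small sketch of this decomposition (Eq. (20)) with a stand-in filter matrix is given below; it verifies that the in-span component follows from the RGB measurement alone and that color fidelity holds for any choice of the orthogonal component.

```python
import numpy as np

n = 31
Q = np.random.rand(n, 3)                                # stand-in filter response matrix
P_Q = Q @ np.linalg.inv(Q.T @ Q) @ Q.T                  # projector onto span(Q)
P_B = np.eye(n) - P_Q                                   # projector onto the orthogonal complement

r = np.random.rand(n)                                   # some spectrum
I_rgb = Q.T @ r                                         # its RGB measurement
r_span = Q @ np.linalg.solve(Q.T @ Q, I_rgb)            # P_Q r, recovered from I alone
assert np.allclose(P_Q @ r, r_span)
assert np.allclose(Q.T @ (r_span + P_B @ r), I_rgb)     # color fidelity Q^T r = I
```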

Beyond RGB filters

Since the RGB image carries limited information, researchers tend to add more information before reconstruction. There are two ways to realize this: (i) using self-designed broadband wavelength encoding to expand the modulation range; (ii) increasing the number of encoding filters. Works in this area mainly use deep-learning tools to design the filter response curves and perform spectral reconstruction83,84,85, since both the modulation design and the reconstruction process are computationally demanding.

Using broadband filters

Based on the idea that a traditional RGB camera's spectral response functions are suboptimal for spectrum reconstruction, Nie et al.83 employed CNNs to design filter response functions and jointly reconstruct the spectral image. They observed the similarity between the camera filter array and a convolutional layer kernel (the Bayer filter mosaic resembles a 2 × 2 convolution kernel) and used the camera filters as a hardware-implemented layer of the network. Their results showed improvements over traditional RGB-filter-based methods. However, limited by filter manufacturing technology, they only considered commercially available filters.

With the maturity of modern filter manufacturing technology, flexibly designed filters with specific response spectra have become realizable. Song et al. presented a joint learning framework for broadband filter design, named parameter-constrained spectral encoder and decoder (PCSED)84, as illustrated in Fig. 9.

Fig. 9: Schematic of the PCSED framework.

The broadband encoding stochastic (BEST) filters act as an encoding neural network (encoder), where the learned weights are the filter responses. The weights are constrained by the filters' structure parameters and are generated from a pre-trained structure-to-response forward modeling network (FMN). The figure is reprinted from ref. 84 under a CC BY license (Creative Commons Attribution 4.0 International license)

They jointly trained the filter response curves (as the spectral encoder) and a decoder network for spectral reconstruction. Benefiting from the development of the thin-film filter manufacturing industry, they could design various filter response functions favored by the decoder. They extended the work in ref. 85 and obtained impressive results: the developed hardware, the broadband encoding stochastic (BEST) camera, demonstrated great improvements in noise tolerance, reconstruction speed, and spectral resolution (301 channels). As a future direction, noise-tolerant optical filters produced from metasurfaces are promising, given the development of metasurface theory and industry86.

Increasing filter number

Increasing the number of filters is a straightforward way to enhance reconstruction accuracy by providing more encoding information. However, it inevitably leads to a bulky system. An alternative way to perform wavelength modulation is to use a liquid crystal (LC) cell: changing the applied voltage switches the LC to a different modulation, so multiple modulations can be obtained conveniently by applying different voltages. By rapidly changing the voltage on the LC, multiple wavelength encoding operators can be obtained, which is equivalent to increasing the number of filters.

Based on the different responses of the LC phase retarder at different wavelengths, the Compressive Sensing Miniature Ultra-Spectral Imager (CS-MUSI) architecture can modulate the spectra like multiple optical filters. Oiknine et al. reviewed spectral reconstruction with the CS-MUSI instrument in ref. 87. They also proposed DeepCubeNet88, which adopted the CS-MUSI system to perform 32 different wavelength modulations and used a CNN for spectral image reconstruction.

Spectral imaging datasets

Spectral datasets containing realistic spectral-RGB image pairs are important for data-driven spectral imaging methods, especially those using deep learning. CAVE89, NUS20, ICVL21, and KAIST30 are the most frequently used datasets for training and evaluation in spectral reconstruction research. Other datasets such as Harvard22, Hyperspectral & Color Imaging90, the Scyllarus hyperspectral dataset91, and C2H92 are also available. To promote research on spectral reconstruction from RGB images, competitions were held in 2018 and 2020, providing an expanded ICVL dataset (NTIRE 201861) and a larger-than-ever database (NTIRE 202025). We summarize the publicly available spectral image datasets in the following tables. Table 3 gives an overview of the spectral datasets and Table 4 provides a detailed description of the data.

Table 3 Spectral dataset parameters
Table 4 Description of spectral datasets

Two problems remain with these datasets: (i) insufficient capacity for learning high-complexity spatial–spectral features; (ii) the lack of a fixed train-test split. Some datasets do not provide a fixed train-test split, which leads to unfair comparisons among methods that use different split strategies. Therefore, a large yet standard database is important. We hope such a database will offer unparalleled scale, accuracy, and diversity to support future research.

Until such a large standard dataset is available, we think the popular ICVL21, CAVE89, NUS20, and KAIST30 datasets are sufficient for reconstruction accuracy analysis in both the spatial and spectral domains.

Spectral image quality metrics

There are numerous metrics used for performance evaluation in spectral reconstruction, and we refer to ref. 93 for their definition and comparison.

In general, PSNR, the structural similarity (SSIM) index, and the spectral angle map (SAM) are mostly used for amplitude-coded methods, while metrics such as root-mean-square error (RMSE) and mean relative absolute error (MRAE) are applied to wavelength-coded methods. As a consequence, it is inconvenient to compare the performance of wavelength-coded and amplitude-coded methods. Therefore, to allow the community to compare different methods, unified metrics are necessary; for example, SSIM, RMSE, and MRAE could be employed by both families at evaluation.

Furthermore, we also need metrics to compare reconstruction speed. Different works perform spectral reconstruction on images of different resolutions with various computing devices. We think the pixel reconstruction speed, namely the average reconstruction time on the test dataset divided by the 3D resolution of the data (i.e., the total number of elements of the spectral data used for testing), is a reliable metric for comparing reconstruction speed.
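For concreteness, hedged sketches of SAM, MRAE, and the proposed pixel reconstruction speed are given below; exact definitions vary slightly between papers, so treat these as one reasonable convention.

```python
import numpy as np

def sam(x, y, eps=1e-8):
    """Spectral angle map averaged over pixels; x, y: (H, W, L)."""
    dot = np.sum(x * y, axis=-1)
    denom = np.linalg.norm(x, axis=-1) * np.linalg.norm(y, axis=-1) + eps
    return np.mean(np.arccos(np.clip(dot / denom, -1.0, 1.0)))

def mrae(x, y, eps=1e-8):
    """Mean relative absolute error, with the ground truth y in the denominator."""
    return np.mean(np.abs(x - y) / (np.abs(y) + eps))

def pixel_reconstruction_speed(total_seconds, cubes):
    """Average time per spectral element: total test time over the summed H*W*L of the test set."""
    voxels = sum(np.prod(c.shape) for c in cubes)
    return total_seconds / voxels
```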

Conclusions and future directions

We have summarized different computational spectral reconstruction methods that adopt deep neural networks, detailing their working principles and deep-learning techniques, under three encoding-decoding modalities: (i) Amplitude-coded. It uses a coded aperture for amplitude encoding and is a compressive spectral imaging approach, which exploits compressed-sensing theory and an iterative optimization process for spectral reconstruction. Based on this feature, some learned reconstruction algorithms are designed to reduce the time consumption of optimization (e.g., unrolled networks), or use deep neural networks to improve the optimization accuracy (e.g., untrained networks). (ii) Phase-coded. It uses a DOE to modulate the phase of the input light at each wavelength, and Fresnel propagation physically maps this phase modulation onto the resulting image. By leveraging creative DOE designs, it enjoys a compact system and improved light throughput. (iii) Wavelength-coded. A common case of wavelength encoding is the RGB image. RGB-to-spectrum conversion is essential in computer graphics, as spectra simplify the rendering of scenes on monitors. To extract spectral features from RGB data, deep-learning algorithms either map the data to a manifold space or exploit the inherent spatial–spectral correlation. As an extension of RGB-based approaches, wavelength encoding with multiple self-designed broadband filters has been developed in recent years. It offers better spectral reconstruction precision, but the results are also sensitive to filter fabrication errors and imaging noise.

Regarding future directions, extra scene information is expected to improve reconstruction performance in specific applications. In C2H-Net92, object category and position were used as a prior, similar to the well-known object detection framework YOLO94. Based on the observation that pixel patches containing objects are often more important than the background, they introduced object category and position into the reconstruction process. Using additional information can also benefit functional applications of spectral imaging; as a follow-up to C2H-Net, ref. 95 contributes to object detection using spectral imaging with additional object information.

Additionally, joint encoder-decoder training is an important direction. The encoder is the hardware layer before the reconstruction algorithm, such as a coded aperture, DOE, or optical filter. Training the encoder and decoder simultaneously provides the decoder with the coding information, thereby improving performance39,84. However, two problems remain to be addressed: (i) finding more efficient encoding hardware and modeling it as a network layer, such as using a DOE to improve light throughput; the CS-MUSI architecture, which can replace multiple filters88, is also worth exploring; (ii) overcoming vanishing gradients. Since the hardware layer is the first layer of the whole deep neural network, the gradient that propagates back to it is often very small, which in turn limits how much the hardware layer can change. If these two problems are elegantly solved, we believe deep-learning-empowered computational spectral imaging can advance further.

The past decade has witnessed a rapid expansion of deep neural networks in spectral imaging. Despite the success of deep learning, there is still considerable room for further optimization. Reinforcement learning (RL) is a promising technique for improving performance; to date, RL has proven useful in finding optimal reconstruction network architectures (i.e., neural architecture search, NAS96). With improvements in computing power, such techniques are promising for increasing the performance of learned spectral imaging methods.

Finally, we think transformer-based large-scale deep-learning models have great potential for the spectral reconstruction task. The transformer, first applied in natural language processing, is a type of deep neural network based mainly on the self-attention mechanism66. It offers strong representation capabilities and has been widely applied to vision tasks97. However, such large-scale deep neural networks require huge amounts of training data, so larger-than-ever spectral datasets are needed, as suggested in section “Spectral Imaging Datasets”.