Introduction

Optical holography is a powerful tool to retrieve the phase and amplitude of light from an intensity image, which bear detailed information about the object such as its size, shape, and refractive index. A reconstructed object/hologram from an intensity image plays an important role in high-security encryption1,2, microscopy3, data storage4, 3D object recognition5, and planar solar concentrators6,7,8. Moreover, the information gained from intensity images is reversible, and the phase and amplitude information can be used to generate intensity images for numerous applications such as imaging9,10,11, photostimulation12, printing13, optical beam steering14, aberration correction15, displays16,17, and augmented reality18. Optical holography enables image formation at different observation planes or through a sample without requiring any focusing elements or mechanical scanning. To generate a holographic image, a fine-tuned phase/amplitude distribution of a hologram is required, which increases the design duration of a hologram. Considering the wide-area implementation of optical holography for both retrieving object/hologram information and generating holographic images, a versatile methodology with a short design/optimization duration is in high demand.

A variety of algorithms are frequently used for the generation of optical holographic images and object/hologram recovery19,20,21,22,23,24. These algorithms are easy to employ and yield good performance but require iterative optimization. Unfortunately, they lack full flexibility in design parameters, especially when the numbers of design wavelengths and observation planes exceed one. Furthermore, a strong push toward convergence for a tolerable error may cause these algorithms to yield physically infeasible patterns. In contrast, deep learning correlates an intensity distribution with a hologram without reconstructing phase and amplitude information from the intensity distribution, thanks to its data-driven approach. Deep learning is a powerful tool that has yielded important achievements in holography, especially for imaging25,26,27,28,29,30,31, microscopy32,33,34,35, optical trapping36, and molecular diagnostics37. However, the generation of optical holograms that provide holographic images at different observation planes and wavelengths requires versatile neural networks38,39,40. This issue is weakly addressed in the literature, and proper modalities are needed for generating optical holographic images with diverse properties: different observation planes, wavelengths, and figures. A comprehensive neural network model could accelerate the design of holograms that generate optical holographic images at different observation planes and wavelengths.

In this study, we first present in detail a method to compute color holographic images at multiple observation planes without cross-talk. We then share, for the first time to our knowledge, a single deep learning architecture that generates (1) multi-color images at a single observation plane, (2) three single-color images at multiple observation planes, and (3) multi-color images at multiple observation planes. In addition to generating holographic images, the deep learning model retrieves object/hologram information from an intensity image without requiring the phase and amplitude information of the intensity holographic image, thanks to the statistical retrieval behavior of deep learning. Moreover, we obtain twin-image-free holographic images by eliminating phase and amplitude retrieval. Training the deep learning model with a data set, which is a one-time process lasting less than an hour, speeds up the generation of holographic images and object/hologram recovery from an intensity image down to 2 s.

Methods

A computer-generated hologram (CGH) can be a phase mask or a three-dimensional object whose thickness changes spatially. The thickness distribution of a CGH can also be interpreted as a phase distribution using the refractive index of the holographic plate and the wavelength of light. With the Fresnel–Kirchhoff diffraction integral (FKDI), the intensity distribution of a holographic image at each wavelength of a light source is computed from the thickness distribution of the CGH. The details of the holographic image calculation for a hologram are discussed in the supplementary document.
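As a sketch of this forward model, the thickness map can be converted into a phase map and propagated numerically. The refractive index (1.5), the pixel pitch (2 \(\upmu\)m), and the Fresnel transfer-function propagator below are illustrative assumptions standing in for the exact FKDI implementation described in the supplementary document:

```python
import numpy as np

def thickness_to_phase(t_um, wavelength_um, n_index=1.5):
    """Convert a hologram thickness map (um) to a phase map (radians).

    n_index is the refractive index of the holographic plate,
    assumed to be 1.5 here for illustration only.
    """
    return 2 * np.pi * (n_index - 1.0) * t_um / wavelength_um

def fresnel_propagate(field, wavelength_um, d_um, pixel_um):
    """Propagate a complex field over distance d with the Fresnel
    transfer function, a paraxial stand-in for the full FKDI."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pixel_um)
    fy = np.fft.fftfreq(ny, d=pixel_um)
    FX, FY = np.meshgrid(fx, fy)
    H = np.exp(1j * 2 * np.pi * d_um / wavelength_um) * \
        np.exp(-1j * np.pi * wavelength_um * d_um * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Forward model: a 40x40 hologram with 8 thickness levels between 1 and 8 um,
# illuminated at 700 nm and observed 350 um away (2 um pixel pitch assumed).
t = np.random.choice(np.arange(1.0, 9.0), size=(40, 40))
phase = thickness_to_phase(t, wavelength_um=0.70)
image = np.abs(fresnel_propagate(np.exp(1j * phase), 0.70, 350.0, 2.0))**2
```

Because the hologram is phase-only, the propagator conserves the total intensity; only its spatial redistribution at the observation plane changes with the thickness profile.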

We use the correlation coefficient \(\rho\) as a metric to evaluate the similarity between a ground-truth/ideal holographic image \(I_{Ideal}\) and a holographic image \(I_{Designed}\) designed with the deep learning model/FKDI (see Eq. (1)). The same metric evaluates the similarity between a ground-truth hologram \(I_{Ideal}\) and a hologram \(I_{Designed}\) reconstructed with the deep learning model. In Eq. (1), N is the number of pixels in the ideal and designed images; \(I_{Ideal, i}\) and \(I_{Designed, i}\) are the intensity values of the ith pixel of the ideal and designed images, respectively. The other terms in Eq. (1) are the mean of the ideal image \(\mu _{{I_{Ideal} }}\), the mean of the designed image \(\mu _{{I_{Designed} }}\), the standard deviation of the ideal image \(\sigma _{I_{Ideal}}\), and the standard deviation of the designed image \(\sigma _{I_{Designed}}\). We compute a correlation coefficient for each ground-truth/ideal holographic image \(I_{Ideal}\) and the corresponding designed holographic image \(I_{Designed}\) at each observation plane distance d and each wavelength of light \(\lambda\).

$$\begin{aligned} \rho \left( I_{Ideal},I_{Designed}\right) =\frac{100}{N-1}\sum _{i=1}^{N}\frac{\left( I_{Ideal, i} - \mu _{{I_{Ideal} }} \right) \cdot \left( I_{Designed, i} - \mu _{{I_{Designed} }} \right) }{\sigma _{I_{Ideal}} \cdot \sigma _{I_{Designed}}}. \end{aligned}$$
(1)
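Eq. (1) can be implemented directly, e.g. in NumPy; it is the Pearson correlation coefficient scaled to percent, with the sample standard deviation (ddof = 1) matching the 1/(N−1) factor:

```python
import numpy as np

def correlation_percent(ideal, designed):
    """Correlation coefficient of Eq. (1) in percent between an ideal
    and a designed intensity image of equal size."""
    ideal = ideal.ravel().astype(float)
    designed = designed.ravel().astype(float)
    n = ideal.size
    num = (ideal - ideal.mean()) * (designed - designed.mean())
    # Sample standard deviations (ddof=1) pair with the 1/(N-1) prefactor.
    return 100.0 / (n - 1) * np.sum(num) / (ideal.std(ddof=1) * designed.std(ddof=1))
```

An image compared against itself gives 100%, and an inverted copy gives −100%, so the metric is bounded in [−100, 100].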

Using the spatially varying 8-level thickness profile of a hologram, we tune the phase of a light source to form a holographic image. The holographic image is designed with linearly polarized continuous light at normal incidence. The resolutions of the hologram and the holographic image are 40-by-40. Each pixel of the hologram has a thickness value ranging from 1 to 8 \(\upmu\)m, divided into 8 equal discrete steps. We deliberately chose parameters that are experimentally achievable using large-area fabrication methods41. We evaluate the performance of a hologram with the correlation coefficient \(\rho\) in percent (Eq. (1)) while organizing the thickness distribution of the hologram with a local search optimization algorithm. After generating a random hologram distribution, the local search algorithm sequentially changes the thickness of each pixel to form the desired intensity images at the selected frequencies of light and observation plane distances. We use two decision-making criteria, AND and MEAN (also called logic operations), in the algorithm to minimize the difference between an ideal image (a uniform and binary image distribution) \(I_{Ideal}\) and a designed image \(I_{Designed}\). With the AND logic operation, we update the hologram thickness distribution only when the correlation coefficient increases at every wavelength and every observation plane simultaneously. With the MEAN logic operation, the thickness distribution is updated when the average of the correlation coefficients over all wavelengths and observation planes increases. At the end of the hologram design, we obtained 51,200 optical hologram distributions and the corresponding holographic images at each light frequency/observation plane.
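The two decision rules can be sketched as follows; the `evaluate` callback is an abstraction standing in for the full FKDI cost evaluation, assumed here to return one correlation coefficient per (wavelength, observation plane) pair:

```python
import numpy as np

LEVELS = np.arange(1.0, 9.0)  # 8 discrete thickness levels, 1-8 um

def local_search(hologram, evaluate, mode="MEAN", sweeps=4):
    """Sequential local search over hologram pixels.

    MEAN accepts a thickness change when the average correlation
    increases; AND accepts it only when every correlation increases.
    """
    scores = evaluate(hologram)
    for _ in range(sweeps):                    # full scans of all pixels
        for idx in np.ndindex(hologram.shape):
            best = hologram[idx]
            for level in LEVELS:
                if level == best:
                    continue
                hologram[idx] = level
                trial = evaluate(hologram)
                if mode == "AND":
                    accept = all(t > s for t, s in zip(trial, scores))
                else:  # MEAN
                    accept = np.mean(trial) > np.mean(scores)
                if accept:
                    best, scores = level, trial
            hologram[idx] = best               # keep the best level found
    return hologram
```

With a single design wavelength and observation plane the two rules coincide; they diverge only when several correlation coefficients must be traded off against each other.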

In this manuscript, we employ the deep learning model CHoloNet presented in our previous work42 (see Fig. S1). The CHoloNet reconstructs an object thickness distribution/hologram from holographic intensity images at all wavelengths of light and observation planes. The framework uses holographic intensity images to train the weights of the model and learns how to transform the spectral and spatial information of incident light encoded within holographic images into optical holograms. The CHoloNet receives multiple images at different observation planes/colors as inputs and produces a hologram, in terms of thickness, that reconstructs the input images at the predefined depths/colors. The model takes three-channel intensity distributions: two dimensions correspond to the spatial size of the intensity distributions, and the third indexes either different wavelengths, observation planes, or wavelengths plus observation planes. After training, the network inference time for reconstructing an object/hologram distribution, including reversing the one-hot vector operation, is about 2 s on average. The details of our neural network are presented in the supplementary material.
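The "reversing the one-hot vector" step can be illustrated as follows; the per-pixel class-score output shape (40 × 40 × 8) is an assumption about the network head made only to show how discrete class indices map back to thickness values:

```python
import numpy as np

# 8 discrete thickness levels between 1 and 8 um (see Methods).
LEVELS_UM = np.linspace(1.0, 8.0, 8)

def one_hot_to_thickness(scores):
    """Reverse the per-pixel one-hot (or softmax score) vectors into a
    thickness map in um by taking the most likely class per pixel."""
    return LEVELS_UM[np.argmax(scores, axis=-1)]

# Hypothetical network output: class scores for each of the 40x40 pixels.
scores = np.random.rand(40, 40, 8)
thickness = one_hot_to_thickness(scores)   # shape (40, 40), values in 1-8 um
```

This decoding keeps every predicted pixel on the fabricable 8-level thickness grid, which is one reason the discrete one-hot representation is used instead of regressing a continuous thickness.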

Results and discussion

The selection of design wavelengths is a crucial step in forming holographic intensity images at the image plane without cross-talk between images at different frequencies of light. To address this concern, we first inspect the spectral bandwidth that a holographic image presents when the hologram designed for this image is illuminated by broadband light. We design a CGH that images an intensity distribution of the letter A at a wavelength of 700 nm, as seen in Fig. 1a. This image yields a correlation coefficient of 96.6% with a uniform and binary image of the letter A according to Eq. (1). As seen in this image, we generated a clear and almost uniform intensity distribution of the letter A, obtained with the hologram thickness profile in Fig. 1b. When the same CGH is illuminated by uniform broadband light (650–750 nm), the correlation coefficient varies across the illumination wavelengths, as seen in Fig. 1c, and peaks at the design wavelength of the CGH, 700 nm, as expected.

Figure 1

(a) The intensity image of letter A; (b) Thickness distribution of designed CGH which produces the image as seen in (a); (c) The correlation spectrum of holographic images at wavelengths between 650 and 750 nm obtained with the hologram presented in (b).

In Fig. 2, we present holographic images of the letter A under different illumination wavelengths between 692 and 707 nm. As seen here, below and above the design wavelength of the CGH (700 nm), the holographic images lose the clear shape of the letter A, and diffraction orders appear in the images. Especially above the design wavelength, between 701 and 707 nm, the diffraction orders in the holographic images are dominant. The diffraction orders can be eliminated by selecting a longer distance between the hologram plane and the image plane. In that case, the correlation coefficient at the illumination wavelength will change, and the same hologram may not form the image of the letter A. These holographic images are acquired for the selected design parameters, and the hologram can be re-designed for updated design parameters.

Figure 2

Variation of holographic images under different illumination wavelengths of light (692–707 nm) obtained with the hologram in Fig. 1b. The design wavelength of the holographic image A is 700 nm.

In Fig. 1c, we see that the correlation value is less than 10% when the illumination wavelength shifts 30 nm from the design wavelength of the CGH. From this figure, we understand that the intensity image of the letter A is drastically distorted when the wavelength of light differs by 30 nm from the design wavelength of the optimized CGH. This wavelength shift is strongly affected by the distance between the CGH plane and the image plane, the sizes of the CGH and image planes, the pixel size in each plane, and the wavelength of the light source. As a result, we conclude that when the design wavelengths of a color holographic image are 30 nm apart, there is no overlap between the images. Therefore, a true multi-color hologram should work beyond a 30 nm bandwidth.
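Such a bandwidth inspection can be sketched as a numerical wavelength sweep. The toy target, the random (unoptimized) hologram, the refractive index of 1.5, and the 2 \(\upmu\)m pixel pitch below are illustrative assumptions, so the resulting spectrum demonstrates only the procedure, not the reported 30 nm figure:

```python
import numpy as np

def propagate(field, wl_um, d_um, px_um):
    """Paraxial (Fresnel transfer-function) propagation over distance d."""
    fx = np.fft.fftfreq(field.shape[1], d=px_um)
    fy = np.fft.fftfreq(field.shape[0], d=px_um)
    FX, FY = np.meshgrid(fx, fy)
    H = np.exp(-1j * np.pi * wl_um * d_um * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(field) * H)

def corr_percent(a, b):
    """Pearson correlation between two images, in percent."""
    return 100.0 * np.corrcoef(a.ravel(), b.ravel())[0, 1]

# Sweep the illumination wavelength from 650 to 750 nm and record the
# correlation of the propagated image with a toy binary target.
target = np.zeros((40, 40))
target[10:30, 18:22] = 1.0                      # toy stand-in for "letter A"
t_um = np.random.choice(np.arange(1.0, 9.0), size=(40, 40))
spectrum = {}
for wl in np.arange(0.65, 0.751, 0.005):        # wavelengths in um
    phase = 2 * np.pi * (1.5 - 1.0) * t_um / wl
    img = np.abs(propagate(np.exp(1j * phase), wl, 350.0, 2.0))**2
    spectrum[round(wl * 1000)] = corr_percent(target, img)   # keyed in nm
```

For an optimized hologram, plotting `spectrum` against wavelength would reproduce the shape of the correlation spectrum in Fig. 1c, with a peak at the design wavelength.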

First, we tuned the thickness distribution of a CGH to generate a holographic color image at a single observation plane. The wavelengths of the color image are 670 nm for the letter A, 700 nm for the letter B, and 730 nm for the letter C. The transmitted images from the hologram plane are projected onto a white screen 350 \(\upmu\)m away from the surface of the CGH. The thickness optimization of the hologram was performed with four full scans of all the hologram pixels, which lasted 3.6 h with the local search optimization algorithm. The holographic images of the letters are presented in Fig. 3a–c. The intensity values on the pixels of the letter figures are higher than the background, and we observe the formation of three holographic images with high contrast. To generate these images, we use the correlation coefficient of Eq. (1) as a cost function with the MEAN logic operation. When the AND logic operation is employed, the holographic images present a mean correlation of 81.1% (Fig. 4). With the MEAN logic operation, we obtained a higher mean correlation value of 93.3%, since this criterion directly increases the mean of the correlation coefficients over all design wavelengths. The images of letters A, B, and C show correlation coefficients of 93.8%, 91.8%, and 94.4% in Fig. 4, respectively.

Figure 3

Generation of a holographic color image at a single observation plane. Ground-truth images of (a) letter A, (b) letter B, and (c) letter C. The CHoloNet based holographic images of (d) letter A, (e) letter B, and (f) letter C.

Figure 4

Change of correlation coefficients at design wavelengths of the holographic color image with two logic operations: AND and MEAN.

Figure 5

Correlation spectra of letter images A, B, and C when illumination wavelength spans between 650 and 750 nm.

When the designed hologram is illuminated at a wavelength of 670 nm, we see low correlation coefficients of 36.8% between letters A and B and 15.3% between letters A and C on the image plane of letter A (Fig. 5). However, cross-talk in Fig. 3a is not visible even though there is a correlation of 36.8% between letters A and B, and we observe only the formation of letter A, without letters B and C. The ideal, uniform, and binary holographic image of letter A shows correlation coefficients of 34.8% and 12.0% with letters B and C, respectively. These values arise because a large portion of the intensity image pixels have zero intensity; moreover, the spatial similarity of the letters leads to a correlation, as expected. Therefore, the correlation coefficients of 36.8% and 15.3% between letters A and B and letters A and C are not surprising. A similar situation occurs for letters A and C when the illumination wavelength is 700 nm. There is no sign of the formation of letters A and C in Fig. 3b, but there are correlation coefficients of 51.8% between letters B and A and 38.8% between letters B and C in Fig. 5. The ideal, uniform, and binary holographic image of letter B shows a correlation coefficient of 44.7% with letter C. In the same figure, when the same hologram is illuminated at a wavelength of 730 nm, the correlation coefficients between letters C and A and letters C and B are 54.2% and 15.7%, respectively. However, there is no sign of the formation of letters A and B on the image plane of letter C in Fig. 3c.

While tuning the thickness profile of the hologram for the holographic images of letters A, B, and C, we stored the intensity images and holograms at each optimization attempt. With the collected data set, we tuned the weights of the CHoloNet to reconstruct object information from an intensity color holographic image. The CHoloNet yields accuracies of 99.7% on the training and validation data sets. We reconstruct a hologram with the CHoloNet that shows a correlation coefficient of 99.9% with the ground-truth hologram. The hologram reconstructed by the CHoloNet is high-fidelity object information obtained within 2 s. When the reconstructed hologram is illuminated with a light source at 670 nm, 700 nm, and 730 nm, we see the three holographic images presented in Fig. 3d–f. These three holographic images of letters are obtained with the reconstructed hologram using FKDI and Eqs. (S1–S3) (see the supplementary material). These holographic images present correlation coefficients of 99.9% with the ground-truth images. Using the CHoloNet, we thus reconstruct within 2 s a hologram structure that provides a color image at a single observation plane. We proved the generalization ability of the CHoloNet using this data set and discuss the results in the supplementary material. This holographic structure behaves as a color filter to simultaneously achieve a full-color holographic image with no cross-talk. Moreover, we see twin-image-free holographic intensity images because the recovery of phase and amplitude information from the holographic intensity images is eliminated.

Next, we reconstructed a hologram structure that images three different figures at different observation planes (Fig. 6). The holographic images are arranged in order from letter A (350 \(\upmu\)m) through letter B (400 \(\upmu\)m) to letter C (450 \(\upmu\)m), where letter A is the nearest to the hologram plane and letter C the farthest from it. The CHoloNet provides accuracies of 99.7% on the training and validation data sets, and the hologram reconstructed with the CHoloNet has a correlation coefficient of 98.5% with the ground-truth hologram. The reconstructed hologram is then illuminated with a light source emitting at a single wavelength of 700 nm. With FKDI, we numerically calculate the holographic intensity images seen at 700 nm at the three observation planes. As seen in Fig. 6, we obtained clear and high-contrast holographic images of the letters at the predefined observation planes: letters A, B, and C form at 350 \(\upmu\)m, 400 \(\upmu\)m, and 450 \(\upmu\)m, respectively. Moreover, holographic images do not form outside the predefined observation planes (Fig. 7). In this figure, we observe background-level correlation coefficients for images A and B and images A and C at the observation plane of image A, 350 \(\upmu\)m away from the hologram plane. The same holds at the observation planes of images B and C. These three holographic images are almost identical to the ground-truth images and present correlation coefficients of 99.9% with them.

Figure 6

The CHoloNet based monochrome holographic images at a wavelength of 700 nm and different observation planes.

Figure 7

Variation of correlation coefficient for CHoloNet based monochrome holographic images with different observation planes.

Figure 8

Variation of the CHoloNet based holographic images with wavelength and observation plane distance.

Figure 9

Correlation coefficient change of the CHoloNet based holographic images with (a) observation plane and (b) wavelength.

Lastly, the CHoloNet provides the reconstruction of a hologram that generates a color holographic image at three observation planes (Fig. 8). The letters A, B, and C are seen 390 \(\upmu\)m, 370 \(\upmu\)m, and 350 \(\upmu\)m away from the hologram plane, respectively. Using the training data set of 51,200 color images and holograms, the CHoloNet yields accuracies of 99.4% on the training and validation data sets. The hologram designed with the CHoloNet has a correlation coefficient of 98.9% with the ground-truth hologram. When the reconstructed hologram is illuminated with a light source emitting at three wavelengths (670 nm, 700 nm, and 730 nm), a color holographic image is seen at the aforementioned observation planes in Fig. 8. The letter A forms 390 \(\upmu\)m away from the hologram plane at a wavelength of 670 nm owing to the spatial and spectral information encoded into the reconstructed hologram. Meanwhile, the letter B forms at a wavelength of 700 nm and an observation plane of 370 \(\upmu\)m, and the same hologram images the letter C at a wavelength of 730 nm and an observation plane of 350 \(\upmu\)m, as seen in Fig. 8. These holographic images present high correlation coefficients of 99.9% with the ground-truth images. The formation of the images is highly sensitive to the design parameters, and the images are seen only at the predefined observation planes (Fig. 9a). Figure 9a shows that letters A, B, and C are generated only at observation planes of 390 \(\upmu\)m, 370 \(\upmu\)m, and 350 \(\upmu\)m, respectively. When the same hologram is illuminated by a broadband light source between 630 and 770 nm, we observe the formation of holographic images only at the predefined wavelengths (Fig. 9b), yielding cross-talk-free color holographic images at multiple observation planes.

The deep learning model used in this study decodes three-dimensional information of an object from an intensity-only recording. Our results demonstrate, both qualitatively and quantitatively via the correlation coefficient, the effectiveness of the CHoloNet framework for forming holographic images with different properties such as wavelength, figure, and display plane. The CHoloNet does not expand the solution set constrained by physics; instead, it finds the best solution within the data set given the desired intensity distribution. Compared to exhaustive search algorithms, the CHoloNet creates holograms far more quickly. The developed neural network architecture may shorten the design duration of a holographic image obtained with a hologram displayed on a spatial light modulator or a digital micro-mirror device. This work can be further extended to provide holographic images for multiple observers at oblique viewing angles. Also, if the phase modulation capability of a hologram is extended by increasing the number of thickness levels in the hologram, the clarity of the holographic images improves. We believe the CHoloNet can be used to retrieve the imaginary (phase) and real (intensity) parts of light from an intensity-only holographic image. The model we develop may be used for hyperspectral image generation, which would increase the data transfer rate. Similarly, the method can be applied to control optical microcavity arrays or quantum dot arrays placed at different longitudinal and lateral positions43.

Conclusion

In this work, we present color holographic image generation with a 30 nm wavelength step to obtain cross-talk-free images. Using a single deep learning architecture, the CHoloNet, we demonstrate holographic image formation not only at different frequencies but also at different observation planes. The CHoloNet enables us to obtain phase-only holographic structures that generate holographic intensity images in 2 s. Moreover, thanks to the data-driven statistical retrieval behavior of deep learning, we reconstruct hologram/object information from holographic images without requiring the phase and amplitude information of the recorded holographic intensity images. Compared to iterative optimization methodologies, our neural network produces holograms several orders of magnitude faster and with an accuracy of up to 99.9%. The reconstructed holograms give reproduced intensity patterns of superior quality. We believe our work will inspire a variety of fields, such as biomedical sensing, information encryption, and atomic-level imaging, that benefit from the frequent accommodation of optical holographic images and object/hologram recovery.