Accelerated one-step generation of full-color holographic videos using a color-tunable novel-look-up-table method for holographic three-dimensional television broadcasting

A color-tunable novel-look-up-table (CT-NLUT) method for fast one-step calculation of full-color computer-generated holograms (CGHs) is proposed. The proposed method is composed of four principal fringe patterns (PFPs): a baseline-PFP, a depth-compensating-PFP and two color-compensating-PFPs. CGH patterns for one color are calculated by combined use of the baseline-PFP and the depth-compensating-PFP, and from them, those for the two other colors are generated by multiplication with the corresponding color-compensating-PFPs. The color-compensating-PFPs compensate for the wavelength difference between two colors based on their unique achromatic thin-lens property, enabling transformation of the CGH pattern of one color into those of the other colors. This color-conversion property enables simultaneous generation of full-color CGH patterns, resulting in a significant reduction of the full-color CGH calculation time. Experimental results with a test scenario show that the full-color CGH calculation time of the proposed CT-NLUT has been reduced by 45.10% compared to that of the conventional NLUT. It has been further reduced by 96.01% when a data compression algorithm, called the temporal redundancy-based NLUT, is used together, which corresponds to a 25-fold reduction of the full-color CGH calculation time. Successful computational and optical reconstructions of full-color CGH patterns confirm the feasibility of the proposed method.

Scientific Reports | 5:14056 | DOI: 10.1038/srep14056

The total calculation time can be shortened not only by reducing the number of object points to be calculated, but also by reducing the CGH calculation time itself. Thirty video frames of a test 3-D video scenario, in which an airplane flies over clouds in the sky, are generated using 3DS MAX. In the experiments, CGH patterns for the R-color are calculated first, and color-compensation processes are then applied to them to generate the two other CGH patterns for the G- and B-colors. That is, these G- and B-color CGH patterns are generated from the pre-calculated R-color CGH patterns simply by multiplication with their respective color-compensating-PFPs, which compensate for the wavelength differences between R and G, and between R and B, respectively. Figure 1 shows intensity and depth images of the 1st, 10th, 20th and 30th frames of the test scenario. Here, the resolution of the 3-D video frames of the test scenario is assumed to be 500 × 400 × 256 pixels, and each CGH pattern to be generated is assumed to have a resolution of 2,000 × 2,000 pixels with a pixel size of 10 μm × 10 μm. Horizontal and vertical discretization steps of less than 30 μm (100 mm × 0.0003 = 30 μm) are chosen since the viewing distance is assumed to be 100 mm. Thus, to fully display the fringe patterns, the PFP must be shifted by 1,500 pixels (500 × 3 pixels) horizontally and 1,200 pixels (400 × 3 pixels) vertically, so the total resolution of the 2-D PFP becomes 3,500 (2,000 + 1,500) × 3,200 (2,000 + 1,200) pixels [35]. In the proposed CT-NLUT method, however, the 1-D sub-PFP is used, and its resolution becomes only 3,500 (2,000 + 1,500) × 1 pixels [44].

Performance analysis of the proposed method. As mentioned above, the total calculation time required for generation of full-color CGH patterns can be shortened by three consecutive processes: video data compression, single-color CGH calculation and three-color CGH pattern calculation.
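The PFP resolutions quoted above follow directly from the stated parameters. A minimal sketch; the 0.0003 rad angular step (roughly one arcminute of visual resolution) is an assumption consistent with the quoted 30 μm figure:

```python
pixel_pitch = 10e-6                 # 10 um x 10 um CGH pixel
viewing_distance = 100e-3           # 100 mm viewing distance
angular_step = 0.0003               # ~1 arcmin visual resolution (assumed)
step = viewing_distance * angular_step          # 30 um discretization step
shift = round(step / pixel_pitch)               # 3-pixel shift per object point
img_w, img_h = 500, 400             # object-plane resolution
cgh_w, cgh_h = 2000, 2000           # CGH resolution
# 2-D PFP must cover the CGH plus the maximum horizontal/vertical shifts
pfp_2d = (cgh_w + img_w * shift, cgh_h + img_h * shift)   # (3500, 3200)
# the CT-NLUT stores only a 1-D sub-PFP of the same horizontal extent
pfp_1d = (cgh_w + img_w * shift, 1)                       # (3500, 1)
print(pfp_2d, pfp_1d)
```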
The proposed CT-NLUT method, which is composed of four PFPs (a baseline-PFP, a depth-compensating-PFP and two color-compensating-PFPs), can calculate the single-color CGH pattern using the fast C-NLUT algorithm [43], as well as directly generate the three-color CGH patterns from this pre-calculated single-color CGH pattern. This means that the computational performance of the proposed method is optimized in both the single-color and three-color CGH calculation processes, and that it can be further enhanced when a video data compression algorithm is employed.

Results
For comparative performance analysis of the proposed method against the conventional methods, experiments are carried out for two cases, with and without a video data compression algorithm. In the first case, the performance of the proposed CT-NLUT method is compared with that of the conventional NLUT method without any video data compression. In the second case, the experiments are carried out under the condition that a video data compression algorithm called the temporal redundancy-based NLUT (TR-NLUT) is employed. In the first case, the conventional NLUT method separately calculates the CGH patterns for each color. In the proposed CT-NLUT method, however, only the R-color CGH pattern is calculated, using the fast C-NLUT algorithm [43]; the G- and B-color CGH patterns are then generated from this calculated R-color CGH pattern by multiplication with their respective color-compensating-PFPs. Table 1 shows the average numbers of calculated object points of the conventional NLUT and the proposed CT-NLUT methods for the first case. For generation of the three-color CGH patterns, the conventional method performs CGH calculation operations for 600,000 (200,000 × 3) object points; in other words, 600,000 multiplication, shifting and adding processes are carried out in the conventional method [35]. In the proposed method, however, the input object points are divided into three groups according to the R, G and B grey levels of each object point. As seen in Table 1, 8,478, 9,169 and 182,353 object points are found to have the same grey levels in three, two and one colors, respectively.
The proposed method first calculates the R-color CGH pattern. The G- and B-color CGH patterns are then generated by two color-compensation processes for the object points having the same grey levels in all color components; by one color-compensation and one grey level-compensation process for the object points having the same grey levels in two color components; and by two color-compensation and two grey level-compensation processes for the object points having different grey levels in all color components.
The total number of CGH calculation operations of the proposed method for generating the three-color CGH patterns becomes 574,131, which is the sum of 200,000 operations for generation of the R-color CGH pattern, 256 for color-compensation and 373,875 (9,169 + 182,353 × 2) for grey level-compensation. That is, the number of CGH calculation operations of the proposed method has been reduced compared to that of the conventional method. Moreover, as discussed in the 'Methods' section, color-compensation operates on a depth-by-depth basis, which means it can be done for all object points on the same depth plane at once, simply by multiplication with the respective color-compensating-PFPs. Grey level-compensation operates on a point-by-point basis, but it also consists of simple multiplications, unlike the conventional method, which requires a series of multiplication, shifting and adding processes. This computational superiority of the proposed method in the calculation of both single- and three-color CGH patterns allows a great reduction of the overall full-color CGH calculation time. Table 2 shows comparison results on the average calculation time per object point of the NLUT and CT-NLUT methods, which are estimated to be 60.67 ms and 42.06 ms, respectively. This means the proposed method achieves a 30.67% reduction of the calculation time per object point compared to the conventional method. As the number of object points having different grey levels in all colors decreases, the corresponding number of color- and grey level-compensation processes also decreases, which results in a further reduction of the CGH calculation time.
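The operation counts above can be verified in a few lines, using the figures stated in the text:

```python
# object-point counts from Table 1 of the text (test scenario, 200,000 points)
same_three, same_two, same_one = 8_478, 9_169, 182_353
total_points = same_three + same_two + same_one        # 200,000 points

# conventional NLUT: one full shift-and-add pass per color
conventional_ops = 3 * total_points                    # 600,000 operations

# proposed CT-NLUT: one R-color pass, 256 depth-wise color-compensations,
# plus grey level-compensations for points not sharing all three grey levels
proposed_ops = total_points + 256 + (same_two + 2 * same_one)
print(conventional_ops, proposed_ops)  # 600000 574131
```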
Now, the calculation time per object point of the proposed method can be further reduced by decreasing the number of object points to be calculated for the single- and three-color CGH patterns, using one of the video data compression algorithms [15][16][17][18][40]. Thus, in the second case, the TR-NLUT algorithm [15] is employed in both the NLUT and CT-NLUT methods, which are then called TR/NLUT and TR/CT-NLUT, respectively. That is, the input 3-D video data are compressed with the TR-NLUT algorithm, and the single- and three-color CGH patterns for the compressed object points are calculated with each of the NLUT and CT-NLUT methods. Unlike the first case, the proposed TR/CT-NLUT calculates the three-color CGH patterns only for the compressed video data, so its computational performance is expected to be further enhanced. Here, the TR/CT-NLUT method is optimized in all three processes: video data compression, single-color CGH calculation and three-color CGH calculation. For the performance comparison, the three-color CGH patterns for the compressed object data are also calculated using the conventional TR/NLUT method. Table 1 also shows the average numbers of calculated object points of the TR/NLUT and TR/CT-NLUT methods. As seen in Table 1, the total number of CGH calculation operations of the TR/CT-NLUT in the second case (including 18,655 × 2 grey level-compensation operations) has been significantly reduced by 85.78% compared to that of the CT-NLUT in the first case for the test scenario. Table 2 shows comparison results on the average calculation time per object point of the TR/NLUT and TR/CT-NLUT methods, which are calculated to be 12.30 ms and 2.55 ms, respectively. This means the calculation time per object point of the proposed TR/CT-NLUT has been reduced by 79.27% compared to that of the conventional TR/NLUT method. Table 3 shows the detailed composition of the calculation time per object point of the conventional NLUT and TR/NLUT and the proposed CT-NLUT and TR/CT-NLUT methods.
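The temporal-redundancy idea can be sketched as follows: only object points whose intensity or depth changed between consecutive frames are passed on to the CGH calculation, while unchanged points reuse the previous frame's contribution. This is a minimal illustration of the concept, not the authors' implementation:

```python
import numpy as np

def tr_compress(prev, curr):
    """Return the coordinates and values of object points that changed
    between two consecutive frames (sketch of temporal-redundancy
    compression).  Frames are H x W x C arrays, e.g. (intensity, depth)."""
    changed = np.any(prev != curr, axis=-1)   # True where any channel differs
    ys, xs = np.nonzero(changed)
    return xs, ys, curr[ys, xs]

# toy example: a single object point changes between two 4x4 frames
prev = np.zeros((4, 4, 2), dtype=np.uint8)    # channels: (intensity, depth)
curr = prev.copy()
curr[1, 2] = (255, 10)                        # one point appears/moves
xs, ys, vals = tr_compress(prev, curr)
print(xs, ys)  # only the changed point survives compression
```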
Here, the calculation time per object point is composed of the preprocessing time for video data compression and the calculation times for generation of the R-color CGH pattern and of both G- and B-color CGH patterns. As seen in Table 3, the total full-color CGH calculation time of the conventional NLUT has been calculated to be 60.67 ms, which is the sum of the R-color CGH calculation time of 20.22 ms and the G- and B-color CGH calculation time of 40.45 ms. In other words, the total full-color CGH calculation time of the conventional NLUT equals three times the R-color CGH calculation time.
The total full-color CGH calculation time of the CT-NLUT has, however, been reduced to 42.06 ms, consisting of 18.12 ms for the R-color CGH calculation and 23.94 ms for both G- and B-color CGH calculations. The reduction of the R-color CGH calculation time from 20.22 ms to 18.12 ms results from the fact that the CT-NLUT method calculates the R-color CGH pattern using the fast C-NLUT algorithm. In addition, the CGH calculation time for both G- and B-colors is found to be much less than twice that for the R-color, because the proposed method needs no separate CGH calculation processes for the G- and B-colors; instead, it requires only one-step color- and grey level-compensation processes.
In the second experiment, the total full-color CGH calculation time of the TR/NLUT method has been estimated to be 12.30 ms, composed of three processing times: 0.07 μs for the preprocessing, 4.10 ms for the R-color CGH calculation and 8.19 ms for the G- and B-color CGH calculations. Among them, the preprocessing time is small enough to be ignored compared to the CGH calculation times. Since the TR/NLUT calculates the CGH patterns only for the compressed object data, its R-color CGH calculation time has been reduced to 4.10 ms from the 20.22 ms of the conventional NLUT method. Accordingly, the G- and B-color CGH calculation time has been calculated to be 8.19 ms, which is twice the R-color CGH calculation time, just as in the NLUT method.
On the other hand, in the proposed TR/CT-NLUT method, the total three-color CGH calculation time has been estimated to be the lowest value of 2.55 ms, composed of 0.07 μs, 1.56 ms and 0.99 ms for the preprocessing, the R-color CGH calculation and the G- and B-color CGH calculation, respectively. These smallest CGH calculation times result from the fact that the fast C-NLUT algorithm is used for calculating the R-color CGH pattern only for the compressed object data, and that simple one-step color- and grey level-compensation operations are employed for generation of the G- and B-color CGH patterns. In brief, the total full-color CGH calculation times of the proposed CT-NLUT and TR/CT-NLUT methods have been reduced to 69.33% and 4.20%, respectively, on average, of that of the conventional NLUT method. In addition, Fig. 2 shows the frame-by-frame variations of the calculation time per object point of the conventional and proposed methods. As discussed above, the calculation time per object point of the CT-NLUT is clearly much smaller than that of the NLUT in all frames of the test scenario.
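The percentage figures above follow directly from the component times listed in Table 3:

```python
# component times in ms, as quoted from Table 3 of the text
nlut_total = 20.22 + 40.45   # conventional NLUT: R + (G & B) = 60.67 ms
ct_total   = 18.12 + 23.94   # CT-NLUT:           R + (G & B) = 42.06 ms
tr_ct      = 1.56 + 0.99     # TR/CT-NLUT (preprocessing ~0.07 us, ignored)

print(round(ct_total / nlut_total * 100, 2))  # 69.33 (% of NLUT time)
print(round(tr_ct / nlut_total * 100, 2))     # 4.2   (% of NLUT time)
```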
As seen in Fig. 2, the NLUT and TR/NLUT methods, and likewise the CT-NLUT and TR/CT-NLUT methods, have the same calculation time per object point for the first frames, since temporal-redundancy compression takes effect only from the second frame onward. The first-frame calculation times per object point of the CT-NLUT and TR/CT-NLUT are, however, much smaller than those of the NLUT and TR/NLUT, because the full-color CGH patterns are calculated with the fast one-step process.

Table 2 also shows comparison results on the memory capacity required for the conventional and proposed methods. As seen in Table 2, the memory capacities of the conventional NLUT and TR/NLUT methods and of the proposed CT-NLUT and TR/CT-NLUT methods have been calculated to be 8.01 GB (3,500 × 3,200 × 8 bit × 768 = 8.01 GB) and 54.69 KB (3,500 × 1 × 8 bit × 8 = 54.69 KB), respectively. This means that a 1.54 × 10^5-fold reduction of the memory capacity has been achieved in the proposed method compared to the conventional method, owing to the fact that only eight 1-D sub-PFPs are pre-calculated and stored in the proposed method [35][44].

Computational reconstruction of full-color CGH patterns. To confirm the feasibility of the proposed method in practical applications, 3-D video images have been computationally reconstructed from the full-color CGH patterns generated with the proposed TR/CT-NLUT method. Figure 3(a) shows the computationally reconstructed 3-D images of the 1st, 10th, 20th and 30th video frames, respectively. All object images have been successfully reconstructed in both color and resolution.
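A computational reconstruction of this kind can be sketched with the angular spectrum propagation method, reconstructing each color channel at its own wavelength. The wavelengths, pitch and array sizes below are illustrative stand-ins, not the authors' parameters:

```python
import numpy as np

def angular_spectrum(field, wavelength, pitch, distance):
    """Propagate a complex field by `distance` using the angular
    spectrum method, suppressing evanescent components."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2.0 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.where(arg > 0, np.exp(1j * kz * distance), 0.0)  # transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

# reconstruct each color channel at its own wavelength, then stack to RGB
cgh = {c: np.random.rand(256, 256) for c in "RGB"}     # stand-in CGH patterns
wavelengths = {"R": 633e-9, "G": 532e-9, "B": 473e-9}  # assumed laser lines
recon = np.dstack([np.abs(angular_spectrum(cgh[c], wavelengths[c],
                                           10e-6, 0.1)) for c in "RGB"])
```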

Conclusions
We proposed a new TR/CT-NLUT method for fast one-step calculation of full-color holographic videos of 3-D scenes, based on the unique achromatic thin-lens property of its color-compensating-PFPs. Experimental results with the test 3-D video scenario show that the full-color CGH calculation time of the proposed TR/CT-NLUT method has been reduced by 95.58%, on average, compared to that of the conventional NLUT method, which means a 25-fold reduction of the full-color CGH calculation time. In addition, the memory capacity of the proposed method has been reduced by 2.93 × 10^5-fold compared to that of the conventional NLUT. Successful experimental results on the computational and optical reconstruction of full-color CGH patterns generated with the proposed method confirm its feasibility in practical application fields.

Figure 4 shows an overall block diagram of the proposed CT-NLUT method, which largely consists of four steps. First, the input full-color 3-D video frames are divided into three color components (R, G and B). Second, the CGH pattern for the reference color (R) image is generated by combined use of the baseline-PFP and the depth-compensating-PFP. Third, the CGH patterns for each of the G- and B-colors are simultaneously generated just by multiplying the calculated R-color CGH pattern by the corresponding color-compensating-PFPs. Finally, full-color 3-D object images are reconstructed from these three-color CGH patterns. In practice, the input 3-D video data are first compressed using a data compression algorithm and then fed into the CGH calculation process of Fig. 4.

Methods
As mentioned above, in the NLUT method a 3-D object is approximated as a set of discretely sliced depth image planes having different depths. Then, only the fringe patterns of the object points located at the center of each depth plane, which are called PFPs, are pre-calculated and stored.
Therefore, the unity-magnitude PFP of the R-color for the object point (x_0, y_0, z_p), positioned at the center of an image plane with depth z_p, T_R(x, y; z_p), can be defined as Eq. (1) [35]:

$$T_R(x, y; z_p) = \exp\!\left[\frac{j\pi}{\lambda_R z_p}\left\{(x - x_0)^2 + (y - y_0)^2\right\}\right] \qquad (1)$$

where λ_R represents the R-color wavelength. Here, if the wavelength difference between the R- and G-colors is defined by 1/Δλ_RG = 1/λ_G − 1/λ_R, then the G-color PFP, T_G, can be expressed by Eq. (2):

$$T_G(x, y; z_p) = \exp\!\left[\frac{j\pi}{\lambda_G z_p}\left\{(x - x_0)^2 + (y - y_0)^2\right\}\right] = T_{\Delta RG}(x, y; z_p)\, T_R(x, y; z_p) \qquad (2)$$

where λ_G denotes the G-color wavelength and T_ΔRG(x, y; z_p) = exp[jπ{(x − x_0)² + (y − y_0)²}/(Δλ_RG z_p)]. Equation (2) reveals that the G-color PFP, T_G, can be generated simply by multiplying the R-color PFP, T_R, by the PFP for the wavelength difference between the R- and G-colors, T_ΔRG, which is called the color-compensating-PFP for the G-color. Likewise, the B-color PFP, T_B, can be generated by Eq. (3):

$$T_B(x, y; z_p) = T_{\Delta RB}(x, y; z_p)\, T_R(x, y; z_p) \qquad (3)$$

where T_ΔRB represents the PFP for the wavelength difference between the R- and B-colors, which is called the color-compensating-PFP for the B-color.
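The color-conversion relation above can be checked numerically by modelling the PFPs as Fresnel chirps, consistent with the definitions in the text; the specific wavelengths below are typical laser lines, assumed for illustration:

```python
import numpy as np

lam_r, lam_g = 633e-9, 532e-9        # assumed R and G laser wavelengths
z = 0.1                              # depth plane at 100 mm
pitch = 10e-6                        # 10 um pixel pitch
x = (np.arange(256) - 128) * pitch
X, Y = np.meshgrid(x, x)
r2 = X**2 + Y**2                     # (x - x0)^2 + (y - y0)^2 with x0 = y0 = 0

d_lam = 1.0 / (1.0 / lam_g - 1.0 / lam_r)   # 1/d_lam = 1/lam_g - 1/lam_r
t_r   = np.exp(1j * np.pi * r2 / (lam_r * z))   # R-color PFP, Eq. (1)
t_drg = np.exp(1j * np.pi * r2 / (d_lam * z))   # color-compensating-PFP
t_g   = np.exp(1j * np.pi * r2 / (lam_g * z))   # G-color PFP

# the achromatic thin-lens property: T_G = T_dRG * T_R, Eq. (2)
assert np.allclose(t_drg * t_r, t_g)
```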
The CGH pattern for the object points located on each depth plane can be obtained simply by shifting the PFP according to the displacements of those object points from the center and adding the shifted copies together. Therefore, the R-color CGH pattern of a 3-D scene, I_R(x, y), can be expressed by Eq. (4) [35]:

$$I_R(x, y) = \sum_{p=1}^{N_z} a_{Rp}\, T_R(x - x_p,\, y - y_p;\, z_p) = \sum_{p=1}^{N_z} I_{Rp}(x, y) \qquad (4)$$

where a_Rp, N_z and I_Rp denote the intensity value of the R-color object point located at (x_p, y_p, z_p), the number of object points on the depth plane z and the R-color CGH pattern of the p-th object point, respectively.
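The shift-and-add accumulation of Eq. (4) can be sketched as follows. Note that np.roll wraps at the array borders, whereas the actual NLUT crops a larger pre-computed PFP, so this is a simplification:

```python
import numpy as np

def nlut_accumulate(points, pfps, cgh_shape):
    """Eq. (4) as a sketch: shift each depth plane's PFP so its centre
    lands on the object-point position, weight by the point's intensity,
    and accumulate into the CGH."""
    cgh = np.zeros(cgh_shape)
    for xp, yp, zp, a in points:
        shifted = np.roll(np.roll(pfps[zp], yp, axis=0), xp, axis=1)
        cgh += a * shifted
    return cgh

# toy example: one hypothetical real-valued PFP for a single depth plane
rng = np.random.default_rng(0)
pfps = {0: rng.standard_normal((64, 64))}
points = [(5, 3, 0, 1.0), (-7, 2, 0, 0.5)]   # (xp, yp, zp, a_Rp)
cgh = nlut_accumulate(points, pfps, (64, 64))
```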
The G-color CGH pattern can be calculated by using Eqs. (2) and (4), which is given by Eq.