Abstract
The multi-frequency temporal phase unwrapping (MFTPU) method, as a classical phase unwrapping algorithm for fringe projection techniques, has the ability to eliminate phase ambiguities even when measuring spatially isolated scenes or objects with discontinuous surfaces. In the simplest and most efficient case of MFTPU, two groups of phase-shifting fringe patterns with different frequencies are used: the high-frequency one is applied for 3D reconstruction of the tested object, and the unit-frequency one is used to assist phase unwrapping of the high-frequency wrapped phase. The final measurement precision or sensitivity is determined by the number of fringes used within the high-frequency pattern, under the precondition that its absolute phase can be successfully recovered without any fringe order errors. However, due to non-negligible noise and other error sources in actual measurement, the frequency of the high-frequency fringes is generally restricted to about 16, resulting in limited measurement accuracy. On the other hand, using additional intermediate sets of fringe patterns can unwrap a phase with higher frequency, but at the expense of a prolonged pattern sequence. With recent developments and advancements of machine learning for computer vision and computational imaging, we demonstrate in this work that deep learning techniques can automatically realize TPU through supervised learning, termed deep-learning-based temporal phase unwrapping (DLTPU), which substantially improves the unwrapping reliability compared with MFTPU even under different types of error sources, e.g., intensity noise, low fringe modulation, projector nonlinearity, and motion artifacts. Furthermore, to the best of our knowledge, we demonstrate experimentally for the first time that a high-frequency phase with 64 periods can be directly and reliably unwrapped from one unit-frequency phase using DLTPU.
These results highlight that challenging issues in optical metrology can potentially be overcome through machine learning, opening new avenues to design powerful and extremely accurate high-speed 3D imaging systems that are ubiquitous in today's science, industry, and multimedia.
Introduction
Many imaging systems, such as fringe projection profilometry (FPP)^{1,2,3}, optical interferometry^{4,5}, interferometric synthetic aperture radar (InSAR)^{6,7}, X-ray crystallography^{8}, and magnetic resonance imaging^{9}, make use of the phase to produce the physiological and physical information of the measured objects. For instance, in FPP, the phase is proportional to the surface profile; in optical interferometry, the phase can be exploited to infer the profile, fast displacement, and vibration of the object's surface. These imaging methods and systems generally perform an arctangent operation for phase retrieval, resulting in a wrapped phase with 2π phase jumps, so phase unwrapping is necessary to eliminate the phase ambiguities and convert the wrapped phases into absolute ones^{10,11,12,13,14,15}.
Numerous phase unwrapping algorithms have been proposed; they can be divided into two categories with regard to their working domains: spatial phase unwrapping (SPU)^{10,11} and temporal phase unwrapping (TPU)^{12}. Under the assumption of spatial continuity, SPU calculates the relative fringe order of the center pixel on a single wrapped phase map by analyzing the phase information of its neighboring pixels; thus, it cannot successfully measure discontinuous and isolated objects. Conversely, TPU approaches can realize pixel-wise absolute phase unwrapping via the temporal analysis of more than one wrapped phase map with different frequencies, even in the presence of truncated or spatially isolated areas. Currently, there are three representative approaches to TPU: the multi-frequency (hierarchical) approach (MFTPU), the multi-wavelength (heterodyne) approach, and the number-theoretical approach. We have analyzed and discussed the unwrapping success rate and anti-noise performance of these TPU algorithms in a comparative review, revealing that the MFTPU approach provides the highest unwrapping reliability and best noise robustness among them^{12}.
The subsequent content of this paper focuses on the MFTPU approach, with an emphasis on the application of high-speed FPP^{16,17}. In such a context, to improve the measurement efficiency, it is necessary to make MFTPU as reliable as possible while using a minimum number of projection patterns^{18}. In the simplest and most efficient case of MFTPU, two groups of phase-shifting fringe patterns with different frequencies are used: the high-frequency one is applied for 3D reconstruction of the tested object, and the unit-frequency one is used to assist phase unwrapping of the high-frequency wrapped phase. The final measurement precision or sensitivity is determined by the number of fringes used within the high-frequency pattern, under the precondition that its absolute phase can be successfully recovered without any fringe order errors. However, due to non-negligible noise and other error sources in actual measurement, the frequency of the high-frequency fringes is generally restricted to about 16, resulting in limited measurement accuracy^{12}. On the other hand, using an additional intermediate set of fringe patterns (three sets of phase-shifting patterns in total) can unwrap a phase with higher frequency or with a higher success rate^{18}. As a result, the increased number of required patterns reduces the measurement efficiency of FPP, which is unsuitable for measuring dynamic scenes.
In this work, we demonstrate that a trained deep neural network can greatly improve the reliability of TPU compared with conventional MFTPU. This learning-based framework uses only two wrapped phases (one unit-frequency, one high-frequency) calculated from 3-step phase-shifting fringe patterns as input, and directly outputs an unwrapped version of the high-frequency phase map with high reliability. Deep learning^{19} is a machine learning method based on data representations for analysis and prediction; it has been applied to various fields such as autonomous driving, face recognition, and machine translation, where it has produced results that surpass traditional algorithms and are comparable or, in some cases, superior to human experts. Recently, machine-learning-based methods have been further successfully applied to solving challenging problems in computational imaging^{20,21,22,23,24} and the analysis of nanostructured devices^{25,26,27}, such as phase retrieval^{20}, lensless on-chip microscopy^{21}, fringe pattern analysis^{22}, computational ghost imaging^{23,24}, and the design of electromagnetic nanostructures^{26}.
Inspired by the great successes of deep learning in these fields, here we adopt deep neural networks to tackle the TPU problem, which substantially improves the unwrapping reliability compared with MFTPU even in the presence of different types of error sources. To validate the proposed approach, we recover the absolute phases of various tested objects by projecting fringe patterns with different frequencies (1, 8, 16, 32, 48, and 64), all of which demonstrate the successful removal of phase unwrapping errors arising from intensity noise, low fringe modulation, intensity nonlinearity, and motion artifacts. Furthermore, to the best of our knowledge, we demonstrate experimentally for the first time that a high-frequency phase with 64 periods can be directly and reliably unwrapped from one unit-frequency phase, facilitating high-accuracy, high-speed 3D surface imaging with only 6 projected patterns, without exploiting any prior information or geometric constraint. These results highlight that machine learning can potentially overcome challenging issues in optical metrology, and provides new possibilities for designing powerful high-speed FPP systems.
Methods
Phase-shifting profilometry (PSP)
In a typical FPP system, sinusoidal-fringe-based methods are the most prevalent in a great variety of practical applications and can generally be divided into two main categories for phase extraction: Fourier transform profilometry (FTP)^{28} and phase-shifting profilometry (PSP)^{29}. Numerous dynamic 3D measurement techniques have been developed based on FTP, which has the advantage of providing the phase map from only a single high-frequency fringe pattern^{16,30}. However, suffering from the spectrum-overlapping problem, these methods generally yield a coarse wrapped phase of low quality, which limits the measurement precision for dynamic 3D acquisition. In addition, beyond the Fourier transform, the windowed Fourier transform (WFT) and the wavelet transform (WT) can also be applied for phase retrieval, enhancing 3D measurement accuracy even in the case of complex surfaces and depth discontinuities^{31}. Different from FTP, PSP can realize pixel-by-pixel phase measurements with higher accuracy, unaffected by ambient light, but it theoretically needs to project at least three fringe patterns to obtain a phase map^{29}. In this work, the standard 3-step phase-shifting fringe patterns with a phase-shift offset of 2π/3 are adopted and represented as
$$I_n^p(x^p, y^p) = 128 + 127\cos\left(2\pi f x^p/W - 2\pi n/3\right), \quad n = 0, 1, 2, \qquad (1)$$
where \({I}_{n}^{p}({x}^{p},{y}^{p})\,(n=0,1,2)\) represent the fringe patterns to be projected, f is the frequency (period number) of the fringe patterns, and W is the horizontal resolution of the projector. After being projected onto the object surfaces, the deformed fringe patterns captured by the camera can be described as
$$I_n(x, y) = A(x, y) + B(x, y)\cos\left[\Phi(x, y) - 2\pi n/3\right], \quad n = 0, 1, 2, \qquad (2)$$
where A(x, y), B(x, y), and Φ(x, y) are the average intensity, the intensity modulation, and the phase distribution of the measured object, respectively. According to the least-squares algorithm, the wrapped phase ϕ(x, y) can be obtained as^{32,33,34}:
$$\phi(x, y) = \arctan\frac{\sqrt{3}\,\left[I_1(x, y) - I_2(x, y)\right]}{2 I_0(x, y) - I_1(x, y) - I_2(x, y)}. \qquad (3)$$
Due to the truncation effect of the arctangent function, the obtained phase ϕ(x, y) is wrapped within the range of (−π, π], and its relationship with Φ(x, y) is:
$$\Phi(x, y) = \phi(x, y) + 2\pi k(x, y), \qquad (4)$$
where k(x, y) represents the fringe order of Φ(x, y), ranging from 0 to N − 1, and N is the period number of the fringe patterns (i.e., N = f). In FPP, the core challenge of absolute phase recovery is to obtain k(x, y) quickly and accurately for each pixel in the phase map.
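The 3-step phase retrieval just described can be sketched in a few lines of numpy. This is an illustrative recomputation under the stated 2π/3 phase shifts, not the authors' code; the function name and the synthetic phase map are ours:

```python
import numpy as np

def wrapped_phase_3step(I0, I1, I2):
    """Least-squares wrapped phase of a 3-step phase-shifting sequence,
    I_n = A + B*cos(Phi - 2*pi*n/3); returns phi in (-pi, pi] (Eq. (3))."""
    return np.arctan2(np.sqrt(3.0) * (I1 - I2), 2.0 * I0 - I1 - I2)

# Synthetic check: build three fringe images from a known absolute phase.
x = np.linspace(0.0, 1.0, 512, endpoint=False)
Phi = 2.0 * np.pi * 16 * x                      # absolute phase, 16 periods
I = [0.5 + 0.4 * np.cos(Phi - 2.0 * np.pi * n / 3.0) for n in range(3)]
phi = wrapped_phase_3step(*I)                   # wrapped version of Phi
# phi equals Phi up to an integer fringe order k, as in Eq. (4)
k = np.round((Phi - phi) / (2.0 * np.pi))
```

The quadrant-aware `arctan2` directly yields the (−π, π] range produced by the truncation effect discussed above.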
Multi-frequency temporal phase unwrapping (MFTPU)
In temporal phase unwrapping (TPU), the wrapped phase ϕ(x, y) is unwrapped with the aid of one or more additional wrapped phase maps with different frequencies. For instance, two wrapped phases ϕ_{h}(x, y) and ϕ_{l}(x, y), both retrieved by the phase-shifting algorithm using Eq. (3), range from −π to π. It is easy to find that the two absolute phases Φ_{h}(x, y) and Φ_{l}(x, y) corresponding to ϕ_{h}(x, y) and ϕ_{l}(x, y) have the following relationship:
$$\Phi_h(x, y) = \frac{f_h}{f_l}\,\Phi_l(x, y), \qquad (5)$$
where f_{h} and f_{l} are the frequencies of the high-frequency and low-frequency fringes, respectively. Based on the principle of MFTPU, k_{h}(x, y) can be calculated by the following formula:
$$k_h(x, y) = \frac{(f_h/f_l)\,\Phi_l(x, y) - \phi_h(x, y)}{2\pi}. \qquad (6)$$
Since the fringe order k_{h}(x, y) is an integer ranging from 0 to f_{h} − 1, Eq. (6) can be adapted as
$$k_h(x, y) = \mathrm{Round}\left[\frac{(f_h/f_l)\,\Phi_l(x, y) - \phi_h(x, y)}{2\pi}\right], \qquad (7)$$
where Round() is the rounding-to-nearest-integer operation. When f_{l} is 1, there is no phase ambiguity, so Φ_{l}(x, y) is inherently an unwrapped phase. Theoretically, for MFTPU, this single-period phase can be used to directly assist phase unwrapping of ϕ_{h}(x, y) with a relatively higher frequency. However, in practice, the phase unwrapping capability of MFTPU is greatly constrained by the influence of noise. Assuming the phase errors in ϕ_{h}(x, y) and Φ_{l}(x, y) are Δϕ_{h}(x, y) and Δϕ_{l}(x, y) respectively, from Eq. (6) we have:
$$\Delta k(x, y) = \frac{(f_h/f_l)\,\Delta\phi_l(x, y) - \Delta\phi_h(x, y)}{2\pi}. \qquad (8)$$
Let \(\Delta {\phi }_{{\max }}=\,{\max }(\Delta {\phi }_{h}(x,y),\Delta {\phi }_{l}(x,y))\); from Eq. (8) we can find the upper bound of Δk(x, y):
$$|\Delta k(x, y)| \le \frac{\Delta\phi_{\max}}{2\pi}\left(\frac{f_h}{f_l} + 1\right). \qquad (9)$$
To avoid errors in determining the fringe orders, from Eqs. (7) and (9) we have:
$$\frac{\Delta\phi_{\max}}{2\pi}\left(\frac{f_h}{f_l} + 1\right) < \frac{1}{2}. \qquad (10)$$
Subsequently, we can confirm the boundary of \(\Delta {\phi }_{max}(x,y)\):
$$\Delta\phi_{\max} < \frac{\pi}{f_h/f_l + 1}. \qquad (11)$$
Notably, Eq. (11) defines the range of Δϕ_{max} within which the absolute phase can be correctly recovered; otherwise, errors will occur in determining the exact k_{h}(x, y). In MFTPU, since the frequency of the low-frequency fringes is fixed to 1, it can be found from Eq. (11) that the higher the frequency of the high-frequency fringes, the narrower the range of Δϕ_{max}, and the worse the reliability of the phase unwrapping. Consequently, for a normal FPP system, MFTPU can only reliably unwrap a phase with about 16 periods due to the non-negligible noise and other error sources in actual measurement. Thus, instead of using only the single-period phase, multiple (>2) sets of phases with different frequencies are generally exploited to hierarchically unwrap the wrapped phase step by step, finally arriving at the high-frequency absolute phase. Obviously, MFTPU, which consumes additional time for projecting patterns with intermediate frequencies, is not a good choice for realizing high-speed, high-precision 3D shape measurement based on FPP.
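The two-frequency unwrapping rule of Eqs. (5)–(7) and the reliability bound of Eq. (11) can be sketched in a few lines of numpy. This is an illustrative recomputation, not the authors' code; the function name is ours:

```python
import numpy as np

def mftpu_unwrap(phi_h, Phi_l, f_h, f_l=1.0):
    """Two-frequency MFTPU: unwrap the high-frequency wrapped phase phi_h
    with the help of the absolute low-frequency phase Phi_l (Eqs. (5)-(7))."""
    k_h = np.round(((f_h / f_l) * Phi_l - phi_h) / (2.0 * np.pi))  # Eq. (7)
    return phi_h + 2.0 * np.pi * k_h                               # Eq. (4)

# Eq. (11): with f_l = 1, the tolerable phase error shrinks as f_h grows,
# which is why plain MFTPU is limited to roughly 16 periods in practice.
for f_h in (8, 16, 32, 64):
    print(f_h, np.pi / (f_h + 1.0))
```

In the noise-free case the rounding in Eq. (7) recovers the fringe order exactly; any phase error larger than the printed bound can flip k_h by one and produce a 2π unwrapping error.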
Deep-learning-based temporal phase unwrapping (DLTPU)
To address this problem, we use a deep neural network (DNN) to overcome the limitations of MFTPU; the specific diagram of the proposed method is shown in Fig. 1. The input data of the network are the two wrapped phases of single period and high frequency, the same as in two-frequency TPU. To realize the highest unwrapping reliability, we adopt the residual network as the basic skeleton of our neural network^{35}, which can speed up the convergence of deep networks and improve network performance by adding layers of considerable depth. Then, we introduce multi-scale pooling layers to down-sample the input tensors, which compress and extract the main features of the tensors, reducing the computational complexity and preventing overfitting. Correspondingly, the tensor sizes in the different paths become inconsistent after the pooling layers; therefore, up-sampling blocks are used to make the tensor sizes in the respective paths uniform (see Supplementary Section 1 for details)^{36}. In summary, our network mainly consists of convolution layers, residual blocks, pooling layers, up-sampling blocks, and concatenation layers. To maximize the efficiency of the model, after repeatedly adjusting the hyperparameters of the network (number of layers and nodes), we found that the number of residual blocks in each path should be set to 4, and the base number of filters in the convolution layers should be 50. The tensor data of each path in the network are down-sampled by factors of 1, 1/2, 1/4, and 1/8 using pooling layers of different scales, and then different numbers of up-sampling blocks are adopted to make the tensor sizes in the corresponding paths uniform. Besides, it has been found that implementing shortcuts between residual blocks helps make the convergence of the network more stable.
Furthermore, to avoid overfitting, a common problem of deep neural networks, L2 regularization is adopted in each convolution layer of the residual blocks and up-sampling blocks, instead of in all convolution layers of the proposed network, which enhances the generalization ability of the network.
Although the purpose of building the network is to achieve phase unwrapping and obtain the absolute phase, there is no need to directly set the absolute phase as the network's label. Since Φ_{h}(x, y) is simply the linear combination of k_{h}(x, y) and ϕ_{h}(x, y) according to Eq. (4), Φ_{h}(x, y) can be obtained immediately once k_{h}(x, y) is known. With k_{h}(x, y) set as the output data of the network, the task of our network becomes semantic segmentation^{37}, i.e., pixel-wise classification. It is easy to understand that the complexity of the network is thereby greatly reduced, so that the loss of the network converges faster and more stably, and the prediction accuracy of the network is effectively improved. Different from traditional SPU and TPU, where phase unwrapping is performed by utilizing the phase information solely in the spatial or temporal domain, our proposed deep-neural-network-based method is able to learn feature extraction and data screening, and thus can exploit the phase information in the spatial and temporal domains simultaneously, providing more degrees of freedom and possibilities to achieve significantly better unwrapping performance (refer to Supplementary Section 3 for details).
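Viewed as semantic segmentation, the network's per-pixel class scores over the f_h possible fringe orders reduce to k_h by an argmax, after which Eq. (4) yields the absolute phase. A minimal numpy sketch of this post-processing step (the array and function names are ours, for illustration only):

```python
import numpy as np

def scores_to_absolute_phase(scores, phi_h):
    """scores: (H, W, f_h) per-pixel class scores for the fringe order,
    phi_h:  (H, W) high-frequency wrapped phase.
    Pixel-wise classification (argmax) gives k_h; Eq. (4) gives Phi_h."""
    k_h = np.argmax(scores, axis=-1)     # fringe order in 0 .. f_h - 1
    return phi_h + 2.0 * np.pi * k_h     # Eq. (4)
```

Because the label is the integer fringe order rather than the continuous absolute phase, the output layer only has to pick one of f_h classes per pixel.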
Then, using Eq. (4), Φ_{h}(x, y) is obtained and converted into 3D results through phase-to-height mapping. In preparation for phase-to-height mapping, the projection matrices of the camera and projector need to be obtained through system calibration^{38,39}. Besides, in order to speed up the reconstruction, we suggest implementing the phase-to-height mapping on a graphics processing unit^{40} or with several lookup tables^{41}, which can greatly reduce the time cost of the 3D reconstruction.
Results
Quantitative comparison with MFTPU
In the first experiment, to verify the actual performance of the proposed DLTPU, the trained DNN models for phase unwrapping with different high-frequency fringes are used to make predictions on the testing dataset (200 image pairs) (refer to Supplementary Section 2 for details), and MFTPU is also implemented for comparison. In order to quantitatively analyze the phase unwrapping accuracy of DLTPU and MFTPU, the phases with different high frequencies are independently unwrapped by the two algorithms, and the average error rates of phase unwrapping on the testing dataset are calculated and plotted against f_h in Fig. 2(a). It should be noted that these results are calculated by comparing the differences between the obtained phases and the label phases only for each valid point of the testing dataset (refer to Supplementary Section 2 for identifying the valid points). The label phases, serving as the 'ground-truth' phases, are correctly acquired by exploiting multiple sets of phases with different frequencies to hierarchically unwrap the wrapped phase step by step. It can be seen from Fig. 2(a) that, with the increase of f_h, the unwrapping results of MFTPU deteriorate severely, with a substantial increase of the phase unwrapping error rate from 0 to 12.71%. This result shows again that MFTPU cannot successfully unwrap a phase map when f_h ≥ 16 due to the non-negligible noise and other error sources in actual measurement. In contrast, our approach always provides acceptable results, with more than 95% of all valid pixels being properly unwrapped. These experimental results confirm that, compared with MFTPU, our method achieves much better unwrapping results and decreases the phase unwrapping errors by almost an order of magnitude.
To reflect the specific performance of DLTPU and MFTPU more intuitively, the 3D reconstruction results after phase unwrapping for a representative sample of the testing dataset are illustrated and compared in Fig. 2(b), with the phase unwrapping error rates clearly visible in the background. It can be found from Fig. 2(b) that our approach provides the smallest phase unwrapping errors and a significant improvement of phase measurement quality with the period number f_h, as expected. It can be further observed that the fringe order errors are mostly concentrated in the dark regions and at object edges, where the fringe quality is low. Different from MFTPU, the phase unwrapping errors caused by low signal-to-noise ratio (SNR) regions of the phases are significantly reduced by DLTPU. In these low-SNR regions, the remaining phase errors tend to cluster and can easily be further corrected by compensation algorithms for fringe order errors^{42,43,44} (refer to Supplementary Section 4 for details of these compensation algorithms). Consequently, the trained models substantially decrease the number of error points to provide better phase unwrapping results (even for f_h = 64) and lower error rates, which demonstrates the capability and reliability of DLTPU for phase unwrapping.
Performance analysis under different types of phase errors
Intensity noise
In the following series of experiments, we further verify the superiority of DLTPU in the presence of different types of phase errors. In high-speed 3D measurement, the quality of the fringe patterns is poorer than in static measurement because they are projected and captured within a limited exposure time. To emulate practical measurement conditions, we measure a standard ceramic plate using DLTPU (f_h = 32) while artificially adjusting the camera's exposure time to 39 ms, 20 ms, 15 ms, and 10 ms. To better analyze and compare the reliability of the phase unwrapping results, the absolute phase map obtained using the 12-step phase-shifting algorithm combined with a highly redundant multi-frequency temporal phase unwrapping strategy (with frequencies of 1, 8, 16, and 32) serves as the reference phase. Next, the error rate of phase unwrapping and the variance of the phase error \({\sigma }_{\Delta {\phi }_{h}}\) for the different approaches are easily calculated by comparing the unwrapped phase with the reference phase for each valid point.
Obviously, as the exposure time decreases, the quality of the phase measurement drops significantly, as presented in Fig. 3(a,b). Since the exposure time is a key factor affecting the speed and quality of phase measurement, the shorter the exposure time the algorithm can withstand, the faster the measurement that can be achieved with six projection patterns in FPP. Therefore, a more robust phase unwrapping method is essential to eliminate the phase ambiguity under reduced exposure times and keep the phase unwrapping correct. In Fig. 3(c,d), it can be found that DLTPU always provides a higher success rate of phase unwrapping and a lower phase error \({\sigma }_{\Delta {\phi }_{h}}\) compared with MFTPU, making it more appropriate for high-speed 3D shape measurement applications.
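The comparison metrics used above, the unwrapping error rate against a reference phase and the spread of the residual phase error, can be reproduced with a short helper. This is an illustrative sketch, not the authors' evaluation code: `mask` marks the valid pixels, and counting a pixel as an unwrapping error when |ΔΦ| exceeds π (a wrong fringe order shifts the phase by a multiple of 2π) is one common criterion we assume here:

```python
import numpy as np

def unwrapping_metrics(Phi, Phi_ref, mask):
    """Error rate of phase unwrapping and std of the residual phase error,
    evaluated only over the valid pixels selected by `mask`.
    A pixel counts as an unwrapping error when its recovered fringe order
    differs from the reference, i.e. |Phi - Phi_ref| > pi."""
    diff = Phi[mask] - Phi_ref[mask]
    wrong = np.abs(diff) > np.pi
    error_rate = float(np.mean(wrong))
    sigma = float(np.std(diff[~wrong]))  # spread over correctly unwrapped pixels
    return error_rate, sigma
```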
Low fringe modulation
Another attractive attribute of DLTPU is its good tolerance to noise, which can significantly suppress phase unwrapping errors in low-fringe-modulation areas; such areas frequently appear in practical measurement of complex object surfaces, like the tested object shown in Fig. 4(a,b). For the low-modulation logo region, conventional MFTPU produces spiky results riddled with significant delta-spike artifacts, as shown in Fig. 4(c). In contrast, the DNN approach successfully overcomes the low-SNR problem and produces smooth measurement results with negligible errors, as shown in Fig. 4(d). This experimental result confirms once again that DLTPU provides superior capability and stability of phase unwrapping, suppressing unwrapping errors caused by low fringe modulation.
Intensity nonlinearity
In this section, we test the proposed DLTPU under different degrees of intensity gamma distortion. Gamma distortion, or intensity nonlinearity, is a common error source in FPP due to the nonlinear response of commercial projectors, which introduces high-order harmonics into the projected fringe patterns. The intensity of the fringes with gamma distortion can be expressed as
$$I_n^{p,\gamma}(x^p, y^p) = 255\left[\frac{I_n^p(x^p, y^p)}{255}\right]^{\gamma}, \qquad (12)$$
where γ represents the nonlinearity parameter characterizing the nonlinear response of the commercial projector. Then, we choose an industrial metal workpiece as the measured object to validate the resistance of DLTPU to gamma distortion. A set of fringe patterns with different nonlinearity parameters, ranging from 0.5 to 1.5, are generated using Eq. (12) and projected onto the measured object in Fig. 5(a). It can be found from the 3D results shown in Fig. 5(b) that MFTPU cannot provide acceptable phase unwrapping results even under low-level gamma distortion. On the contrary, DLTPU is able to achieve a nearly ideal phase unwrapping result even when γ is 0.8. It should also be noticed that, when γ is as low as 0.5 or as high as 1.5, neither of the two approaches can produce meaningful results, since the artificially introduced phase errors are much larger than the "safe line" below which no phase unwrapping errors are triggered, so that the success/error rate of unwrapping is about fifty-fifty. In Fig. 5(c–e), for phase unwrapping with different high frequencies (32, 48, and 64) under different degrees of intensity gamma distortion, the statistical curves of phase unwrapping for MFTPU are shown as solid lines, and the results significantly improved by DLTPU are shown as dashed lines. These results verify that our method can significantly reduce the fringe order errors of phase unwrapping and produce high-quality absolute phases even under a certain degree of gamma distortion in the FPP system.
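For completeness, the gamma distortion of Eq. (12) can be simulated on 8-bit fringe intensities with a one-liner. This is an illustrative sketch; `apply_gamma` and the synthetic fringe are our own names and data, not from the paper:

```python
import numpy as np

def apply_gamma(I, gamma):
    """Apply the power-law projector response of Eq. (12) to 8-bit
    fringe intensities; gamma = 1 leaves the pattern undistorted."""
    return 255.0 * (np.asarray(I, dtype=float) / 255.0) ** gamma

x = np.linspace(0.0, 1.0, 256, endpoint=False)
I0 = 128.0 + 127.0 * np.cos(2.0 * np.pi * 32 * x)   # ideal 8-bit sinusoidal fringe
I_gamma = apply_gamma(I0, 0.8)                      # distorted fringe, gamma = 0.8
```

The power law keeps the extremes 0 and 255 fixed while bending the mid-tones, which is exactly what injects high-order harmonics into the sinusoidal pattern.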
Application to high-speed 3D surface imaging
Finally, our system, which can project and capture the fringe images at a speed of 25 Hz, is applied to imaging some classical dynamic scenes for fast 3D reconstruction: objects with fast translational movement and rapid rotary motion. In Fig. 6(a), a standard ceramic plate, fixed on a precise displacement platform, undergoes periodic translational movement at a speed of 1.25 cm/s. In traditional MFTPU, it is much more difficult to recover the high-frequency absolute phase using only one unit-frequency phase, as shown in Fig. 6(c), due to the unavoidable noise in actual measurement. Therefore, to guarantee a stable phase unwrapping success rate for the high-frequency phase, three sets of phase-shifting fringe patterns, denoted MFTPU (3f), in which the frequency of the second set of fringe patterns is 8, are used to achieve high-accuracy but inefficient phase unwrapping. When measuring dynamic scenes, the relative motion between the object and the sequentially projected phase-shifting fringe patterns causes motion artifacts and thus introduces additional, non-negligible phase errors into the initial phase map, which become more severe as more patterns are projected, as presented in Fig. 6(c). However, without the assistance of additional patterns, the trained models can still achieve better phase unwrapping results, which illustrates the reliability and efficiency of DLTPU in Fig. 6(c). We take one cross-section of the 3D results of the ceramic plate to compare DLTPU with MFTPU and MFTPU (3f). From the comparison results shown in Fig. 6(d), it can be found that our approach provides the highest unwrapping reliability and best noise robustness among the compared methods.
Then, to measure rapid rotary motion, the statue of David is rotated counterclockwise at a rate of 3 rpm, as shown in Fig. 6(b). In Fig. 6(e), the experiment yields a result similar to that of the fast translational motion. It can be found from these results that high-quality 3D profile information of the ceramic plate and the David statue is accurately acquired during the entire movement of the tested objects, again demonstrating the unwrapping stability of the proposed method for implementing high-precision, fast absolute 3D shape measurement.
Discussion
In this work, we have demonstrated that a trained deep neural network can greatly improve the capability of TPU with high-frequency fringes acquired by a common FPP system. This high-performance TPU (termed DLTPU) can be achieved with a deep neural network after appropriate training. Compared with MFTPU, DLTPU can effectively recover the absolute phase from two wrapped phases with different frequencies by exploiting both spatial and temporal phase information in an integrated way. It can substantially improve the reliability of phase unwrapping even when high-frequency fringe patterns are used. We have further experimentally demonstrated, for the first time to our knowledge, that the high-frequency phase obtained from 64-period 3-step phase-shifting fringe patterns can be directly and reliably unwrapped from one unit-frequency phase, facilitating high-accuracy, high-speed 3D surface imaging with only 6 projected patterns, without exploiting any prior information or geometric constraint. Various experiments have then been designed to assess the phase unwrapping capability of the proposed approach under the conditions of intensity noise, low fringe modulation, and intensity nonlinearity. Experimental results have verified that TPU using deep learning provides significantly improved unwrapping reliability for absolute 3D measurement of objects with complex surfaces. Besides, in applications to high-speed FPP, it has also been observed that the deep-learning-based approach is much less affected by motion artifacts in dynamic measurement and can successfully reconstruct the surface profiles of moving and rotating objects at high speed. These results highlight that machine learning can potentially overcome challenging issues in optical metrology, and provides new possibilities and flexibility for designing more powerful high-speed FPP systems.
Although TPU and FPP have been the main focus of this research, we envisage that a similar deep learning framework might also be applicable to other 3D surface imaging modalities, including, e.g., stereo vision^{45}, digital image correlation (DIC)^{46}, spatial-temporal stereo^{47}, and spatial-temporal correlation^{48}, among others.
References
Gorthi, S. S. & Rastogi, P. Fringe projection techniques: whither we are? Opt. Lasers Eng. 48, 133–140 (2010).
Geng, J. Structuredlight 3d surface imaging: a tutorial. Adv. Opt. Photonics 3, 128–160 (2011).
Feng, S. et al. High dynamic range 3d measurements with fringe projection profilometry: a review. Meas. Sci. Technol 29, 122001 (2018).
Vest, C. M. Holographic interferometry. New York, John Wiley Sons, Inc. 476 (1979).
Gahagan, K. et al. Measurement of shock wave rise times in metal thin films. Phys. review letters 85, 3205 (2000).
Bamler, R. & Hartl, P. Synthetic aperture radar interferometry. Inverse problems 14, R1 (1998).
Curlander, J. C. & McDonough, R. N. Synthetic aperture radar, vol. 396 (1991).
Momose, A. Demonstration of phasecontrast xray computed tomography using an xray interferometer. Nucl. Instruments Methods Phys. Res. Sect. A: Accel. Spectrometers, Detect. Assoc. Equip. 352, 622–628 (1995).
Haacke, E. M. et al. Magnetic resonance imaging: physical principles and sequence design, vol. 82 (1999).
Su, X. & Chen, W. Reliabilityguided phase unwrapping algorithm: a review. Opt. Lasers Eng. 42, 245–261 (2004).
Flynn, T. J. Twodimensional phase unwrapping with minimum weighted discontinuity. JOSA A 14, 2692–2701 (1997).
Zuo, C., Huang, L., Zhang, M., Chen, Q. & Asundi, A. Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review. Opt. Lasers Eng. 85, 84–103 (2016).
Schofield, M. A. & Zhu, Y. Fast phase unwrapping algorithm for interferometric applications. Opt. Lett. 28, 1194–1196 (2003).
Pritt, M. D. Phase unwrapping by means of multigrid techniques for interferometric sar. IEEE Transactions on Geosci. Remote. Sens. 34, 728–738 (1996).
Chavez, S., Xiang, Q.S. & An, L. Understanding phase maps in mri: a new cutline phase unwrapping method. IEEE transactions on medical imaging 21, 966–977 (2002).
Su, X. & Zhang, Q. Dynamic 3d shape measurement method: a review. Opt. Lasers Eng. 48, 191–204 (2010).
Zhang, S. Highspeed 3d shape measurement with structured light methods: A review. Opt. Lasers Eng. 106, 119–131 (2018).
Zhang, M. et al. Robust and efficient multifrequency temporal phase unwrapping: optimal fringe frequency and pattern sequence selection. Opt. Express 25, 20381–20400 (2017).
LeCun, Y., Bengio, Y. & Hinton, G. Deep learning. Nature 521, 436–444 (2015).
Sinha, A., Lee, J., Li, S. & Barbastathis, G. Lensless computational imaging through deep learning. Optica 4, 1117–1125 (2017).
Rivenson, Y., Zhang, Y., Günaydın, H., Teng, D. & Ozcan, A. Phase recovery and holographic image reconstruction using deep learning in neural networks. Light. Sci. & Appl. 7, 17141 (2018).
Feng, S. et al. Fringe pattern analysis using deep learning. Adv. Photonics 1, 025001 (2019).
Shimobaba, T. et al. Computational ghost imaging using deep learning. Opt. Commun. 413, 147–151 (2018).
Lyu, M. et al. Deeplearningbased ghost imaging. Sci. reports 7, 17865 (2017).
Kiarashinejad, Y., Abdollahramezani, S., Zandehshahvar, M., Hemmatyar, O. & Adibi, A. Deep learning reveals underlying physics of lightmatter interactions in nanophotonic devices. arXiv preprint arXiv:1905.06889 (2019).
Kiarashinejad, Y., Abdollahramezani, S. & Adibi, A. Deep learning approach based on dimensionality reduction for designing electromagnetic nanostructures. arXiv preprint arXiv:1902.03865 (2019).
Hemmatyar, O., Abdollahramezani, S., Kiarashinejad, Y., Zandehshahvar, M. & Adibi, A. Full color generation with fanotype resonant hfo _2 nanopillars designed by a deeplearning approach. arXiv preprint arXiv:1907.01595 (2019).
Su, X. & Chen, W. Fourier transform profilometry:: a review. Opt. Lasers Eng. 35, 263–284 (2001).
Zuo, C. et al. Phase shifting algorithms for fringe projection profilometry: A review. Opt. Lasers Eng. 109, 23–59 (2018).
Takeda, M. & Mutoh, K. Fourier transform profilometry for the automatic measurement of 3d object shapes. Appl. Opt. 22, 3977–3982 (1983).
Huang, L., Kemao, Q., Pan, B. & Asundi, A. K. Comparison of fourier transform, windowed fourier transform, and wavelet transform methods for phase extraction from a single fringe pattern in fringe projection profilometry. Opt. Laser Eng. 48, 141–148 (2010).
Srinivasan, V., Liu, H.C. & Halioua, M. Automated phasemeasuring profilometry of 3d diffuse objects. Appl. Opt. 23, 3105–3108 (1984).
De Groot, P. Derivation of algorithms for phaseshifting interferometry using the concept of a datasampling window. Appl. Opt. 34, 4723–4730 (1995).
Surrel, Y. Design of algorithms for phase measurements by the use of phase stepping. Appl. Opt. 35, 51–60 (1996).
He, K., Zhang, X., Ren, S. & Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, 770–778 (2016).
Shi, W. et al. Realtime single image and video superresolution using an efficient subpixel convolutional neural network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1874–1883 (2016).
Long, J., Shelhamer, E. & Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 3431–3440 (2015).
Li, Z., Shi, Y., Wang, C. & Wang, Y. Accurate calibration method for a structured light system. Opt. Eng. 47, 053604 (2008).
Zhang, Z. A flexible new technique for camera calibration. IEEE Transactions on pattern analysis machine intelligence 22 (2000).
Feng, S., Chen, Q. & Zuo, C. Graphics processing unit–assisted realtime threedimensional measurement using speckleembedded fringe. Appl. Opt. 54, 6865–6873 (2015).
Liu, K., Wang, Y., Lau, D. L., Hao, Q. & Hassebrook, L. G. Dualfrequency pattern scheme for highspeed 3d shape measurement. Opt. Express 18, 5229–5244 (2010).
Zheng, D., Da, F., Kemao, Q. & Seah, H. S. Phaseshifting profilometry combined with graycode patterns projection: unwrapping error removal by an adaptive median filter. Opt. Express 25, 4700–4713 (2017).
Zuo, C. et al. Micro fourier transform profilometry (μ ftp): 3d shape measurement at 10,000 frames per second. Opt. Lasers Eng. 102, 70–91 (2018).
Yin, W. et al. Highspeed 3d shape measurement using the optimized composite fringe patterns and stereoassisted structured light system. Opt. Express 27, 2411–2431 (2019).
Lazaros, N., Sirakoulis, G. C. & Gasteratos, A. Review of stereo vision algorithms: from software to hardware. Int. J. Optomechatronics 2, 435–462 (2008).
Pan, B. Digital image correlation for surface deformation measurement: historical developments, recent advances and future goals. Meas. Sci. Technol 29, 082001 (2018).
Zhang, L., Curless, B. & Seitz, S. M. Spacetime stereo: Shape recovery for dynamic scenes. In Proceedings of the IEEE conference on computer vision and pattern recognition, II–367 (2003).
Harendt, B., Große, M., Schaffer, M. & Kowarschik, R. 3d shape measurement of static and moving objects with adaptive spatiotemporal correlation. Appl. Opt. 53, 7507–7515 (2014).
Acknowledgements
This work was supported by the National Natural Science Foundation of China (61722506, 61705105, 11574152), the National Key R&D Program of China (2017YFF0106403), the Final Assembly “13th Five-Year Plan” Advanced Research Project of China (30102070102), the Equipment Advanced Research Fund of China (61404150202), the Key Research and Development Program of Jiangsu Province (BE2017162), the Outstanding Youth Foundation of Jiangsu Province (BK20170034), the National Defense Science and Technology Foundation of China (0106173), the “333 Engineering” Research Project of Jiangsu Province (BRA2016407), the Fundamental Research Funds for the Central Universities (30917011204), the China Postdoctoral Science Foundation (2017M621747), the Jiangsu Planned Projects for Postdoctoral Research Funds (1701038A), the National Science Center Poland (NCN) (2017/25/B/ST7/02049), the Polish National Agency for Academic Exchange (PPN/BEK/2018/1/00511), and the statutory funds of the Faculty of Mechatronics, Warsaw University of Technology.
Author information
Contributions
C.Z. proposed the idea. W.Y. and S.F. developed the theoretical description of the method and built the architecture of deep learning to perform phase unwrapping. C.Z., W.Y. and S.F. performed experiments. C.Z. and W.Y. analyzed the data. C.Z. and Q.C. supervised the research. All authors contributed to writing the manuscript.
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Yin, W., Chen, Q., Feng, S. et al. Temporal phase unwrapping using deep learning. Sci Rep 9, 20175 (2019). https://doi.org/10.1038/s41598-019-56222-3