Introduction

Spatial sampling frequency is a key factor in three-dimensional (3D) shape measurement, and it is limited by the spatial resolution of the sensor. In traditional imaging, once the image resolution and the objective lens parameters are fixed, there is no way to enhance the spatial sampling frequency of the system. However, the local slope of a complex surface often exceeds the spatial sampling capacity, which causes sampling information loss in steep regions. Take the involute gear, widely used in mechanical engineering, as an example. To ensure the stability of the gear drive and the uniformity of the load distribution, 3D shape measurement is required to analyze geometric errors such as pressure angle, tooth thickness, addendum and dedendum. However, 3D shape measurement is confronted with the continuous increase of the slope of the involute profile. The limitation of spatial sampling frequency causes gaps in the unwrapped phase, and the defect manifests as holes and fractures in the reconstructed surface. The same phenomenon occurs in other concave and convex parts such as cams, hemispheres and inner bores.

There are several traditional strategies for measuring steep surfaces. Many researchers use coded structured light to enhance sub-pixel accuracy1,2,3,4. Fang Ji proposed a two-phase adaptive Canny edge detection method to distinguish light stripes accurately5. Kai Liu embedded a period cue into the projected pattern to resolve the depth ambiguity caused by increasing the number of pattern stripes6. Zhan Song designed a gridline pattern to improve the decoding accuracy of a 3D sensing system7. Rosario Porras-Aguilar optimized grey-level projection patterns with a minimal gradient to reduce phase error near the stripe borders8. However, since reflectance variation on the measured surface makes the intensity pattern ambiguous, the sub-pixel estimations in the above coding technologies cannot avoid the random errors this ambiguity introduces into 3D measurement9. Focus variation is another common way of measuring steep surfaces. Reinhard Danzl exploited the small depth of focus of an optical system with vertical scanning to enhance the maximum measurable slope angle10. Lewis Newton analyzed the measurement process control parameters to improve the quality of focus variation measurement results11. Yukitoshi proposed a phase-shifting method merged into focus variation technology to analyze the contrast distribution by projecting a grating pattern12. However, since the performance of focus variation measurement is significantly affected by the control precision of the vertical scanning process, the scanning error along the optical axis restricts the measurement capability for steep surfaces. Imaging from different angles is an effective way to avoid steep surfaces in 3D measurement13,14,15,16. Although multi-view imaging can bypass the problem, it is worth tackling it head on by enhancing the imaging resolution of the traditional imager.
Some researchers proposed interpolation algorithms to improve measurement accuracy by up-sampling the original image17,18,19. Wenzhe proposed an algorithm for the accurate measurement of 3D cardiac function by reconstructing a super-resolution image within an expectation-maximization framework17. Yang presented an iterative refinement approach to up-sample the original low-resolution image18. Peter proposed a method for super-resolution 3D laser scanning in which the length of the line segments along the camera rays can be tightened19. Our goal is to improve imaging resolution so as to enhance the spatial sampling frequency on steep surfaces more effectively.

Compressed sensing (CS), a signal processing technology for under-sampling and reconstructing a signal, has been widely applied to medical imaging, remote sensing and industrial inspection over the past decade, since a signal may be recovered from far fewer samples than the Nyquist–Shannon sampling theorem requires20,21,22,23,24. Marco at Rice University designed a single-pixel camera based on a spatial light modulator (SLM) using CS theory25. The SLM provides the measurement matrices by displaying modulation patterns. The modulated light is then focused onto a single-photon detector, which integrates the inner product of the measurement matrices and the measured object to yield a set of sampled values. These sampled values are used to recover the object image via sparse minimization. As a result, the single-pixel camera can image the scene with a higher spatial sampling frequency than the single-photon detector alone26. A further advantage is robustness in the presence of noise, since in CS theory one or more measurements can be lost without corrupting the entire reconstruction27,28. This paper is inspired by previous work on single-pixel imaging technology and CS theory29. We use a DMD camera, merged with CS theory, to improve three-dimensional shape measurement of steep surfaces. The DMD camera can process the spatial array of incident rays before image formation. This modulation improves the quality of the phase-shifting fringe image by reshaping its waveform. The resulting compensation of phase error and suppression of noise benefit three-dimensional shape measurement.

The paper is organized as follows: Sect. 2 describes the DMD camera architecture and the alignment method. Section 3 designs the measurement matrices for reconstructing the sinusoidal stripe images used to measure three-dimensional shape. Subsequently, experiments for 3D measurement are implemented in Sect. 4 to verify the performance of our method. Finally, Sect. 5 concludes the research and discusses its advantages and limitations.

DMD camera system

We constructed an imaging system called the DMD camera and applied it to 3D shape measurement and noise removal30,31. The main difference between the DMD camera and the single-pixel camera is that the single-photon detector is replaced by an array charge-coupled device (CCD). The schematic of the DMD camera is shown in Fig. 1a. The system is composed of a CCD, a DMD, a 6-DoF stage, an image processor and four imaging lenses (Lens1, Lens2, Lens3 and Lens4). Each CCD pixel collects the reflected photons from the corresponding segment of the DMD. The 6-DoF assembly stage with piezoelectric actuators adjusts the angular and linear displacement of the CCD according to feedback from the image processor. Zemax optics software is employed to analyze image quality. Figure 1b is the spot diagram of the DMD camera system. The maximum radius of the elliptical Airy spot is 1.894 µm. Imaging distortion on the x-axis is inevitable due to the Scheimpflug condition. Figure 1c,d show the distortions on the x-axis and y-axis respectively. The distortion on the x-axis is clearly larger than that on the y-axis, and its linear growth is consistent with the theoretical derivation in Ref.30.

Figure 1

DMD imaging system. (a) The architecture of DMD camera. (b) The spot diagram of DMD camera system. (c) The distortions on x-axis. (d) The distortions on y-axis.

The paraxial magnification of Lens1 can be scanned from ×0.1 to ×10. Piezoelectric actuators (Thorlabs NanoMax320) provide fine adjustment with 20 nm resolution. The DMD (Discovery 4100) provides 8 bpp and a resolution of 1920 × 1080; each DMD mirror element is 7.6 × 7.6 µm. The CCD (Teli-BU30) allows 8 bits per pixel (bpp) of precision in RAW mode and a resolution of 768 × 576; each CCD pixel is 7.4 × 7.4 µm.

To utilize CS imaging technology, every CCD pixel in the region of interest (ROI) must be assigned to multiple DMD mirrors. Consequently, the DMD is divided into an array of segments, each of which contains N × N mirrors (N an integer). As shown in Fig. 2, a closed-loop system composed of the piezoelectric actuators, CCD, Lens1, DMD and image processor is constructed to achieve accurate alignment between the DMD and the CCD before the DMD camera enters the working state. The alignment involves six degrees of freedom (DoF): three linear displacements along the horizontal axis x, the vertical axis y and the optical axis z, and three angular displacements in pitch, yaw and roll. The DMD can be regarded as a grating by programming the state of the mirror array to form black-and-white line patterns. Likewise, the CCD can be regarded as a second grating by using a subsampling technique: the pixels of the original image are selected with period \(f\), and unselected pixels are interpolated from the neighboring selected pixels. The programmed grating pattern is focused onto the subsampling grating pattern by Lens1 to produce an optical superposition phenomenon called a moiré fringe. The moiré fringe image can be expressed as:

$$ I_{m}^{i} (x,y) = I\left( {f \cdot Floor(\frac{x}{f}) + i,f \cdot Floor(\frac{y}{f}) + i} \right) $$
(1)

where the function \(Floor[ \bullet ]\) rounds its argument towards minus infinity, \((x,y)\) is the CCD pixel coordinate, \(I(x,y)\) is the original image captured by the CCD, and the symbol \(f\) denotes the period of the grating on the CCD. The phase shift in the \(i{\text{th}}\) moiré image is \(2i\pi /f\), \(i\) = 1, 2, …, \(f\). By observing the moiré fringe image, the roll, pitch and yaw angles are adjusted to reach the intermediate state in which the moiré fringe is parallel to the original grating fringe.
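As a sketch, the subsampling of Eq. (1) maps directly onto array indexing; the function name, the row/column ordering and the clipping at the image border are our own assumptions:

```python
import numpy as np

def moire_subsample(I, f, i):
    """Moire subsampling per Eq. (1): pixel (x, y) takes the value at
    (f*floor(x/f) + i, f*floor(y/f) + i), clipped to the image."""
    xs = np.clip(f * (np.arange(I.shape[0]) // f) + i, 0, I.shape[0] - 1)
    ys = np.clip(f * (np.arange(I.shape[1]) // f) + i, 0, I.shape[1] - 1)
    return I[np.ix_(xs, ys)]
```

Calling this for i = 1 … f yields the f phase-shifted moiré frames that feed into Eq. (2).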

Figure 2

The closed-loop system for accurate alignment between CCD and DMD.

The moiré fringe image is used for phase calculation and neighbor interpolation in the image processor to obtain the moiré phase image and the profile-varying periodic fringe image respectively. The moiré phase image can be described by:

$$ \varphi (x,y) = \arctan \left( {\frac{{\sum\nolimits_{i = 1}^{f} {I_{m}^{i} \cdot \sin (2i\pi /f)} }}{{\sum\nolimits_{i = 1}^{f} {I_{m}^{i} \cdot \cos (2i\pi /f)} }}} \right) $$
(2)

By observing the moiré phase image, the paraxial magnification and the z displacement along the optical axis are adjusted to make the moiré phase distribution uniform. In this state, the frequency difference between the CCD and the DMD equals zero.
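Eq. (2) is a standard f-step least-squares phase estimate over the shifted moiré frames. A minimal sketch, assuming frames[i-1] holds the i-th moiré image and using arctan2 for quadrant-correct results:

```python
import numpy as np

def moire_phase(frames):
    """Moire phase per Eq. (2); frames[i-1] carries phase shift 2*i*pi/f."""
    f = len(frames)
    num = sum(I * np.sin(2 * (i + 1) * np.pi / f) for i, I in enumerate(frames))
    den = sum(I * np.cos(2 * (i + 1) * np.pi / f) for i, I in enumerate(frames))
    return np.arctan2(num, den)
```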

The neighbor interpolation can be described by:

$$ I_{s} (x,y) = I\left( {\frac{f}{2N} \cdot Floor(\frac{2Nx}{f}) + Mod(2Nx,f),} \right.\left. {\frac{f}{2N} \cdot Floor(\frac{2Ny}{f}) + Mod(2Ny,f)} \right) $$
(3)

where the function \(Mod(dividend,divisor)\) returns the remainder, and \(N*N\) is the number of mirrors in a DMD segment. By observing the subsampled image, the x-axis and y-axis displacements are adjusted to eliminate the initial phases of the moiré fringe. The image processor calculates the required displacements along the three PZT axes; the PZT receives this feedback and executes the three-axis adjustments until accurate alignment between each DMD segment and the corresponding CCD pixel is completed. The accurate alignment approach using the four-step procedure is discussed in detail in Ref.32.
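For illustration, Eq. (3) can be sketched the same way; we assume f is a multiple of 2N so the index arithmetic stays integral, and we clip at the image border (both assumptions are ours):

```python
import numpy as np

def neighbor_subsample(I, f, N):
    """Profile-varying fringe image per Eq. (3): index t maps to
    (f/(2N))*floor(2Nt/f) + mod(2Nt, f) on each axis."""
    def idx(size):
        t = np.arange(size)
        k = (f // (2 * N)) * ((2 * N * t) // f) + np.mod(2 * N * t, f)
        return np.clip(k, 0, size - 1)
    return I[np.ix_(idx(I.shape[0]), idx(I.shape[1]))]
```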

Method

As an indispensable element of CS theory, the measurement matrix is employed to acquire the under-sampled image. Figure 3 shows the procedure of image acquisition in the DMD camera. We first choose an orthogonal square matrix with uniform distribution, such as a Hadamard matrix, Bernoulli random matrix or Gaussian random matrix, to ensure the restricted isometry property (RIP). The orientation of the measurement vector is determined by comparing the horizontal gradient \(g_{x}\) and vertical gradient \(g_{y}\) of each CCD pixel in the original object image. If the measurement vector is x-oriented (\(g_{x} (x,y) < g_{y} (x,y)\)), the measurement matrix \(\Phi_{H}\) is obtained by randomly selecting M columns of the orthogonal square matrix. Alternatively, if the measurement vector is y-oriented (\(g_{x} (x,y) > g_{y} (x,y)\)), the measurement matrix \(\Phi_{V}\) is obtained by randomly selecting M rows of the orthogonal square matrix. The DMD pattern sequence is represented by

$$ DMD_{m} (u,v) = \left\{ {\begin{array}{*{20}l} {\Phi_{H} \left( {m,\bmod (v,N)} \right)\begin{array}{*{20}l} {} & {g_{x} \left( {\bmod (u,N),\bmod (v,N)} \right) < g_{y} \left( {\bmod (u,N),\bmod (v,N)} \right)} \\ \end{array} } \\ {\Phi_{V} \left( {\bmod (u,N),m} \right)\begin{array}{*{20}l} {} & {g_{x} \left( {\bmod (u,N),\bmod (v,N)} \right) > g_{y} \left( {\bmod (u,N),\bmod (v,N)} \right)} \\ \end{array} } \\ \end{array} } \right. $$
(4)

where \((u,v)\) is the DMD mirror coordinate and \(m\) denotes the ordinal number of the DMD pattern, \( m = 1,2,3 \ldots {\text{M}}\). A CCD pixel collects the measured value \(Y_{x,y} (m)\) when the DMD displays the \(m\)th pattern; i.e., the DMD camera captures M images after M exposures. Assuming that \(I_{x,y} (n)\) is the original scene signal at an arbitrary pixel (x, y), the measured values \(Y_{x,y} (m)\) can be expressed as

$$ Y_{x,y} (m) = \Phi_{H}^{T} I_{x,y} (n)\quad {\text{or}}\quad Y_{x,y} (m) = \Phi_{V} I_{x,y} (n) $$
(5)

where \(\Phi_{H}^{T}\) is the transpose of \(\Phi_{H}\). Since 3D measurement in this research is implemented by projecting structured light patterns with sinusoidal stripes, the discrete cosine transform basis is suitable for the sparse representation of the stripe image. The discrete cosine transform matrix \(\Psi\) is of size \(N \times N\); the number of mirrors in a DMD segment equals the size of \(\Psi\). To reconstruct the original scene signal \(I_{x,y} (n)\), the orthogonal matching pursuit (OMP) algorithm is utilized to solve the L0-norm optimization problem33.

$$ \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{I}_{x,y} = \arg \min \left\| {I_{x,y} } \right\|_{0} \quad {\text{s.t}}\quad \left\| {Y_{x,y} - \Phi \Psi \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{I}_{x,y} } \right\|_{2} < \varepsilon $$
(6)

where the optimal solution \(\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{I}_{x,y}\) is the reconstructed vector of the original scene signal \(I_{x,y} (n)\).
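The sensing and recovery pipeline of Eqs. (4)–(6) can be sketched as follows. The Hadamard construction, the row/column selection and the OMP routine are generic textbook versions under our own naming, not the authors' code; the sanity check at the end uses a square orthonormal system, for which OMP recovery is guaranteed.

```python
import numpy as np

def hadamard(n):
    """Sylvester Hadamard matrix; n must be a power of two."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def measurement_matrix(ortho, M, x_oriented, rng):
    """Phi_H: M random columns (g_x < g_y); Phi_V: M random rows (Eq. 4)."""
    if x_oriented:
        return ortho[:, rng.choice(ortho.shape[1], M, replace=False)]
    return ortho[rng.choice(ortho.shape[0], M, replace=False), :]

def dct_basis(n):
    """Orthonormal DCT-II basis as the columns of an n x n matrix (Psi)."""
    m, k = np.arange(n)[:, None], np.arange(n)[None, :]
    Psi = np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    Psi[:, 0] /= np.sqrt(n)
    Psi[:, 1:] *= np.sqrt(2.0 / n)
    return Psi

def omp(A, y, sparsity):
    """Orthogonal matching pursuit for y ~ A s with s sparse (Eq. 6)."""
    residual, support = y.astype(float), []
    for _ in range(sparsity):
        # greedily add the atom most correlated with the residual
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    s = np.zeros(A.shape[1])
    s[support] = coef
    return s
```

In the paper's setting, A would be the product of the transposed measurement matrix and the DCT basis, and the recovered coefficient vector is mapped back through \(\Psi\) to obtain the stripe signal.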

Figure 3

The CS-based imaging schematic of DMD camera.

We project the sinusoidal stripe and reconstruct it by applying CS theory on the DMD camera. Figure 4 confirms the noise robustness and the enhancement of spatial sampling frequency. Figure 4a is the reconstructed sinusoidal stripe image; the green solid line marks the cross section in the x direction. The black line in Fig. 4b is the intensity distribution of the projected sinusoidal stripe in the image reconstructed by the DMD camera, and the red line is the intensity distribution in a traditional image with N times the resolution. Variable reflectivity and multiple reflections distort the sinusoidal stripe, which can be regarded as random noise; the comparison shows that the CS-based imaging method removes this noise effectively. The black '+' and red 'o' markers in Fig. 4c are the spatial samples in the reconstructed image and the non-reconstructed image respectively. The reconstructed image has N times the resolution of the original. Clearly, the DMD camera supports high-resolution imaging by mapping one pixel to multiple mirrors, and, merged with CS theory, it captures low-noise sinusoidal stripe images that improve the quality of the phase solution. In short, the proposed CS-based imaging method is advantageous for 3D shape measurement of steep surfaces.

Figure 4

Comparative results. (a) The reconstructed stripe image. (b) Intensity distributions of the projected sinusoidal stripe in the reconstructed image of DMD camera and the image of traditional camera with N times resolution. (c) Spatial sampling distribution of the projected sinusoidal stripe in the reconstructed image and the non-reconstructed image.

In this research, the four-step phase shifting algorithm is employed to measure the steep surface. The DMD pattern sequence is identical for the four projections to ensure a consistent calculation of the wrapped phase. The four reconstructed fringe images with phase step \(\pi /2\) are expressed as

$$ \left\{ {\begin{array}{*{20}l} {\begin{array}{*{20}l} {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{I}_{x,y}^{1} (n) = a_{x,y} + b_{x,y} \cos [\phi (\frac{n}{N}x,y)]} & {\begin{array}{*{20}l} {} & {} \\ \end{array} } \\ \end{array} } \\ {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{I}_{x,y}^{2} (n) = a_{x,y} + b_{x,y} \cos [\phi (\frac{n}{N}x,y) + \pi /2]} \\ {\begin{array}{*{20}l} {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{I}_{x,y}^{3} (n) = a_{x,y} + b_{x,y} \cos [\phi (\frac{n}{N}x,y) + \pi ]} & {} \\ \end{array} } \\ {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{I}_{x,y}^{4} (n) = a_{x,y} + b_{x,y} \cos [\phi (\frac{n}{N}x,y) + 3\pi /2]} \\ \end{array} } \right.\quad {\text{or}}\quad \left\{ {\begin{array}{*{20}l} {\begin{array}{*{20}l} {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{I}_{x,y}^{1} (n) = a_{x,y} + b_{x,y} \cos [\phi (x,\frac{n}{N}y)]} & {\begin{array}{*{20}l} {} & {} \\ \end{array} } \\ \end{array} } \\ {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{I}_{x,y}^{2} (n) = a_{x,y} + b_{x,y} \cos [\phi (x,\frac{n}{N}y) + \pi /2]} \\ {\begin{array}{*{20}l} {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{I}_{x,y}^{3} (n) = a_{x,y} + b_{x,y} \cos [\phi (x,\frac{n}{N}y) + \pi ]} & {} \\ \end{array} } \\ {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{I}_{x,y}^{4} (n) = a_{x,y} + b_{x,y} \cos [\phi (x,\frac{n}{N}y) + 3\pi /2]} \\ \end{array} } \right. $$
(7)

where \(a_{x,y}\) is the background intensity and \(b_{x,y}\) is the intensity amplitude. Thus the refined phase at the \(n\)th sub-pixel of point \((x,y)\) is calculated by

$$ \begin{gathered} \begin{array}{*{20}l} {\phi (\frac{n}{N}x,y) = \arctan \left[ {\left( {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{I}_{x,y}^{2} (n) - \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{I}_{x,y}^{4} (n)} \right)/\left( {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{I}_{x,y}^{1} (n) - \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{I}_{x,y}^{3} (n)} \right)} \right]} & {{\text{or}}} \\ \end{array} \hfill \\ \phi (x,\frac{n}{N}y) = \arctan \left[ {\left( {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{I}_{x,y}^{2} (n) - \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{I}_{x,y}^{4} (n)} \right)/\left( {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{I}_{x,y}^{1} (n) - \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{I}_{x,y}^{3} (n)} \right)} \right] \hfill \\ \end{gathered} $$
(8)

where \(\phi (\frac{n}{N}x,y)\) and \(\phi (x,\frac{n}{N}y)\) are wrapped phases ranging from −π to π. To measure the depth information, it is necessary to unwrap the phase. In this paper, the two-frequency temporal phase unwrapping algorithm is employed to obtain the unwrapped phase as follows:

$$ \tilde{\phi }_{x,y} (n) = Mod[\phi_{x,y}^{l + 1} (n) - \phi_{x,y}^{l} (n),2\pi ] $$
(9)
$$ k_{x,y} (n) = Round\{ [l\tilde{\phi }_{x,y} (n) - \phi_{x,y}^{l} (n) - \pi ]/2\pi \} $$
(10)
$$ \overline{\phi }_{x,y} (n) = \phi_{x,y}^{l} (n) + 2\pi \cdot k_{x,y} (n) + \pi $$
(11)

where \(\tilde{\phi }_{x,y} (n)\) is the normalized phase, \(\phi_{x,y}^{l} (n)\) and \(\phi_{x,y}^{l + 1} (n)\) are the wrapped phases with fringe frequencies \(P/l\) and \(P/(l + 1)\) respectively, and \(P\) is the normalized period. The function \(Mod[ \bullet ]\) returns the remainder when \(\phi_{x,y}^{l + 1} (n) - \phi_{x,y}^{l} (n)\) is divided by 2π. \(k_{x,y} (n)\) is the sequence number of the unwrapped phase period, and the function \(Round[ \bullet ]\) returns the nearest integer. To reduce the phase error and enhance the signal-to-noise ratio (SNR) of the reconstructed form, median filtering is usually performed after Eq. (10). \(\overline{\phi }_{x,y} (n)\) is the final unwrapped phase ranging from 0 to 2kπ. The unwrapped phase is converted to 3D coordinates by the phase-to-depth map. Based on the proposed method above, three-dimensional shape measurement of the steep surface can be implemented.
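Eqs. (8)–(11) can be sketched per pixel as below. Note that with the \(I = a + b\cos(\phi + \delta)\) convention of Eq. (7), the arctan ratio of Eq. (8) recovers −φ, and the unwrapping of Eqs. (9)–(11) assumes the wrapped phase is the true phase reduced to [−π, π) with a π offset; both conventions are our reading of the text:

```python
import numpy as np

def wrapped_phase(I1, I2, I3, I4):
    """Four-step wrapped phase, Eq. (8); arctan2 covers the full (-pi, pi] range."""
    return np.arctan2(I2 - I4, I1 - I3)

def unwrap_two_freq(phi_l, phi_l1, l):
    """Two-frequency temporal phase unwrapping, Eqs. (9)-(11)."""
    beat = np.mod(phi_l1 - phi_l, 2 * np.pi)                # Eq. (9): beat phase
    k = np.round((l * beat - phi_l - np.pi) / (2 * np.pi))  # Eq. (10): period index
    return phi_l + 2 * np.pi * k + np.pi                    # Eq. (11): unwrapped phase
```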

Experiments

To assess the ability to measure steep surfaces, we took planes with various slopes as objects in 3D measurement. Figure 5 shows the results of measuring planes with various slopes using our proposed method. Figure 5a is the experimental setup. The differently colored planes reconstructed in Fig. 5b correspond to surfaces with various slopes. The depth direction is identical to the primary optical axis of the objective lens. Figure 5c demonstrates that the maximum measurable angle gradually increases with the parameter N. Under the constraint of root mean square (RMS) error, the maximum measurable angle \(\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\alpha }\) of a sloping plane fitted by least-squares approximation can be expressed as:

$$ \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\alpha }_{x,y} = \arg \max \left\| {\alpha_{x,y} } \right\|_{0} \quad {\text{s.t}}\quad \left\| {{\mathbb{Z}}_{x,y} - \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\mathbb{Z}}_{x,y} } \right\|_{2} < \varepsilon $$
(12)

where \({\mathbb{Z}}_{x,y}\) is the measured depth value at point (x, y) and \(\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\mathbb{Z}}_{x,y}\) denotes the least-squares approximating value. As Fig. 5 indicates, the parameter N can be determined by calculating the slope angle of the measured surface before a measurement. Table 1 shows the errors of the reconstructed plane for different N. The measurement speed is mainly determined by the number of captured images, and the measurement accuracy depends on the spatial resolution; both are related to N. Considering the trade-off between the number of captures and the reconstruction accuracy, N is selected from 2 to 5. The angle of the reconstructed sloping plane is 85 degrees. By fitting the planar point clouds with the least-squares method, the maximum surplus error Em is used to verify the reconstruction accuracy at the edge. The standard deviation σ illustrates that our method efficiently decreases the noise on the steep surface.
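The plane evaluation behind Eq. (12) and Table 1 amounts to a least-squares plane fit followed by residual statistics; a generic sketch (not the authors' code), with the slope angle taken from the fitted coefficients:

```python
import numpy as np

def fit_plane(points):
    """Fit z = a*x + b*y + c by least squares over an (n, 3) point cloud;
    return the coefficients, the RMS residual, and the slope angle in degrees."""
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    coef, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    resid = points[:, 2] - A @ coef
    rms = np.sqrt(np.mean(resid ** 2))
    angle = np.degrees(np.arctan(np.hypot(coef[0], coef[1])))
    return coef, rms, angle
```

The maximum surplus error Em in the text would then correspond to the largest absolute residual over the edge region, and σ to the residual standard deviation.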

Figure 5

The experimental results of measuring planes with various angles. (a) The experimental setup. (b) 3D reconstructed planes with various angles. (c) Maximum measurable angle with different parameter N.

Table 1 The errors of the reconstructed plane with the different N.

Figure 6 shows the phase distribution of the measured plane obtained by different methods. For comparison, we choose Zhan Song's method, in which a gridline pattern is designed to encode localization with sub-pixel accuracy, and Rosario's method, which generates a series of projection patterns with a minimal intensity gradient to reduce errors near the border7,8. A mirror number of 5 × 5 is used in our method. The red diagonal is the cross section of the phase map. To align with the diamond-shaped DMD mirrors, the camera was rotated 45 degrees clockwise. The errors of the reconstructed plane with the different methods are shown in Table 2. Ea denotes the difference between the ground truth and the calculated position of the fitted plane. These results indicate that the proposed method has higher accuracy than the other methods.

Figure 6

The cross section of the phase map in the measurement experiment on the same plane. (a) Obtained by Zhan Song's method. (b) Obtained by Rosario's method. (c) Obtained by the proposed method.

Table 2 The errors of the reconstructed plane with different methods.

3D shape measurement of the involute gear is an appropriate application of the proposed method. Since the tooth profile and the tooth root fillet determine the contact force when one gear drives another, force analysis demands that they be measured accurately. To avoid occlusion from neighboring teeth, we used the proposed method to measure the tooth profile and the tooth root fillet in Fig. 7b. Figure 7a is the measurement result for the gear teeth with traditional four-step phase shifting profilometry (PSP). The defects on the tooth profile and the tooth root fillet in Fig. 7a would cause the force analysis to fail. The measurement result in Fig. 7b shows that our proposed method can eliminate those defects.

Figure 7

The comparison between the gear teeth reconstructed (a) with traditional four-step PSP and (b) with the proposed method.

We also measured a rabbit model to verify the performance of the proposed method in Fig. 8. There are many gaps on the edge of the reconstructed rabbit in Fig. 8a; it is difficult to reconstruct the acute features of the plaster model. Our method enhances the spatial resolution to improve the reconstruction of acute-angle edges, and the reconstructed edge becomes full in Fig. 8b. The comparative results in Figs. 7 and 8 indicate that our proposed method can improve 3D shape measurement of steep surfaces to a certain extent.

Figure 8

The comparison between the rabbit reconstructed (a) with traditional four-step PSP and (b) with the proposed method.

Four-step phase shifting profilometry requires 4 projected patterns. To satisfy both the two-frequency temporal phase unwrapping algorithm and four-step phase shifting profilometry, the projector provides 8 sinusoidal stripe patterns. A complete measurement based on our method therefore takes 8M captures, \(M \le N\). The number N determines both the time consumption and the reconstruction quality of CS, as shown in Table 3. In our experiments, the CR (compression ratio) is at least 0.6; e.g., if N is 5, M is 3. In total, the camera captures 24 images to complete one measurement.
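The capture budget can be summarized in a small helper. The relation M = ⌈CR · N⌉ is our reading of the text (CR ≥ 0.6; N = 5 gives M = 3 and 8 × 3 = 24 images):

```python
import math

def capture_count(N, cr=0.6, patterns=8):
    """Exposures per measurement: 8 sinusoidal patterns (two frequencies
    x four phase steps), each sensed with M = ceil(cr * N) CS exposures."""
    return patterns * math.ceil(cr * N)
```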

Table 3 The captured times with the different N.

Conclusion

In this work, we presented a method in which CS theory is used to capture phase shifting images with a DMD camera to measure steep surfaces. Specifically, a DMD camera system in which one CCD pixel can be assigned to multiple DMD mirrors was constructed. Two directional measurement matrices derived from the same orthogonal square matrix were used to modulate the DMD segments according to the gradient direction of the original image. Thus the sinusoidal stripe images with phase shift π/2 were reconstructed to obtain the three-dimensional shape of the measured object. Finally, we measured an involute gear and a rabbit model to demonstrate that 3D shape measurement of steep surfaces can be improved by our proposed method. The method offers clear advantages in noise robustness and in enhancing the spatial sampling frequency. However, the quality of the reconstructed phase shifting images is limited by the compression ratio. To trade off the reconstruction quality against the time consumption, we did not adopt the measurement matrix with dimension \(M \times N^{2}\) of classical CS theory. Furthermore, our method can reconstruct the three-dimensional shape of obtuse-angle edges, while acute-angle edges remain difficult to measure due to edge ambiguity. Our proposed method can complement other phase shifting profilometry technologies; e.g., it can be merged with dual-camera PSP to further enhance the measurement accuracy of steep surfaces. In the future, our group will focus on a DMD scanning alignment method that would break through the limitation of DMD resolution.