Conic tangents based high precision extraction method of concentric circle centers and its application in camera parameters calibration

A high-precision camera intrinsic parameter calibration method based on concentric circles is proposed. Unlike Zhang's method, its feature points are the centers of concentric circles. First, the collinearity of the projection of the center of the concentric circles and the centers of the two ellipses into which the concentric circles are imaged is proved. Subsequently, a straight line passing through the center of the concentric circles is determined with four tangent lines of the concentric circles. Finally, the projection of the center of the concentric circles is extracted as the intersection of this straight line and the line determined by the two ellipse centers. Simulation and physical experiments were carried out to analyze the factors affecting the accuracy of circle center coordinate extraction, and the results show that the accuracy of the proposed method is higher. On this basis, several key parameters of the calibration target design were determined through simulation experiments, and the calibration target was then printed to calibrate a binocular system. The results show that the total reprojection errors of the left and right cameras are reduced by 17.66% and 21.58%, respectively, compared with those of Zhang's method. Therefore, the proposed calibration method has higher accuracy.


Scientific Reports
| (2021) 11:20686 | https://doi.org/10.1038/s41598-021-00300-y

We establish a world coordinate system whose z-axis is perpendicular to the circle plane and whose origin is at the circle center. The circle equation is then

x² + y² = r², i.e., pᵀCp = 0 with p = (x, y, 1)ᵀ and C = diag(1, 1, −r²),

where r is the radius of the circle.
We suppose the homography matrix and its inverse are H and Ω, respectively.
According to the principles of projective geometry, the image conic Q of the circle and Ω satisfy

Q = ΩᵀCΩ,

which expands to the ellipse equation Ax² + Bxy + Cy² + Dx + Ey + F = 0 in pixel coordinates, where A, B, C, D, E and F are six quantities related only to the elements of matrix H. The coordinates of the center of the ellipse are

(xe, ye) = ((BE − 2CD)/(4AC − B²), (BD − 2AE)/(4AC − B²)).
Two circles with radii r1 and r2 are projected into two ellipses whose centers are (x1, y1) and (x2, y2), respectively. Two points determine a straight line on the image plane, and the slope of this line is

k = (y2 − y1)/(x2 − x1).

Substituting the center expressions shows that k depends only on the elements of H and not on r1 or r2: the centers of the ellipses projected from the concentric circles lie on a straight line that is unrelated to the radii of the two circles.
If r1 > r2 and r2 is taken to the limit r2 → 0, the small circle degenerates to the circle center itself, and Eq. (6) can be rewritten with (x2, y2) replaced by (xc, yc), the pixel coordinate of the projected circle center. The slope of the straight line determined by (x1, y1) and (xc, yc) is the same k. This shows that the projected circle center is on the line determined by (x1, y1) and (x2, y2), and the pixel coordinates of the circle center can be extracted with this constraint.
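The collinearity argument above can be checked numerically. The sketch below (Python with NumPy; the intrinsic and pose values are illustrative assumptions in the spirit of the later simulations) builds the image conic Q = ΩᵀCΩ of each concentric circle, takes the two conic centers, and verifies that they are collinear with the true projection of the circle center while not coinciding with it:

```python
import numpy as np

def rot(ax, ay, az):
    """Rotation matrix Rz(az) @ Ry(ay) @ Rx(ax), angles in radians."""
    cx, sx = np.cos(ax), np.sin(ax)
    cy, sy = np.cos(ay), np.sin(ay)
    cz, sz = np.cos(az), np.sin(az)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def conic_center(Q):
    """Center of the conic with symmetric matrix Q."""
    return np.linalg.solve(2 * Q[:2, :2], -2 * Q[:2, 2])

# Assumed camera and pose (illustrative values)
K = np.array([[3000, 0, 320], [0, 3000, 240], [0, 0, 1]], float)
R = rot(np.radians(20), np.radians(15), np.radians(5))
H = K @ np.column_stack([R[:, 0], R[:, 1], [20.0, 20.0, 600.0]])
Hinv = np.linalg.inv(H)

centers = []
for r in (1.0, 2.0):
    C = np.diag([1.0, 1.0, -r * r])     # circle x^2 + y^2 = r^2
    Q = Hinv.T @ C @ Hinv               # image conic Q = Omega^T C Omega
    centers.append(conic_center(Q))

p1 = np.append(centers[0], 1.0)         # ellipse center of the small circle
p2 = np.append(centers[1], 1.0)         # ellipse center of the big circle
pc = H @ np.array([0.0, 0.0, 1.0])
pc /= pc[2]                             # true projection of the circle center

line = np.cross(p1, p2)
dist = abs(line @ pc) / np.hypot(line[0], line[1])   # point-line distance (px)
offset = np.hypot(*(p1[:2] - pc[:2]))                # center eccentricity (px)
print(dist, offset)
```

The printed distance is numerically zero (collinearity holds), while the offset is small but nonzero: the ellipse center is not the projected circle center.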
Error of circle center extraction. As shown in Fig. 1, o-xyz is the camera coordinate system, π1 is the space plane, and π2 is the image plane. When the circle on π1 is projected onto π2, the image is an ellipse. AB is a diameter of the circle, and A′ and B′ are the images of points A and B, respectively. The midpoint C of AB is therefore the circle center, whose image is C′, and the midpoint E of A′B′ is the ellipse center. When π1 and π2 are parallel, E and C′ coincide; otherwise, they do not, which leads to error.
A simulation is carried out to further clarify the error of circle center extraction. The camera intrinsic parameters are fx = fy = 3000, u0 = 320 and v0 = 240. The camera external parameters are α = 20°, β = 15°, γ = 5°, x0 = 20 mm, y0 = 20 mm and z0 = 600 mm. The radii of the circles range from 1 to 10 mm. A total of 120 equally spaced points are taken on each circle, and each group of 120 points is fitted to obtain 10 ellipses, as shown in Fig. 2.
As shown in Fig. 3, the x-axis is δx, where δx = xc − xce, and the y-axis is δy, where δy = yc − yce. Two observations can be confirmed: (1) the ellipse centers are collinear, and (2) as the radius increases, the coordinate errors between the ellipse centers and the projection of the circle center also increase.
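A minimal version of this kind of simulation can be sketched as follows. The conic-fitting step is a plain least-squares fit with coordinate normalization (the paper does not state its fitting algorithm), and a single-axis tilt β = π/3 is assumed for brevity:

```python
import numpy as np

def fit_conic(pts):
    """Least-squares conic fit a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0,
    with coordinate normalization for numerical stability."""
    m, s = pts.mean(axis=0), pts.std()
    T = np.array([[1/s, 0, -m[0]/s], [0, 1/s, -m[1]/s], [0, 0, 1]])
    x, y = ((pts - m) / s).T
    D = np.column_stack([x*x, x*y, y*y, x, y, np.ones_like(x)])
    a, b, c, d, e, f = np.linalg.svd(D)[2][-1]      # null vector of D
    Qn = np.array([[a, b/2, d/2], [b/2, c, e/2], [d/2, e/2, f]])
    return T.T @ Qn @ T                             # undo the normalization

def conic_center(Q):
    """Center of the conic with symmetric matrix Q."""
    return np.linalg.solve(2 * Q[:2, :2], -2 * Q[:2, 2])

# Assumed camera and pose: alpha = gamma = 0, beta = pi/3, t = (20, 20, 600)
K = np.array([[3000, 0, 320], [0, 3000, 240], [0, 0, 1]], float)
b = np.pi / 3
R = np.array([[np.cos(b), 0, np.sin(b)], [0, 1, 0], [-np.sin(b), 0, np.cos(b)]])
H = K @ np.column_stack([R[:, 0], R[:, 1], [20.0, 20.0, 600.0]])

pc = H @ np.array([0.0, 0.0, 1.0])
pc = pc[:2] / pc[2]                                 # true projected circle center

theta = np.linspace(0, 2 * np.pi, 120, endpoint=False)
errors = {}
for r in (1.0, 10.0):
    world = np.column_stack([r * np.cos(theta), r * np.sin(theta), np.ones(120)])
    img = (H @ world.T).T
    img = img[:, :2] / img[:, 2:]                   # 120 exact image points
    errors[r] = np.hypot(*(conic_center(fit_conic(img)) - pc))

print(errors)   # the ellipse-center error grows with the circle radius
```

Even with noise-free points, the fitted ellipse center deviates from the projected circle center, and the deviation grows with the radius, consistent with Fig. 3.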
Circle center extraction based on projective invariance. In projective geometry, a straight line remains a straight line after projection, and a point on a straight line projects to a point on the projected line. Likewise, if a straight line is tangent to a circle, its projection is tangent to the projection of the circle. As shown in Fig. 4, l1 and l2 are the tangents of c1 and their intersection is p1; l4 and l5 are the tangents of c2 and their intersection is p2. l3 is the straight line through p1 and p2. Line l6 intersects c1 at points p3 and p4 and c2 at points p5 and p6.
The equations of c1 and c2 are x² + y² = r1² and x² + y² = r2², respectively. The homogeneous coordinate of the projection of the circle center is

p′8 = l′3 × l′7,

where l′7 is the straight line through the centers of the ellipses e1 and e2 and l′3 is the image of l3. Multiple estimates of p′8 can be obtained by involving more secant lines l′6, and the mean of their coordinates is regarded as the projection of the circle center.
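In conic algebra, the intersection p1 of the two tangents drawn at the points where l6 meets c1 is the pole of l6 with respect to c1 (and similarly p2 for c2), so the construction can be carried out on the conic matrices without explicitly computing tangent points. The following is a sketch on synthetic data; the homography, radii and secant line are illustrative assumptions:

```python
import numpy as np

def image_conic(H, r):
    """Image of the circle x^2 + y^2 = r^2 under plane-to-image homography H."""
    Hinv = np.linalg.inv(H)
    return Hinv.T @ np.diag([1.0, 1.0, -r * r]) @ Hinv

def conic_center(Q):
    """Homogeneous center of the conic with symmetric matrix Q."""
    x, y = np.linalg.solve(2 * Q[:2, :2], -2 * Q[:2, 2])
    return np.array([x, y, 1.0])

# Assumed homography (same form as in the simulations) and radii r1=2, r2=1
K = np.array([[3000, 0, 320], [0, 3000, 240], [0, 0, 1]], float)
b = np.pi / 6
R = np.array([[np.cos(b), 0, np.sin(b)], [0, 1, 0], [-np.sin(b), 0, np.cos(b)]])
H = K @ np.column_stack([R[:, 0], R[:, 1], [20.0, 20.0, 600.0]])
Q1, Q2 = image_conic(H, 2.0), image_conic(H, 1.0)   # images of c1, c2

# A secant line l6' near the projected center; its pole w.r.t. each conic is
# the intersection of the two tangents at the crossing points, i.e. p1', p2'.
l6 = np.array([0.0, 1.0, -343.0])                   # the line v = 343
p1 = np.linalg.solve(Q1, l6)                        # pole of l6' w.r.t. e1
p2 = np.linalg.solve(Q2, l6)                        # pole of l6' w.r.t. e2

l3 = np.cross(p1, p2)                               # line through the poles
l7 = np.cross(conic_center(Q1), conic_center(Q2))   # line through ellipse centers
p8 = np.cross(l3, l7)                               # p8' = l3' x l7'
p8 = p8[:2] / p8[2]

pc = H @ np.array([0.0, 0.0, 1.0])
pc = pc[:2] / pc[2]
err = np.hypot(*(p8 - pc))
print(err)   # ~0: the construction recovers the projected circle center
```

In practice Q1 and Q2 would come from ellipse fits to the detected edges, and averaging p′8 over several secant lines suppresses the fitting noise, as described above.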

Circle center extraction experiment and result analysis
Simulation experiment using equal-space points of circles. The intrinsic parameters of the camera remain unchanged. The camera external parameters are α = 0 rad, β = π/3 rad, γ = 0 rad, x0 = 20 mm, y0 = 20 mm and z0 = 600 mm. The radius of c2 ranges from 1 to 10 mm, and the radius of c1 is twice that of c2. A total of 120 points are selected on each circle at equal intervals, and then 20 ellipses are fitted. The pixel coordinates of the projection of the concentric circle center are extracted by the big-ellipse method, the small-ellipse method, the cross ratio method [41], the nine-point method [44] and our method. As shown in Fig. 5, the errors of the above methods increase with increasing radius of the circle. The error of our method is the smallest, at 0.135 pixels, and the error of the cross ratio method is the second smallest, at 0.453 pixels. The error of the big-ellipse method is the largest, at 4.824 pixels. Compared with the errors of the cross ratio method and the big-ellipse method, those of the proposed method are 48.12% and 71.72% smaller, respectively.
When the radius of c2 is 1 mm and that of c1 is 2 mm, the camera external parameters are α = γ = 0 rad, and β ranges from π/12 rad to π/3 rad. The influence of the camera external parameters on the pixel coordinate extraction accuracy of the circle center is shown in Fig. 6. The influence of Gaussian noise with μ = 0 and σ ranging from 0.1 to 1.0 is shown in Fig. 7.
According to Fig. 6, only the error of the nine-point method increases with β. When β = π/12 rad, the error of our method is the smallest, at only 0.007 pixels, while the error of the cross ratio method is 0.168 pixels; the error of our method is approximately 94.64% lower. When β = π/3 rad, the error of our method is still the smallest, at 0.282 pixels, while the error of the cross ratio method is 0.453 pixels; compared with the cross ratio method, the error of our method is 37.74% smaller. According to Fig. 7, when σ = 0.9, the error of our method is 0.065 pixels larger than that of the cross ratio method. Overall, however, our method is least affected by noise, and its minimum error is 0.066 pixels, which is 67.60% less than that of the cross ratio method.

Projective transformation-based simulations.
A concentric circle template was drawn with CAD software, and then projective geometric transformations were carried out, as shown in Fig. 8a. Gaussian noise with μ = 0 and σ = 0.8 was then added to the transformed image, which was subsequently smoothed to obtain the transition effects of the contour edges, as shown in Fig. 8b.
The radius of c2 is 1 mm, and that of c1 is 2 mm. The coordinates extracted by the cross ratio method, the nine-point method and our method are compared with the corner detection results in Table 1.
According to Table 1, the maximum error of the nine-point method is 36.9471 pixels, the minimum error is 0.3297 pixels, the average is 11.6565 pixels and the standard deviation is 10.9788 pixels. The maximum and minimum errors of the cross ratio method are 0.3385 pixels and 0.2113 pixels, and the average error and standard deviation are 0.3039 pixels and 0.0355 pixels, respectively. The maximum and minimum errors of our method are 0.3946 pixels and 0.0355 pixels, respectively, and the average error and standard deviation are 0.188 pixels and 0.107 pixels, respectively. The error of our method is the smallest, but the cross ratio method is relatively stable. Images of the concentric circle template used in the following experiment are shown in Fig. 9.
Because of the large error of the nine-point method, this method is not used. Instead, the corner detection method, cross ratio method, small-ellipse method and our method are studied, and the results are shown in Fig. 10.
According to Fig. 10, compared with the corner method, the maximum error of the small-ellipse method is 28.211 pixels, the minimum error is 2.7074 pixels, the average error is 10.9562 pixels and the standard deviation is 7.5066 pixels. The maximum error of the cross ratio method is 4.8021 pixels, the minimum error is 0.7616 pixels, the average error is 2.2626 pixels and the standard deviation is 1.4764 pixels. The maximum and minimum errors of our method are 3.4 pixels and 0.5657 pixels, and the average error and standard deviation are 1.7491 pixels and 1.0533 pixels, respectively. Therefore, the accuracy of our method is the highest. Compared with those of the cross ratio method, the maximum error, minimum error, average error and standard deviation of our method are 29.20%, 25.72%, 22.70% and 28.66% less, respectively. Compared with the small ellipse method, the accuracy of our method is higher and the maximum error, minimum error, average error and standard deviation are 87.95%, 79.11%, 84.04% and 85.97% less, respectively.
Our method was further compared with a geometry-based algorithm [45] and an algebra-based algorithm [46], both of which are used in recently published articles, to examine its feature extraction performance. The geometry-based algorithm uses the principle that the polar lines intersect at the center of the circles to establish an objective function for extracting the center coordinates of the concentric circles. The algebra-based algorithm obtains the center coordinates of concentric circles by solving for an eigenvector of the ellipse coefficient matrix. The validation experiments were carried out on a set of ten images, and each algorithm was run five times on each image. The means and standard deviations of the center coordinate extraction errors are tabulated in Table 2.
According to Table 2, the feature extraction accuracy of the proposed method is higher than that of the other two methods. For the u-coordinates, the mean circle center extraction errors of the geometry-based algorithm, the algebra-based algorithm and the proposed method are 5.5970, 1.4556 and 1.0957 pixels, respectively; the results of the two compared algorithms are 4.50 and 0.36 pixels larger than that of the proposed method. The standard deviations are 2.0573, 0.8268 and 0.6839 pixels, respectively, i.e., 1.37 and 0.14 pixels larger than that of the proposed method. For the v-coordinates, the means are 1.4809, 0.9615 and 0.9424 pixels, respectively (0.54 and 0.02 pixels larger than the proposed method), and the standard deviations are 1.1301, 0.6650 and 0.6260 pixels, respectively (0.50 and 0.04 pixels larger than the proposed method).
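For intuition, the algebraic idea can be sketched as a generalized eigenvalue problem: the polar of the projected circle center with respect to both image conics is the same line (the image of the line at infinity of the target plane), so Q1p = λQ2p and p is a generalized eigenvector. The following is only a sketch of this principle on synthetic data, not necessarily the exact algorithm of ref. 46:

```python
import numpy as np

def image_conic(H, r):
    """Image of the circle x^2 + y^2 = r^2 under plane-to-image homography H."""
    Hinv = np.linalg.inv(H)
    return Hinv.T @ np.diag([1.0, 1.0, -r * r]) @ Hinv

# Assumed homography and radii for the synthetic demo
K = np.array([[3000, 0, 320], [0, 3000, 240], [0, 0, 1]], float)
b = np.pi / 6
R = np.array([[np.cos(b), 0, np.sin(b)], [0, 1, 0], [-np.sin(b), 0, np.cos(b)]])
H = K @ np.column_stack([R[:, 0], R[:, 1], [20.0, 20.0, 600.0]])
Q1, Q2 = image_conic(H, 2.0), image_conic(H, 1.0)

# For concentric circles, Q2^{-1} Q1 has a double eigenvalue whose eigenvectors
# span the vanishing line; the remaining simple eigenvalue belongs to the
# projected circle center.
w, V = np.linalg.eig(np.linalg.solve(Q2, Q1))
sep = [min(abs(w[i] - w[j]) for j in range(3) if j != i) for i in range(3)]
p = np.real(V[:, int(np.argmax(sep))])              # most isolated eigenvalue
p = p[:2] / p[2]

pc = H @ np.array([0.0, 0.0, 1.0])
pc = pc[:2] / pc[2]
err = np.hypot(*(p - pc))
print(err)   # ~0: the eigenvector recovers the projected circle center
```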
Although the accuracy of our method is high, its accuracy and robustness can be further improved by enhancing edge detection and eliminating outliers to reduce the ellipse fitting error.

Concentric calibration template and its application
Simulation image generation and experiment. Simulation image generation. A rectangular-array calibration template consists of m rows and n columns of concentric circles, with the origin located at the upper left corner. The center coordinate of the first concentric circle in the upper left corner is (x0, y0), and the intervals of the centers along the x-axis and y-axis are Δx and Δy, respectively. The center coordinates of the ith column and the jth row are then

xi = x0 + (i − 1)·Δx and yj = y0 + (j − 1)·Δy, (16)

where 2 ≤ i ≤ n and 2 ≤ j ≤ m.

Figure 9. Images of the concentric circle template.

Result of the simulation experiment. According to Fig. 11, when the radius of c2 is larger than 2 mm, the errors of the intrinsic parameters are more stable. The error of fx decreases with increasing radius; the minimum error of our method is 0.0047 and that of Zhang's method is 0.0042, which are essentially the same. The maximum error of our method is 2.045 and that of Zhang's method is 2.164, so the error of our method is 5.50% smaller. The error of fy also decreases with increasing radius; the minimum error of our method is 0.0047 and that of Zhang's method is 0.0055, so the error of our method is 12.96% smaller. The maximum error of our method is 2.069 and that of Zhang's method is 2.166, so the error of our method is 4.48% smaller. The minimum error of u0 of our method is 0.115 pixels, while that of Zhang's method is 0.055 pixels, 52.17% smaller than ours. The maximum error of u0 of our method is 0.324 pixels and that of Zhang's method is 0.396 pixels, so the error of our method is 18.18% smaller. The minimum error of v0 of our method is 0.246 pixels and that of Zhang's method is 0.279 pixels, so the error of our method is 11.83% smaller.
The maximum error of v0 of our method is 0.294 pixels and that of Zhang's method is 0.716 pixels, so the error of our method is 0.422 pixels smaller. According to Fig. 12, the errors decrease as the intervals between the feature points increase. The minimum error of fx of our method is 1.738, while that of Zhang's method is 0.094. The maximum error of our method is 19.336 and that of Zhang's method is 30.882, so the error of our method is 37.39% smaller. The minimum error of fy of our method is 1.783, while that of Zhang's method is 0.0069. The maximum error of our method is 19.374 and that of Zhang's method is 30.262, so the error of our method is 35.98% smaller. The minimum error of u0 of our method is 0.113 pixels and that of Zhang's method is 0.284 pixels, so the minimum error of our method is less than half that of Zhang's method. The maximum error of our method is 4.996 pixels, while that of Zhang's method is 2.519 pixels, so here the error of our method is larger. The minimum error of v0 of our method is 0.048 pixels and that of Zhang's method is 0.217 pixels, so the error of our method is 77.88% smaller. The maximum error of our method is 1.177 pixels and that of Zhang's method is 5.939 pixels, so the error of our method is 4.762 pixels smaller. Generally, when the interval is more than 14 mm, the error of the camera intrinsic parameters is small; therefore, an interval of 14 mm may be appropriate.
According to Fig. 13, when the number of images is 12 to 30, the error of our method is generally less than that of Zhang's method. As the number of images increases, the error in fx decreases. The minimum error of our method is 0.0052 and that of Zhang's method is 0.011, so the minimum error of our method is approximately half that of Zhang's method. The maximum error of our method is 0.082 and that of Zhang's method is 0.101, so the error of our method is 18.81% smaller. The minimum error of fy of our method is 0.0044 and that of Zhang's method is 0.0107, so the error of our method is 58.88% smaller; the maximum error of our method is 0.087 and that of Zhang's method is 0.103, so the error of our method is 15.53% smaller, and the error is small when the number of images is between 18 and 30. The minimum error of u0 of our method is 0.292 pixels and that of Zhang's method is 0.287 pixels, so the error of our method is 1.74% larger. The maximum error of our method is 0.322 pixels and that of Zhang's method is 0.311 pixels, so the result of our method is 3.42% larger. The minimum error of v0 of our method is 0.285 pixels and that of Zhang's method is 0.265 pixels, so the result of our method is 7.02% larger. The maximum error of our method is 0.312 pixels and that of Zhang's method is 0.302 pixels, so the result of our method is 3.21% larger. Generally, when the number of images is between 18 and 27, the calibration error is small; therefore, 21 images may be appropriate. According to Fig. 14, when the number of feature points is 88, the calibration error is small. The error of fx of our method is 0.01 and that of Zhang's method is 0.02.
The error of our method is approximately half that of Zhang's method. The error in fy of our method is 0.0092 and that of Zhang's method is 0.0217, so the error of our method is 57.6% smaller. The error in u0 of our method is 0.292 pixels and that of Zhang's method is 0.287 pixels; the result of our method is 1.7% larger than that of Zhang's method. The error in v0 of our method is 0.294 pixels and that of Zhang's method is 0.279 pixels; the result of our method is 5.1% larger than that of Zhang's method. Although the errors in u0 and v0 are slightly larger than those of Zhang's method, the maximum difference is less than 0.015 pixels. Therefore, we cautiously infer that when the radius is 3 mm, the number of images is 21, the interval of feature points is 14 mm and the number of feature points is 88, a better calibration result is achieved.
Reprojection error. Table 3 tabulates the reprojection errors of the two methods in the pixel coordinate system. According to Table 3, the reprojection error of our method along the u-axis is 0.26% less than that of Zhang's method, while the reprojection error along the v-axis is 16.41% less. This means that the calibration accuracy of our method is slightly higher than that of Zhang's method.
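For reference, per-axis and total reprojection errors can be computed from detected versus reprojected feature points. The convention below (mean absolute error per axis, root-mean-square point distance for the total) is an assumption, since the paper does not state its exact formula, and the point data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
detected = rng.uniform(0, 640, size=(88, 2))            # detected centers (synthetic)
reprojected = detected + rng.normal(0, 0.2, (88, 2))    # reprojected points (synthetic)

d = reprojected - detected
du, dv = np.abs(d).mean(axis=0)                 # mean abs error along u and v
total = np.sqrt((d ** 2).sum(axis=1).mean())    # RMS point distance ("total" error)
print(du, dv, total)
```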
Binocular system calibration experiment. Based on the simulation results, the calibration board was made to calibrate a binocular vision system with two MV-EM510M/C cameras. The images of the calibration board collected by the left and right cameras are shown in Fig. 15.
The calibration results for the camera intrinsic parameters, including the distortion parameters, are shown in Table 4. The results of our method are basically consistent with those of Zhang's method, indicating that our method has high stability and reliability. Although the two cameras are of the same model, their intrinsic parameters are not the same; it is necessary to calibrate them separately to ensure the measurement accuracy of the binocular system. The reprojection errors of the two groups of experiments are shown in Table 5. In general, the proposed method achieves lower reprojection errors. The minimum reprojection error for the u-axis is 0.17913 pixels, and that for the v-axis is 0.23204 pixels. However, in the second experiment, the left-camera reprojection error of Zhang's method for the u-axis is 2.55% lower than that of our method. The total reprojection errors of the left and right cameras were also calculated. The total reprojection error of the left camera with our method is 0.4037 pixels, while that of Zhang's method is 0.4903 pixels, so our result is 17.66% smaller. The total reprojection error of the right camera with our method is 0.3136 pixels, while that of Zhang's method is 0.3999 pixels, so our result is 21.58% smaller. This shows that the overall accuracy of the proposed method is higher, although there are also a few cases of large deviation and scattered results, which may be attributed to two causes.
1. There are inevitable errors in the manufacture of the printed calibration plates, such as the flatness error of the plate and the roundness error of the concentric circles.
2. Uniform illumination is difficult to guarantee, so the consistency of image quality is reduced, which leads to fluctuation of the ellipse fitting error and affects the accuracy of circle center extraction.

Conclusion
In this paper, a camera intrinsic parameter calibration method based on a concentric circle array was proposed that requires no corner extraction. The main work included the following: a high-precision extraction method for concentric circle centers based on projective properties and geometric constraints was proposed, methods of simulation image creation were explored, and the corresponding simulation and physical experiments were carried out. The experimental results showed that the total reprojection errors of the left and right cameras were reduced by 17.66% and 21.58%, respectively, compared with Zhang's method. Therefore, the proposed calibration method has high accuracy. The calibration accuracy is expected to be further improved by improving the accuracy of edge detection and by refining the precision design of the calibration target.