Real-time and ultrahigh accuracy image synthesis algorithm for full field of view imaging system

Abstract

In this paper, we propose a real-time, ultrahigh-accuracy and full-field-of-view (RUF) algorithm for a full-field-of-view (FOV) imaging system. The proposed algorithm combines a rough matching and a precise matching method to stitch multiple images covering the whole FOV in a short time and with high imaging quality. To verify the real-time imaging performance of the RUF algorithm, we also fabricate a multi-camera imaging system comprising 19 independent cameras. The experimental results show that the full-FOV system achieves good performance over a near-limiting FOV of 360° × 240° with low distortion, while the optical resolution reaches 95 megapixels. With 100% registration accuracy and imaging within one second, the RUF algorithm can be widely applied to large-FOV optical imaging fields such as remote sensing imaging, microscopy imaging and monitoring systems.

Introduction

Optical imaging systems are now widely used in the medical, biological and military fields1,2,3. As their application scope expands, higher performance is required of imaging systems, such as a larger FOV, higher resolution and lower distortion. In previous work, the conventional single-channel imaging system adopted a complex diffractive/refractive hybrid design4; however, the uncorrected secondary spectrum of the optical glass results in poor image quality. Several groups have contributed to planar imaging systems with multiple channels5,6,7,8, but their FOV is small and the imaging quality is low. Moreover, a multi-camera imaging system without image processing technology requires multi-window scanning imaging, which greatly confuses observation when real-time, large-FOV performance is needed9. To obtain a full FOV and high resolution, image synthesis algorithms are usually required, and systems combined with such algorithms can achieve high resolution10,11,12,13,14,15,16,17. For example, the AWARE-2 camera achieves a high-resolution synthesized image algorithmically, but it covers only 90°–120° horizontally and 60°–70° vertically, which cannot provide a capacious view10. Furthermore, conventional image processing algorithms such as the rotation-clipping-mapping algorithm often contain relatively large errors, with imaging precision only at the millimeter level11. A conventional large-FOV imaging system with pixel-level accuracy cannot guarantee full-field-of-view imaging12, and a high-resolution imaging system based on a traditional synthesis algorithm struggles to achieve real-time imaging, which is extremely detrimental to engineering requirements14. Some image processing techniques matched to the imaging system cannot correct distortion, leaving severe distortion in the output15. Imaging systems based on scanning are also time-consuming and cannot deliver a timely judgment on the imaging result16, and some imaging systems with image processing techniques lack stable robustness and accuracy in complex environments17. These conventional multi-channel imaging systems cannot meet the combined requirements of large FOV, high resolution and low distortion due to the limitations of their optical principles. Therefore, an imaging system with a large FOV, high resolution and low distortion is highly demanded.

In this paper, we propose a real-time, ultrahigh-accuracy and full-field-of-view (RUF) algorithm for a full-field-of-view (FOV) imaging system. The proposed algorithm combines a rough matching and a precise matching method to stitch multiple images covering the whole FOV in a short time and with high imaging quality. To verify the real-time imaging performance of the RUF algorithm, we also fabricate a multi-camera imaging system comprising 19 independent low-cost cameras. The experimental results show that the full-FOV system achieves good performance over a near-limiting FOV of 360° × 240° with low distortion, while the optical resolution reaches 95 megapixels. With 100% registration accuracy and imaging within one second, the RUF algorithm can be widely applied to large-FOV optical imaging fields such as remote sensing imaging, microscopy imaging and monitoring systems.

Theory on the proposed algorithm

System model

Figure 1 illustrates the system model of the proposed algorithm. The multi-aperture imaging system divides the whole object space into several small FOVs (for example, a–g), and each imaging channel captures an image (A–G) of one small FOV. The captured images are then mosaicked by the proposed algorithm to obtain a large-FOV, high-resolution image.

Figure 1
figure1

The system model of proposed algorithm.

The imaging relationship between the total FOV and a single FOV is obtained by analyzing the imaging principle of a large-FOV system. The total FOV of the imaging system can be expressed as:

$$ TFOV = nFOV - F_{C} , $$
(1)

where TFOV is the total FOV of the imaging system, FOV is the FOV of a single channel, n is the number of system apertures, and Fc is the FOV of the overlapping portions. In principle, the field angle and pixel count can be scaled up arbitrarily, although in practice they remain constrained by the low-cost camera hardware and the wide-FOV principle.
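Equation (1) is a simple accounting identity and can be coded directly; the sketch below is a minimal illustration, and the example numbers are hypothetical rather than the fabricated system's measured overlap:

```python
def total_fov(n: int, single_fov: float, overlap_fov: float) -> float:
    """Eq. (1): TFOV = n * FOV - Fc, where Fc is the total overlapping FOV."""
    return n * single_fov - overlap_fov

# Hypothetical 1-D example: two 100-degree channels sharing a 20-degree
# overlap cover a 180-degree strip.
print(total_fov(2, 100.0, 20.0))
```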

Real time, ultrahigh accuracy and full-FOV (RUF) algorithm

Based on the above structure, we propose a RUF algorithm for the full-field-of-view imaging system. The proposed algorithm combines a rough matching and a precise matching method. The flowchart of the proposed RUF algorithm is shown in Fig. 2.

Figure 2
figure2

The flowchart for our proposed RUF algorithm.

In this prototype, the multi-camera outputs have the natural advantage of parallel processing, so real-time imaging is realized, meeting the practical requirements of real-time, ultrahigh-precision (100%) imaging.

The RUF algorithm includes four main steps: parallel acceleration using multithreading and GPU technology, extraction and description of feature points, precision optimization (100% precision) and image reconstruction. Benefiting from independent multi-view snapshots, multi-view real-time imaging is accomplished using the multithreading technique. AKAZE18 is used to extract feature points and is implemented with multithreading. Feature points in multiple images are extracted independently without affecting each other, so the number of created threads (N) and the number of images (M) satisfy the following relation:

$$ Y = \left\{ {\begin{array}{*{20}c} {M - 1,\;N > M} \\ {N - 1,\;N \le M,} \\ \end{array} } \right. $$
(2)

where Y is the number of threads that need to be created in addition to the main thread, and a mutex prevents multiple threads from accessing the same image at the same time. Owing to this synchronization efficiency, the parallel time is significantly shorter than the serial time. The extracted feature points are then matched using FREAK descriptors19; combining AKAZE and FREAK yields the RUF feature description.
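The thread bookkeeping of Eq. (2) and the mutex-guarded parallel extraction can be sketched with Python's standard library; the toy `extract_features` below is a stand-in for the real AKAZE/FREAK stage, which in practice would call OpenCV:

```python
import threading
from concurrent.futures import ThreadPoolExecutor

def extra_threads(n_threads: int, n_images: int) -> int:
    """Eq. (2): worker threads to create in addition to the main thread."""
    return n_images - 1 if n_threads > n_images else n_threads - 1

results = {}
lock = threading.Lock()  # the mutex: one thread writes shared state at a time

def extract_features(idx, image):
    feats = [p for p in image if p > 0]  # stand-in for AKAZE keypoint detection
    with lock:
        results[idx] = feats

images = [[1, -2, 3], [4, 5, -6]]
with ThreadPoolExecutor(max_workers=1 + extra_threads(4, len(images))) as pool:
    for i, img in enumerate(images):
        pool.submit(extract_features, i, img)
# leaving the `with` block waits for every worker to finish
```

Because each image is handled by its own task and the shared dictionary is only touched under the lock, the images are processed independently, exactly as the text requires.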

Precision optimization utilizes a two-step method: (1) the proposed variable iteration method for rough matching, and (2) optimal matching based on equal slopes for precise matching, as illustrated in Fig. 3. In this process, points of interest are matched and the remaining points are discarded; the best registration result (100% accuracy) is used for the subsequent image reconstruction. Note that both steps satisfy the symmetry principle: each image and the reference image must meet the same feature-matching condition before a pair is regarded as a good point pair. Conventional matching filtration usually applies a constant threshold to decide whether two points form a matching pair, and it satisfies only a unilateral matching requirement. Even by repeatedly choosing a new fixed threshold, some feature matching pairs can be obtained but false matches still remain, precisely because the threshold is constant. In this paper, we propose a variable iteration algorithm that aims to solve this mismatch problem. The theory of the proposed rough filtration is illustrated in Fig. 3a, where the blue rectangle represents image A, the points (B–F) are the points of interest kept after the primary filtration, and the blue and white circles are search areas with different radii. In this step, we select a matched point of interest (B) as the center point; it is regarded as a good point of interest if four other points (n = 5 in total) lie around it within a circle of radius R. If the center point has only three points (n < 5) around it, the radius is increased by one (R + 1) until the fifth point of interest is found, at which point the loop exits. The algorithm core is described as follows:

$$ L = \left\{ {\begin{array}{*{20}l} {R,n \ge 5} \\ {R + 1,n < 5,} \\ \end{array} } \right. $$
(3)

where L represents the circular search radius, which is a variable; this gives an accuracy advantage over the comparison filtration method. The idea avoids the drawback of circular search with a constant radius, which has important implications for ultrahigh-precision imaging, and non-ideal matching point pairs are removed. The principle of optimal matching based on equal slopes is illustrated in Fig. 3b, where it is used to retain the optimal matched point pairs. The expression for the equal slope is as follows:

$$ k_{n} = \frac{y_{n} - y_{n}^{\prime}}{x_{n} - x_{n}^{\prime}}, $$
(4)
Figure 3
figure3

The schematic diagram of accuracy optimization for proposed RUF algorithm. (a) The principle of the variable iteration step. (b) Optimal matching based on equal slopes.

where \(k_{n}\) is the slope of the line connecting the nth matched point pair, (\(x_{n}\), \(y_{n}\)) is the coordinate of the nth matched point, and (\(x_{n}^{\prime}\), \(y_{n}^{\prime}\)) is the coordinate of the corresponding matched point in the reference image. As illustrated in Fig. 3b, correctly matched point pairs (for example, B and B′, C and C′, D and D′, F and F′) are retained, and the wrong pair (V and V′) is discarded. Then the mapping between each image and the reference image is estimated by a perspective transformation. The mapped images are defined in a uniform coordinate system, and the relationship is given by:

$$ \left[ {X,Y,Z} \right] = M\left[ {x,y,z} \right], $$
(5)

where [x, y, z] is the coordinate in the original image, [X, Y, Z] is the transformed coordinate in the new image, and M is the transformation matrix, from which we obtain the perspective transformation model used in the subsequent image reconstruction.
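The two filtration steps can be sketched in plain Python. The tolerance, maximum radius and median-based consensus slope below are our own assumptions added for a robust illustration; the paper's method follows the exact rules of Eqs. (3) and (4):

```python
import math

def variable_radius_filter(points, r0=10.0, min_neighbors=4, max_r=100.0):
    """Rough matching, Eq. (3): keep a point once at least `min_neighbors`
    other matches fall inside a circle whose radius grows from r0 in steps
    of one (R -> R + 1) up to a cutoff max_r (our assumption)."""
    kept = []
    for cx, cy in points:
        r = r0
        while r <= max_r:
            n = sum(1 for x, y in points
                    if (x, y) != (cx, cy) and math.hypot(x - cx, y - cy) <= r)
            if n >= min_neighbors:
                kept.append((cx, cy))
                break
            r += 1.0
    return kept

def equal_slope_filter(pairs, tol=1e-2):
    """Precise matching, Eq. (4): keep pairs whose connecting lines share
    the consensus slope k_n = (y_n - y_n') / (x_n - x_n')."""
    slopes = [(yn - ynp) / (xn - xnp)  # assumes x_n != x_n' for the sketch
              for (xn, yn), (xnp, ynp) in pairs]
    ref = sorted(slopes)[len(slopes) // 2]  # median as the consensus slope
    return [p for p, k in zip(pairs, slopes) if abs(k - ref) <= tol]
```

The surviving pairs would then be handed to a homography estimator (for example OpenCV's `cv2.findHomography`) to recover the matrix M of Eq. (5).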

The obtained registration information is used for image reconstruction. In the reconstruction process, we utilize multi-band fusion and exposure compensation to output a large-FOV, high-resolution image. GPU computing places this process in a parallel processing architecture, which serves not only technical research but also practical engineering needs. Since GPU parallel processing is realized in the RUF algorithm, the proposed algorithm can output a real-time, ultrahigh-precision, full-FOV image.
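As a simplified stand-in for the multi-band fusion and exposure-compensation stage (which a production pipeline might delegate to OpenCV's stitching module), the sketch below linearly feathers two overlapping grayscale strips; the linear blending weights are our assumption, not the paper's pyramid-based weights:

```python
import numpy as np

def feather_blend(left, right, overlap):
    """Blend two grayscale strips that share `overlap` columns: the left
    image's weight fades from 1 to 0 across the overlap while the right
    image's weight rises, hiding the seam."""
    h, wl = left.shape
    wr = right.shape[1]
    out = np.zeros((h, wl + wr - overlap), dtype=float)
    out[:, :wl - overlap] = left[:, :wl - overlap]   # left-only region
    out[:, wl:] = right[:, overlap:]                 # right-only region
    alpha = np.linspace(1.0, 0.0, overlap)           # left weight ramp
    out[:, wl - overlap:wl] = (alpha * left[:, wl - overlap:] +
                               (1 - alpha) * right[:, :overlap])
    return out
```

A multi-band blender applies this same idea separately per frequency band of a Laplacian pyramid, which preserves fine detail better than a single linear ramp.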

Experiment

System design and simulation

To cover the whole FOV, we first designed a 19-camera imaging system, placed the cameras at given positions, and simulated the FOVs of the 19 cameras on the SolidWorks software platform. Figure 4 shows the FOV model of the 19 cameras. Figure 4a shows the FOV of a single camera; the 19 small FOVs together realize detection without a dead zone, and the effective FOV of each camera is 100° (θ = 100°). Figure 4b,c show the overlap models of the 19 FOVs from different viewing angles. In the simulation, the horizontal FOV of the fabricated system reaches 360° and the vertical FOV exceeds 240°.

Figure 4
figure4

FOV of 19 cameras. (a) FOV of a single camera. (b) Overlap model of view 1. (c) Overlap model of view 2.

For the global camera layout, we establish a three-dimensional coordinate system to describe the spatial distribution of the 19 imaging channels. Based on the SolidWorks platform, the rotation angles of the 19 imaging channels are listed in Table 1, where i is the channel index, θ is the angle of the imaging channel to the X-axis, and β is its angle to the Z-axis. Imaging channel A lies at the center of the 19 channels and is regarded as the reference; rotation angles around it are defined as negative for counterclockwise and positive for clockwise rotation.

Table 1 The angle of 19 imaging channels in space (°).

RUF algorithm for full field of view imaging system

To prove the efficiency of the proposed algorithm, we fabricate a multi-camera imaging system consisting of 19 cameras, as illustrated in Fig. 5a. The spherical frame is made of aluminum, and 19 holes were drilled in it to mount the 19 cameras. The radius of the frame is ~130 mm and its thickness is ~6 mm. The camera, shown in Fig. 5b, consists of a lens and an image sensor; the 13-mm-diameter camera has a FOV of 100° and a focal length of ~3.5 mm. The 19 cameras capture images at 30 frames per second, and one computer connects the cameras and stores the data, serving as the hardware carrier for real-time imaging.

Figure 5
figure5

The preparation result of the fabricated system (130 mm × 130 mm × 6 mm). (a) The structure of the fabricated system. (b) The composition of single imaging channel.

In the experiment, we use the fabricated system to image a building. First, the fabricated system captures the whole building; then, the 19 images are synthesized using the proposed algorithm. The algorithm runs on a PC (Intel Core i9-9880H CPU @ 2.3 GHz/4.8 GHz + RTX 2080) with the Windows 10 operating system, based on the VS2019 + OpenCV 4.2 platform. Each image has a resolution of 2,592 × 1,944. The 19 images are shown in Fig. 6, and the whole-FOV image is shown in Fig. 7. From Fig. 7, we see that the synthesized image has a large FOV, high resolution and low distortion. In the experiment, the whole FOV of the image is 360° × 240°. Each part of the image is very clear, and the optical resolution reaches 95 megapixels. Real-time imaging, an engineering requirement, is also realized.

Figure 6
figure6

The 19 images (captured on 13 June 2019 at 15:26).

Figure 7
figure7

The whole-FOV image captured by fabricated system.

We also compare the fabricated system with a conventional 5-megapixel fisheye camera, as shown in Fig. 8. Both systems can obtain a large-FOV image, but a comparison of the details reveals different performance.

Figure 8
figure8

The comparison of the fisheye camera and our fabricated imaging system. (a) 360° fisheye image captured by the fisheye camera: regions 1–3 are the enlarged details of the labelled regions in (a). (b) The synthesized image with large FOV, high resolution and low distortion: images 1–3 are the enlarged details of the labelled regions in (b).

Comparing the details of the two images in region 1, the fabricated system resolves the part of the scene from the bridge bottom to the grille ceiling. It provides accurate information: for example, exactly how many linear grilles (50) lie between the two elongated white boards on the building ceiling, and exactly how many grilles (3) are occupied by one white board; the details of the window on the wall are also clearly visible. In the fisheye image, these details cannot be seen clearly.

Comparing the details of the two images in region 2, the conventional fisheye camera cannot image the whole scene with uniform exposure: under the complex lighting of a real-world environment, some parts of the scene may be overexposed and others underexposed. The fabricated system avoids this problem. It produces a high-dynamic-range (HDR) image in which the illumination varies from the bright wall to the dark ceiling and back to the colorful scene, a range over which the fisheye camera cannot obtain uniform exposure (see region 2 in the fisheye image). In the yellow-labeled region of the fabricated system's image, the characters in the exhibition hall are clearly legible. The comparison shows an HDR synthesized image in which no partially dark area obscures the color information of the object space. This advantage mainly benefits from the independent exposure of each camera, which more closely matches human vision.

Comparing the details of the two images in region 3, the conventional imaging system suffers serious distortion because of its large FOV, whereas the proposed algorithm synthesizes multiple low-dynamic-range (LDR) images into an HDR image. Each camera has a relatively small FOV, which greatly reduces the distortion. For instance, the white border of the cuboid building is clearly straight in the synthesized image but curved in the fisheye image. The combination of the system and the algorithm provides a competitive advantage over the comparison system; therefore, the fabricated system obtains a large-FOV image with low distortion and high resolution.

Accuracy of RUF

To prove the accuracy of the proposed algorithm, we performed several experiments and illustrate the whole imaging process of the RUF image synthesis algorithm.

An example of point-of-interest detection, filtration and matching is illustrated in Fig. 9a–g. Figure 9a,b are the input images, and Fig. 9c,d show the feature points marked with circles of various colors. The proposed variable iteration method removes the bad points of interest; the matching result is illustrated in Fig. 9e, where the blue points of interest are discarded and the red ones are retained as good matches.

Figure 9
figure9

A near-field scene for RUF imaging, (a,b) original images, (c,d) points of interest are extracted, (e) variable iteration method for the rough matching, (f) optimal matching for the precise matching, (g) the result for image reconstruction.

Owing to the introduction of variable thresholds, the matching precision is greatly improved until only the good matches are retained (see the previous "Real time, ultrahigh accuracy and full-FOV (RUF) algorithm" section for details). Considering the computation cost, optimal matching based on equal slopes is then applied (see the same section for details). The matched result is illustrated in Fig. 9f, where the red matched pairs with equal slopes are retained and used to find the optimal solution. The result of image reconstruction is illustrated in Fig. 9g, which proves that the RUF method can adapt to complex conditions including scaling, uneven illumination and rotation. RUF thus enables real-time, 100%-accuracy, high-resolution imaging to be realized.

The RUF example data are given in Fig. 10. In this step, GPU acceleration and multithreading make real-time imaging possible, and the matched pairs based on equal slopes can always be found as the best solution; Fig. 10 shows that the RUF registration information achieves 100% accuracy in a near-field scene. We introduce SSIM20 to quantitatively evaluate the image quality. We select two blue regions in the patchwork areas of the composite picture (see Fig. 9g). As listed in Table 2, the values of SSIM1 (comparing the left blue region with the identical region in original image a) and SSIM2 (comparing the right blue region with the identical region in original image b) are computed. The image synthesis algorithm adapts to complex environments; the previous sections have demonstrated the adaptability of the RUF algorithm to angle, light and scale, and different rotation angles and lighting conditions inevitably cause the SSIM to drop, a decline that is predictable and acceptable. The red and purple regions are selected for further verification: Table 2 shows that the SSIM value near the split seam stays above 0.95 and that of the non-stitched region stays above 0.98. The results demonstrate high image quality (SSIMmax = 1), because both compared patches originate from the original images (theoretically, SSIM = 1). Owing to the distortion, rotation, scaling and other effects of the algorithm on the synthesized image, the measured SSIM must be less than 1, but the stitching accuracy approaches 100%.

Figure 10
figure10

The line chart of RUF data.

Table 2 The image quality evaluation for near-field scene.

To further prove the ultrahigh accuracy of the algorithm, a far-field scene is illustrated in Fig. 11, which shows that all points of interest are correct (a 100% registration rate). SSIM is again used to evaluate the image quality, and Fig. 11g demonstrates that the SSIM indexes of the two yellow regions near the seam are both 1.0. Therefore, the proposed algorithm achieves ultrahigh accuracy (100%) with real-time imaging, which is of great significance for the application of the algorithm in engineering.

Figure 11
figure11

A far-field scene for RUF imaging, (a,b) original images, (c,d) points of interest are extracted, (e) variable iteration method for the rough matching, (f) optimal matching for the precise matching, (g) the result for image reconstruction.

The processing time of RUF for the full-FOV system is described in Table 3, which lists the time required for each step of the full-FOV algorithm (19 images). Table 4 gives the detailed processing time for various numbers of images (2, 7 and 19), where the near-field and far-field scenes take 227 ms and 158 ms, respectively. Compared with serial processing, parallel processing improves the time by an order of magnitude.

Table 3 The time of algorithm step for full-field-of-view imaging system.
Table 4 The detailed processing time for various number of images.

Conclusion

In this paper, we proposed a real-time, ultrahigh-accuracy and full-field-of-view (RUF) algorithm for a full-field-of-view (FOV) imaging system. The proposed algorithm combines a rough matching and a precise matching method to stitch multiple images covering the whole FOV in a short time and with high imaging quality. To verify the real-time imaging performance of the RUF algorithm, we also fabricated a multi-camera imaging system comprising 19 independent cameras. The experimental results show that the full-FOV system achieves good performance over a near-limiting FOV of 360° × 240° with low distortion, while the optical resolution reaches 95 megapixels. With 100% registration accuracy and imaging within one second, the RUF algorithm can be widely applied to large-FOV optical imaging fields such as remote sensing imaging, microscopy imaging and monitoring systems.

References

  1. Beck, R. N. Nomenclature for Fourier transforms of spread functions of imaging systems used in nuclear medicine. J. Nucl. Med. 13(9), 704–705 (1972).
  2. Jalili, N. & Laxminarayana, K. A review of atomic force microscopy imaging systems: Application to molecular metrology and biological sciences. Mechatronics 14(8), 907–945 (2004).
  3. Sabatini, R., Richardson, M. A., Cantiello, M. & Fiorini, P. A novel approach to night vision imaging systems development, integration and verification in military aircraft. Aerosp. Sci. Technol. 31(1), 10–23 (2013).
  4. Yang, J. J., Flores, A. & Wang, M. R. Achromatic hybrid refractive–diffractive lens with extended depth of focus. Appl. Opt. 43(30), 5618–5630 (2004).
  5. Kitamura, Y. Reconstruction of a high-resolution image on a compound-eye image-capturing system. Appl. Opt. 43(8), 1719–1727 (2004).
  6. Horisaki, R., Choi, K., Hahn, J., Tanida, J. & Brady, D. J. Generalized sampling using a compound-eye imaging system for multi-dimensional object acquisition. Opt. Express 18(18), 19367–19378 (2010).
  7. Horisaki, R. & Tanida, J. Compact compound-eye projector using superresolved projection. Opt. Lett. 36(2), 121–123 (2011).
  8. Deng, H., Gao, X., Ma, M., Li, Y. & Zhong, X. Catadioptric planar compound eye with large field of view. Opt. Express 26(10), 12455 (2018).
  9. Rubin, M., Bose, K. P. & Kim, S. H. Mo1517 successful deployment and use of third eye panoramic a novel side viewing video CAP fitted on a standard colonoscope. Gastroint. Endosc. 79(5), AB466 (2014).
  10. Brady, D. J. et al. Multiscale gigapixel photography. Nature 486(7403), 386–389 (2012).
  11. Cao, A. et al. Structural design and image processing of a spherical artificial compound eye. Optik 126(21), 3099–3103 (2015).
  12. Shimizu, S. et al. Moving object detection by mobile Stereo Omni-directional System (SOS) using spherical depth image. Pattern Anal. Appl. 9(2–3), 113–126 (2006).
  13. Jeong, K. H., Kim, J. & Lee, L. P. Biologically inspired artificial compound eyes. Science 312(5773), 557–561 (2006).
  14. Carles, G., Downing, J. & Harvey, A. R. Super-resolution imaging using a camera array. Opt. Lett. 39(7), 1889–1892 (2014).
  15. Yu, L., Keyi, W. & Gongshu, F. Photometric calibration and image stitching for a large field of view multi-camera system. Sensors 16(4), 516 (2016).
  16. Wang, J. & Wu, J. Wide field of view multifocal scanning microscopy with sparse sampling. J. Biomed. Opt. 21(2), 026008 (2016).
  17. Damian, W. Multi-camera imaging system for UAV photogrammetry. Sensors 18(8), 2433 (2018).
  18. Alcantarilla, P. F., Nuevo, J. & Bartoli, A. Fast explicit diffusion for accelerated features in nonlinear scale spaces. BMVC 34, 1281–1298 (2013).
  19. Alahi, A., Ortiz, R. & Vandergheynst, P. FREAK: Fast retina keypoint. CVPR 510–517 (2012).
  20. Li, X. W., Kim, S. T. & Wang, Q. H. Copyright protection for elemental image array by hypercomplex Fourier transform and an adaptive texturized holographic algorithm. Opt. Express 25(15), 17076 (2017).


Acknowledgements

This work is supported by National Key R&D Program of China under Grant No. 2017YFB1002900, National Natural Science Foundation of China under Grant No. 61927809 and 61975139.

Author information


Contributions

S.-B.L., L.L. and Q.-H.W. wrote the main manuscript text, R.-Y.Y., J.-H.W. and W.-X.Z. prepared Figs. 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11. All authors reviewed the manuscript.

Corresponding authors

Correspondence to Lei Li or Qiong-Hua Wang.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

Appendix

We utilize SSIM as quantitative evaluators in this paper. The SSIM is defined as:

$$ SSIM(x,y) = \frac{{\left( {2u_{x} u_{y} + c_{1} } \right)\left( {2\sigma_{xy} + c_{2} } \right)}}{{\left( {u_{x}^{2} + u_{y}^{2} + c_{1} } \right)\left( {\sigma_{x}^{2} + \sigma_{y}^{2} + c_{2} } \right)}}, $$
(6)
$$ c_{1} = \left( {k_{1} L} \right)^{2} ,c_{2} = \left( {k_{2} L} \right)^{2} , $$
(7)

where \(u_{x}\) and \(u_{y}\) are the averages of x and y, \(\sigma_{x}^{2}\) and \(\sigma_{y}^{2}\) are their variances, \(\sigma_{xy}\) is the covariance of x and y, L is the dynamic range of the pixel values, and \(k_{1} = 0.01\) and \(k_{2} = 0.03\) by default. The SSIM value ranges from 0 to 1, where SSIM = 1 means the two images are completely identical.
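Equations (6) and (7) transcribe directly into NumPy. The sketch below computes SSIM over the whole image as a single window; the standard SSIM averages this statistic over local sliding windows, which we omit here for brevity:

```python
import numpy as np

def ssim(x, y, L=255.0, k1=0.01, k2=0.03):
    """Global (single-window) SSIM of two equally sized grayscale arrays,
    following Eqs. (6) and (7)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2          # Eq. (7)
    ux, uy = x.mean(), y.mean()                    # means
    vx, vy = x.var(), y.var()                      # variances
    cov = ((x - ux) * (y - uy)).mean()             # covariance
    return ((2 * ux * uy + c1) * (2 * cov + c2)) / \
           ((ux ** 2 + uy ** 2 + c1) * (vx + vy + c2))  # Eq. (6)
```

Comparing an image with itself yields exactly 1, which is the check used in the far-field experiment.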

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Liu, SB., Wang, JH., Yuan, RY. et al. Real-time and ultrahigh accuracy image synthesis algorithm for full field of view imaging system. Sci Rep 10, 12389 (2020). https://doi.org/10.1038/s41598-020-69353-9

