One of the challenges in differentiating a duplicate hologram from an original one is reflectivity. A slight change in lighting conditions completely changes the reflection pattern exhibited by a hologram; consequently, a standardized duplicate-hologram detector has not yet been created. In this study, a portable and low-cost housing module based on a snapshot hyperspectral imaging (HSI) algorithm was proposed for differentiating between original and duplicate holograms. The module consisted of a Raspberry Pi 4 processor, a Raspberry Pi camera, a display, and a light-emitting diode lighting system with a dimmer. A visible HSI algorithm that converts an RGB image captured by the Raspberry Pi camera into a hyperspectral image was established. A specific region of interest was selected from the spectral image, and its mean gray value (MGV) and reflectivity were measured. The results suggest that shorter wavelengths are the most suitable for differentiating holograms when MGV is used as the classification parameter, whereas longer wavelengths are the most suitable when reflectivity is used. The key features of this design are its low cost, simplicity, lack of moving parts, and freedom from an additional decoding key.
Various optical systems have been developed for material inspection, including spectral imaging1,2,3, optical coherence tomography (OCT)4, photoacoustic (PA) imaging5, laser scanning micro-profilometry6, ultraviolet–visible (UV–Vis) spectroscopy7, optical fiber sensors8, and scanning microscopy9. In OCT, low-coherence light is employed to obtain 3D images from within optically scattering media10,11. PA imaging uses optical illumination and ultrasound detection to visualize optical absorption, which is frequently related to the properties of an object12,13. In UV–Vis spectroscopy, the absorption or reflection properties of a material are compared across the ultraviolet and visible bands of the electromagnetic spectrum14,15. In scanning microscopy, accelerated electrons are used in place of light as the illumination source16,17. Most of these optical systems have been employed to detect fraud, including counterfeit currencies, pharmaceutical drugs, documents, and artwork. However, one application in which optical systems are not widely used is duplicate hologram detection and classification. Holograms, also known as diffractive optically variable image devices, are optically variable objects18,19: they change appearance when viewed from a different angle or under different lighting. Designing an optical system that can detect and classify arbitrary holograms is therefore challenging and expensive20, and the portability of such a system must also be considered. One method that can overcome these challenges is hyperspectral imaging (HSI), an evolving spectrometric technology that combines statistical and heuristic analysis21,22. It is a nondestructive technique that examines a broad spectrum of light rather than merely the primary colors, i.e., red, green, and blue (RGB), in the pixels of an image23,24.
HSI has been used in various fields and applications, such as cancer detection25, air pollution monitoring26, remote sensing27, agriculture28, astronomy29, quality control30, environmental monitoring31, and vegetation classification32. In an HSI image, every pixel contains not only the primary colors but also absorption and reflectance data33,34. Each pixel includes spectral information, so the data form a 3D volume over a 2D image35; this structure is known as the hyperspectral data cube22. HSI data are conventionally sampled spectrally at more than 20 equally distributed wavelengths. HSI can also be extended beyond the visible spectrum (VIS)36 into the near-infrared37 and far-infrared38 regions.
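The data-cube structure described above can be sketched with a toy NumPy array; the dimensions here are purely illustrative, not those of the module's camera:

```python
import numpy as np

# A hyperspectral data cube: two spatial dimensions plus one spectral dimension.
# Here: a 100x100-pixel image sampled at 31 wavelengths (400-700 nm in 10 nm steps).
wavelengths = np.arange(400, 701, 10)          # 31 spectral bands
cube = np.zeros((100, 100, wavelengths.size))  # (rows, cols, bands)

# Every spatial pixel carries a full reflectance spectrum:
pixel_spectrum = cube[50, 50, :]               # shape (31,)

# A single narrowband image is one spectral slice of the cube:
band_550nm = cube[:, :, np.where(wavelengths == 550)[0][0]]  # shape (100, 100)
```

An ordinary RGB image, by contrast, carries only three such slices per pixel.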
At present, nearly all HSI applications are restricted to research laboratories because the instruments are heavy, expensive, and laborious to use35. Sumriddetchkajorn and Intaravanne demonstrated the ability of HSI to verify the authenticity of a credit card by analyzing the hologram printed on it1. They obtained different color spectra of the hologram by illuminating the credit card with white light at different angles. However, the instrument they designed is expensive and not portable. Another study built a portable cylindrical light module that can be attached to a mobile device to verify holograms on currency notes39. Although this instrument is portable, it can verify only a small hologram. Moreover, camera quality varies among the phone models used to acquire the image, which can considerably affect the analysis results. A major goal of HSI research is therefore to make HSI more affordable, user-friendly, and compact.
Therefore, the current study proposes and demonstrates a portable and low-cost HSI-based housing module to differentiate between original and duplicate holograms. The system consists of a Raspberry Pi 4 processor, a Raspberry Pi version 2 camera module, a thin-film-transistor (TFT) display, a light-emitting diode (LED) lighting system, and an LED dimmer. A VIS-HSI technology is also developed to convert the RGB values captured by the camera into a hyperspectral image. A region of interest (ROI) is selected, and the mean gray value (MGV) acquired from four original holograms is used as a reference and compared with that obtained from four duplicate holograms. A 98% confidence interval (CI) is formed around the MGV at every wavelength and used as the classification criterion. The results demonstrate that the four duplicate holograms are easily differentiated from the original ones. Moreover, the reflectance data of the ROI are measured, indicating that the original and duplicate holograms have different reflection patterns at longer wavelengths.
Results and discussion
In this study, a Python-based Windows application is developed to capture the real-time image from the Raspberry Pi camera. One specific ROI is extracted from the image, which is converted into a hyperspectral image. Finally, the hologram is classified as either an original or a duplicate based on MGV. The ROI consists of the fourth and fifth letters in the word “SECURITY,” i.e., “UR.” This region is selected because it is placed at the center of the hologram and is thus easily accessible, as shown in Fig. 1. The ROI has a height of 0.6 mm and a width of 0.9 mm, for a total of 33,810 pixels. This research is a pilot study; hence, only four duplicate and four original holograms, provided by K Laser Technology Inc., are used for the numerical analysis. For the ROI, the MGV is measured in the VIS-HSI band between 400 and 700 nm. MGV is the average brightness of all the pixels in the image40,41; in an 8-bit image, the gray value of a pixel lies between 0 and 255. Many previous studies have used MGVs for image classification and detection because MGV-based classification has proven reliable40,42,43,44. Figure 2 shows the means of the duplicate and original samples between 400 and 700 nm (see Supplement 1 Sect. 3 for the detailed plot of all the samples).
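The MGV computation itself reduces to a pixel average over the cropped ROI; a minimal sketch (the ROI values below are illustrative, not measured data):

```python
import numpy as np

def mean_gray_value(roi):
    """Mean gray value (MGV): the average brightness of all pixels in an ROI.
    `roi` is a 2D array of 8-bit gray levels (0-255), e.g. one narrowband
    slice of the hyperspectral image cropped to the 'UR' region."""
    return np.asarray(roi, dtype=np.float64).mean()

# Toy 2x2 ROI (illustrative values only):
roi = np.array([[120, 130],
                [110, 140]])
print(mean_gray_value(roi))  # -> 125.0
```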
The original and duplicate holograms can be easily differentiated. The RMSE between the class-mean MGVs in the shorter wavelengths between 400 and 500 nm is 3.4664, and that in the longer wavelengths between 600 and 700 nm is 3.1126, whereas the RMSE in the middle wavelengths is 5.4049. Based on RMSE, therefore, the middle wavelengths between 500 and 600 nm provide the greatest class separation and are the most suitable for the classification of holograms (see Supplement 1 Sect. 2 for the entropy measurement). The samples are classified into two classes: original and duplicate holograms. As shown in Fig. 3, a 98% CI is also calculated around the average MGVs of both classes at each wavelength. The CI represents the range, at that wavelength, within which the MGV of a sample belonging to that class will probably fall. Although 95% is the most commonly used CI, a 98% CI can be used here because the MGVs of the samples fall within a narrow range. On this basis, a hologram is classified as original if its MGV falls within the original class's interval; otherwise, it is classified as duplicate. Although the developed method can measure MGVs from 400 to 780 nm, the MGVs of the duplicate and original holograms become similar beyond 700 nm, overlapping within the 98% CI. Hence, only the MGVs between 400 and 700 nm were considered in this study.
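The CI-based rule can be sketched as follows, assuming a normal-theory interval around each class mean (an assumption for illustration; with only four reference holograms a t-based interval would be slightly wider, and the exact interval construction used in the study may differ):

```python
import statistics
import numpy as np

def ci_bounds(mgvs, confidence=0.98):
    """Confidence interval around the class-mean MGV at one wavelength.
    `mgvs` holds one MGV per reference hologram of that class."""
    mgvs = np.asarray(mgvs, dtype=np.float64)
    sem = mgvs.std(ddof=1) / np.sqrt(mgvs.size)        # standard error of the mean
    z = statistics.NormalDist().inv_cdf((1 + confidence) / 2)
    return mgvs.mean() - z * sem, mgvs.mean() + z * sem

def classify(mgv, original_mgvs, duplicate_mgvs):
    """Assign a sample to whichever class's 98% CI contains its MGV."""
    lo, hi = ci_bounds(original_mgvs)
    if lo <= mgv <= hi:
        return "original"
    lo, hi = ci_bounds(duplicate_mgvs)
    if lo <= mgv <= hi:
        return "duplicate"
    return "unclassified"
```

In practice one interval pair is built per narrowband wavelength, and the sample's MGV at the user-selected wavelength is tested against both.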
The reflectance of the center pixel of every sample is also calculated using the VIS-HSI algorithm developed in this study. Figure 3 shows the mean reflectance spectra of the duplicate and original holograms in the visible spectrum (see Supplement 1 Sect. 3 for individual plots of every sample). At longer wavelengths, specifically beyond 580 nm, a clear difference in reflectance exists between the original and duplicate samples. This result is consistent with previous image classification studies using HSI45,46,47, in which classifying images is easier at long wavelengths than at middle or short wavelengths. Lim and Murukeshan calculated the reflectance spectra of original and duplicate banknotes48 and observed a difference only at longer wavelengths, above 550 nm. Qin et al. measured the reflectance spectra of grapefruit samples with normal and diseased skin conditions49; a clear difference in the reflectance data existed only between 600 and 800 nm, again in the longer-wavelength region. Although both the MGVs and the reflectance values can be used to classify the holograms, MGV is used here because its between-class RMSE is greatest in the 500–550 nm range. Classification performance could be tuned further by adjusting the CI level (e.g., to 90%).
In this study, a stand-alone Python-based Windows application is also developed to classify holograms. To capture images, the Raspberry Pi web camera interface is installed in the Raspberry Pi operating system. The live feed of the Raspberry Pi camera can be accessed directly from the Windows application by clicking the “Start” button, and the camera can be turned off by clicking the “Stop” button. The “Capture” button captures the current image to be used for classification. The application displays the frame rate (frames per second) and the frame number. The user inputs parameters, such as the narrowband wavelength at which the sample will be analyzed and the gain value specific to that narrowband, and confirms them by clicking the “Ok” button. Once the “Analyze” button is clicked, the application converts the RGB image captured by the Raspberry Pi camera into a hyperspectral image and crops the ROI from the image. The MGV of the sample is then calculated, and the sample is classified into its respective class based on the narrowband wavelength input by the user. The final result is displayed at the bottom of the application as either “The Hologram is Original” or “The Hologram is Counterfeit” (see Supplement 1 Fig. S10 for the application interface).
One of the challenges in differentiating between duplicate and original holograms is reflection. A minor change in lighting angle creates an entirely different reflection pattern. Holograms can be differentiated by capturing and analyzing the reflection data from different incident angles; however, the reflection pattern will also vary under different light sources, so the light source must be kept constant. As shown in Fig. 4, a slight change in lighting conditions can lead to a drastic change in the reflection in the final image. Hence, the module must be designed to block external light, and the lighting conditions must be identical for all holograms.
A module that accommodates all the electronic components and blocks external light noise was therefore designed in the current study. The design was prepared in Ultimaker Cura 3 and 3D printed; the final product is presented in Fig. 5. The critical design factors were minimizing the number of optoelectronic components, reducing the size of the design, and optimizing the positioning of the components. Hyperspectral- and multispectral-imaging methods typically require a spectrometer, an optical head, or a multispectral LED board, making the module costly and fixed in place. Hence, the number of components must be reduced to build a low-cost and compact module, and only a minimum number of components are utilized in the current study. The whole module can be divided into two units: the processing system and the optical system. The processing unit consists of three components: a Raspberry Pi 4 microprocessor, a Raspberry Pi camera module version 2, and a TFT display unit. The optical system consists of an LED strip, a diffuser, and an LED dimmer (for the detailed specifications of these components, refer to Supplement 1 Sect. 1).
The schematic of the imaging system is presented in Fig. 6. The Raspberry Pi 4 Model B computer is used for processing, while the Raspberry Pi camera module version 2 captures the images of a hologram. The processor is connected to an Adafruit PiTFT 320 × 240 2.8″ touch screen to control the processing unit. The optical system consists of a 3000 K chip-on-board (COB) LED strip light, which is fixed onto the housing and connected to an LED dimmer switch to control brightness. The COB LED strip has a uniform spectral response (no cyan dip) across the blue, green, and red regions of the spectrum. However, its illumination is not spatially even; it acts as a point light source. An opal profile diffuser is therefore used to reduce the transmittance rate and distribute the light evenly.
Snapshot-based RGB to HSI conversion algorithm
The core concept of the VIS-HSI technology is to endow a common digital camera with the function of a spectrometer, i.e., the image captured by the camera contains spectral information. To achieve this, a relationship matrix between the camera and the spectrometer must be established; the construction process is illustrated in Fig. 7. First, a camera (Raspberry Pi Camera) and a spectrometer (Ocean Optics QE65000) are provided with multiple common targets as references for the analysis. In the current study, a standard 24-color checker (X-Rite Classic) is selected as the target because it contains the most important colors (blue, green, red, and gray) and other common colors found in nature. To correct camera errors, such as those caused by inaccurate white balance, the standard 24-color card is measured with both the camera and the spectrometer to obtain 24 color-patch images (sRGB, 8 bit) and 24 color-patch reflectance spectra, respectively. Then, the 24-color patch images and the 24-color patch reflectance spectra are converted into the CIE 1931 XYZ color space (for the individual conversion formulae, see Supplement 1 Sect. 4).
In the camera, the image (JPEG, 8 bit) is stored according to the sRGB color-space specification. Before an image is converted from the sRGB color space into the XYZ color space, the R, G, and B values (0–255) must be rescaled to the range 0–1, after which the sRGB values are converted into linear RGB values through inverse gamma conversion. Finally, a conversion matrix maps the linear RGB values into XYZ values normalized in the XYZ gamut space. On the spectrometer side, the reflection spectrum data are converted into the XYZ color space using the XYZ color-matching functions and the light-source spectrum. Because the Y value of the XYZ space is proportional to brightness, the maximum brightness of the light-source spectrum is calculated and the Y value is standardized to 100 to obtain the brightness ratio (k). Finally, the reflection spectrum data are converted into XYZ values (XYZSpectrum) normalized in the XYZ color space. Multiple regression is then performed to obtain the correction coefficient matrix C for calibrating the camera, as shown in Eq. (1). The variable matrix V is constructed from the factors that may cause errors in the camera, such as nonlinear response, dark current, inaccurate color separation of the color filter, and color shifting.
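The camera-side conversion (rescaling, inverse gamma, linear RGB to XYZ) follows the standard sRGB specification rather than anything specific to this study; a sketch:

```python
import numpy as np

# Standard sRGB (D65) linear-RGB-to-XYZ matrix from the sRGB specification.
M_RGB2XYZ = np.array([[0.4124, 0.3576, 0.1805],
                      [0.2126, 0.7152, 0.0722],
                      [0.0193, 0.1192, 0.9505]])

def srgb_to_xyz(rgb8):
    """rgb8: 8-bit sRGB values (0-255). Returns XYZ with Y scaled to 0-100."""
    c = np.asarray(rgb8, dtype=np.float64) / 255.0     # rescale to 0-1
    # Inverse gamma: sRGB electro-optical transfer function
    lin = np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)
    return 100.0 * M_RGB2XYZ @ lin                     # linear RGB -> XYZ

print(srgb_to_xyz([255, 255, 255]))  # ~ [95.05, 100.0, 108.9] (D65 white)
```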
Once the camera is calibrated, the corrected X, Y, and Z values (XYZCorrect) can be obtained using Eq. (2). The average root-mean-square error (RMSE) between XYZCorrect and XYZSpectrum is 0.19, which is negligible in this case. The conversion matrix M is then obtained from the reflection spectrum data (RSpectrum) of the 24 color patches measured with the spectrometer: the important principal components are identified by performing principal component analysis (PCA) on RSpectrum, multiple regression is performed on the extracted principal-component scores, and the results are combined with those obtained previously to determine M.
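The correction matrix C of Eq. (1) can be estimated by least squares over the 24 patches; in this sketch the terms in the variable vector are an illustrative polynomial expansion of the camera XYZ values, not necessarily the exact set used in the study:

```python
import numpy as np

def variable_vector(x, y, z):
    """Illustrative error-model terms (an assumed expansion, not the published V)."""
    return np.array([x, y, z, x*y, y*z, x*z, x*x, y*y, z*z, x*y*z, 1.0])

def fit_correction_matrix(xyz_camera, xyz_spectrum):
    """Least-squares estimate of C in XYZ_spectrum ≈ C @ V (Eq. (1)).
    xyz_camera, xyz_spectrum: (24, 3) arrays for the 24 color patches."""
    V = np.column_stack([variable_vector(*p) for p in xyz_camera])   # (11, 24)
    # Solve V^T C^T = XYZ_spectrum in the least-squares sense:
    Ct, *_ = np.linalg.lstsq(V.T, np.asarray(xyz_spectrum), rcond=None)
    return Ct.T                                                      # (3, 11)

def correct_xyz(xyz_cam, C):
    """Eq. (2): corrected XYZ for one camera measurement."""
    return C @ variable_vector(*xyz_cam)
```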
To convert XYZCorrect into RSpectrum, the dimensionality of RSpectrum must be reduced to increase the correlation between each dimension and XYZCorrect. Therefore, RSpectrum is analyzed via PCA to obtain the eigenvectors. Six principal components are used in the dimensionality reduction because they explain 99.64% of the data variability. The corresponding principal-component scores are then used in a regression analysis with XYZCorrect. In the multivariate regression of XYZCorrect against the scores, the variable VColor is selected because it contains all the listed possible combinations of X, Y, and Z. The transformation matrix M is obtained using Eq. (3), and XYZCorrect is then passed through Eq. (4) to calculate the analogue spectrum (SSpectrum). Finally, the 24-color-patch analogue spectra (SSpectrum) are compared with the measured 24-color-patch reflection spectra. The RMSE of each color patch is calculated; the average error is 0.056, which is negligible. The average color difference between the analogue and measured spectra is 0.75, small enough to be visually indistinguishable, so the reproduced reflection-spectrum color is accurate. The VIS-HSI technology constructed through this process can thus simulate the reflection spectrum from the RGB values captured by a monocular camera to obtain VIS hyperspectral images.
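The PCA-plus-regression step (Eqs. (3) and (4)) can be sketched as follows; the combinations of X, Y, and Z used for VColor below are illustrative assumptions, not the exact published set:

```python
import numpy as np

def build_transform(R_spectrum, xyz_correct, n_pc=6):
    """Fit the transformation matrix M (Eq. (3)).
    R_spectrum: (24, n_wavelengths) patch reflectance spectra.
    xyz_correct: (24, 3) calibrated camera XYZ values."""
    mean_spec = R_spectrum.mean(axis=0)
    # PCA via SVD of the mean-centered spectra; keep n_pc eigenvectors.
    _, _, Vt = np.linalg.svd(R_spectrum - mean_spec, full_matrices=False)
    eigvecs = Vt[:n_pc]                                   # (6, n_wavelengths)
    scores = (R_spectrum - mean_spec) @ eigvecs.T         # (24, 6) PC scores
    # Illustrative VColor: X, Y, Z, their squares, their product, and a constant.
    V_color = np.column_stack([xyz_correct,
                               xyz_correct**2,
                               np.prod(xyz_correct, axis=1, keepdims=True),
                               np.ones((len(xyz_correct), 1))])   # (24, 8)
    M, *_ = np.linalg.lstsq(V_color, scores, rcond=None)          # Eq. (3)
    return M, eigvecs, mean_spec

def simulate_spectrum(xyz, M, eigvecs, mean_spec):
    """Eq. (4): analogue reflectance spectrum from one corrected XYZ value."""
    xyz = np.asarray(xyz, dtype=np.float64)
    v = np.concatenate([xyz, xyz**2, [np.prod(xyz)], [1.0]])
    return mean_spec + (v @ M) @ eigvecs
```

Applying `simulate_spectrum` pixel by pixel turns the calibrated RGB image into the hyperspectral cube.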
By using this method, an RGB image captured by a digital camera can be converted into an HSI image without a spectrometer, an optical head, or a hyperspectral camera. Eliminating these components makes the module designed in this study low-cost and highly portable while still able to differentiate between duplicate and original holograms.
In this study, a portable and low-cost module has been designed to capture, classify, and detect duplicate holograms. The design blocks external light noise, which would otherwise cause uneven reflection patterns, and provides an even light distribution across the surface of the hologram. A VIS-HSI algorithm has also been built to convert the RGB images captured by the Raspberry Pi camera into hyperspectral images. A region of interest is selected from the hyperspectral image, its mean gray value is measured, and a 98% confidence interval is calculated in the visible band between 400 and 700 nm. Based on the MGVs, the holograms are classified as either original or duplicate. Finally, a stand-alone Python-based Windows application has been built to control the Raspberry Pi microprocessor and access the real-time feed of the Raspberry Pi camera housed in the portable, low-cost module. Based on the user-defined narrowband wavelength, the application analyzes the hologram and displays the class to which the sample belongs. Future work will apply the same methodology to distinguish counterfeit currency from genuine currency. The design could also be extended with an NIR-HSI conversion algorithm to develop a low-cost NIR-HSI module.
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
Sumriddetchkajorn, S. & Intaravanne, Y. Hyperspectral imaging-based credit card verifier structure with adaptive learning. Appl. Opt. 47, 6594–6600 (2008).
Polak, A. et al. Hyperspectral imaging combined with data classification techniques as an aid for artwork authentication. J. Cult. Herit. 26, 1–11 (2017).
Lim, H.-T. & Murukeshan, V. M. Hyperspectral imaging of polymer banknotes for building and analysis of spectral library. Opt. Lasers Eng. 98, 168–175 (2017).
Marques, M. J. et al. Sub-surface characterisation of latest-generation identification documents using optical coherence tomography. Sci. Justice 61, 119–129 (2021).
Dal Fovo, A., Tserevelakis, G. J., Klironomou, E., Zacharakis, G. & Fontana, R. First combined application of photoacoustic and optical techniques to the study of an historical oil painting. Eur. Phys. J. Plus 136, 757 (2021).
Zhang, H. et al. Materials and technologies to combat counterfeiting of pharmaceuticals: Current and future problem tackling. Adv. Mater. 32, 1905486 (2020).
Martins, A. R., Talhavini, M., Vieira, M. L., Zacca, J. J. & Braga, J. W. B. Discrimination of whisky brands and counterfeit identification by UV–Vis spectroscopy and multivariate data analysis. Food Chem. 229, 142–151 (2017).
Kang, D.-H. & Hong, J.-H. A study about the discrimination of counterfeit 50,000 won bills using optical fiber sensor. J. Korean Soc. Manuf. Technol. Eng. 21, 15–20 (2012).
Shaffer, D.K. Forensic document analysis using scanning microscopy, in Proceedings of the Scanning Microscopy 2009. 398–408 (2009).
Peng, C. et al. Fingerprint anti-counterfeiting method based on optical coherence tomography and optical micro-angiography. Acta Photonica Sin. 48, 0611001 (2019).
Marques, M. J., Green, R., King, R., Clement, S., Hallett, P., & Podoleanu, A. Non-destructive identification document inspection with swept-source optical coherence tomography imaging, in Proceedings of the European Conference on Biomedical Optics. EW4A. 6 (2021).
Gao, R., Xu, Z., Ren, Y., Song, L. & Liu, C. Nonlinear mechanisms in photoacoustics: Powerful tools in photoacoustic imaging. Photoacoustics 22, 100243 (2021).
Hosseinaee, Z., Le, M., Bell, K. & Reza, P. H. Towards non-contact photoacoustic imaging. Photoacoustics 20, 100207 (2020).
Saif, F., Yaseen, S., Alameen, A., Mane, S. & Undre, P. Identification and characterization of Aspergillus species of fruit rot fungi using microscopy, FT-IR, Raman and UV–Vis spectroscopy. Spectrochim. Acta A Mol. Biomol. Spectrosc. 246, 119010 (2021).
Schilling, C. & Hess, C. Real-time observation of the defect dynamics in working Au/CeO2 catalysts by combined operando Raman/UV–Vis spectroscopy. J. Phys. Chem. C 122, 2909–2917 (2018).
Mukundan, A., Tsao, Y.-M., Artemkina, S. B., Fedorov, V. E. & Wang, H.-C. Growth mechanism of periodic-structured MoS2 by transmission electron microscopy. Nanomaterials 12, 135 (2022).
Mukundan, A. et al. Optical and material characteristics of MoS2/Cu2O sensor for detection of lung cancer cell types in hydroplegia. Int. J. Mol. Sci. 23, 4745 (2022).
Michaloudis, I., Kanamori, K., Pappa, I. & Kehagias, N. U (rano) topia: Spectral skies and rainbow holograms for silica aerogel artworks. J. Sol-Gel Sci. Technol. 5, 1–12 (2022).
Bessmel’tsev, V., Vileiko, V. & Maksimov, M. Method for measuring the main parameters of digital security holograms for expert analysis and real-time control of their quality. Optoelectron. Instrum. Data Process. 56, 122–133 (2020).
Ay, B. Open-set learning-based hologram verification system using generative adversarial networks. IEEE Access 10, 25114–25124 (2022).
Jiménez-Carvelo, A. M., Martin-Torres, S., Cuadros-Rodríguez, L. & González-Casado, A. 6-Nontargeted fingerprinting approaches. In Food Authentication and Traceability (ed. Galanakis, C. M.) 163–193 (Academic Press, 2021).
Vasefi, F., MacKinnon, N. & Farkas, D. L. Chapter 16—hyperspectral and multispectral imaging in dermatology. In Imaging in Dermatology (eds Hamblin, M. R. et al.) 187–201 (Academic Press, 2016).
Khan, M. H. et al. Hyperspectral imaging-based unsupervised adulterated red chili content transformation for classification: Identification of red chili adulterants. Neural Comput. Appl. 33, 14507–14521 (2021).
Faltynkova, A., Johnsen, G. & Wagner, M. Hyperspectral imaging as an emerging tool to analyze microplastics: A systematic review and recommendations for future development. Microplast. Nanoplast. 1, 1–19 (2021).
Tsai, C.-L. et al. Hyperspectral imaging combined with artificial intelligence in the early detection of esophageal cancer. Cancers 13, 4593 (2021).
Chen, C.-W., Tseng, Y.-S., Mukundan, A. & Wang, H.-C. Air pollution: sensitive detection of PM2.5 and PM10 concentration using hyperspectral imaging. Appl. Sci. 11, 4543 (2021).
Hou, W. et al. An algorithm for hyperspectral remote sensing of aerosols: 3. Application to the GEO-TASO data in KORUS-AQ field campaign. J. Quant. Spectrosc. Radiat. Transf. 253, 107161 (2020).
Lu, B., Dao, P. D., Liu, J., He, Y. & Shang, J. Recent advances of hyperspectral imaging technology and applications in agriculture. Remote Sens. 12, 2659 (2020).
Mukundan, A., Patel, A., Saraswat, K.D., Tomar, A., Kuhn, T. Kalam Rover, in AIAA SCITECH 2022 Forum.
Lu, Y., Saeys, W., Kim, M., Peng, Y. & Lu, R. Hyperspectral imaging technology for quality and safety evaluation of horticultural products: A review and celebration of the past 20-year progress. Postharvest Biol. Technol. 170, 111318 (2020).
Stuart, M. B., McGonigle, A. J. & Willmott, J. R. Hyperspectral imaging in environmental monitoring: A review of recent developments and technological advances in compact field deployable systems. Sensors 19, 3071 (2019).
Ishida, T. et al. A novel approach for vegetation classification using UAV-based hyperspectral imaging. Comput. Electron. Agric. 144, 80–85 (2018).
Bishop, M. P. & Giardino, J. R. 1.01—Technology-driven geomorphology: Introduction and overview. In Treatise on Geomorphology 2nd edn (ed. Shroder, J. F.) 1–17 (Academic Press, 2022).
Ozdemir, A. & Polat, K. Deep learning applications for hyperspectral imaging: A systematic review. J. Inst. Electron. Comput. 2, 39–56 (2020).
Schneider, A. & Feussner, H. Chapter 5—Diagnostic procedures. In Biomedical Engineering in Gastrointestinal Surgery (eds Schneider, A. & Feussner, H.) 87–220 (Academic Press, 2017).
Özdoğan, G., Lin, X. & Sun, D.-W. Rapid and noninvasive sensory analyses of food products by hyperspectral imaging: Recent application developments. Trends Food Sci. Technol. 111, 151–165 (2021).
Chandrasekaran, I., Panigrahi, S. S., Ravikanth, L. & Singh, C. B. Potential of near-infrared (NIR) spectroscopy and hyperspectral imaging for quality and safety assessment of fruits: An overview. Food Anal. Methods 12, 2438–2458 (2019).
Dong, X. et al. A review of hyperspectral imaging for nanoscale materials research. Appl. Spectrosc. Rev. 54, 285–305 (2019).
Soukup, D. & Huber-Mörk, R. Mobile hologram verification with deep learning. IPSJ Trans. Comput. Vis. Appl. 9, 1–6 (2017).
Guerriero, S. et al. Tissue characterization using mean gray value analysis in deep infiltrating endometriosis. Ultrasound Obstet. Gynecol. 41, 459–464 (2013).
Arslan, H., Ozcan, U. & Durmus, Y. Evaluation of mean gray values of a cat with chronic renal failure: Case report. Arquivo Brasileiro de Medicina Veterinária e Zootecnia 73, 438–444 (2021).
Alcázar, J. L., León, M., Galván, R. & Guerriero, S. Assessment of cyst content using mean gray value for discriminating endometrioma from other unilocular cysts in premenopausal women. Ultrasound Obstet. Gynecol. 35, 228–232 (2010).
Lakshmanaprabu, S., Mohanty, S. N., Shankar, K., Arunkumar, N. & Ramirez, G. Optimal deep learning model for classification of lung cancer on CT images. Future Gener. Comput. Syst. 92, 374–382 (2019).
Frighetto-Pereira, L., Menezes-Reis, R., Metzner, G.A., Rangayyan, R.M., Azevedo-Marques, P.M., & Nogueira-Barbosa, M.H. Semiautomatic classification of benign versus malignant vertebral compression fractures using texture and gray-level features in magnetic resonance images, in Proceedings of the 2015 IEEE 28th International Symposium on Computer-Based Medical Systems. 88–92 (2015).
Li, J., Rao, X. & Ying, Y. Detection of common defects on oranges using hyperspectral reflectance imaging. Comput. Electron. Agric. 78, 38–48 (2011).
Deng, X., Huang, Z., Zheng, Z., Lan, Y. & Dai, F. Field detection and classification of citrus Huanglongbing based on hyperspectral reflectance. Comput. Electron. Agric. 167, 105006 (2019).
Sun, Y., Wei, K., Liu, Q., Pan, L. & Tu, K. Classification and discrimination of different fungal diseases of three infection levels on peaches using hyperspectral reflectance imaging analysis. Sensors 18, 1295 (2018).
Lim, H.-T., Matham, M.V. Instrumentation challenges of a pushbroom hyperspectral imaging system for currency counterfeit applications, in Proceedings of the International Conference on Optical and Photonic Engineering (icOPEN 2015). 658–664 (2015).
Qin, J., Burks, T. F., Kim, M. S., Chao, K. & Ritenour, M. A. Citrus canker detection using hyperspectral reflectance imaging and PCA-based image classification method. Sens. Instrum. Food Qual. Saf. 2, 168–177 (2008).
This research was supported by the National Science and Technology Council, The Republic of China under the Grants NSTC 111-2221-E-194-007. This work was financially/partially supported by the Advanced Institute of Manufacturing with High-tech Innovations (AIM-HI) and the Center for Innovative Research on Aging Society (CIRAS) from The Featured Areas Research Center Program within the framework of the Higher Education Sprout Project by the Ministry of Education (MOE), and Kaohsiung Armed Forces General Hospital research project MAB108-091 in Taiwan.
The authors declare no competing interests.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Mukundan, A., Tsao, YM., Lin, FC. et al. Portable and low-cost hologram verification module using a snapshot-based hyperspectral imaging algorithm. Sci Rep 12, 18475 (2022). https://doi.org/10.1038/s41598-022-22424-5