Abstract
Railway transportation has experienced significant growth worldwide, offering numerous benefits to society. Because most railway accidents are caused by wheelset faults, monitoring wheelset conditions is essential. Doing so requires collecting wheelset images, inpainting them, extracting laser stripe centerlines, constructing 3D contours, and measuring geometric parameters to assess the wheelset's condition. Deep learning can fulfill these tasks effectively because it is more adaptable, robust, and generalizable than traditional methods. Training deep learning models, however, requires rich, high-quality wheelset datasets. To date, no applicable public train wheelset dataset is available, which greatly hinders research on train wheelsets. We therefore construct a publicly available Wheelset Laser Image Dataset (WLI-Set). WLI-Set consists of four sub-datasets: Original, Inpainting, Segmentation, and Centerline. The dataset contains abundant annotated multi-line laser stripe images that can effectively facilitate research on train wheelsets.
Background & Summary
Railway transportation has experienced rapid growth in recent decades, significantly benefiting society1. According to statistics from the US Department of Transportation, the railway freight volume in the United States reached 26 trillion ton-miles in 2019. From 2006 to 2014, there was a remarkable 50% growth in Australian railway freight2. Since 2008, mainland China has constructed no fewer than 37,900 km of high-speed railway lines crisscrossing the country. The network is expanding rapidly and is expected to double again by 20353.
With the development of railway construction, the freight capacity of railways has been significantly enhanced, and freight transport is progressively shifting towards heavy-haul trains. Investigations indicate that the wheelsets of heavy-haul trains exhibit faster and more severe wear than those of regular trains. As the primary interface between the train and the track, wheelsets play a crucial role in maintaining stability. Wheelset-related failures, such as out-of-tolerance wheel dimensions, not only damage railway infrastructure and the vehicle structure itself but also pose significant hazards to railway transportation safety. The Eschede4 train derailment resulted in 101 fatalities and 88 injuries, making it the deadliest railway accident in Germany; the accident was caused by the rupture of one of the wheels due to metal fatigue, which ultimately led to the derailment of the train. In 2009, the United Kingdom (UK) rail infrastructure manager, Network Rail, spent £32 million on recovering from faults5.
By measuring the wheelsets' geometric parameters, such as the flange height and thickness shown in Fig. 1, the condition of wheel wear can be detected in time: when these parameter values exceed the safe range, the wheelset can be judged to have worn out. These parameters also indicate the wheelset's remaining lifecycle. Hence, real-time measurement of wheelset geometric parameters has become a research hotspot among experts and scholars. Conventional measurement methods6,7 rely on manual measurement and regular maintenance to assess the wheelsets' geometric parameters. However, this approach is labor-intensive, resource-consuming, and imprecise, making it inadequate for modern railway transportation. Currently, the most popular approach is image-based measurement. Specifically, these methods use industrial cameras to capture images of the wheelsets and employ image processing algorithms to refine the images; the geometric parameters of the wheelsets can then be decoupled from the refined images. Recent studies indicate that the most common way to capture wheelset images is with lasers and charge-coupled device (CCD) cameras. Dubnishchev et al.8 utilized the chord principle to measure wheelset axle support and wheel diameter with non-contact laser emitters and a camera. Zhang et al.9 achieved high-precision wheelset measurement with two-dimensional laser displacement sensors (2D-LDS). Zheng et al.10 employed three 1D-LDS instead of a 2D-LDS and applied the three-point principle for wheelset measurement.
However, the geometric parameters cannot be decoupled directly from the captured images. Because wheelset laser images are captured outdoors, diffuse reflection of light and environmental interference cause significant region adhesion (spots) and partial fractures in the laser stripe images. These imaging defects significantly affect the measurement of geometric parameters and the surface analysis of the wheelset, so spots and fractures must be removed before measurement can proceed. With the development of computer vision, researchers have started using deep learning for this task. Zhang et al.11 proposed a real-time system for measuring wheelset geometric parameters based on a laser sensor, a CCD camera, and image processing. Ji et al.12 proposed the Recurrent Similarity Mapping Network (RSM-Net) to restore captured laser stripe images and achieved excellent results on industrial images. Kim13 applied multiple successive image processing algorithms to measure the lateral displacement of wheelsets in a non-contact manner.
A trained deep learning model can repair wheelset laser images in real time, generating inpainted images from which geometric parameters can be calculated for analysis. Training deep learning models requires abundant datasets, which provide the vital foundation for image processing algorithms. However, to the best of our knowledge, there is currently no publicly available dataset of train wheelset laser images. Possible reasons include: (1) laser wheelset image data may contain sensitive information about railway systems; (2) building and maintaining such a dataset requires significant human and financial resources; and (3) specialized equipment and technical expertise are needed. Therefore, we have constructed the Wheelset Laser Image Dataset (WLI-Set), which consists of four subsets, each comprising 2710 laser stripe images of the wheelset. The dataset includes the original captured images, inpainted images, segmentation ground-truth images, and centerline images, supporting the complete processing pipeline for wheelset laser images.
Specifically, we adopt the three-point principle10 to capture wheelset laser images. Differing from Zheng, we use only two laser emitters and substitute a positioning sensor for the emitter directly below the wheel. The area-array CCD camera and its corresponding multi-line laser emitter are installed on both sides of the track. When a wheel passes the positioning sensor, the CCD cameras simultaneously take pictures containing multiple laser lines. The captured images have a resolution of 1280 × 1024 pixels with large blank areas. To improve image processing results, we cropped the images to 800 × 672 pixels; these serve as the original-image part of the dataset. The original images were then annotated in Photoshop. More specifically, our annotation team used pixel-level brushes and performed the following steps: (1) erased the light spots and subtracted the result from the original image to obtain the segmented spot defects, labeling their pixel values as 1; (2) repaired the broken light stripes and subtracted the result from the original image to obtain the segmented fracture defects, labeling their pixel values as 2; (3) added the two defect images together to form the defect segmentation ground-truth (GT) image. Following that, we conducted comparative captures under low-light conditions, ensuring strict consistency in train status, equipment, and installation position; this yielded a set of repaired images serving as GT for the complete wheelset laser stripes. Additionally, we used the industrial software HALCON to extract the centerline images of the laser stripes.
Methods
The dataset proposed in this paper consists of four parts: origin, label, inpainted, and centerline. The original, label, and inpainted images can be used to train and validate deep learning models for repairing wheelset images, while the centerline images can be used to test and compare the effectiveness of centerline extraction algorithms. These models and algorithms are the core components of a wheelset parameter measurement system. The process of constructing the dataset is illustrated in Fig. 2. We obtained the original images using industrial cameras, annotated the "fracture" and "spot" regions in the original images, and manually eliminated them to obtain the "inpainted" images. Finally, the centerline images were extracted from the inpainted images. Following the pipeline used to create the dataset, this paper introduces the production process in the following three parts.
Original image
We employed a real-time measurement approach by installing the camera systems on both the inside and outside of the track. When a wheel passes over the system, the sensors simultaneously trigger the inner and outer cameras to capture images, each camera taking a single image. This set of actions repeats for each subsequent wheel until all wheels have passed through the system. The camera system is primarily composed of three components: a laser emitter, a sensor, and an area-scan industrial camera. The industrial camera we use is the MV-CU120-10GM/GC(NPOE). This camera uses a Sony IMX304 CMOS chip, which offers low noise, high resolution, and excellent image quality, with a maximum frame rate of 9.4 fps at full resolution, making real-time transmission of uncompressed images possible. Before each capture session, we calibrated the camera's extrinsic and intrinsic parameters following the method in14. In Fig. 3, the area-array CCD camera of the wheelset monitoring equipment and its corresponding multi-line laser sensor array are installed on the left and right sides of the wheelset. The laser emitter emits multiple structured-light rays. When a wheel passes through, the rays project stripes onto the surface of the wheelset, and at that moment the CCD camera captures an image, recording a grayscale image of the multi-line laser stripes. When the entire train passes through the sensor equipment, these cameras quickly take hundreds of photos of the wheelset shape reflected by the multi-line laser and transmit them to the terminal device for real-time storage. However, a single camera can only capture one image as a wheel passes through, resulting in a low sampling frequency. To address this issue, we collected images of heavy-haul freight trains from multiple routes in China, including Fuzhou, Chengdu, Suzhou, and Shenyang, to ensure a sufficient dataset scale.
However, the captured images are 1024 × 1280 pixels, which is unnecessarily large and contains redundant blank black regions. As the useful feature information is concentrated in one region of the image, we cropped each photo to 800 × 672 pixels, retaining only the geometric and semantic features of the stripes.
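The cropping step above can be sketched in a few lines of NumPy. The paper does not specify the crop offsets, so the `top`/`left` defaults below are illustrative placeholders:

```python
import numpy as np

def crop_stripe_region(img, out_h=672, out_w=800, top=0, left=0):
    """Crop a 1024x1280 capture down to the 800x672 stripe region.

    The actual crop offsets are not given in the paper, so `top` and
    `left` are hypothetical defaults; in practice they would be chosen
    so the window covers the laser stripes.
    """
    assert img.shape[0] >= top + out_h and img.shape[1] >= left + out_w
    return img[top:top + out_h, left:left + out_w]

# Synthetic 1024x1280 grayscale frame standing in for a real capture.
frame = np.zeros((1024, 1280), dtype=np.uint8)
patch = crop_stripe_region(frame)
print(patch.shape)  # (672, 800)
```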
Defect label
Since the equipment is installed outdoors, the imaging process is affected by environmental interference and reflected light, resulting in images containing small laser-stripe smudges (spots) and missing segments (fractures). These imaging defects significantly impact the measurement of geometric parameters and the surface analysis of the wheelset, so it is necessary to remove them before calculating the geometric parameters. To help researchers train deep learning models capable of removing these imaging defects, we provide label images in our dataset. As marked in Fig. 4, we divide the defects into "spots" and "fractures". Our annotation methodology was carefully designed to capture stripe defects, and all images were annotated manually.
To ensure pixel-level annotation, we adopt a special annotation method that differs from most segmentation datasets: extract first, then annotate. We use Photoshop CS6 to erase spots and connect the fractures of the laser stripes. Photoshop provides brushes as small as one pixel. For large spots, we use a 10–15 pixel brush to apply the background color, and for the fine edges of spots, a 1-pixel brush. The width of a light stripe is between 5 and 13 pixels, and the width of a fracture is typically 8–9 pixels, so we use an 8-pixel brush to fill in the fractures. To ensure that spots are completely painted over and fractures are meticulously connected, we require annotators to spend no less than 3 minutes on each image. The annotated images then undergo unified quality control to ensure pixel-level label accuracy.
Subsequently, image subtraction, g(x, y) = |f1(x, y) − f2(x, y)|, was employed to subtract the processed image obtained from Photoshop (referred to as the PS image) from the original image. Here, (x, y) is the position of a pixel within an image, and f1, f2, and g denote the pixel sets of the original image, the modified image, and the defect image, respectively. This step yielded two separate images: one containing only the spots, with pixel values limited to (0, 0, 0) and (255, 0, 0), and the other containing only the fractures, with pixel values limited to (0, 0, 0) and (0, 255, 0). We then used code to relabel pixels with value (255, 0, 0) as 1, pixels with value (0, 255, 0) as 2, and the remaining pixels as 0.
Finally, by adding the two classes of labeled images together, the complete label dataset is obtained, as demonstrated in Fig. 5. The formula we utilize is as follows: p(x, y)=q1 (x, y) + q2 (x, y), where p represents the label image, q1 and q2 represent the spot defect image and the fracture defect image respectively.
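The subtraction and merge steps above can be sketched as follows, assuming grayscale inputs; the argument names `spot_erased` and `fracture_filled` are illustrative stand-ins for the two Photoshop-edited images, and spot and fracture regions are assumed disjoint, as implied by the formula p = q1 + q2:

```python
import numpy as np

def defect_label_map(original, spot_erased, fracture_filled):
    """Rebuild the segmentation GT by image subtraction, as in the paper:
    g(x, y) = |f1(x, y) - f2(x, y)|, then p = q1 + q2.

    Classes: 0 = background, 1 = spot, 2 = fracture. Spot and fracture
    regions are assumed not to overlap.
    """
    # Absolute difference against each edited image marks the edited pixels.
    diff_spot = np.abs(original.astype(np.int16) - spot_erased.astype(np.int16))
    diff_frac = np.abs(original.astype(np.int16) - fracture_filled.astype(np.int16))
    q1 = (diff_spot > 0).astype(np.uint8) * 1   # spot pixels -> 1
    q2 = (diff_frac > 0).astype(np.uint8) * 2   # fracture pixels -> 2
    return q1 + q2                               # p = q1 + q2
```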
Inpainted image and centerline extraction
It is difficult to collect ideal defect-free images in an industrial environment; therefore, we manually removed the spots and filled in the missing parts of the fractures using Photoshop. In Fig. 6, both the spot and fracture parts have been removed, and the resulting image, after centerline extraction, can be used to measure the geometric parameters of the wheelset. The inpainted images in our dataset can be used to evaluate the performance of deep learning models, and the centerline images can be used to test and compare the effectiveness of centerline extraction algorithms.
Due to hardware limitations, the structured light projected by the laser typically has a width of 3 to 12 pixels, which is not suitable for precise measurement of wheelset geometric parameters. For subsequent measurement, the center of the light stripe must be extracted at the single-pixel or even sub-pixel level. The ideal brightness distribution of the laser stripe's cross-section follows a Gaussian model, so the brightest pixels on each cross-section can be extracted to form the centerline of the laser stripe. We used HALCON (21.11), developed by the German company MVTec, for the centerline extraction step. The core centerline extraction algorithm in HALCON is the lines_gauss operator, which extracts lines from images as sub-pixel-accurate eXtended Line Description (XLD) contours. At this point, the pixel values in the image are (0, 0, 0) and (255, 255, 255); we annotated (0, 0, 0) as 0 and (255, 255, 255) as 1, resulting in 2710 centerline images. We then added the label images to the extracted centerline images, producing the centerline images shown in the bottom row of Fig. 7. As Fig. 7 shows, with the inpainted image and the centerline image, experts can calculate the geometric parameters of the wheelset from this fine and precise contour line.
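HALCON's operator is proprietary, but the underlying idea can be illustrated with the simpler gray-centroid (barycenter) method described in the traditional centerline literature the paper cites. The sketch below is not HALCON's algorithm: it computes one sub-pixel center per image column, assuming a single roughly horizontal stripe; multi-stripe images would need per-stripe windows.

```python
import numpy as np

def gray_centroid_centerline(img, min_intensity=30):
    """Sub-pixel centerline by the gray-centroid method.

    For each column, the stripe center is the intensity-weighted mean
    row of pixels at or above `min_intensity` (an illustrative
    threshold). Returns one sub-pixel row coordinate per column, or
    NaN where the column contains no stripe pixels.
    """
    h, w = img.shape
    rows = np.arange(h, dtype=np.float64)
    centers = np.full(w, np.nan)
    for x in range(w):
        col = img[:, x].astype(np.float64)
        col[col < min_intensity] = 0.0   # suppress background noise
        s = col.sum()
        if s > 0:
            centers[x] = (rows * col).sum() / s
    return centers
```

For a stripe whose cross-section is symmetric about its true center, the centroid recovers that center with sub-pixel precision, which is why Gaussian-profile stripes suit this family of methods.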
Data Records
The dataset15 is available from https://doi.org/10.6084/m9.figshare.25040747.v3. It comprises four zip files: Origin.zip, Label.zip, Inpainted.zip, and Centerline.zip. Origin contains 2710 original monochrome laser-stripe images. Label contains 2710 stripe-defect GT images that we meticulously annotated. To facilitate training deep learning models, the pixel values in the label images are 0, 1, and 2, which makes them hard to distinguish visually; we therefore also provide label images with richer colors for visualization, in which spot labels are painted red and fracture labels green. Inpainted contains the inpainted GT images obtained under relatively ideal conditions, and Centerline contains the centerline images extracted with industrial software; we also provide colorful centerline images to assist visualization. Fig. 8 presents the repository structure of our dataset. All images are in JPG format.
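After unzipping the four archives, corresponding images across the subsets can be paired by filename. The helper below is a sketch under the assumption (not stated explicitly in the paper) that the four folders share identical file names; adjust the folder names to the actual layout after unzipping:

```python
from pathlib import Path

SUBSETS = ("Origin", "Label", "Inpainted", "Centerline")

def load_pairs(root):
    """Pair images across the four unzipped folders by shared filename.

    Folder names follow the zip names in the paper; the shared-filename
    convention is an assumption. Returns a list of dicts mapping each
    subset name to its image path.
    """
    root = Path(root)
    pairs = []
    for origin_path in sorted((root / "Origin").glob("*.jpg")):
        record = {sub: root / sub / origin_path.name for sub in SUBSETS}
        if all(p.exists() for p in record.values()):
            pairs.append(record)
    return pairs
```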
Technical Validation
Reliability of annotation
This paper validates the reliability of the image annotations from three aspects: annotation accuracy, dataset distribution balance, and model validation and comparison.
Annotation accuracy
To ensure the accuracy of the segmentation masks in the dataset, the reviewers manually inspected all Label image data to verify that the annotations corresponded appropriately to the image content; any annotation errors identified were corrected accordingly. The examination results indicate that there were no errors in the annotations and that they matched the image content.
Balance check
The purpose of the dataset distribution balance check is to ensure a relative balance in the sample quantities across different categories within the dataset, thus avoiding the model’s excessive preference for certain classes. After inspection, each category consists of 2710 samples. Within the “spot” category, the pixel value differences between samples do not exceed 500, accounting for approximately 3.4% of the total. Within the “fracture” category, the pixel value differences between samples do not exceed 200, accounting for approximately 2.5% of the total. This suggests that the data annotations exhibit distribution balance.
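A balance check of this kind can be reproduced with a simple per-class pixel count over the label images (a sketch; the 0/1/2 encoding follows the label scheme described in Methods):

```python
import numpy as np

def class_pixel_counts(label_images):
    """Total pixel count per class (0 background, 1 spot, 2 fracture)
    accumulated over an iterable of label maps."""
    counts = np.zeros(3, dtype=np.int64)
    for lab in label_images:
        # bincount tallies occurrences of each label value in the image
        counts += np.bincount(lab.ravel(), minlength=3)[:3]
    return counts
```

Comparing per-image counts against these totals reveals whether any class is over- or under-represented.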
Model validation and comparison
There are differences between the prevailing annotation methods16 and ours, so we conducted experiments to validate the feasibility of our annotation approach. Several advanced, suitable methods for laser stripe image defect segmentation were individually tested on our dataset. We consider that if all methods exhibited either uniformly excellent or uniformly poor performance on the dataset, this would indicate that the dataset annotation methodology is unreliable.
A total of 2,110 pairs from the multi-line laser stripe image defect segmentation dataset were used for model training, and an additional 600 pairs were used for validation. The input resolution was set to 800 × 672, consistent with the resolution of the images in the dataset. The models were trained with a batch size of 4 for 60 epochs, optimized with the Adam method at a learning rate of 1 × 10−2, using the cross-entropy loss function.
Mean Intersection-over-Union (mIoU) is a commonly used accuracy metric for image segmentation models; the higher the value, the better the segmentation performance. IoU represents the segmentation accuracy of a specific segmentation category, while mIoU represents the average accuracy across all segmentation categories. mIoU is calculated as follows:

mIoU = (1/(k + 1)) Σi=0..k [ nii / (Σj nij + Σj nji − nii) ]
Here, k + 1 represents the number of segmentation classes (k represents foreground classes, and 1 represents the background). nij denotes the number of pixels belonging to class i that are misclassified as class j, while nii represents the number of pixels belonging to class i that are correctly predicted.
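The definition above can be computed from a confusion matrix. The sketch below follows the paper's formula but is not the authors' evaluation code; classes absent from both prediction and ground truth are skipped when averaging:

```python
import numpy as np

def mean_iou(pred, gt, num_classes=3):
    """mIoU over k+1 classes, per the paper's definition:
    IoU_i = n_ii / (sum_j n_ij + sum_j n_ji - n_ii), averaged over i.
    """
    conf = np.zeros((num_classes, num_classes), dtype=np.int64)
    # conf[i, j] = pixels of true class i predicted as class j
    np.add.at(conf, (gt.ravel(), pred.ravel()), 1)
    ious = []
    for i in range(num_classes):
        denom = conf[i, :].sum() + conf[:, i].sum() - conf[i, i]
        if denom > 0:  # skip classes absent from both pred and gt
            ious.append(conf[i, i] / denom)
    return float(np.mean(ious))
```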
The segmentation models used in the experiments are as follows: Unet17, BiSenetV118, ICnet19, AGUnet20, DFAnet21, Lednet22, Unet3+23, BisenetV224, Cgnet25.
From Table 1, it can be observed that the segmentation performance for the fractured regions is generally superior to that of the light spot regions. The differences in IoU and mIoU between the models do not exceed 15%. This dataset demonstrates its reliability across the majority of classical segmentation models. Notably, several methods based on the U-Net model have shown promising results on this dataset. Researchers can start exploring methods to improve segmentation accuracy by leveraging the U-Net model. As shown in Fig. 9, we randomly selected a subset of model outputs demonstrated on the dataset. They exhibit consistency, while different models showcase distinct effects in certain details.
Usability validation of inpainted dataset
To repair laser stripe images, it is crucial to obtain an accurate ground-truth dataset, yet such a dataset is difficult to obtain under practical industrial conditions. The primary source of spots is diffuse reflection of the laser by dust particles suspended in the air, while fractures in the stripe mainly occur when the laser is deflected by rapidly moving train wheels. Therefore, we captured images of the wheelsets of a low-speed unloaded train under otherwise identical conditions during dark nighttime. These images provide flawless ground-truth laser stripe images.
In reality, the shape of the wheelset differs between low-speed unloaded and high-speed loaded conditions. Therefore, images obtained through this method can only serve as an auxiliary for repairing the stripe images, rather than being used directly for 3D wheelset measurement and reconstruction. Compared with the stripe defects, the minor deformations of the wheelset can be ignored, making these images suitable as references for repairing defective stripes. We refer to them as "inpainted images".
Reliability of centerline extraction
In the centerline extraction step, we use the centerline extraction algorithm built into the HALCON software. Some traditional methods26,27,28,29 for centerline extraction can also be applied to our dataset. HALCON builds on and improves these methods, making it the most efficient centerline extraction software currently used in industry. The HALCON version we use is 21.11, and we have written a script that can be used to extract the centerline.
Error rate
In real-world applications, the maximum operating speed of heavy-haul freight trains is approximately 100 km/h. Our camera systems captured the wheelset images in this dataset while the trains were traveling at speeds below 70 km/h. We ensured image quality so that measurement errors remain below the standard error values specified by national regulations. After calculation, the wheel diameter error was found to be less than 0.6 mm, and the errors in flange thickness and flange height were below 0.3 mm.
Code availability
No custom code was used during this study for the curation and validation of the dataset.
References
Chen, Y., Li, Y., Niu, G. & Zuo, M. Offline and online measurement of the geometries of train wheelsets: A review. IEEE Transactions on Instrumentation Meas. 71, 1–15 (2022).
Bernal, E., Spiryagin, M. & Cole, C. Onboard condition monitoring sensors, systems and techniques for freight railway vehicles: A review. IEEE Sensors J. 19, 4–24 (2019).
Jones, B. Past, present and future: The evolution of China's incredible high-speed rail network (2021).
NDR. Zugunglück in Eschede vor 25 Jahren: Hunderte gedenken der Opfer (2023).
Cornish, A., Smith, R. A. & Dear, J. Monitoring of strain of in-service railway switch rails through field experimentation. Proc. Inst. Mech. Eng. Part F: J. Rail Rapid Transit 230, 1429–1439 (2016).
Xing, Z., Chen, Y., Wang, X., Qin, Y. & Chen, S. Online detection system for wheel-set size of rail vehicle based on 2d laser displacement sensors. Optik 127, 1695–1702 (2016).
Torabi, M., Mousavi, S. M. & Younesian, D. A high accuracy imaging and measurement system for wheel diameter inspection of railroad vehicles. IEEE Transactions on Ind. Electron. 65, 8239–8249 (2018).
Dubnishchev, Y. N., Belousov, P. Y., Belousova, O. & Sotnikov, V. Optical control of the radius of a wheel rolling on a rail. Optoelectronics, Instrumentation Data Process. 48, 75–80 (2012).
Zhang, Q. et al. An efficient method for dynamic measurement of wheelset geometric parameters. IEEE Transactions on Instrumentation Meas. 72, 1–11 (2023).
Zheng, F., Zhang, B., Gao, R. & Feng, Q. A high-precision method for dynamically measuring train wheel diameter using three laser displacement transducers. Sensors 19, 4148 (2019).
Zhang, Z., Shao, S. & Gao, Z. A novel method on wheelsets geometric parameters on line based on image processing. 2010 International Conference on Measuring Technology and Mechatronics Automation 1, 257–260 (2010).
Ji, Z.-Y., Han, M.-H., Song, X.-J. & Feng, Q.-B. Recurrent similarity mapping network oriented to laser stripe image inpainting. Acta Electronica Sinica 50, 1234 (2022).
Kim, M.-S. Measurement of the wheel-rail relative displacement using the image processing algorithm for the active steering wheelsets. Int. j. syst. appl. eng. dev 6, 114–121 (2012).
Kruger, L. E., Wohler, C., Wurz-Wessel, A. & Stein, F. In-factory calibration of multiocular camera systems. In Optical Metrology in Production Engineering, vol. 5457, 126–137 (SPIE, 2004).
Song, Y. WLI-Set: A comprehensive laser image dataset for real-time measurement of wheelset geometric parameters. Figshare https://doi.org/10.6084/m9.figshare.25040747.v3 (2024).
Cordts, M. et al. The cityscapes dataset for semantic urban scene understanding. In Proceedings of the IEEE conference on computer vision and pattern recognition, 3213–3223 (2016).
Ronneberger, O., Fischer, P. & Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18, 234–241 (Springer, 2015).
Yu, C. et al. Bisenet: Bilateral segmentation network for real-time semantic segmentation. In Proceedings of the European conference on computer vision (ECCV), 325–341 (2018).
Zhao, H., Qi, X., Shen, X., Shi, J. & Jia, J. Icnet for real-time semantic segmentation on high-resolution images. In Proceedings of the European conference on computer vision (ECCV), 405–420 (2018).
Oktay, O. et al. Attention u-net: Learning where to look for the pancreas. In Medical Imaging with Deep Learning. https://openreview.net/forum?id=Skft7cijM (2018).
Li, H., Xiong, P., Fan, H. & Sun, J. Dfanet: Deep feature aggregation for real-time semantic segmentation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 9522–9531 (2019).
Wang, Y. et al. Lednet: A lightweight encoder-decoder network for real-time semantic segmentation. In 2019 IEEE international conference on image processing (ICIP), 1860–1864 (IEEE, 2019).
Huang, H. et al. Unet 3+: A full-scale connected unet for medical image segmentation. In ICASSP 2020-2020 IEEE international conference on acoustics, speech and signal processing (ICASSP), 1055–1059 (IEEE, 2020).
Yu, C. et al. Bisenet v2: Bilateral network with guided aggregation for real-time semantic segmentation. Int. J. Comput. Vis. 129, 3051–3068 (2021).
Wu, T., Tang, S., Zhang, R., Cao, J. & Zhang, Y. Cgnet: A light-weight context guided network for semantic segmentation. IEEE Transactions on Image Process. 30, 1169–1179 (2020).
Zhang, L., Zhang, Y. & Chen, B. Improving the extracting precision of stripe center for structured light measurement. Optik 207, 163816 (2020).
Yang, H., Wang, Z., Yu, W. & Zhang, P. Center extraction algorithm of linear structured light stripe based on improved gray barycenter method. In 2021 33rd Chinese Control and Decision Conference (CCDC), 1783–1788 (IEEE, 2021).
De Jong, K. A. An analysis of the behavior of a class of genetic adaptive systems. (University of Michigan, 1975).
Sampson, J. R. Adaptation in natural and artificial systems (John H. Holland) (1976).
Acknowledgements
This work is supported by National Natural Science Foundation of China (No.52175493 and No.51935002) and the National Key Research and Development Program of China under Grant (2022YFB3206000).
Contributions
Y.S. conceived the study. Y.S. and X.G. designed and planned the experiments. Z.J., X.G. and Q.F. carried and supervised the experiments. Y.S. and Z.J. composed the manuscript. Y.H. and S.Y. revised the manuscript. All authors read and approved the manuscript.
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Song, Y., Ji, Z., Guo, X. et al. A comprehensive laser image dataset for real-time measurement of wheelset geometric parameters. Sci Data 11, 462 (2024). https://doi.org/10.1038/s41597-024-03288-y