Abstract
Barchans are crescent-shaped dunes ubiquitous on Earth and other celestial bodies, organized in barchan fields where they interact with each other. Over the last decades, satellite images have been widely employed to detect barchans on Earth and on the surface of Mars, with AI (Artificial Intelligence) becoming an important tool for monitoring those bedforms. However, the automatic detection reported in previous works is limited to isolated dunes and does not successfully identify groups of interacting barchans. In this paper, we inquire into the automatic detection and tracking of barchans by carrying out experiments and exploring the acquired images with AI. After training a neural network with images from controlled experiments in which complex interactions took place between dunes, we did the same for satellite images from Earth and Mars. We show, for the first time, that a properly trained neural network can identify and track barchans interacting with each other in different environments, using different image types (contrasts, colors, points of view, resolutions, etc.), with confidence scores (accuracy) above 70%. Our results represent a step further toward automatically monitoring barchans, with important applications for human activities on Earth, Mars and other celestial bodies.
Introduction
Barchans are dunes of crescent shape with horns pointing downstream, which form mainly under one-directional flows when the amount of available sand is limited1. These bedforms are frequently found on Earth (in both aquatic and eolian environments) and on Mars, sharing the same morphology but presenting different scales2: they are much larger and slower on Mars, where the scales reach one kilometer in length and millennia for the turn-over time (although their average length has recently been found3 to be of the order of 200 m, and the turn-over time on the north pole of the order of a century4), than under water, where the scales are tens of centimeters and minutes5. In terrestrial deserts, the scales are up to hundreds of meters and years. However, it is not always easy to clearly identify barchans and measure their dimensions from images, since they are organized in barchan fields in which they migrate over long distances while interacting with each other6,7,8,9,10,11,12,13. In addition, the fluid flow often presents seasonal variations, affecting the morphology of dunes14,15,16. Therefore, it is common to observe barchans that touch each other (colliding dunes), that are highly asymmetric, or that shed small barchans, for instance. Despite these difficulties, barchan dunes remain bedforms of much interest, since satellite images of them can be used to infer information about the atmospheres of planets and moons, such as the existence of an atmosphere that is or has been capable of mobilizing sediments (otherwise barchans would not exist), the mean direction of winds, and even the flow strength (from stability analyses)5. In the particular case of Martian barchans, these inferences represent in some cases mean winds that have blown over the last millennia (given the turn-over time of large Martian barchans), something that satellites and in situ sensors cannot measure.
Over the last decades, with better-resolution satellites orbiting Earth and Mars, dunes on both planets have been monitored with reasonable accuracy17,18,19. In particular, the detection of barchans on Earth and Mars from satellite images has been widely employed. The first works detecting barchans used non-machine-learning detection17,20,21,22, meaning that the dunes were identified, classified and measured (morphology and, sometimes, displacement) with algorithms specially written for these purposes, instead of trained for identification; however, with the advance of Machine and Deep Learning (ML and DL, respectively), automatic detection based on computer training has become more common23,24. As pointed out by Rubanenko et al.25, those works, based on Support Vector Machine26 or R-Vine classifiers, were accurate for automatic detection and classification, but not for segmenting and outlining individual barchans.
Recently, Rubanenko et al.25 made use of Mask R-CNN (Regional Convolutional Neural Network)27, which detects objects while simultaneously generating a segmentation mask, to automatically detect, classify and outline barchan dunes on Mars and Earth. To minimize false detections and derive the trends for wind and sand transport, they focused their training on isolated barchan dunes, which they carried out for 1076 images, surpassing 70% accuracy (mean average precision mAP; see section “Methods”). Afterward, they applied the Mask R-CNN to 137,111 images of the Martian surface extracted from a global mosaic of the Mars Reconnaissance Orbiter (MRO) Context Camera (CTX) dataset. With those images, they mapped large regions of the Martian surface and found that around 60% of dune fields in the northern hemisphere are covered with barchans, against only 30% in the southern hemisphere. Finally, they applied the same training to satellite images from Earth and obtained reasonable accuracy (which can be improved by inverting the image colors or performing new training). Later, Rubanenko et al.3 explored the barchans identified and outlined in Ref.25 to probe the assumption that the m-scale ripples found on Mars result from a hydrodynamic instability. Their measurements showed that the lengths of both small barchans and m-scale ripples decrease with increasing atmospheric density, following a power law predicted by a hydrodynamic analysis, which corroborates the initial assumption.
Although recent works, especially Rubanenko et al.3,25, increased the accuracy in detecting and outlining individual barchans in satellite images, variations in the crescent shape (due to barchan-barchan interactions and seasonal winds) and the existence of other types of dunes that are also curved (parabolic dunes, for instance) still hinder automatic detection and outlining in many cases. In particular, to the best of the authors’ knowledge, currently trained networks do not successfully detect groups of interacting barchans, mainly when they are touching or superimposing each other (barchan-barchan collisions). In addition, because previous works were conducted on single images, it remains to be proved that CNNs (Convolutional Neural Networks) can track the detected barchans and update their outlines along a sequence of frames (or movie). The automatic detection of barchans undergoing complex interactions (such as barchan-barchan collisions) is relevant for many reasons. One of them is that it opens the possibility of updating the number of barchans on the surface of planets (Mars, for example), which was possibly underestimated in previous works (since they computed isolated barchans only), and of determining their location, orientation and concentration25. This information can be useful for estimating the direction and strength of local winds, determining the regimes of sand transport and accumulation, and estimating the effects of global changes on Earth based on the dynamics of dunes28.
Another reason is the possibility of predicting the future of barchan fields based on barchan-barchan interaction maps, such as those from Assis and Franklin11 (or, in the same way, deducing the ancient past of such fields), or the interaction of dunes with dune-size obstacles based on the corresponding maps, as shown in Assis et al.29: by training a CNN with interaction patterns measured in the laboratory, the trained CNN might predict the same kind of interaction in the field based on satellite images. Here again, this information can be used for estimating desertification as an effect of climate change28 and for predicting whether constructions are under imminent threat of being overtaken by sand30. Another application would be the yearly monitoring of dune motion for estimating the sand cover on Earth and testing climate models.
In this paper, we inquire into the automatic detection, classification, outlining, and tracking of barchans in different environments by carrying out experiments and exploring with DL the images of both individual and groups of barchans. We made use of the existing Python library YOLO (You Only Look Once) to train a neural network with images from experiments where complex interactions took place between dunes and, afterward, did the same for satellite images from Earth and Mars. We show, for the first time, that the trained network can identify, classify, outline, and track dunes interacting with each other in different environments, using different image types (contrasts, colors, points of view, resolutions, etc.), with confidence scores (estimated accuracy for each detected object) between 70 and 90% and a mean average precision that reaches 99%. The trained CNN opens new possibilities for updating the number of barchans on the surface of planets (by also considering those undergoing complex interactions) and predicting the future of barchan fields (based on barchan-barchan interaction maps11 and satellite images). Our results represent a step further toward automatically monitoring barchans and understanding their dynamics, with important applications for human activities, such as mitigating disasters on Earth and exploring Mars.
Results
After training the network with a certain number of images from the experiments (see section “Methods” for more details), we applied the trained network to identify, classify, outline and track dunes that interacted with each other in other experiments. One advantage of training the CNN with images from experiments with subaqueous barchans is that they contain the whole interaction sequence, allowing the accurate identification of bedforms during labeling. Besides, because the flow conditions, grain properties, and terrain conditions are well controlled in the experiments, we can ascertain exactly which barchan-barchan interaction is going on. We applied the trained object detection, for instance, to some of the experiments of Assis and Franklin11 (open dataset available31), in which the initially upstream and downstream dunes consisted of grains of different colors. For example, Fig. 1 shows snapshots placed side by side of two barchans as the smaller barchan (red dune, initially upstream) collides with the larger one (white, downstream). They merge and, after some time has elapsed, eject a small barchan whose size is similar to that of the barchan initially upstream, but consisting of different grains (this pattern is known as exchange11). Time instants, numbered from t1 to t7, are shown on the top. We observe that the trained CNN can detect the objects (in this case, bedforms), which are bounded by white boxes in the figure. Each object has a label assigned (Shape 1 to Shape 3), is classified as Barchan or Not a barchan with an accuracy (confidence score) higher than 0.9 (see the section “Methods” for a description of the computation of the accuracy shown in the images), and has its outline determined (green lines).
The assigned label is kept constant for each object along the images, so that after Shape 2 is absorbed by Shape 1 it disappears, and the newly ejected barchan is identified by another label (Shape 3). The identification, classification, outlining and tracking work well independently of the color of the objects, evincing the ability of the trained CNN to follow each object along frames. Interestingly, the trained CNN is able to correctly identify the newly merged dune as one single barchan (t = 98 to 130 s), even though the red grains retain a barchan-like shape. This is a major result of this training, since object detection codes usually have problems in correctly identifying this kind of interaction11,12,13. This opens new possibilities for computing more accurately the number of barchans appearing in satellite images (since barchans undergoing complex interactions can now be detected and outlined), and also for predicting the future configurations of barchan fields11.
In order to evaluate whether positions and outlines are accurately detected and tracked, we computed the length L, width W, horn length \(L_h\) and surface area A of the barchans. The lengths, width and area were computed as defined in Assis and Franklin11,12, and the mean horn length \(\overline{L}_h\) was computed as the average of both horns. We note that A is the area bounded by the outlines generated by the trained CNN, corresponding thus to the surface area of the dune projected on the horizontal plane. For the exchange case of Fig. 1, Fig. 2a–d shows the time evolution of the aspect ratio W/L, mean horn length \(\overline{L}_h\), longitudinal position \(P_y\), and projected area A, with the time instants corresponding to the snapshots of Fig. 1 indicated in the panels. We observe that these quantities are in good agreement with the results shown in Assis and Franklin11, in which a conventional (non-machine-learning) detection code was used. In particular, Fig. 2d can be compared directly with Fig. 3f of Ref.11, which we show in Fig. 3, and the agreement is excellent (deviations of less than 5%). We applied the same trained CNN to many other experiments, especially the other cases reported in Assis and Franklin11 (images and movies of which are available in an open repository31), and the results were as good as those of Figs. 1 and 2 (the results for other experiments are available in the Supplementary Information S1, as well as movies showing the tracking of individual dunes along frames). Obtaining results in good agreement with dedicated (non-machine-learning) codes implies that the latter are no longer necessary for future experiments with subaqueous dunes, and that AI (Artificial Intelligence) can be successfully used for processing image sequences taken from drones or satellites.
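As an aside, quantities such as W/L and A can be extracted directly from the outline polygons produced by the segmentation masks. The sketch below is illustrative only (the function names and conventions are ours, not the authors' analysis code), assuming x is the spanwise and y the streamwise coordinate:

```python
# Minimal sketch: morphological quantities from a dune outline polygon.
# The polygon is a list of (x, y) vertices, as could be extracted from
# the segmentation mask of each detected dune.

def projected_area(outline):
    """Surface area projected on the horizontal plane (shoelace formula)."""
    n = len(outline)
    s = 0.0
    for i in range(n):
        x0, y0 = outline[i]
        x1, y1 = outline[(i + 1) % n]
        s += x0 * y1 - x1 * y0
    return abs(s) / 2.0

def width_length(outline):
    """Width W (spanwise extent) and length L (streamwise extent)."""
    xs = [p[0] for p in outline]
    ys = [p[1] for p in outline]
    return max(xs) - min(xs), max(ys) - min(ys)

# Example: a 4 x 2 rectangle has W = 4, L = 2, A = 8, aspect ratio W/L = 2.
rect = [(0, 0), (4, 0), (4, 2), (0, 2)]
W, L = width_length(rect)
A = projected_area(rect)
```

For real outlines, L and the horn lengths would be measured along the flow direction as in Refs.11,12; the bounding-box extents above are only the simplest approximation.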
It is worth noting that in some experiments the camera was displaced (it was mounted on a traveling system) to maintain the barchans in its field of view. In those cases, even with the spatial reference changing abruptly between two images, the trained CNN tracked each barchan correctly, as can be seen in some of the movies of the Supplementary Information S1. This can also be seen in Fig. 4, in which the camera was displaced between t = 195 and 300 s (in the tests, it was displaced at some point between those instants in an abrupt way, i.e., the test stopped, the camera was displaced, and the test re-started). Figure 4 shows snapshots side by side of two barchans that interact with each other and, at some point in time, one of them ejects a small barchan (the fragmentation pattern described in Assis and Franklin11). The dunes are correctly outlined and tracked, even though, with the camera displacement, the barchans (wrongly) appear in the frame at positions upstream of those in previous frames, showing the robustness of the CNN. The morphological parameters obtained in this case and others are available in the Supplementary Information S1, and are in agreement with Ref.11.
Having confirmed that the instance segmentation based on YOLOv8 successfully identifies, classifies, outlines, and tracks each dune appearing in images from experiments, we trained the same CNN using satellite images of eolian dunes on the Martian and Earth’s surfaces, following the procedure described in the “Methods” section. After training on a given set of images, we used the trained CNN to identify dunes in other images. For instance, Fig. 5 shows a HiRISE image32 of a field of individual barchans on the surface of Mars (\(23.190^\circ\) latitude, \(339.585^\circ\) longitude), where we can observe single barchans migrating over irregular terrains containing craters, while Fig. 6 shows a medium-resolution image (4 m per px) from CTX33 of a field of barchans undergoing complex interactions on the surface of Mars (\(-41.488^\circ\) latitude, \(44.589^\circ\) longitude). As for the experiments, the trained CNN is able to correctly identify, classify and outline dunes in satellite images with confidence scores of the order of 0.9 in the case of single barchans, and with lower accuracy in the case of interacting barchans, using both high- and medium-definition images. We note that the detection is not perfect in Fig. 6: some dunes undergoing complex interactions are not detected while others are (all isolated barchans are detected). However, we have shown with our experimental dunes (for which we have large datasets) that accurate detections of interacting barchans are possible. For satellite images, datasets of interacting barchans are relatively small, and time sequences showing the complete outcome of each interaction are absent, since a single barchan-barchan interaction on Earth takes decades to finish completely (on Mars it can take millennia).
This, added to the fact that the satellite images used are of lower quality than those from our experiments (in terms of spatial resolution and contrast with the background), decreases the detection accuracy of interacting dunes in Fig. 6.
Figure 7 shows an image of a barchan field on Earth (\(24.836^\circ\) latitude, \(51.311^\circ\) longitude, in Qatar), in which the color contrast between the dunes and the background is poor. Even so, the CNN is able to detect and classify dunes with confidence scores of approximately 0.90 (the lowest confidence score is 0.88, corresponding to a highly asymmetric barchan that is probably shedding a small dune through one of its horns), and to successfully outline them. Based on the outlines generated by the trained CNN, we measured the main features of all barchans identified in Figs. 5, 6 and 7, which are listed in Tables 1, 2 and 3, respectively.
We used the same procedure for other satellite images of Mars and Earth, as well as aerial pictures of eolian dunes, with different backgrounds (terrains), colors, resolutions, and points of view (perspective views), and the results were as good as those shown in Figs. 5 and 7. In particular, we processed images with medium to low resolutions (15 m per px to 30 m per px) in which groups of barchans were undergoing complex interactions, and the trained CNN was able to correctly identify each dune (examples of instance segmentation of other satellite images are available in the Supplementary Information S1). The mean average precision mAP (defined in section “Methods”) reached in our CNN training was around 0.90 for the satellite images and 0.99 for the images from experiments (plots of the evolution of mAP along the epochs are available in the Supplementary Information S1). Finally, we carried out tracking (with detection and outlining) on medium- to low-quality satellite images of an eight-year sequence of barchans undergoing dune-dune interactions in the Sahara desert, which we show in the Supplementary Information S1. Although the sequence covers only a small portion of the barchan-barchan interactions (given the large timescales involved), the results are good, showing the great potential of the CNN for field tracking and measurements.
Discussion
We carried out the training of the single-stage object detection model YOLOv8 (YOLO version 8), together with scripts written in the course of this work to handle data and measure barchan dimensions, for the image segmentation and tracking of barchan dunes. Differently from previous works, we used a large database of time-resolved images of barchans of different sizes, colors, grain types, and formats (camera types), consisting of monodisperse or bidisperse grains (with more than one color), and undergoing different types of interaction11. A small part of these images was used for training, and we afterward employed the trained CNN to process images that were new to the CNN. With that, we could, besides identifying, classifying, outlining, and tracking dunes, measure the time evolution of morphological quantities and compare them with the results from our non-machine-learning detection code. For the experiments, the confidence scores were over 0.9, even when dunes of different colors underwent different types of interaction. Therefore, for the first time, we showed the ability of a trained CNN to correctly identify, classify, outline and track dunes that undergo complex interactions with each other in a dune field, while previous works relied only on static satellite images for identifying single barchans.
We used the same procedure with satellite or aerial images of barchan fields on Mars and on Earth, with different image types, colors and perspectives, and in those cases the trained CNN identified, classified and outlined dunes with confidence scores above 70%. However, in this case we did not systematically track dunes because of the small number of sequential images: good-quality images of Earth date back only about 30 years, while dunes take a decade to displace a considerable distance, and on Mars the timescale is much longer (centuries or even millennia). In the particular case of barchan-barchan interactions, there is no image sequence from satellites showing the entire process (only part of it), since the time scales are much longer than those of subaqueous barchans (it would take a century or more of satellite images of Earth, and even longer of Mars, to cover typical barchan-barchan interactions11). We nevertheless carried out tracking (with detection and outlining) on medium- to low-quality satellite images of an eight-year sequence of barchans undergoing dune-dune interactions in the Sahara desert, and the result was good (available in the Supplementary Information S1), showing the potential of the technique for field measurements.
Although considerable improvements have been achieved in the automatic detection of barchans undergoing complex interactions, the trained CNN still has an important limitation: when processing images where barchans interact over a terrain (background) that has poor contrast with the dunes, some dunes are not detected, and those detected have lower accuracy (as happens in Fig. 6). When the contrast with the background is good and the image resolution is not poor, the barchans are correctly identified with high accuracy (as in Figs. 1 and 4). Even so, as can be seen in Figs. 5, 6 and 7 and in those of the Supplementary Information S1, the great majority of barchans are correctly identified, classified, and outlined.
The success in identifying, classifying, outlining, and tracking barchans undergoing complex interactions by using a CNN can be exploited for dune monitoring, with positive impacts on human activities. For example, it can be used for monitoring the growth and migration of dunes that are burying (or on the verge of burying) human constructions, such as in Florianopolis (Brazil) and Silver Lake (USA)30,34, or for detecting complex barchan interactions on Mars. Besides, it can be explored further: the CNN can be trained on the history of certain barchan-barchan patterns (based on experimental data such as Refs.11,31), and the trained CNN can afterward be applied, for example, to deduce the past history of barchan fields on Mars (based on openly available satellite images). It can also be used for predicting the future of those barchan fields. If (or when) carried out, this would represent a considerable step toward understanding the ancient past of Mars, comprehending the ongoing climate change on Earth28, and predicting the far future of our planet. Our results represent, therefore, an important step in that direction.
Methods
Experimental setup
The CNN was trained with images from controlled experiments. The experimental setup consisted of a water tank, two centrifugal pumps, a flow straightener, a 5-m-long closed-conduit channel, a settling tank, and a return line, through which we imposed a pressure-driven water flow in a closed loop following the aforementioned order. The channel had a rectangular cross section 160 mm wide by 2\(\delta\) = 50 mm high and was made of transparent material. It consisted of a 3-m-long entrance section (corresponding to 40 hydraulic diameters), a 1-m-long test section, and a 1-m-long section connecting the test section to the channel exit. With the channel completely filled with still water, controlled amounts of grains were poured inside, forming one or more conical heaps. Afterward, we imposed a specified water flow that deformed each conical pile into a barchan dune and, in the case of multiple piles, the barchan dunes interacted with each other. We used tap water at temperatures within 22 and 30 °C and different populations of grains (sometimes mixed): round glass beads (\(\rho _s\) = 2500 kg/m\(^3\)) with 0.15 mm \(\le \,d\,\le\) 0.25 mm and 0.40 mm \(\le \,d\,\le\) 0.60 mm, angular glass beads with 0.21 mm \(\le \,d\,\le\) 0.30 mm, and zirconium beads (\(\rho _s\) = 4100 kg/m\(^3\)) with 0.40 mm \(\le \,d\,\le\) 0.60 mm, where \(\rho _s\) and d are, respectively, the density and diameter of the grains. We used grains of different colors in order to track them during barchan-barchan interactions. A layout and a photograph of the experimental setup are shown in Fig. 8, and are also available in Assis and Franklin11,12.
Top-view images of the dunes were acquired with either a high-speed or a conventional camera mounted on a traveling system placed above the channel. The high-speed camera was of complementary metal-oxide-semiconductor (CMOS) type with a maximum resolution of 2560 px \(\times\) 1600 px at 800 Hz, and the conventional camera, also of CMOS type, had a maximum resolution of 1920 px \(\times\) 1080 px at 60 Hz. Both the camera and the traveling system were controlled by a computer, and we varied the field of view and the ROI (region of interest) according to the number of dunes in the test and the velocities of the dunes and grains. We mounted lenses of 60 mm focal distance and F2.8 maximum aperture on the cameras, and made use of LED (light-emitting diode) lamps connected to a continuous-current source to provide the necessary light while avoiding beating with the acquisition frequencies of the cameras. More details about the experimental setup can be found in Refs.11,12,13,35,36,37,38. Datasets with the images and results of the experiments are available in open repositories31,39,40,41.
Object detection model using convolutional neural network
We used the Python library YOLOv8 (YOLO - You Only Look Once, version 8) to carry out instance segmentation of dunes (objects), in order to identify, classify, outline, and track each object along the images of a given time sequence42. YOLOv8 is a single-stage object detection model based on a CNN, whose architecture consists of a backbone, a neck and a head, and it is known for generating masks quickly while computing their coefficients in parallel43. The backbone (here the CSPDarknet-53) contains the CNN and generates feature maps at different levels of detail, which are passed to the neck. The neck then processes the feature maps and builds feature maps for prediction, which are passed to the head. Finally, the head predicts the classes of objects, their bounding boxes, and their masks, which can be used directly to outline objects. Figure 9 shows a simplified architecture of YOLOv8.
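A minimal sketch of how such an instance segmentation is invoked through the ultralytics Python package is given below; the weight-file and image names are placeholders (not the files used in this work), so this is an illustration of the API rather than our processing script:

```python
# Sketch of YOLOv8 instance segmentation with the ultralytics package
# (pip install ultralytics). Weight and image names are placeholders.
import importlib.util
import os

def detect_dunes(weights="yolov8n-seg.pt", image="barchans.png"):
    from ultralytics import YOLO
    model = YOLO(weights)          # backbone + neck + head, pre-trained
    results = model(image)         # single-stage inference on one image
    detections = []
    for r in results:
        if r.masks is None:        # no objects segmented in this image
            continue
        for box, mask in zip(r.boxes, r.masks):
            detections.append({
                "class": r.names[int(box.cls)],  # e.g. "Barchan"
                "confidence": float(box.conf),   # confidence score C
                "outline": mask.xy[0],           # polygon outlining the object
            })
    return detections

# Run only if the third-party package and a sample image are available.
if importlib.util.find_spec("ultralytics") and os.path.exists("barchans.png"):
    for d in detect_dunes():
        print(d["class"], round(d["confidence"], 2))
```

For time sequences, the same model object exposes a tracking mode (e.g. `model.track(...)`) that assigns persistent labels across frames, as done for the Shape 1 to Shape 3 labels in Fig. 1.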
For the automatic dune detection, the following steps were taken sequentially. First, a large database of experimental data was constructed, taking into account binary interactions from previous work. Next, 7455 images were labeled manually using the CVAT platform (https://www.cvat.ai/), where we decided whether dunes had merged or a new dune had been ejected based on the continuity of areas covered with grains (examples of object labeling with CVAT are available in the Supplementary Information S1). A validation and training database was then created from the labeled images, in which we used 876 images for validation, and we trained for 300 epochs with a batch size equal to 1065. For the training, we made use of the pre-trained model yolov8n.pt and trained two specific classes: one for barchan dunes (Barchan) and the other for non-barchan objects (Not a barchan). Finally, a Python code was developed to run the trained model; detect, classify and outline dunes; and analyze their morphology. The training was carried out on an NVIDIA RTX 2070 GPU using CUDA 12 and cuDNN 8 (CUDA Deep Neural Network library version 8), and we did not use data augmentation. However, the images had different resolutions, sharpness, and orientations.
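The training step can be sketched with the ultralytics API roughly as follows; the dataset YAML name is a placeholder, and the hyperparameters simply mirror the ones quoted above (300 epochs starting from the pre-trained yolov8n.pt), so this is a sketch rather than our actual script:

```python
# Sketch of the YOLOv8 training step (pip install ultralytics).
# "barchans.yaml" is a placeholder dataset description listing the
# train/validation image folders and the two classes used in this work:
# "Barchan" and "Not a barchan".
import importlib.util
import os

def train_dune_model(data_yaml="barchans.yaml", epochs=300):
    from ultralytics import YOLO
    model = YOLO("yolov8n.pt")                  # start from pre-trained weights
    model.train(data=data_yaml, epochs=epochs)  # fine-tune on the labeled dunes
    metrics = model.val()                       # mAP on the validation set
    return model, metrics

# Run only if the third-party package and the dataset file are available.
if importlib.util.find_spec("ultralytics") and os.path.exists("barchans.yaml"):
    train_dune_model()
```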
The average accuracy of the trained CNN, for a given image dataset, is usually measured by the mean average precision mAP,
\[ mAP = \frac{1}{T} \sum_{i=1}^{T} AP_i \, ,\]
where T is the number of categories (segmented classes) and \(AP_i\) is the average precision of segmentation for category i42,44. For a given category i, the average precision is given by the integral of the precision P as a function of the recall R,
\[ AP_i = \int_0^1 P(R) \, dR \, ,\]
where
\[ P = \frac{TP}{TP+FP} \, , \qquad R = \frac{TP}{TP+FN} \, ,\]
TP being the true positives, FP the false positives, and FN the false negatives. Finally, the Intersection over Union IoU is a measure of how much the detection box overlaps a box containing the real object (ground truth)44:
\[ IoU = \frac{\text{area of intersection}}{\text{area of union}} \, .\]
The estimated accuracy C plotted in Figs. 1, 4, 5, 6 and 7 for each detection box is usually called the confidence score, and corresponds to the precision P multiplied by the IoU and by the conditional class probability \(P_c\):
\[ C = P \times IoU \times P_c \, ,\]
where \(P_c\) indicates if a given class is present in the box. The trained CNN is available in an open repository45.
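As a sanity check, the definitions above can be reproduced in a few lines of Python (illustrative helper functions, not part of the YOLOv8 code):

```python
# Precision, recall, IoU and confidence score from the definitions above.

def precision(tp, fp):
    """P = TP / (TP + FP)."""
    return tp / (tp + fp)

def recall(tp, fn):
    """R = TP / (TP + FN)."""
    return tp / (tp + fn)

def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x0, y0, x1, y1)."""
    x0 = max(box_a[0], box_b[0])
    y0 = max(box_a[1], box_b[1])
    x1 = min(box_a[2], box_b[2])
    y1 = min(box_a[3], box_b[3])
    inter = max(0.0, x1 - x0) * max(0.0, y1 - y0)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def confidence(p, iou_value, p_c):
    """C = P x IoU x P_c."""
    return p * iou_value * p_c

# A 2 x 2 detection box shifted by half its width against the ground truth:
# intersection area = 2, union = 4 + 4 - 2 = 6, hence IoU = 1/3.
ground_truth = (0.0, 0.0, 2.0, 2.0)
detection = (1.0, 0.0, 3.0, 2.0)
```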
Satellite images
We used a combination of satellite imagery platforms to collect images of the surfaces of Earth and Mars, which we used to train and apply the single-stage object detection model for identifying, classifying, and outlining dunes in satellite images. For Mars, high-resolution images were obtained from the HiRISE project32, and medium- and low-resolution images from the global CTX mosaic33. For the high-resolution images, we made use of the HiView software32 to convert the pixel scale into a real physical unit (m). For terrestrial dunes, images ranging from low to high resolution were obtained from the Google Earth Pro and Copernicus46 platforms.
We trained the CNN following the same procedure as for the experiments (examples of object labeling with CVAT are available in the Supplementary Information S1). In this case, 12,395 images were labeled and the network was trained for 300 epochs, with 2376 images used for validation. The trained CNN is available in an open repository47.
References
Bagnold, R. A. The physics of blown sand and desert dunes (Chapman and Hall, London, 1941).
Hersen, P., Douady, S. & Andreotti, B. Relevant length scale of barchan dunes. Phys. Rev. Lett. 89, 264301. https://doi.org/10.1103/PhysRevLett.89.264301 (2002).
Rubanenko, L., Lapôtre, M. G. A., Ewing, R. C., Fenton, L. K. & Gunn, A. A distinct ripple-formation regime on Mars revealed by the morphometrics of barchan dunes. Nat. Commun. 13. https://doi.org/10.1038/s41467-022-34974-3 (2022).
Chojnacki, M., Banks, M. E., Fenton, L. K. & Urso, A. C. Boundary condition controls on the high-sand-flux regions of Mars. Geology 47, 427–430. https://doi.org/10.1130/G45793.1 (2019).
Claudin, P. & Andreotti, B. A scaling law for aeolian dunes on Mars, Venus, Earth, and for subaqueous ripples. Earth Plan. Sci. Lett. 252, 20–44 (2006).
Hersen, P. et al. Corridors of barchan dunes: Stability and size selection. Phys. Rev. E 69, 011304. https://doi.org/10.1103/PhysRevE.69.011304 (2004).
Hersen, P. & Douady, S. Collision of barchan dunes as a mechanism of size regulation. Geophys. Res. Lett. 32 (2005).
Kocurek, G., Ewing, R. C. & Mohrig, D. How do bedform patterns arise? New views on the role of bedform interactions within a set of boundary conditions. Earth Surf. Process. Landforms 35, 51–63 (2010).
Génois, M., Hersen, P., du Pont, S. & Grégoire, G. Spatial structuring and size selection as collective behaviours in an agent-based model for barchan fields. Eur. Phys. J. B 86 (2013).
Génois, M., du Pont, S. C., Hersen, P. & Grégoire, G. An agent-based model of dune interactions produces the emergence of patterns in deserts. Geophys. Res. Lett. 40, 3909–3914 (2013).
Assis, W. R. & Franklin, E. M. A comprehensive picture for binary interactions of subaqueous barchans. Geophys. Res. Lett. 47, e2020GL089464. https://doi.org/10.1029/2020GL089464 (2020).
Assis, W. R. & Franklin, E. M. Morphodynamics of barchan-barchan interactions investigated at the grain scale. J. Geophys. Res.: Earth Surf. 126, e2021JF006237. https://doi.org/10.1029/2021JF006237 (2021).
Assis, W. R., Cúñez, F. D. & Franklin, E. M. Revealing the intricate dune-dune interactions of bidisperse barchans. J. Geophys. Res.: Earth Surf. 127, e2021JF006588. https://doi.org/10.1029/2021JF006588 (2022).
Parteli, E. J. R., Durán, O., Tsoar, H., Schwämmle, V. & Herrmann, H. J. Dune formation under bimodal winds. Proc. Natl. Acad. Sci. U.S.A. 106, 22085–22089. https://doi.org/10.1073/pnas.0808646106 (2009).
Courrech du Pont, S., Narteau, C. & Gao, X. Two modes for dune orientation. Geology 42, 743–746. https://doi.org/10.1130/G35657.1 (2014).
Gadal, C., Narteau, C., du Pont, S. C., Rozier, O. & Claudin, P. Incipient bedforms in a bidirectional wind regime. J. Fluid Mech. 862, 490–516 (2019).
Bourke, M. C. & Goudie, A. S. Varieties of barchan form in the Namib Desert and on Mars. Aeol. Res. 1, 45–54. https://doi.org/10.1016/j.aeolia.2009.05.002 (2009).
Silvestro, S., Vaz, D. A., Fenton, L. K. & Geissler, P. E. Active aeolian processes on Mars: A regional study in Arabia and Meridiani Terrae. Geophys. Res. Lett. 38. https://doi.org/10.1029/2011GL048955 (2011).
Fenton, L. K. Updating the global inventory of dune fields on Mars and identification of many small dune fields. Icarus 352, 114018. https://doi.org/10.1016/j.icarus.2020.114018 (2020).
Tsoar, H., Greeley, R. & Peterfreund, A. R. Mars: The north polar sand sea and related wind patterns. J. Geophys. Res. 84, 8167–8180. https://doi.org/10.1029/JB084iB14p08167 (1979).
Tsoar, H. & Parteli, E. J. R. Bidirectional winds, barchan dune asymmetry and formation of seif dunes from barchans: A discussion. Environ. Earth Sci. 75. https://doi.org/10.1007/s12665-016-6040-4 (2016).
Zhang, Z., Dong, Z., Hu, G. & Parteli, E. J. R. Migration and morphology of asymmetric barchans in the central Hexi Corridor of Northwest China. Geosciences 8. https://doi.org/10.3390/geosciences8060204 (2018).
Azzaoui, M. A., Adnani, M., El Belrhiti, H., Chaouki, I. E. & Masmoudi, C. Detection of barchan dunes in high resolution satellite images. Int. Arch. Photogram. Remote Sens. Spatial Inf. Sci. XLI-B7, 153–160. https://doi.org/10.5194/isprs-archives-XLI-B7-153-2016 (2016).
Carrera, D., Bandeira, L., Santana, R. & Lozano, J. A. Detection of sand dunes on Mars using a regular vine-based classification approach. Knowl.-Based Syst. 163, 858–874. https://doi.org/10.1016/j.knosys.2018.10.011 (2019).
Rubanenko, L., Pérez-López, S., Schull, J. & Lapôtre, M. G. A. Automatic detection and segmentation of barchan dunes on Mars and Earth using a convolutional neural network. IEEE J. Sel. Top. Appl. 14, 9364–9371. https://doi.org/10.1109/JSTARS.2021.3109900 (2021).
Kowalczyk, A. Support vector machines succinctly (Syncfusion Inc, 2017).
He, K., Gkioxari, G., Dollár, P. & Girshick, R. Mask R-CNN. In 2017 IEEE International Conference on Computer Vision (ICCV), 2980–2988. https://doi.org/10.1109/ICCV.2017.322 (2017).
Baas, A. C. W. & Delobel, L. A. Desert dunes transformed by end-of-century changes in wind climate. Nat. Clim. Chang. 12, 999–1006. https://doi.org/10.1038/s41558-022-01507-1 (2022).
Assis, W. R., Borges, D. S. & Franklin, E. M. Barchan dunes cruising dune-size obstacles. Geophys. Res. Lett. 50, e2023GL104125. https://doi.org/10.1029/2023GL104125 (2023).
WOODTV8. Silver Lake Dunes swallow up house. https://www.youtube.com/watch?v=ifxzsMA4IWY (2017).
Assis, W. R. & Franklin, E. M. Experimental data on binary interactions of subaqueous barchans. Mendeley Data. https://doi.org/10.17632/jn3kt83hzh.3 (2020).
High Resolution Imaging Science Experiment (HiRISE). HiRISE Operations Center, University of Arizona. https://www.uahirise.org/.
Mars Reconnaissance Orbiter (MRO) Context Camera (CTX) dataset. Applied Coherent Technology (ACT) Corporation. https://www.actgate.com/.
Gaertner, E. Lake Michigan sand dune threatens to swallow another silver lake cottage. mlive (2017).
Alvarez, C. A. & Franklin, E. M. Birth of a subaqueous barchan dune. Phys. Rev. E 96, 062906. https://doi.org/10.1103/PhysRevE.96.062906 (2017).
Alvarez, C. A. & Franklin, E. M. Role of transverse displacements in the formation of subaqueous barchan dunes. Phys. Rev. Lett. 121, 164503. https://doi.org/10.1103/PhysRevLett.121.164503 (2018).
Alvarez, C. A. & Franklin, E. M. Horns of subaqueous barchan dunes: A study at the grain scale. Phys. Rev. E 100, 042904. https://doi.org/10.1103/PhysRevE.100.042904 (2019).
Alvarez, C. A., Cúñez, F. D. & Franklin, E. M. Growth of barchan dunes of bidispersed granular mixtures. Phys. Fluids 33, 051705 (2021).
Assis, W. R. & Franklin, E. M. Experimental data on barchan-barchan interaction at the grain scale. Mendeley Data. https://doi.org/10.17632/f9p59sxm4f.1 (2021).
Assis, W. R., Cúñez, F. & Franklin, E. M. Experimental data on barchan-barchan interaction with bidisperse grains. Mendeley Data. https://doi.org/10.17632/sbjtzbzh9k.1 (2021).
Cúñez, E. A. & Franklin, E. M. Experimental dataset on "Detection and tracking of barchan dunes using artificial intelligence". Mendeley Data. https://doi.org/10.17632/8wh3w3y899 (2023).
Yue, X. et al. Improved YOLOv8-seg network for instance segmentation of healthy and diseased tomato plants in the growth stage. Agriculture 13. https://doi.org/10.3390/agriculture13081643 (2023).
Aboah, A., Wang, B., Bagci, U. & Adu-Gyamfi, Y. Real-time multi-class helmet violation detection using few-shot data sampling technique and YOLOv8. In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 5350–5358. https://doi.org/10.1109/CVPRW59228.2023.00564 (2023).
Soylu, E. & Soylu, T. A performance comparison of YOLOv8 models for traffic sign detection in the Robotaxi-full scale autonomous vehicle competition. Multimed. Tools Appl. https://doi.org/10.1007/s11042-023-16451-1 (2023).
Cúñez, E. A. & Franklin, E. M. CNN training of experimental images for “Detection and tracking of barchan dunes using artificial intelligence”. Mendeley Data. https://doi.org/10.17632/brgxgtpz92 (2023).
Copernicus EU. Copernicus Browser. https://browser.dataspace.copernicus.eu/.
Cúñez, E. A. & Franklin, E. M. CNN training of satellite images for “Detection and tracking barchan dunes using artificial intelligence”. Mendeley Data. https://doi.org/10.17632/v4yntwdnjk (2023).
Acknowledgements
The authors are grateful to FAPESP (Grant Nos. 2018/14981-7 and 2021/11470-4) and to CNPq (Grant No. 405512/2022-8) for the financial support provided. The authors thank Fernando David Cúñez (University of Rochester) for the help with measuring objects in satellite images.
Author information
Contributions
E.A.C. wrote the numerical scripts, carried out the computations (and CNN training), carried out some of the experiments, and processed the data. E.M.F. conceived the work, analyzed the data, was responsible for the funding, and wrote the manuscript. Both authors reviewed the manuscript.
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher's note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary Information
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Cúñez, E.A., Franklin, E.M. Detection and tracking of barchan dunes using artificial intelligence. Sci Rep 14, 18381 (2024). https://doi.org/10.1038/s41598-024-67893-y