Deep neural networks for active wave breaking classification

Wave breaking is an important process for energy dissipation in the open ocean and coastal seas. It drives beach morphodynamics, controls air-sea interactions, determines when ship and offshore structure operations can occur safely, and influences the retrieval of ocean properties from satellites. Still, wave breaking lacks a proper physical understanding, mainly due to scarce observational field data. Consequently, new methods and data are required to improve our current understanding of this process. In this paper we present a novel machine learning method to detect active wave breaking, that is, waves that are actively generating visible bubble entrainment in video imagery data. The present method is based on classical machine learning and deep learning techniques and is made freely available to the community alongside this publication. The results indicate that our best performing model had a balanced classification accuracy score of ≈ 90% when classifying active wave breaking in the test dataset. An example of a direct application of the method is a statistical description of geometrical and kinematic properties of breaking waves. We expect that the present method and the associated dataset will be crucial for future research related to wave breaking in several areas, which include but are not limited to: improving operational forecast models, developing risk assessment and coastal management tools, and refining the retrieval of remotely sensed ocean properties.

Wave breaking is one of the most challenging water wave phenomena to investigate. Despite nearly two centuries of research, the current practical description of the process is still mostly empirical 1,2,3,4. Precise wave breaking modelling requires explicit knowledge of the wave phase speed and the fluid velocity distribution on the wave crest, which can only be obtained by numerically solving the Navier-Stokes equations in a framework that is currently too computationally expensive for practical applications 5. Consequently, current large-scale state-of-the-art wave models rely on statistical approaches (mostly the spectral approach) to represent the waves 6,7,8. This type of model parameterizes wave breaking as a function of known parameters such as the wind speed 6, the local wave height to water depth ratio 7, or semi-empirical breaking wave height probability distributions 2,9. Due to a lack of observed data, the constants involved in these models have been derived from limited datasets that may not adequately represent the natural environment. For example, a recent study has shown that popular surf zone parametric wave breaking models incorrectly represented the fraction of broken waves in their formulations, with errors > 50%, despite these models being able to adequately represent surf zone energy dissipation, possibly due to parameter tuning 10. This paper aims to start addressing the data unavailability issue by providing a reliable and reproducible method to detect and track waves that are actively breaking in video imagery data.
Wave breaking directly affects several environmental phenomena. For example, wave breaking is considered the main driver for air-sea exchanges 11, being often described as a function of Phillips' 12 Λ(c)dc parameter. This parameter is defined as the average total length per unit surface area of breaking fronts that have velocities in the range c to c + dc, and its moments are assumed to correspond to different phenomena such as whitecap coverage (second moment), rate of air entrainment (third moment), and momentum flux (fourth moment). As previously identified 13, different interpretations of how data are processed to obtain Λ(c)dc resulted in differences of 175% in the first moment and 300% in the fifth moment of Λ(c) 13,14. This issue does not seem to limit the applicability of Phillips' framework. A recent study, for example, used a wave breaking parameterization fully based on Λ(c)dc to model whitecap coverage off the coast of California 15. To the authors' knowledge, however, no study has assessed the impact that errors in wave breaking detection have on measured Λ(c) distributions and how such errors would impact the conclusions that have been drawn from these models (for example, the assumption that Λ(c) is a simple function of wind speed 11). More importantly, in the case which new observations

Method

Model definition. In this study, we have developed a novel method to detect active wave breaking in video imagery data. Similarly to previous studies, we exploit the fact that breaking wave crests generate characteristic white foam patches from bubble entrainment 30,31. Differently from the vast majority of previous methods, here we make a clear distinction between active wave breaking (that is, visible foam being actively generated by wave breaking), passive foam, and all other non-wave breaking instances.
To the authors' knowledge, only the methods of Mironov and Dulov 30 and Kleiss and Melville 32 have previously attempted to distinguish between active wave breaking and passive foam. Both their methods start similarly to ours (see next paragraph) by identifying bright pixels in a given series of images via thresholding. Next, interconnected pixels are connected in space and time using a nearest neighbor search (Mironov and Dulov 30 ) or by extracting connected contours (Kleiss and Melville 32 ). In both previous methods, the separation between active and passive wave breaking is then done by empirically defining foam expansion direction and velocity limits. The present method aims to remove empirical steps in active wave breaking detection by using data-driven methods and deep neural networks instead of experiment-specific parameters.
Our method starts by naïvely identifying wave breaking candidates in a given input image. This was done by identifying bright (that is, white) pixels in the image using the local thresholding algorithm available from the OpenCV 33 library with its default parameters. Interconnected regions of bright pixels in the image were then clustered using DBSCAN 34 and a minimum enclosing ellipse 35 was fitted to each identified cluster. The DBSCAN step requires the user to define a minimum number of pixels to be clustered (default value of 10 for all data used here) and a maximum distance allowed between pixels (default value of 5 for all data used here). This step resulted in large quantities of bright pixel patches being detected. At this stage, it was not possible to determine whether a given identified cluster of pixels represented active wave breaking, passive foam, or any other instance of bright pixels (for example, birds, boats, coastal structures, sun glints or white clouds). Generically, this step can be thought of as a feature extraction step and can be replaced by any other equivalent approach (for example, image feature extractors). To avoid exponential growth in memory consumption generated by DBSCAN, the input image may be subdivided into blocks of regular size. This step was easily parallelized, with performance increasing linearly with the number of computing threads.
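The candidate-detection step above can be sketched in a few lines of Python. This is a simplified illustration, not the authors' implementation: a global threshold stands in for OpenCV's local thresholding, scikit-learn's DBSCAN is used for clustering, and the enclosing ellipse is approximated from the pixel covariance rather than fitted as a true minimum enclosing ellipse.

```python
import numpy as np
from sklearn.cluster import DBSCAN


def find_breaking_candidates(image, threshold=0.75, min_samples=10, eps=5):
    """Naive wave breaking candidate detection (sketch).

    `image` is a 2-D grey-scale array scaled to [0, 1]. The defaults for
    min_samples (10) and eps (5) follow the values reported in the text.
    """
    # Step 1: threshold bright (white-ish) pixels.
    rows, cols = np.nonzero(image > threshold)
    pixels = np.column_stack([rows, cols])
    if len(pixels) == 0:
        return []
    # Step 2: cluster interconnected bright pixels.
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(pixels)
    ellipses = []
    for label in set(labels) - {-1}:  # -1 marks noise
        cluster = pixels[labels == label]
        center = cluster.mean(axis=0)
        # Step 3: approximate an enclosing ellipse from the covariance
        # (a stand-in for the minimum enclosing ellipse fit in the paper).
        cov = np.cov(cluster.T) + 1e-9 * np.eye(2)
        eigvals, eigvecs = np.linalg.eigh(cov)       # ascending order
        a, b = 2.0 * np.sqrt(eigvals[::-1])          # semi-axes, major first
        angle = np.degrees(np.arctan2(eigvecs[1, -1], eigvecs[0, -1]))
        ellipses.append({"center": center, "a": a, "b": b, "angle": angle})
    return ellipses
```

Each returned ellipse is then used to crop a 256 × 256 px subset for the classification step described next.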
The second step of the method consisted of training a deep convolutional neural network to distinguish between active wave breaking and passive foam (and all other non-desired occurrences). This step can be understood as a binary classification step. Figure 1 shows a schematic representation of the present method. From the original images, randomly selected subsets of size 256 × 256 pixels centered on the fitted ellipses were extracted and manually labelled as either active wave breaking (label 1, or positive) or otherwise (label 0, or negative). The present training dataset was generated using raw video imagery data from Guimarães et al. 36 and consisted of 19,000 training and 1300 testing images (see Table 1). The training dataset is further split into 80% training and 20% validation data 37. The validation dataset is reserved for future hyper-parameter fine-tuning. Note that the present dataset has a class imbalance of ≈ 90% towards the negative label, that is, roughly nine negative samples for each active wave breaking sample.

Figure 1. Schematic representation of a deep convolutional neural network. The input layer consists of an active wave breaking candidate and has shape 256 × 256 × 3 (image height, image width and number of RGB channels). In the case of a grey-scale image, the single grey channel was triplicated. The red dashed rectangle in the analyzed image shows the region of interest which, in this particular case, was equal to the stereo video reconstruction area. The intermediary convolutional layers (or backbones) are interchangeable, with different options available (see text for details). The function of the convolutional layers is to extract features from the input image by using convolutions (3 × 3 in this example) and max pooling 49 (that is, selecting the brightest pixel in a given window).
The last convolutional layer is flattened (that is, turned into a one-dimensional vector) and is connected to two fully-connected (that is, multi-layer perceptron-like) layers and one final classification layer. A 50% dropout (that is, random selection of neurons in a layer) is applied after each fully connected layer. The final classification layer has one unit and uses sigmoid activation with a threshold of 0.5.

Table 1. Data characterization summary table. f is the sampling frequency in Hertz, D is the duration of the experiment in seconds, H s is the significant wave height computed from the wave spectrum, T p1 and T p2 are, respectively, the peak wave periods of the first and second spectral partitions computed following Hanson and Jensen 50, D p is the peak wave direction, and U 10 is the wind speed measured or converted (denoted by the *) to a height of 10 m above the sea surface using Large and Pond's 51 formula. Tr. S. and Ts. S. are the sample sizes of the train and test datasets, respectively.

www.nature.com/scientificreports/

(see our Github code repository https://github.com/caiostringari/deepwaves for usage guidance). All of these backbones make use of convolutional layers to gradually extract information from the input images. The main differences between them are how many layers they have, the size of the convolution windows, how data are normalized in each layer, and how each layer connects to the other layers in the stack. For example, VGG16 is 16 layers deep, uses 3 × 3 convolution windows and has no normalization. InceptionResNetV2 is 164 layers deep, uses a mix of 5 × 5, 3 × 3 and 1 × 1 convolution windows and uses batch normalization 44 to help avoid overfitting. Residual Networks (namely, ResNet50V2) not only connect adjacent layers but also take into account errors (residuals) from previous layers. In general, more modern architectures (namely, EfficientNet) are wider (mainly by having parallel convolution windows) as well as much deeper than older architectures (namely, VGG16). The final top layers of the network were fixed for all backbones and consisted of flattening the last convolutional layer, two fully-connected layers with 50% dropout 45, and a final classification layer with sigmoid activation. The optimization step (training) was done using the Adam 46 implementation of the stochastic gradient descent method 47, and binary cross-entropy was used as the loss function. Note that this step must be computed using a graphics processing unit (GPU) in order to achieve feasible computation times. The models took from three (VGG16) to twelve (EfficientNetB5) hours to train using an NVIDIA GTX 1080 GPU with a batch size of sixty-four images. After the neural networks were sufficiently trained, the best performing network (VGG16, see below) was used to classify all the naïvely identified wave breaking candidates, and only the events classified as active wave breaking were kept for further analyses.
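The fixed top layers and training configuration described above can be sketched with TensorFlow/Keras. This is a hedged illustration rather than the authors' exact code: the number of units in the fully-connected layers is not stated in the text, so 256 is an assumption, and `weights=None` is used here to avoid downloading pre-trained weights.

```python
import tensorflow as tf


def build_classifier(backbone_name="VGG16", input_shape=(256, 256, 3),
                     dense_units=256):
    """Sketch of the architecture: an interchangeable convolutional
    backbone followed by the fixed top layers (flatten, two fully-connected
    layers with 50% dropout each, and a sigmoid classification layer).

    `dense_units` is an assumption; the text does not state the width of
    the fully-connected layers.
    """
    backbone_cls = getattr(tf.keras.applications, backbone_name)
    backbone = backbone_cls(include_top=False, weights=None,
                            input_shape=input_shape)
    x = tf.keras.layers.Flatten()(backbone.output)
    for _ in range(2):  # two fully-connected layers, each with 50% dropout
        x = tf.keras.layers.Dense(dense_units, activation="relu")(x)
        x = tf.keras.layers.Dropout(0.5)(x)
    out = tf.keras.layers.Dense(1, activation="sigmoid")(x)
    model = tf.keras.Model(backbone.input, out)
    # Adam optimizer and binary cross-entropy loss, as in the paper.
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC(name="auc")])
    return model
```

Swapping the backbone is a matter of passing, for example, `"ResNet50V2"` or `"MobileNetV2"` as `backbone_name`.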
Although VGG16 was chosen for presenting the results, the performance of the other architectures is nearly identical to VGG16's in real-world applications. Finally, note that user-tunable parameters for the neural networks (for example, learning rate and neuron activation thresholds) were kept unchanged from their default values in the TensorFlow library 48. This implies that more aggressive parameter optimization could improve the results presented here even further. For the sake of brevity, we refer the reader to the project's code repository for guidance on how to select hyper-parameters.
The last step of the method consisted of grouping the active wave breaking events in space and time (note that up to this stage bright pixels are only grouped in space). Time-discrete wave breaking events had their bounding ellipse region filled in pixel space (i, j) and were stacked in time (t), resulting in a point-cloud-like three-dimensional (i, j, t) structure. The DBSCAN 34 algorithm was then used to cluster these data and obtain uniquely labelled clusters. The two parameters for the DBSCAN algorithm, that is, the minimum number of samples in each cluster (n min) and the minimum distance allowed between two samples (eps), were set equal to the minimum number of points inside a fitted ellipse among all ellipses (n min) and to the sampling frequency in Hertz (eps). These values for n min and eps were constant across the analyzed datasets. Note that this final step can be replaced by any other density-based clustering algorithm or by more sophisticated algorithms such as SORT 52 (which is also available in the project's code repository but was not used here).
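The space-time grouping step can be sketched with scikit-learn's DBSCAN under the parameter choices described above. This is an illustration, not the authors' implementation; the helper for event durations is an addition for clarity and is not part of the original method.

```python
import numpy as np
from sklearn.cluster import DBSCAN


def cluster_events(points_ijt, n_min, fps):
    """Group per-frame detections into space-time breaking events (sketch).

    `points_ijt` is an (N, 3) array of (i, j, t) samples: the filled
    pixels of each fitted ellipse stacked over frame number t. Following
    the text, min_samples is set to the smallest number of pixels inside
    any fitted ellipse (n_min) and eps to the sampling frequency in Hertz
    (fps). Returns one integer label per sample (-1 marks noise).
    """
    return DBSCAN(eps=fps, min_samples=n_min).fit_predict(points_ijt)


def event_durations(points_ijt, labels, fps):
    """Duration in seconds of each uniquely labelled event (illustrative)."""
    return {
        label: (points_ijt[labels == label, 2].ptp() + 1) / fps
        for label in set(labels) - {-1}
    }
```

Each unique label corresponds to one breaking event, from which quantities such as T br and A br can then be derived.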
Note that up to the clustering step, all calculations were done in the pixel domain and "time" was obtained from sequential frame numbers. To convert between pixel and metric coordinates, the grids constructed using the stereo-video dataset available from Guimarães et al. 36 were used in the present study. If stereo video data are not available, such conversions can be done knowing the camera position in real-world (that is, metric) coordinates, which is usually obtained using surveyed ground control points 53,54. Conversion from sequential frame numbers to time (in seconds) can be done by knowing the sample rate (in frames per second, that is, Hertz) of the data. The total amount of computational time required to process 20 min of raw video recorded at 10 Hz with an image size of 5 megapixels is approximately two hours on a modern six-core computer (assuming that a pre-trained neural network is available). Much shorter processing times are achievable for smaller image sizes and higher numbers of computing threads.
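When stereo grids are unavailable, the conversions mentioned above can be illustrated with a least-squares fit to surveyed ground control points. This is a simplified sketch under an affine-camera assumption; a real georectification typically uses a projective homography or full camera calibration instead.

```python
import numpy as np


def fit_affine(pixel_uv, world_xy):
    """Least-squares affine mapping from pixel to metric coordinates using
    ground control points (sketch; real georectification would normally
    use a projective homography rather than the affine model assumed here).
    """
    uv1 = np.column_stack([pixel_uv, np.ones(len(pixel_uv))])
    coeffs, *_ = np.linalg.lstsq(uv1, world_xy, rcond=None)
    return coeffs  # shape (3, 2); apply as uv1 @ coeffs


def frames_to_seconds(frame_numbers, fps):
    """Convert sequential frame numbers to seconds using the sample rate."""
    return np.asarray(frame_numbers, dtype=float) / fps
```

With six or more well-distributed control points the affine fit is overdetermined, and the residuals returned by `lstsq` give a quick check on the rectification quality.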
Evaluation metrics. Due to the imbalanced nature of the dataset, the classification accuracy score in isolation is not an appropriate metric to evaluate the present test dataset. For instance, a classifier that guesses that all labels are negative (that is, 0) would automatically obtain a high score (≈ 90%). To properly assess the performance of the classifier, more robust metrics were defined as follows:

• True Positives (TP) and True Negatives (TN) are samples that were correctly classified. False Positives (FP), or Type I errors, are false samples that were incorrectly classified as true, and False Negatives (FN), or Type II errors, are true samples that were incorrectly classified as false.
• Accuracy is the percentage of examples correctly classified considering both classes, that is, Accuracy = (TP + TN)/(T + N), where T and N are the total number of positive and negative samples, respectively.
• Precision is the percentage of predicted positives that were correctly classified, that is, Precision = TP/(TP + FP).
• Recall is the percentage of actual positives that were correctly classified, that is, Recall = TP/(TP + FN).
• Area under the curve (AUC) is the area under the curve defined by plotting the FP rate against the TP rate (also referred to as the receiver operating characteristic curve). This metric indicates the probability of a classifier ranking a random positive sample higher than a random negative sample.
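The metrics above can be computed directly from the confusion counts. A minimal pure-Python sketch (AUC is omitted because it requires ranked classifier scores rather than hard 0/1 labels):

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, precision and recall from 0/1 labels (sketch)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        # Accuracy = (TP + TN) / (T + N)
        "accuracy": (tp + tn) / len(y_true),
        # Precision = TP / (TP + FP); guarded against division by zero
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        # Recall = TP / (TP + FN)
        "recall": tp / (tp + fn) if tp + fn else 0.0,
    }
```

On an imbalanced sample, an "always negative" classifier scores high accuracy but zero recall, which is precisely why accuracy alone is insufficient here.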
All metrics described above were monitored during the training process and were individually assessed to rank model performance. The training curves shown in Fig. 2a were used to assess when overfitting, that is, decreases in the loss function value for the training dataset that were not reflected in the validation dataset, started to occur. Training epochs after overfitting started were discarded. Here we favor the AUC curves as shown in Fig. 2b to indicate better performing models because AUC is a more robust metric than the classification score and at the same time presents a smooth evolution with training epochs. Finally, a confusion matrix (or table of confusion) as shown in Fig. 2c was plotted for each model to assess Type I and Type II errors.

Results
Classifier performance. From the analysis of all training curves, confusion matrices, and Table 2, the best performing backbone architecture during training was ResNet50V2 by a considerable margin (AUC = 0.989). These results, however, did not translate to the validation dataset (AUC = 0.873). Considering only the validation data, VGG16 was the best performing backbone, with AUC = 0.946. Considering only the test dataset, the best performing model was also VGG16 (AUC = 0.855). Overall, VGG16 was selected as the best performing model, and the results presented in the next sections use this backbone. Other evaluation metrics such as the accuracy score, precision, and recall closely followed the evolution of AUC with training epochs (compare Fig. 2a and b, for example). In general, as the loss value decreased, the number of false positives decreased, which made the precision and recall increase. This behavior was consistent for all models. Given that different architectures may perform better for other datasets and further optimization could change the model ranking presented here, all pre-trained models and their training metric evolutions are made available in the project's code repository. Finally, it is worth mentioning that larger models (for example, VGG19, ResNet152 and EfficientNetB7) could achieve better results, but this was not attempted here due to hardware limitations (that is, these models required more memory than what was available on the NVIDIA GTX 1080 GPU used in this study).

Figure 3 shows the results of the application of the best performing model architecture (VGG16) on La Jument (2018/01/03 09:39) and Black Sea (2011/10/04 09:38) data. Visual inspection of these results confirmed the ability of the neural network to correctly classify the naïvely identified wave breaking candidates and only keep instances that represented active wave breaking.
Moreover, the same neural network was able to correctly classify active wave breaking events for the rogue waves seen at La Jument 29 and for the small wind-generated breakers seen in the Black Sea data. This result highlights the ability of the neural network to generalize well on the dataset, which is a difficult result to achieve. From the analysis of the training curves, the averaged classification error (accounting for the imbalance in the data) should be of the order of ≈ 10%, which, to the authors' knowledge, has not been assessed by other wave breaking detection methods. We strongly recommend that future research report active wave breaking detection errors when used to assess or further develop models that depend on wave breaking data.

Wave breaking events detected by the method of Mironov and Dulov 30 were manually classified as true or false and compared to the labels predicted by our method. To the authors' knowledge, these data are the only currently available data that have been classified by both methods as well as manually. Figure 4 shows the result of the comparison between models. On average, our method has a relative error ≈ 50% lower than Mironov and Dulov's 30 method, with an averaged absolute reduction in error of ≈ 15%. The results in Fig. 4 are also consistent with the results in Table 2, which showed that our model had errors of the order of ≈ 15% when considering the validation and test datasets. Note that all tested neural network architectures performed very similarly, with only a slight advantage for InceptionResNetV2. It is also worth mentioning that Mironov and Dulov's 30 method was designed and optimized to work specifically with data from the Black Sea, and it is not guaranteed that it will generalize to other datasets. Our method, on the contrary, has been shown to generalize well across very distinct datasets (see Fig. 3, for example) and with further optimization could achieve even better performance.

Wave breaking statistics. In this section we briefly present examples of wave breaking statistics that can
be directly derived from the classified wave breaking data. For brevity, the analysis is limited to data from the Black Sea because our classifier performed best at this location (classification errors < 10% considering both the 2011 and 2013 experiments). Five quantities are analysed: the wave breaking duration (T br), the wave breaking area (A br), the major (a) and minor (b) axes of the fitted ellipses (representative of the wave breaking lengths at their maximum during the active part of the wave breaking), and Phillips' distribution Λ(c)dc. These quantities were obtained directly from space-time clustered wave breaking events, with the exception of the cumulative wave breaking area (A br), which was calculated from the projections to metric coordinates of the pixels clustered in the first step of the method. The results of these analyses are shown in Fig. 5.
The wave breaking duration (T br, normalized by the wave peak period T p, Fig. 5a) roughly followed a shifted Gamma probability density function (PDF) and had a mean value of 0.12 and most frequent value (mode) of 0.13. This result shows that the active part of the wave breaking process happens very quickly. The wave breaking area (A br, Fig. 5b) closely followed a Pareto distribution, which indicates that large wave breaking events are relatively rare in the data. The ratio between the major and minor axes of the fitted ellipses (a/b, Fig. 5c) followed a Beta PDF and had a mean of 2.5 and mode of 1.9, which indicates that the ellipses' major axis is approximately double the size of the minor axis. Assuming a negligible wave front angle, the wave breaking area scaling relation from Duncan 56 (A br /b 2, Fig. 5d) also followed a Beta PDF and had a mean of 0.1 and mode of 0.8, which is consistent with the previously reported value 56 (0.1 ± 0.01). The wave breaking area (Fig. 5e) showed a quadratic increase with wave breaking event duration, which is trivial but does not seem to have been directly shown before. Finally, Fig. 5f shows the Λ(c)dc distributions obtained using our method and considering that the ellipse major axis is representative of the wave breaking length. The observed distributions greatly deviated from the theoretical c −6 relation 12. Note that all PDF fits shown here were statistically significant at the 95% confidence level using the two-tailed Kolmogorov-Smirnov test. See Table 3 for the description of the parameters of the PDFs presented in this section and the Discussion section for the contextualization of these results and the possible implications that they have for future research.
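The distribution-fitting procedure used in this section can be illustrated with SciPy. The sample below uses synthetic Pareto-distributed stand-ins for the measured breaking areas, since the actual A br data are not reproduced here:

```python
import numpy as np
from scipy import stats

# Stand-in for measured breaking areas: synthetic Pareto samples (the
# paper's A_br measurements are not reproduced here; b=2.5 is arbitrary).
rng = np.random.default_rng(42)
areas = stats.pareto.rvs(b=2.5, size=2000, random_state=rng)

# Fit a candidate PDF and check the fit with the two-tailed
# Kolmogorov-Smirnov test, as done for every distribution in this section.
params = stats.pareto.fit(areas)
ks_result = stats.kstest(areas, "pareto", args=params)
print(f"KS statistic = {ks_result.statistic:.3f}, "
      f"p-value = {ks_result.pvalue:.3f}")
```

The same pattern (fit, then KS test) applies to the Gamma and Beta fits for T br and the ellipse axis ratios, swapping `stats.pareto` for `stats.gamma` or `stats.beta`.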

Discussion
We have presented a new method to detect active wave breaking in video imagery data that is robust and easily transferable to future studies. Overall, VGG16 was the best performing architecture, which is a surprising result given that VGG is a considerably older architecture that has been superseded by more recent architectures such as ResNets and EfficientNets 43. Also surprisingly, EfficientNet was one of the worst performers considering the test dataset, despite the state-of-the-art results that this architecture has achieved in recent years 43. One explanation could be that, given VGG16 is the model with the highest number of parameters (despite being the shallowest network), it adapted better to the characteristics of the present dataset. Another explanation could be that EfficientNet is currently overfit on the ImageNet 58 dataset that was used to train this network, hence its high reported classification scores. Another aspect to take into consideration is the speed at which the models can predict on new data (inference). While VGG16 was the best performing model, its size slows down inference time. In this regard, MobileNetV2 offers real-time inference speed at 10 Hz for small image sizes (for example, 512 × 512 px). Consequently, a model based on MobileNetV2 could be used to detect active wave breaking in an operational fashion in, for example, offshore floating wind turbines that are susceptible to wave breaking impacts, and used to adjust anchor and cable settings in real-time.
From the analysis of the application of the method on real-world data, it was visually observed that small active wave breaking events were not always detected, particularly for the Black Sea data. There are two possible explanations for this error. The simplest explanation could be that this type of event is under-represented in the training dataset. The solution for this issue consists of adding more data representative of these instances to the training dataset. The second possibility is that the image size becomes too small in the deeper layers of the network, which makes it impossible for the network to learn these events (see below for further discussion). A solution for this issue could be to increase the input image size, but this was not attempted here due to hardware constraints (that is, memory limitation, as discussed above).
Neural networks have historically been seen as black boxes in which only the final classification outputs are relevant. There has been, however, increasing interest in understanding how neural networks learn. One technique that is of particular interest for the present paper is Gradient-weighted Class Activation Mapping (Grad-CAM) 59. Briefly, this technique shows which regions of a convolutional layer are most important for the results obtained by the classifier. Figure 6 shows the results of Grad-CAM applied to examples for all unique locations from Table 1, considering VGG16's last convolutional layer. When considering only actively breaking waves (Fig. 6a to d), it is evident that VGG16 closely mimicked how a human would classify these data, that is, it directly searched for the regions of the image that corresponded to active wave breaking. In the case of passive foam (Fig. 6e to h), VGG16 seemed to use larger portions of the image but at the same time focused on the flocculent foam seen in the images, as a human classifier would do. In general, these results show that our model truly learned how to classify the images and is not merely guessing the labels.
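Grad-CAM can be sketched in a few lines of TensorFlow. This is a generic illustration, not the authors' implementation; the convolutional layer name must match the backbone in use (for Keras' VGG16 the last convolutional layer is typically named "block5_conv3").

```python
import numpy as np
import tensorflow as tf


def grad_cam(model, image, conv_layer_name):
    """Grad-CAM heatmap for a binary (sigmoid) classifier (sketch).

    Returns a heatmap over the chosen convolutional layer's spatial grid,
    normalized to [0, 1], showing which regions drove the prediction.
    """
    # Map the input to both the target conv layer's output and the score.
    grad_model = tf.keras.Model(
        model.input, [model.get_layer(conv_layer_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, pred = grad_model(image[None, ...])
        score = pred[:, 0]  # sigmoid output for the positive class
    grads = tape.gradient(score, conv_out)
    # Channel importance: global-average-pool the gradients, then take a
    # weighted sum of the feature maps and keep only positive influence.
    weights = tf.reduce_mean(grads, axis=(1, 2))
    cam = tf.reduce_sum(conv_out * weights[:, None, None, :], axis=-1)
    cam = tf.nn.relu(cam)[0]
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()
```

Upsampling the returned heatmap to the input resolution and overlaying it on the image reproduces visualizations in the style of Fig. 6.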
The promising results presented here indicate that the current method should be extended to an object detection framework. Such a framework would eliminate the need for the image thresholding and DBSCAN steps. This implementation could be done by using a strongly-supervised architecture such as UNet 60 , or by using a weakly-supervised method derived from Grad-CAM, for example. As a final recommendation regarding the machine learning aspect of this paper, we strongly encourage future researchers to add samples to the training dataset that matches their specific research needs instead of blindly applying the provided pre-trained models.
We have also presented examples of wave breaking statistics that can be obtained using the proposed method. In general, the patterns observed here agreed with previously reported scaling factors that support the idea of wave breaking self-similarity. For example, Fig. 5d directly showed that the scaling parameter A br /b 2 approaches the constant 0.1 value from Duncan's 56 laboratory experiments. Another variable that showed signs of self-similar behavior was the wave breaking area (A br), which was very well described by a Pareto distribution and presented a steady quadratic increase with wave breaking duration (T br). Extensive research 11,15 has been grounded on the assumption that wave breaking is self-similar, but inconsistencies with this approach have been reported before 14. Contrary to other studies 11,31, however, the Λ(c) distributions obtained here did not closely match the theoretical c −6 relation for a sea-state in equilibrium. As reported before 13, these differences may be due to the fact that here we only considered actively breaking waves in our analysis, whereas other studies seem to track the speeds of both actively breaking waves and passive foam 31,32. Another possibility is that other phenomena not accounted for by Phillips' 12 theory (for example, wave modulation 61) play an important role in wave breaking. A future publication that investigates the mechanisms related to wave breaking kinematics using the method and data obtained here will follow.

Conclusion
We described a novel method to detect and classify actively breaking waves in video imagery data. Our method achieved promising results when assessed using several different metrics. Further, by analyzing the deeper layers of our neural network, we showed that the model mimicked how a human classifier would perform a similar classification task. As an application of the method, we presented wave breaking statistics and breaking wave crest length distributions. Our method can thus be useful for the investigation of several ocean-atmosphere interaction processes and should, in the future, lead to advancements in operational wave forecast models, gas-exchange models, and general safety at sea. Finally, we strongly recommend that future research focus on standardized methods to study wave breaking so that a consistent dataset can be generated and used freely and unambiguously by the community.