When it comes to earthquakes, large, destructive ones dominate the headlines. But seismologists have long known that small quakes, which are created by the near-constant slipping of fault lines and often go unnoticed even by scientists, can illuminate crucial details about all kinds of earthquakes, even really powerful ones.
Now, a team of researchers has used machine learning and supercomputers to spot millions of these imperceptible quakes — as small as magnitude 0.3 — hiding in the seismological records of southern California, one of the most tectonically treacherous corners of the United States.
The data will allow researchers to improve their understanding of the physical processes that trigger hazardous earthquakes — ultimately boosting hazard-mitigation efforts.
This type of data mining is like gold-mining, says Ken Hudnut, a geophysicist at the US Geological Survey in Pasadena, California, who was not involved with the research. The project can extract ‘gold’ at record efficiencies, and might find riches that no one expected to dig up, he says. The results were published in Science1 on 18 April.
The team’s approach could also be applied elsewhere and to other geological features, such as volcanoes, say the researchers.
The project, called Mining Seismic Wavefields (MSW), began in 2016, and involves researchers at Stanford University in California, the University of Southern California (USC) in Los Angeles, the California Institute of Technology in Pasadena and the Georgia Institute of Technology in Atlanta.
The researchers came up with the idea some time ago, but two key ingredients were missing: vast and detailed seismic data sets created with modern instrumentation, and powerful computer systems to process the data efficiently. The elements finally came together a few years ago, and teams began developing techniques to find small earthquakes in new records.
The Caltech MSW group analysed seismic data known as waveforms, which represent earthquakes, and used their distinctive features to create templates that could ‘show’ algorithms what to look for in a large data set. They fed the templates into supercomputers and used them to detect the elusive fingerprints of tiny quakes in an ocean of noise.
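The template idea can be sketched as a normalized cross-correlation scan: slide a known quake waveform along the continuous record and flag offsets where the match is strong. The wavelet, noise levels and threshold below are illustrative inventions, not the team's actual data or pipeline.

```python
import numpy as np

def normalized_xcorr(template, trace):
    """Slide a template along a continuous trace, returning the Pearson
    correlation (in [-1, 1]) at every offset."""
    n = len(template)
    t = (template - template.mean()) / (template.std() * n)
    out = np.empty(len(trace) - n + 1)
    for i in range(len(out)):
        win = trace[i:i + n]
        out[i] = np.sum(t * (win - win.mean())) / win.std()
    return out

# Synthetic stand-in for a seismogram: two scaled copies of a 'quake'
# wavelet buried in background noise.
rng = np.random.default_rng(0)
wavelet = np.sin(np.linspace(0, 6 * np.pi, 50)) * np.hanning(50)
trace = 0.1 * rng.standard_normal(1000)
trace[200:250] += 0.5 * wavelet   # hidden event 1
trace[700:750] += 0.4 * wavelet   # hidden event 2, weaker

cc = normalized_xcorr(wavelet, trace)
detections = np.flatnonzero(cc > 0.6)   # offsets where the template matches
```

Because the correlation is amplitude-normalized, the weaker second event is recovered just as cleanly as the first — which is why template matching can pull tiny repeats of a known quake out of noise far louder than they are.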
Signal vs noise
But distinguishing between sources of low-level ground shaking is “anything but trivial”, says Yehuda Ben-Zion, acting director of the Southern California Earthquake Center at USC and co-leader of the MSW project. His group, which was studying the anatomy of seismic faults, found that the ground in California shakes constantly. Vibrations from planes, trees, houses and even antennas shaking in the wind generate rumbles that, to a seismograph, look like earthquakes and can make up 10–50% of signals in a set of seismological data.
To separate them out, the team developed machine-learning models and fed them millions of examples of both real quake signals and non-tectonic shaking. The software could “learn to correctly identify never-seen-before waveforms”, says team member Christopher Johnson, a geoscientist at the Scripps Institution of Oceanography in La Jolla, California.
The team also found that seismic records are not always good enough to create sufficient templates for the software to learn what an earthquake in a particular region looks like. So the researchers developed another algorithm, called Fingerprinting and Similarity Thresholding (FAST), which is based on a method developed for audio recognition. But unlike apps such as Shazam that recognize music on the basis of small clips, FAST doesn’t know what clips from the earthquake ‘song’ sound like. Instead, it looks for snippets in the entire data set that are similar to each other, and flags them as candidate quakes, says Karianne Bergen, a data scientist at Harvard University in Cambridge, Massachusetts, who co-developed the algorithm while doing her PhD at Stanford.
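The template-free idea can be sketched in miniature: compress each stretch of the record into a compact binary fingerprint, then flag pairs of stretches whose fingerprints are unusually alike. The spectral top-k fingerprint and brute-force pairwise comparison below are simplified stand-ins — the published FAST uses wavelet-based fingerprints with locality-sensitive hashing precisely to avoid comparing every pair.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)

def fingerprint(window, k=5):
    """Compact binary fingerprint: a 1 for each of the window's k strongest
    spectral bins (a crude stand-in for FAST's wavelet-based hashing)."""
    spec = np.abs(np.fft.rfft(window))
    fp = np.zeros(len(spec), dtype=bool)
    fp[np.argsort(spec)[-k:]] = True
    return fp

def jaccard(a, b):
    """Similarity of two fingerprints: shared bits over total bits set."""
    return np.sum(a & b) / np.sum(a | b)

# A continuous record in which the same unknown waveform is buried twice;
# with no template, the method must notice that two stretches resemble each other.
wavelet = np.sin(np.linspace(0, 8 * np.pi, 100)) * np.hanning(100)
trace = 0.05 * rng.standard_normal(600)
trace[100:200] += wavelet
trace[400:500] += wavelet

# Fingerprint non-overlapping windows, then flag unusually similar pairs
# as candidate repeating quakes.
prints = [fingerprint(w) for w in trace.reshape(-1, 100)]
candidates = [(i, j) for i, j in combinations(range(len(prints)), 2)
              if jaccard(prints[i], prints[j]) > 0.4]
```

Here the two windows containing the buried waveform share their strongest spectral bins and so exceed the similarity threshold, while unrelated noise windows rarely do — the essence of flagging "snippets that are similar to each other" without knowing what a quake looks like in advance.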
For the latest paper1, the MSW team applied these approaches to the entire continuous set of data recorded by the prolific Southern California Seismic Network, which has sensors all across the region.
The researchers found 1.81 million previously undetected quakes that took place in 2008–17 — a tenfold increase on the original number catalogued.
Ben-Zion suspects that with improvements in computing power and detection methods, MSW will be able to pick out many millions more quakes even tinier than those they are currently finding.
“We can expand the number of sensors, we can put them in boreholes deep underground to reduce the background noise levels, and we can improve our automated algorithms for finding these weak events in the data,” says lead author Zachary Ross, a geophysicist at Caltech.
The technique is “limited by the quality and paucity of data when compared with the high-quality data we have now”, says Lucile Bruhat, an earthquake physicist at the École Normale Supérieure in Paris who was not involved with the project. But “we can, and should, revisit catalogues and past large earthquakes to better characterize what happened at the time”, she says.
Bruhat suggests the technique could also be used to observe mysterious types of earthquake such as ‘slow slip’ events, which take months or years to unfold and are hinted at by numerous miniature rumbles.
Jackie Caplan-Auerbach, a seismologist and volcanologist at Western Washington University in Bellingham, thinks the approach could be applied to volcanoes.
“We know that volcanoes are creaky, unstable things, and the vast majority of their seismic activity is very small”, which makes it difficult to detect, she says. If this work can help extract these rumblings, then researchers will gain insights into the magma and superheated fluids moving about inside them.
Ross, Z. E., Trugman, D. T., Hauksson, E. & Shearer, P. M. Science https://doi.org/10.1126/science.aaw6888 (2019).