After every major earthquake, seismologists warn the public that the danger has not yet passed: aftershocks will continue to shake the ground. These aftershocks usually get smaller over time, but, occasionally, an aftershock will be larger than the original event. Standard earthquake statistics suggest that the latter situation should occur about 5–10% of the time [1,2], but is there any way of knowing which aftershock sequences will behave in this anomalous way? More simply, after a big earthquake, is it possible to determine whether an even larger one is coming? Writing in Nature, Gulia and Wiemer [3] propose an answer to this question. They suggest that, by continuously measuring the relative numbers of large and small earthquakes, comparatively safe aftershock sequences can be distinguished from those that will get bigger.
The magnitude distribution of earthquakes generally follows a relationship known as the Gutenberg–Richter law [4]. Roughly speaking, in most places on Earth, for every earthquake of magnitude 4 or larger, there will be 10 quakes of magnitude 3 or larger and 100 quakes of magnitude 2 or larger. The exact ratio of big to small earthquakes in a particular time or place is described by a parameter called the b value. If this value is low, there are comparatively few small quakes for every big one; if it is high, there are many.
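In equation form, the Gutenberg–Richter law says that the number of quakes of magnitude M or larger satisfies log10 N(≥M) = a − bM. One standard way to estimate b from a catalogue is Aki's maximum-likelihood formula, b = log10(e) / (mean(M) − Mc), where Mc is the smallest magnitude the network reliably records. A minimal sketch, using a synthetic catalogue rather than real data (the magnitudes and completeness level below are illustrative, not from the paper):

```python
import math
import random

def estimate_b(magnitudes, m_c):
    """Aki maximum-likelihood b-value estimate.

    b = log10(e) / (mean(M) - Mc), using only events with M >= Mc.
    """
    sample = [m for m in magnitudes if m >= m_c]
    mean_mag = sum(sample) / len(sample)
    return math.log10(math.e) / (mean_mag - m_c)

# Synthetic catalogue: under Gutenberg-Richter with true b = 1.0, the
# magnitude excess above the completeness level Mc is exponentially
# distributed with rate b * ln(10).
random.seed(0)
m_c = 2.0
true_b = 1.0
mags = [m_c + random.expovariate(true_b * math.log(10)) for _ in range(5000)]

print(round(estimate_b(mags, m_c), 2))  # close to the true value of 1.0
```

With 5,000 events the estimate lands very near 1.0; with the few hundred events available in the first hours after a mainshock, the scatter is far larger, which is part of why the judgement calls discussed below matter so much.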
In previous work, Gulia and Wiemer, together with co-workers, found that the b value normally rises during an aftershock sequence, which means that small earthquakes become more common [5]. In the present work, the authors noticed that, occasionally, the b value drops instead of rising, implying that big quakes increase in frequency. They also noticed that these sequences are the only ones that contain an aftershock larger than the original quake.
According to the definition of the b value, sequences that have low values are more likely to be associated with big earthquakes than are those that have high values. Therefore, Gulia and Wiemer’s finding might seem to be merely a restatement of aftershock statistics. However, the authors suggest that the observed pattern is deterministic rather than statistical, because a falling b value is seen robustly for only two earthquake sequences in the entire data set: the 2016 Kumamoto earthquakes in Japan and the 2016 Amatrice–Norcia earthquakes in Italy (Fig. 1). Each of these sequences contained an anomalously large and damaging aftershock. For nearly all of the other sequences, the b value increased directly after the original quake. The authors note one exception to this, which they attribute to poor data quality in the early 1980s.
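The statistical pull of a lower b value is easy to quantify: under the Gutenberg–Richter law, the fraction of aftershocks reaching at least ΔM magnitude units above a reference level scales as 10^(−b·ΔM), so a drop in b directly inflates the odds of a large aftershock. A quick illustration (the b values and magnitude gap here are chosen for the example, not taken from the paper):

```python
def exceedance_fraction(b, delta_m):
    """Gutenberg-Richter fraction of events at least delta_m magnitude
    units above a reference magnitude: 10**(-b * delta_m)."""
    return 10 ** (-b * delta_m)

dM = 3.0  # aftershocks reaching 3 magnitude units above the reference level
raised_b = exceedance_fraction(1.1, dM)   # b rose after the mainshock
lowered_b = exceedance_fraction(0.9, dM)  # b fell after the mainshock

# A drop in b from 1.1 to 0.9 multiplies the expected fraction of such
# large aftershocks by 10**0.6, i.e. roughly a factor of four.
print(lowered_b / raised_b)
```

This is exactly why the finding "might seem to be merely a restatement of aftershock statistics": the arithmetic alone predicts more big aftershocks when b falls. The authors' stronger claim is that the falling-b sequences are not just statistically riskier but are specifically the ones that produced a larger event.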
Making such a claim based on two aftershock sequences might seem bold. But in earthquake science, we are often driven to closely analyse the few examples that are available because nature provides only uncontrolled experiments at irregular intervals. Nonetheless, we need to proceed with extreme caution in the face of such sparse data.
In particular, measuring the magnitude distribution is not as simple as it at first seems. Many judgement calls are required to determine how big the measurement region should be, how to define the normal b value for a region and how to account for the fact that many aftershocks are not recorded in the wake of a large earthquake. These decisions must be made for each region, and the decision-making is the Achilles heel of statistical seismology studies such as this one.
For instance, the authors used data collected at least 3 days after the first large Amatrice–Norcia earthquake to compute the b value, but data collected only 0.05 days after the first Kumamoto event, because of the higher quality of the Japanese earthquake catalogue. If they had waited 0.2 days after the first Kumamoto quake, their traffic-light coding system would have given a yellow warning rather than a red one — that is, a less-definitive warning.
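The traffic-light logic itself is simple to state: compare the b value measured after a mainshock with the region's background value, and colour the sequence by the size and direction of the change. A minimal sketch of such a scheme (the ±10% bands below are an assumption for illustration; the authors' released code defines their exact thresholds and procedure):

```python
def traffic_light(b_sequence, b_background, band=0.10):
    """Classify an aftershock sequence by its b value relative to background.

    green:  b clearly up (small quakes dominating) - sequence likely decaying
    red:    b clearly down - elevated chance of a larger event to come
    yellow: change too small to call either way

    The +/- 10% band is illustrative, not the paper's calibrated threshold.
    """
    ratio = b_sequence / b_background
    if ratio >= 1 + band:
        return "green"
    if ratio <= 1 - band:
        return "red"
    return "yellow"

print(traffic_light(1.25, 1.0))  # green: b rose after the mainshock
print(traffic_light(0.85, 1.0))  # red: b dropped, as in Kumamoto and Norcia
print(traffic_light(1.05, 1.0))  # yellow: an indeterminate warning
```

Note how every input to this function inherits the judgement calls described above: the background b value depends on the chosen region and time span, and the post-mainshock b value depends on when one starts counting — which is precisely how a 0.05-day versus 0.2-day choice can flip red to yellow.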
Expert judgement is intrinsic to the design of scientific analyses and, in this case, a different judgement would have led to a different answer. So how can we determine whether the correct decisions have been made? The gold standard of any scientific theory is its ability to predict data that have not been collected when the theory is proposed. Gulia and Wiemer have documented their decisions through a full release of their computer code. As new earthquakes occur, the key test of the paper will be in the reuse of this code.
Earth is already providing us with opportunities to test the authors’ claim. The 2019 Ridgecrest earthquakes in California are notable for having a magnitude-6.4 event followed within days by a magnitude-7.1 event (see go.nature.com/2pjalib). Other examples will surely follow. We can all hope for a more predictable future in which these anomalous events cease to be surprises.
Nature 574, 185-186 (2019)