Future shock in California

For California, probabilistic principles can be applied to the short-term forecasting of further ground-shaking following an earthquake. How such predictions will be used by the public remains to be seen.

“What can we expect next?” This question is foremost in the minds of the public and news media following any widely felt earthquake. Put another way, for anyone living near the earthquake the question is, “What does this event mean for the earthquake risk in my locality?” The seismologist's usual reply is vague: because all earthquakes are followed by smaller events (aftershocks), this one will be too; and although an earthquake is occasionally followed shortly afterwards by a larger one, this is rare enough that the overall risk is essentially unchanged, except close to the epicentre.

Thanks to the work of Gerstenberger et al. (page 328 of this issue)1, a much more precise answer can be given, at least for California. The procedures they describe also set a new standard against which to test future earthquake predictions, and as such they deserve to be adopted, with appropriate local modifications, in other earthquake-prone areas.

The simplest kind of earthquake forecast assumes randomness: the probability of an earthquake of a particular size occurring in a particular place over (say) the next day is always the same. This is the Poisson model. Such a forecast also assumes (as is observed) that small earthquakes occur much more often than big ones, a rule that, expressed more precisely, is known as the Gutenberg–Richter law. The forecast also needs a description of how the probabilities vary from place to place; this can be controversial away from the boundaries of tectonic plates, but in California it can be reasonably estimated from what we know about past earthquakes and geologically active faults.
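Expressed in code, such a Poisson forecast built on the Gutenberg–Richter law might be sketched as follows. The a- and b-values and the magnitude threshold are illustrative placeholders, not values calibrated for California:

```python
import math

def gr_rate_per_year(mag, a=4.0, b=1.0):
    """Gutenberg-Richter law: expected number of earthquakes per year
    with magnitude >= mag is N = 10**(a - b*mag).
    The a- and b-values here are illustrative, not fitted to any region."""
    return 10 ** (a - b * mag)

def poisson_prob(rate_per_year, days=1.0):
    """Poisson model: probability of at least one event in the next
    `days` days, given a constant annual rate."""
    return 1.0 - math.exp(-rate_per_year * days / 365.0)

# Daily probability of a magnitude >= 5 event under these toy parameters.
p_day = poisson_prob(gr_rate_per_year(5.0))
```

In a real forecast the a-value (and hence the rate) would vary from place to place, following the maps of past seismicity and active faults described above.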

Gerstenberger et al.1 add to this a sophisticated model for the other well-established behaviour of earthquakes: big events are followed by smaller aftershocks, at a rate that decreases with time. This is known as the Omori law, after the Japanese scientist who suggested it in 1895. An important element of Gerstenberger and colleagues' approach is to allow aftershocks to be of any size, including larger than the original earthquake, using the Gutenberg–Richter law again to describe how likely the different sizes of aftershock are2. This neatly unifies the common case of aftershocks with the much rarer case in which the first earthquake is followed by a larger one, retrospectively making it a foreshock. The authors use a range of aftershock models to match the variability in observed aftershock sequences (Fig. 1); as aftershocks occur, the accumulating data can be used to improve the prediction of future behaviour.

Figure 1: Variability in aftershocks for four earthquakes in California.

a, The cumulative numbers of aftershocks (within 3.5 magnitude units of the mainshock) against the log of elapsed time. b, The locations of the earthquakes, which are Oceanside (magnitude 5.4; July 1986), Northridge (6.6; January 1994), Hector Mine (7.1; October 1999) and Morgan Hill (6.2; April 1984). Gerstenberger and colleagues' forecasting procedure1 automatically allows for the wide variability seen in the numbers and decay rates of aftershocks.
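The combination of the Omori and Gutenberg–Richter laws described above follows the Reasenberg–Jones formulation2, in which the rate of aftershocks of magnitude M or greater at time t after a mainshock of magnitude Mm is 10^(a + b(Mm − M))/(t + c)^p. A minimal sketch in Python; the parameter values are commonly quoted "generic" numbers used here purely for illustration, not the values fitted in the paper:

```python
import math

def rj_rate(t_days, mag, main_mag, a=-1.67, b=0.91, c=0.05, p=1.08):
    """Reasenberg-Jones aftershock rate: expected number of aftershocks
    of magnitude >= mag, per day, at time t_days after a mainshock of
    magnitude main_mag. Combines Omori decay in time with
    Gutenberg-Richter scaling in magnitude."""
    return 10 ** (a + b * (main_mag - mag)) / (t_days + c) ** p

def expected_aftershocks(t1, t2, mag, main_mag, steps=10000):
    """Expected number of aftershocks >= mag between t1 and t2 days
    after the mainshock, by midpoint-rule integration of the rate."""
    dt = (t2 - t1) / steps
    return sum(rj_rate(t1 + (i + 0.5) * dt, mag, main_mag) * dt
               for i in range(steps))

# Poisson probability of at least one magnitude >= 5 aftershock in the
# first day after a magnitude-7 mainshock, under these toy parameters.
n = expected_aftershocks(0.0, 1.0, 5.0, 7.0)
p_one_day = 1.0 - math.exp(-n)
```

The variability shown in Fig. 1 corresponds to sequences with quite different values of these parameters, which is why the authors update the parameters as each sequence unfolds rather than fixing them in advance.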

The result is a procedure that can be run in real time to predict the probabilities of earthquakes of different sizes, with long-term expectations modified by recent events. The final step is to put this into a form that answers the question about local risk. For any particular location in California, the procedure computes the probability of earthquakes occurring throughout the region, and the shaking that each of these hypothetical earthquakes would produce at that location. Combining these yields a map of the probability of shaking at a certain level over some future time interval, in this case the next 24 hours. For maximum usefulness to the public, the authors use a shaking threshold at what seismologists call intensity VI (cracked plaster and broken windows), because this is the lowest level with significant associated costs.
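The combination step can be sketched as below. The source probabilities and the site-specific exceedance probabilities are invented for illustration; the real procedure works on a grid of potential sources and uses empirical relations between magnitude, distance and shaking intensity:

```python
def prob_of_shaking(sources):
    """Probability that at least one source shakes a given site at
    intensity VI or above in the next 24 hours. Each source is a pair
    (p_quake, p_exceed): the probability that it produces an earthquake
    in the interval, and the probability that such an earthquake exceeds
    the shaking threshold at the site. Sources are treated as
    independent, so we multiply the per-source 'no shaking' terms."""
    p_no_shaking = 1.0
    for p_quake, p_exceed in sources:
        p_no_shaking *= 1.0 - p_quake * p_exceed
    return 1.0 - p_no_shaking

# Hypothetical sources: a nearby aftershock zone, a distant large fault
# and diffuse background seismicity. All numbers are made up.
sources = [(0.02, 0.5), (0.001, 0.9), (0.1, 0.05)]
p_site = prob_of_shaking(sources)
```

Repeating this calculation for every point on a map grid produces the 24-hour shaking-probability maps described in the paper.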

To demonstrate their method, the authors apply it retrospectively to the 1992 Landers earthquake (see Fig. 2 of the paper on page 329). Just before the main earthquake, the probability of shaking in the area is somewhat increased because of aftershocks of an earlier event. Just after the quake, the probability becomes much larger (up to about a 10% chance of shaking over the next day), diminishing with time as the aftershocks become less frequent.

This kind of short-term earthquake prediction has two values, one for public understanding and policy, the other scientific. Having such information readily available (as it will be, on the web) provides both reassurance about what is happening and a clearer picture of the risks. The public is accustomed to seeing probability numbers given for weather forecasts, and seismologists are fortunate in now being able to provide them for earthquakes — avoiding the vagueness that has dogged the colour-coded terror alert system in the United States. It remains to be seen in what ways, besides their educational value, such short-term (and generally low) probabilities will be used by the public and the emergency services.

The scientific value of these predictions is that they provide a baseline against which to test other predictive schemes. Progress in earthquake prediction depends on moving beyond single, anecdotal accounts to systematic tests of predictions using statistical methods. Such a procedure, pioneered by the late Frank Evison in New Zealand3,4, has now become common elsewhere5,6, and is explicitly deployed in Gerstenberger and colleagues' paper. They show that their model does significantly better than the Poisson model, which is the usual comparison7.

It is not surprising that a model that includes aftershocks does better than one that does not. But now that we have it, it can be taken as the hypothesis against which other predictive schemes should be tested, and challenged to demonstrate better performance. I hope the methodology can be extended to other earthquake-prone regions, so that its benefits, and the challenge it presents, can be applied there as well.


References

1. Gerstenberger, M. C., Wiemer, S., Jones, L. M. & Reasenberg, P. A. Nature 435, 328–331 (2005).
2. Reasenberg, P. & Jones, L. M. Science 243, 1173–1176 (1989).
3. Evison, F. F. & Rhoades, D. A. NZ J. Geol. Geophys. 36, 51–60 (1993).
4. Evison, F. F. & Rhoades, D. A. NZ J. Geol. Geophys. 40, 537–547 (1997).
5. Kagan, Y. Y. & Jackson, D. D. Geophys. J. Int. 143, 438–453 (2000).
6. Schorlemmer, D. et al. J. Geophys. Res. 109, B12308, doi:10.1029/2004JB003235 (2004).
7. Stark, P. B. Geophys. J. Int. 131, 495–499 (1997).

Agnew, D. Future shock in California. Nature 435, 284–285 (2005).
