Geological faults are not behaving as scientists once expected. Glennda Chui reports on efforts to forge a new understanding of quake behaviour.
That low rumbling emanating from California is no earthquake. It is the sound of the state's carefully honed earthquake-forecasting process being shaken hard and put back together.
"We're talking about a radical overhaul," says Ned Field, a seismologist with the US Geological Survey (USGS) in Denver, Colorado, and chairman of the Working Group on California Earthquake Probabilities. The group is charged with forecasting the probability of damaging earthquakes throughout the state over the next 30 years.
The group's report is due out in 2011 and, like its predecessors (see 'Earthquake probabilities in the California area'), it is expected to have enormous influence — not only on research but also on public policy in California, the most populous state in America and home to three-quarters of the nation's seismic risk. California has the most comprehensive earthquake-forecasting effort in the world and some of the best-studied faults, so lessons learned there will have ripple effects on other quake-plagued nations.
From the standpoint of seismology, the report could push aside an influential family of ideas that have long dominated earthquake research and forecasting. One hypothesis views faults as creatures of habit that tend to rupture in large, 'characteristic' earthquakes of about the same magnitude again and again. Another model asserts that big quakes are most likely to strike in 'seismic gaps' that haven't suffered major jolts in a long time. Both attempt to explain quake behaviour in relatively simple physical terms, and make intuitive sense. But recent research has caused confidence in them to waver.
The world of characteristic earthquakes is "a world we all wish we lived in", says Thomas Jordan, director of the Southern California Earthquake Center in Los Angeles. "It hasn't worked out that way because earthquakes don't occur on simple fault structures, but on fault systems that are rather complicated." Jordan contends that earthquakes involve complex interactions among faults, leading to chaotic behaviour that is very difficult to predict. The physics of individual faults and the way in which they influence each other may, in fact, be so complex that no simple deterministic rules can explain their behaviour.
This new understanding challenges not only the idea of characteristic earthquakes but also an older and more fundamental notion that faults obey regular patterns of behaviour. Called the elastic rebound theory, that concept sprang from the earthquake that levelled large parts of San Francisco in 1906 (pictured, right). It was developed by geologist Henry Fielding Reid of Johns Hopkins University in Baltimore, Maryland, who led the official investigation into the geology and physics of the disaster.
After studying the rupture along the San Andreas fault and the aftermath of the ground movement, Reid came to view earthquakes as a repeating pattern of pressure accumulation and release. Strain would build up in the ground until it reached a critical level. At that point, the rocks on one side of a fault would jerk forward relative to their neighbours on the other side. The sudden movement would release almost all the pent-up energy in the ground, and it would take some time before enough strain would accrue to make another quake possible.
Reid's was a revolutionary idea, coming as it did half a century before plate tectonics explained where all that stress on the San Andreas fault was coming from. And it was wonderfully appealing, because it implied that the cycles might be regular enough to predict earthquakes. In his 1910 report detailing the results of his investigation1, Reid proposed a way to foretell when the next earthquake was due: plant a line of piers at a right angle across the fault and measure their relative angles, distances and heights from time to time. When the measurements show that the strain has built to a critical level, he wrote, "we should expect a strong shock".
Stretched too far
For a century, the elastic rebound theory has been a foundation on which later seismic hypotheses were built: after all, the strain that builds up in the ground as Earth's crustal plates jostle past each other must be released, eventually, in the form of earthquakes or through a slow, quiet alternative called aseismic slip. But the idea that quakes come tumbling out as steadily as stress goes in is increasingly coming under attack as earthquakes prove to be much more complicated — and more staunchly individual — than Reid ever imagined.
That individuality has made itself known in numerous ways. In some cases, quakes have come in clusters, such as the giant magnitude-9.1 shock that hit off the northwest coast of Sumatra in 2004 and was followed by a series of nearby quakes, including one last month. In other instances, they rupture large sections of faults, combining patches that had not been known to move together in previous seismic events. Earthquakes even hop from fault to fault, as seen in the magnitude-7.3 earthquake that struck near the town of Landers in California's Mojave Desert in 1992 and the magnitude-7.9 shock that hit central Alaska in 2002. Both of these quakes also had other claims to fame: they triggered tremors, geyser eruptions and other seismic activity thousands of kilometres away.
The evidence against seismic regularity has emerged even along the San Andreas fault, where earlier research had provided the appearance of a repeating pattern of large earthquakes. That picture had come from research done in the late 1970s and early 1980s on the south-central section of the San Andreas fault, north of Los Angeles. Geologists working there had determined how much streams and other features had been offset by the great Fort Tejon earthquake of 1857, which measured magnitude 7.9. Trenches dug along the fault had revealed a series of earlier, similar quakes spread about 200–400 years apart. In each spot, the fault appeared to have slipped about the same amount during each quake2.
But recent excavations have dramatically changed that picture. In 2004, Lisa Grant Ludwig of the University of California, Irvine, began digging trenches in a place called the Bidart Fan, where a series of small streams have cut channels across the south-central San Andreas fault through a thick deposit of sediment. Along the trench walls, past earthquakes show up as disruptions in the sedimentary layers, which can be dated using carbon-14 analysis. In January, she and two other geologists published a paper3 that lowered the recurrence interval for large earthquakes on this part of the fault to 137 years. Since then they have dropped their estimate to less than 100 years, with some large quakes striking as little as 50 years apart — implying that earthquakes are not as regular as the previous studies suggested.
"Elastic rebound, I think, is still kind of like the foundation," Grant Ludwig says, "but maybe the building that's going to be built on that foundation is going to look a little different."
Thomas Fumal, a palaeoseismologist with the USGS in Menlo Park, California, who has been investigating the San Andreas fault for 25 years, says that the picture along most other parts of the fault also does not conform with past expectations: scientists are finding large quakes striking sections of the fault much more often than previous studies suggested — and with a wide range of magnitudes.
The idea of characteristic earthquakes suffered another blow in 2004, when the San Andreas fault let loose a magnitude-6 shock near the central Californian town of Parkfield. Researchers had set up a dense network of instruments there in the 1980s, anticipating a moderate quake by 1993 as part of a roughly 22-year cycle that had been witnessed in quakes going back to the 1800s. When it finally arrived more than a decade late, its timing and other idiosyncrasies "confounded nearly all simple earthquake models" according to a 2005 report4.
David Jackson has long expected these kinds of developments because they match what he has been seeing in earthquake patterns around the world. In 1991, Jackson and Yan Kagan, both at the University of California, Los Angeles, analysed a global catalogue of seismic events and concluded that large earthquakes bunch together in space and time5. This wasn't supposed to happen. If earthquakes obeyed the elastic rebound theory, large quakes should reset the seismic clock and there should be a gap before another big shock hit the same region.
Kagan and Jackson argued that regions hit by a big quake were more likely, rather than less likely, to be struck by another6. Other geophysicists largely dismissed their claims, but then a stunning sequence of earthquakes hit California's Mojave Desert. A magnitude-6.2 earthquake struck Joshua Tree in April 1992, followed two months later by the Landers quake and the magnitude-6.5 Big Bear quake, and then seven years later by the magnitude-7.1 Hector Mine earthquake.
Scientists had known for some time that smaller earthquakes can come in clusters; that's what happens when the ground rattles with aftershocks following a major tremor. But the idea that one large quake could follow on the heels of another was startling. Researchers now routinely talk about large quakes triggering others, both on nearby and distant faults.
In fact, the giant 2004 Sumatran quake was followed by an unusual pattern of small tremors near Parkfield — suggesting that the Indonesian event had weakened the San Andreas fault some 8,000 kilometres away, according to an analysis published this month7. In that study, Taka'aki Taira, a seismologist at the University of California, Berkeley, and his colleagues suggest that the Indonesian quake also could have triggered a spate of magnitude-8 shocks around the world.
Karen Felzer of the USGS in Pasadena, California, says that she has come to believe that many quakes of all sizes are triggered by other shocks, rather than simply responding to local forces. "Personally I agree with Jackson and Kagan that elastic rebound doesn't really happen — that what is controlling the timing of all earthquakes is the triggering process."
The upheaval in the world of seismology will play out most strikingly in the work leading up to the 2011 report on California earthquake probabilities. When Earth scientists drew up the first incarnation of this report in 1988, the picture was much simpler. An expert group of geophysicists and geologists divided the faults of the San Andreas system into well-delineated segments. They assigned probabilities of future seismic shocks to each segment on the basis of how much time had passed since that particular locale had generated a major quake.
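The elapsed-time approach of that first report amounts to a time-dependent renewal calculation: given how long a segment has been quiet, what is the chance it ruptures within the forecast window? A minimal sketch of that kind of calculation is below, assuming (purely for illustration — the 1988 working group's actual distributions and parameter values are not given here) a lognormal recurrence-time distribution:

```python
import math

def lognormal_cdf(x, mean_interval, cov):
    """CDF of an assumed lognormal recurrence-time distribution.
    mean_interval: mean time between large quakes on the segment (years).
    cov: coefficient of variation (how irregular the cycle is)."""
    sigma = math.sqrt(math.log(1.0 + cov ** 2))
    mu = math.log(mean_interval) - 0.5 * sigma ** 2
    return 0.5 * (1.0 + math.erf((math.log(x) - mu) / (sigma * math.sqrt(2.0))))

def conditional_probability(elapsed, horizon, mean_interval, cov=0.5):
    """P(rupture within `horizon` years | segment quiet for `elapsed` years).

    This is the renewal-model identity
    P(T <= t + h | T > t) = (F(t + h) - F(t)) / (1 - F(t)).
    """
    f_now = lognormal_cdf(elapsed, mean_interval, cov)
    f_later = lognormal_cdf(elapsed + horizon, mean_interval, cov)
    return (f_later - f_now) / (1.0 - f_now)

# Hypothetical segment: 150-year mean recurrence, 130 quiet years so far,
# 30-year forecast window (the horizon the working group uses).
p = conditional_probability(elapsed=130, horizon=30, mean_interval=150)
```

The key property — and the one the new research undermines — is that in this model the 30-year probability climbs steadily as the quiet period lengthens, which is exactly the "seismic gap" intuition.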
But over the years, successive versions of the report have addressed complexities that have emerged from studies of past earthquakes, laboratory experiments and theory. Models were introduced that allowed fault segments to combine in various ways to create bigger quakes. And researchers recognized the hazard of as-yet unknown faults and assigned probabilities to them.
The latest version of the report, published this year8, identified several issues that needed to be addressed urgently. "Does the interactive complexity of a fault system effectively erase or at least significantly reduce any predictability implied by elastic rebound theory?" asked the report. To this and other troubling questions, the working group acknowledged that the answer may be 'yes'.
Field says that the layers of complexity in the most recent forecasts have accumulated to the point that "in some ways we've built a house of cards. We've identified several things we don't like about the model, things that need to be improved."
But every time scientists tweak one part of the structure, another part starts to wobble. Now, Field says, "we're in a situation where we can tear down the whole thing and start from scratch. I think we'll end up with a much cleaner, simpler model rather than a patchwork of modifications".
The 2011 report, known as Uniform California Earthquake Rupture Forecast 3, will put less emphasis on fault segmentation or maybe even eliminate it, acknowledging that an earthquake may start or end anywhere. And although it won't throw elastic rebound out the window, neither will it assume that this model is the dominant factor in shaping patterns of seismic activity over the relatively short timescales — decades to centuries — that people care about most.
"We want to include elastic rebound, because there's still a lot of the community that thinks it is very important at the larger magnitudes, at least," Field says. However, no one yet knows how the model will be incorporated into a world of constant fault interaction and fewer constraints on fault ruptures, he says.
In fact, some say earthquakes are so complex that it may be impossible — as well as impractical — to pin them down based on an understanding of their physics alone. That has led researchers to propose purely statistical models to forecast upcoming quake behaviour in much the same way that they forecast aftershocks now. "There's a lot of push now toward the statistical side," says Andrew Michael of the USGS.
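Aftershock forecasting of the kind Michael alludes to is built on empirical decay laws rather than fault physics. A common ingredient is the modified Omori law, sketched here with illustrative parameter values (the constants k, c and p are fitted to each sequence in practice, not fixed):

```python
def omori_rate(t, k=100.0, c=0.1, p=1.1):
    """Modified Omori law: expected aftershock rate (events per day)
    t days after a mainshock. k, c, p are illustrative assumptions."""
    return k / (t + c) ** p

def expected_count(t0, t1, steps=10_000):
    """Expected number of aftershocks between days t0 and t1,
    by simple midpoint numerical integration of the rate."""
    dt = (t1 - t0) / steps
    return sum(omori_rate(t0 + (i + 0.5) * dt) * dt for i in range(steps))

# Expected aftershocks between one day and one month after a mainshock:
n = expected_count(1.0, 30.0)
```

A purely statistical forecast of this sort makes no claim about strain accumulating and releasing; it simply extrapolates the observed tendency of quakes to trigger more quakes.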
Geophysicists are putting some of their forecasting theories to the test through an Internet-based project called the Collaboratory for the Study of Earthquake Predictability. "The idea is to provide an environment where people can do scientific earthquake-prediction experiments that will be evaluated objectively," says Jordan, who started the centre three years ago with funding from the W. M. Keck Foundation. The project has testing centres in Southern California, New Zealand, Japan and Switzerland. Thirty-nine forecasting models are being tested at the Southern California Earthquake Center, and more than 40 in other countries. Each will run for five years, with no tweaking allowed along the way.
Information from these simulations, along with new data from the San Andreas fault and elsewhere, will be folded into the deliberations of the working group as it seeks to reconcile the competing views of how earthquakes work. Whether this overhaul of the forecasting method will be as radical as Field anticipates remains to be seen. In the end, the working group will use complex logic trees based on the full range of expert opinion to reach its conclusions. David Schwartz, a geologist with the USGS and one of the founders of the characteristic quake model, says that it would be a mistake to toss out the basic elements of elastic rebound or the idea that earthquakes on at least some fault systems come in regular cycles. He does not mind researchers exploring new models outside of the public-forecasting exercise, which has real consequences for state residents. "That's fine, to see where it leads," he says, "but not to set my earthquake insurance rates."
1. Reid, H. F. (ed.) The California Earthquake of April 18, 1906: Report of the State Earthquake Investigation Commission Vol. 2 (Carnegie Institution of Washington, 1910).
2. Schwartz, D. P. & Coppersmith, K. J. J. Geophys. Res. 89, 5681–5698 (1984).
3. Akçiz, S. O., Grant Ludwig, L. & Arrowsmith, J. R. J. Geophys. Res. 114, B01313 (2009).
4. Bakun, W. H. et al. Nature 437, 969–974 (2005).
5. Kagan, Y. Y. & Jackson, D. D. Geophys. J. Int. 104, 117–133 (1991).
6. Kagan, Y. Y. & Jackson, D. D. J. Geophys. Res. 96, 21419–21431 (1991).
7. Taira, T. et al. Nature 461, 636–639 (2009).
8. Field, E. H. et al. Bull. Seismol. Soc. Am. 99, 2053–2107 (2009).
Glennda Chui is a science writer based in California.
Chui, G. Seismology: Shaking up earthquake theory. Nature 461, 870–872 (2009). https://doi.org/10.1038/461870a