Deeply flawed: current knowledge doesn't allow us to predict the San Andreas fault's next shiver. Credit: K. SCHAFER/STILL PICTURES

At last month's meeting of the Southern California Earthquake Center in Palm Springs, a certain word was whispered in corridors or condemned with expletives in cocktail-party conversations. On slides during talks it was written only as the ‘p-word’.

You wouldn't think the term ‘prediction’ could provoke such strong reactions. But it does among earthquake researchers, and it's easy to see why. The early history of earthquake prediction featured scientists studying animal behaviour and watching the night skies for strange lights. Even when seismic studies came along, predictions were more often wrong than right. Disillusioned, and wary that false predictions would cause more damage than they would prevent, researchers — particularly in the United States — turned their backs on the word and the concept.

“There was a lot of bad science calling itself prediction,” says seismologist Lucile Jones, who is in charge of the southern California area for the United States Geological Survey (USGS). “People wanted to dissociate themselves from it.”

But prediction is coming back into researchers' vocabularies, if not into fashion. Most of the credit — or the blame, depending on your position — goes to Vladimir Keilis-Borok of the University of California, Los Angeles (UCLA), whose recent predictions ignited public concern and interest1. UCLA's controversial press release describing his prediction of an earthquake in southern California attracted huge media attention. The quake never hit, but the episode resuscitated the p-word and brought the field into the media spotlight. “It's like we're doing experiments with the public looking over our shoulder,” says Tom Jordan, director of the Southern California Earthquake Center (SCEC) at the University of Southern California in Los Angeles.

At the same time, researchers armed with a growing range of instruments and techniques are becoming more confident that their results are scientifically significant and useful. More than a billion dollars' worth of earthquake monitoring equipment in Japan, the United States and elsewhere is being complemented by new statistical methods and theories. “The quality of the data has skyrocketed. People feel they are poised to make some real progress,” says Jones.

In response to all this, the USGS is moving to re-establish the National Earthquake Prediction Evaluation Council, a committee charged with advising the director of the USGS on the merits of particular predictions. “We have a responsibility to be an honest broker in assessing predictions,” David Applegate, senior scientific adviser at the USGS Earthquake Hazard Program, said at the Palm Springs meeting. The council was first established in the late 1970s but has not appointed any new members in 12 years. The USGS has drafted a new charter that is slowly working its way through the Department of the Interior and other bureaux. Applegate hopes the committee will be up and running by next spring. Also, a joint USGS–SCEC working group — called Regional Earthquake Likelihood Models — hopes to begin contrasting various forecast models for California by January 2005.

Hopeful harbingers

In the 1970s, enthusiastic support of earthquake prediction was less controversial. Following the discovery of plate tectonics, scientists had faith that the problem could be cracked, and in some places earthquake predictions were taken seriously. In China, the government evacuated Haicheng in February 1975, after scientists made a prediction based on changes in land elevations, groundwater levels, seismicity and animal behaviour. A magnitude 7.3 earthquake struck two days later, and the evacuation is credited with preventing 120,000 injuries and fatalities.

But failure followed this success. Just a year later, a magnitude 7.8 earthquake hit the city of Tangshan, killing 250,000 and injuring 164,000 people. There had been no prediction for that area.

Researchers came to believe that prediction was beyond their means, if not impossible. Rainfall, water levels, radon emissions, seismic waves, land deformations, geoelectric signals, cloud formations and catfish had all been studied as possible harbingers of quakes, but a solid connection to the three golden variables — time, place and magnitude — remained elusive.

A double-whammy came with the California Northridge earthquake in January 1994 and the Japanese Kobe earthquake in January 1995. Neither fault region was seen as a threat, and the lack of concern showed in each area's poor building regulations. Both quakes were devastating. Researchers in the two countries most devoted to earthquake studies had missed their cues — assuming there were any to begin with.

Even more discouraging, an assembly of 1,224 Global Positioning System (GPS) stations and about 1,000 seismometers spread around the Japanese archipelago failed to spot any seismic hints of the magnitude 8.0 Tokachi–Oki earthquake that shook northern Japan last September. “There was no clear sign at all. It was a shock,” says Ichiro Kawasaki of the Research Center for Earthquake Prediction at Kyoto University.

Experts today would call China's 1975 prediction, and others based on simple precursor events, good luck rather than good science. Of all the thousands of predictions ever made for quakes — most of them academic curiosities rather than attempts at disaster mitigation — some are bound to hit the nail on the head purely by chance. “It's like going to Vegas,” says UCLA geophysicist David Jackson, who has won money from his colleagues by betting against predictions.
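
Jackson's Vegas analogy is easy to quantify. The sketch below is a toy calculation with assumed numbers (a 90-day prediction window, target quakes arriving at one per year in the region), not anyone's actual analysis:

```python
import math
import random

# Toy numbers: target quakes arrive as a Poisson process at an assumed
# average rate of one per year somewhere in the study region.
RATE_PER_DAY = 1 / 365.0
WINDOW_DAYS = 90         # each "prediction" names a 90-day window
N_PREDICTIONS = 1000     # a thousand independent, skill-free guesses

# For a Poisson process, the chance that at least one quake falls
# inside any one window is 1 - exp(-rate * window_length).
p_hit = 1 - math.exp(-RATE_PER_DAY * WINDOW_DAYS)

random.seed(42)
hits = sum(random.random() < p_hit for _ in range(N_PREDICTIONS))
print(f"{hits} of {N_PREDICTIONS} blind predictions come true "
      f"(chance expectation: ~{100 * p_hit:.0f}%).")
```

With thousands of guesses on record, a scattering of striking ‘successes’ is exactly what luck alone delivers.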

Shaken faith: no predictions were made for the devastating quakes that hit Northridge, California, (left) and Kobe, Japan (right). Credit: L. J. REGAN/GETTY; K. MAYAMA/REUTERS/NEWSCOM

In some countries, scientists battled on despite the bad news, but in the United States it became a liability to mention work on predictions. Researchers keen on the field say they had to look beyond the National Science Foundation and USGS for funding. They began speaking in terms of forecasts rather than predictions, using a term borrowed from meteorology that gives a wider margin for error.

Particularly after Northridge and Kobe, the public's attention shifted to reducing the damage from earthquakes, and away from attempts to anticipate them. “Science followed public interest,” says Kyoto University geophysicist Jim Mori, who worked at the USGS in the 1990s. Funding turned towards early-warning systems, for example, which spot the initial rumblings of a quake and send warnings across the city faster than the quake itself. Such systems, now established in Taiwan, Japan and Mexico City, can stop trains or shut down gas lines before disaster strikes. A similar network is under consideration in California2. “After a bad earthquake, people want disaster mitigation,” says Mori. “Now public attention is shifting back towards basic science.”
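
The arithmetic behind such warning systems is straightforward: the fast but relatively gentle P wave, and the electronic alert it triggers, both outrun the slower, damaging S wave. A minimal sketch, assuming typical crustal wave speeds and a nearby detecting station (all numbers here are illustrative; none come from the article):

```python
# Rough early-warning arithmetic with typical crustal wave speeds
# (assumed values, not figures from any specific network).
VP_KM_S = 6.0   # P wave: fast, mostly gentle; used for detection
VS_KM_S = 3.5   # S wave: slower, carries most of the damaging shaking

def warning_seconds(city_km: float, station_km: float = 20.0,
                    processing_s: float = 2.0) -> float:
    """Seconds between the alert and S-wave arrival at a city.

    The quake is detected when its P wave reaches a seismometer
    `station_km` from the epicentre; `processing_s` covers issuing
    the alert, which is assumed to travel essentially instantly.
    """
    alert_time = station_km / VP_KM_S + processing_s
    s_arrival = city_km / VS_KM_S
    return s_arrival - alert_time

for d in (50, 100, 200):
    print(f"{d:>3} km away: ~{warning_seconds(d):.0f} s of warning")
```

Even a few tens of seconds is enough to halt a train or close a gas valve.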

True prediction — of the sort that could be used to justify evacuating San Francisco, for example — may still prove impossible. But researchers now have a better understanding of the complexity of earthquakes, which may help to pinpoint places or times where emergency efforts should be focused. At California's San Andreas fault, for example, researchers are drilling down several kilometres to inspect a point on the fault line where quakes originate, to determine stress levels, temperatures, rock type and water content. This should provide a huge insight into earthquakes — in some cases at least.

Those intent on understanding how earthquakes happen are also excited by the recent discovery of two ways in which the deep Earth can release energy.

The strong silent type

The first of these has been dubbed the silent, or slow-slip earthquake3. Such disturbances originate 30 to 40 kilometres down, last between a day and a year, and can release the energy of a magnitude 7.0 earthquake, but more slowly and without ever being felt at the surface. Friction at these fault lines is greater than in the freely moving faults that allow tectonic plates to creep by each other smoothly, but less than that at patches where stress builds up and triggers a major quake. GPS is generally used to detect these silent quakes at the surface. In Japan, the country with the biggest array of GPS devices, ten such events have been seen in the past decade, disproving critics' claims that they are a rare and insignificant anomaly.
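
To put “the energy of a magnitude 7.0 earthquake, released slowly” in perspective, the textbook Gutenberg–Richter energy relation gives radiated energy as a function of magnitude. The comparison below uses that standard relation (our addition, not a formula from the article) to contrast the average power of an ordinary rupture with a month-long slow slip:

```python
def seismic_energy_joules(magnitude: float) -> float:
    """Gutenberg-Richter energy relation: log10(E) = 1.5*M + 4.8 (E in J)."""
    return 10 ** (1.5 * magnitude + 4.8)

energy = seismic_energy_joules(7.0)     # ~2e15 joules for a magnitude 7.0
fast_s = 30.0                           # ordinary rupture: tens of seconds
slow_s = 30 * 24 * 3600.0               # slow-slip event: a month, say

print(f"M7.0 energy: {energy:.1e} J")
print(f"Average power, ordinary quake: {energy / fast_s:.1e} W")
print(f"Average power, month-long slow slip: {energy / slow_s:.1e} W")
```

Spread over a month rather than half a minute, the same energy arrives at roughly a hundred-thousandth of the power, which is why no one at the surface feels a thing.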

The other oddity is a tremor whose seismic activity looks like that created by magma moving under volcanoes, but that occurs nowhere near a volcanic area. Beginning in September 2000, Kazushige Obara of the Japanese National Research Institute for Earth Science and Disaster Prevention in Tsukuba saw this kind of seismic activity in three places in western Japan, far from any magma source that might create it4. The tremors were in active earthquake zones, known as subduction zones, where an oceanic plate slides under a continental one, but they were a new phenomenon. “It's the first new source of seismic waves discovered in 50 years,” says Bill Ellsworth of the USGS in Menlo Park, California.

Obara suggests that these non-volcanic tremors are caused by water taken down with a subducting oceanic plate to a depth of some 30 kilometres, where it is so compressed that it forces its way into fractures deep in the Earth's crust, or opens up new ones.

Sense and sensitivity: seismographs provide data that are vital for forecasting future events. Credit: REUTERS/CORBIS

Both phenomena illustrate the complexity of earthquake generation, a welcome advance for researchers who knew that the simple models used for predictions were woefully incomplete. If the complex system could be understood, prediction might be possible, says Kawasaki, who tracked a silent quake5 in 1992. “These provide new perspectives that most people couldn't have imagined ten years ago,” he says.

Silent quakes and non-volcanic tremors have even been found together in the Cascadia subduction zone, off the coast of the northwest United States and Canada6. Retrospective data analyses show that these have occurred in close to 14-month cycles for the past six years. “The Earth is beginning to look like it is behaving in an orderly way,” says Ellsworth.
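
Spotting such a cycle needs nothing fancy: given the start dates of successive episodes, the recurrence interval is simply the mean gap between them. A minimal sketch with invented dates (placeholders for illustration, not the real Cascadia catalogue):

```python
from datetime import date

# Hypothetical episode start dates, roughly 14 months apart
# (made up for illustration only).
episodes = [date(1998, 1, 15), date(1999, 3, 20), date(2000, 5, 28),
            date(2001, 8, 2), date(2002, 10, 7), date(2003, 12, 12)]

gaps_days = [(b - a).days for a, b in zip(episodes, episodes[1:])]
mean_months = sum(gaps_days) / len(gaps_days) / 30.44  # avg month length

print(f"Inter-event gaps (days): {gaps_days}")
print(f"Mean recurrence: ~{mean_months:.1f} months")   # ~14.2 here
```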

A recent quake at Parkfield in California also hints at a repeating system. This area was thought to have large earthquakes every 22 years. The latest quake, on 28 September, missed its predicted date by 15 years — but it did hit the right spot, reawakening debates about the cyclic nature of some quakes7.

Other researchers are looking for more complex patterns. John Rundle's group at the University of California, Davis, for example, is sifting through reports of small earthquakes in a search for hotspots likely to experience a major earthquake in the next ten years. His method assumes that seemingly chaotic patterns of magnitude 3 or 4 quakes can be used to reveal stress building up on a fault. When a threshold of stress is passed, a major quake is more likely, Rundle says.
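
In spirit, methods of this kind grid the region and flag cells where small-quake activity departs from its long-term baseline. The toy sketch below illustrates that general idea only; it is not Rundle's published pattern-informatics algorithm, and every threshold in it is an assumption:

```python
from collections import defaultdict

def hotspot_cells(events, recent_years=10.0, history_years=50.0,
                  ratio_threshold=2.0, cell_deg=0.1):
    """Flag grid cells where the recent rate of small quakes exceeds the
    long-term rate by `ratio_threshold`. `events` is a list of
    (decimal_year, lat, lon) tuples for magnitude 3-4 shocks. A toy
    stand-in for the general idea, not Rundle's actual algorithm.
    """
    now = max(t for t, _, _ in events)
    recent, longterm = defaultdict(int), defaultdict(int)
    for t, lat, lon in events:
        cell = (round(lat / cell_deg), round(lon / cell_deg))
        longterm[cell] += 1
        if t > now - recent_years:
            recent[cell] += 1

    flagged = []
    for cell, total in longterm.items():
        base_rate = total / history_years        # quakes/year, long term
        recent_rate = recent[cell] / recent_years
        if recent_rate > ratio_threshold * base_rate:
            flagged.append(cell)
    return flagged

# Synthetic demo: one steady cell, one cell where activity accelerates.
quakes = [(1960 + 0.5 * i, 34.0, -118.0) for i in range(80)]   # steady
quakes += [(2000 + 0.1 * i, 35.0, -120.0) for i in range(40)]  # speeding up
print(hotspot_cells(quakes))   # only the accelerating cell is flagged
```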

Since Rundle published his results8 in February 2002, 11 earthquakes of magnitude 5 or greater have hit the California study area; ten fell within range of his hotspots. “I didn't think it would work this well,” he says. Rundle is also working on a map for Japan. On 23 October, a magnitude 6.8 quake hit Niigata — killing at least 25 people and injuring more than 2,000 — near one of Rundle's hotspots.

Rundle says his maps narrow the area of concern to 24% of the region's known active fault zones, which would help to allocate resources for retrofitting bridges and other vulnerable infrastructure. “I wish people would use it now,” he says.
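
How unlikely is a score of ten hits in eleven? If the hotspots covered roughly a quarter of the seismically active area, so that a randomly placed quake had about a 25% chance of landing in one (an assumed baseline, not a published figure), a binomial tail calculation puts the chance result at around one in a hundred thousand:

```python
from math import comb

p, n, k = 0.25, 11, 10   # assumed chance hit rate; quakes; hotspot hits
tail = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))
print(f"P(at least {k} of {n} hits by luck) ~ {tail:.1e}")   # ~8e-6
```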

Keilis-Borok also uses statistics and patterns to make predictions: his algorithms are derived from histories of large earthquakes. His most recent prediction concerned an earthquake of magnitude 6.4 or greater hitting a 32,000-km² area of southern California between 5 January and 5 September this year1.

Shock tactics

Keilis-Borok's method9 has not been convincing, or even comprehensible, to many of his colleagues. Both his and Rundle's calculations require huge amounts of computation, leading some to charge that they are difficult for others to check.

Even if such long-term predictions were always correct, they would still leave public officials with the headache of deciding what to do with them — some fear that the panic caused by a quake alert might overshadow the benefits of an early warning.

The California Earthquake Prediction Evaluation Council, a local group that advises the state's governor on predictions, released a public notice on Keilis-Borok's prediction. It said his approach “had not been substantiated” and did not warrant any specific action. But the same document called the approach “legitimate”.

The resulting confusion showed the importance of providing the public with a clear message. The probability that there would be no earthquake, which Keilis-Borok put at 50%, never made it into the public perception, says Mark Benthien, the SCEC's director for communication, education and outreach. After 5 September passed, some people assumed the earthquake was just running late. Others thought the earthquake was set for 5 September exactly and ran out to get water the day before. The following day, one person wanted to know if it was all right to put picture frames back on the wall. “There is this idea that it's now over so we don't have to be prepared any more,” says Benthien.

Ground rules

Governments are unlikely to embrace short-term predictions anytime soon, except perhaps in China, where ‘official predictions’ still occasionally hit the news. Even in Japan, where earthquake prediction studies abound and the word is not so feared, the government does not make official predictions, both to prevent panic, and out of a certain deference to the complexity of nature, says Kawasaki. There is an ethic that “research on prediction is a personal matter, but making predictions to the public must only be done with the consensus of the scientific community”, he says. Clearly there is as yet no such consensus.

In the United States, the debate about the science, and the vocabulary used to describe it, goes on. At a press conference at the SCEC meeting, Jones encountered a frustrated journalist who demanded to know whether he should be using ‘forecast’ or ‘prediction’ in his stories. Why, he asked, was ‘prediction’ back on the menu after years of being told, and then convincing his editor, that ‘forecasting’ was more appropriate?

Many will say the debate is academic. Most earthquake researchers asked to define the difference between the words will sigh, and defer to a colleague. But Jackson pins it down: “Predictions are a subset in which probabilities become higher than normal for some reason — high enough to warrant some special action.” If so, it is indeed a difficult word to use. But, with science, public perception and the media all pushing for a heightened awareness of the topic, the United States is getting ready to use it.