Forecasting earthquakes that get off schedule

New model considers full history of a fault’s earthquakes to forecast next one

Results of a new study by Northwestern University researchers will help earthquake scientists better deal with seismology’s most important problem: when to expect the next big earthquake on a fault.

Seismologists commonly assume that big earthquakes on faults are fairly regular, so the next quake should occur after roughly the same interval as the one between the previous two. Unfortunately, the Earth often doesn’t work that way: earthquakes frequently arrive sooner or later than expected, and seismologists had no way to describe this behavior.

Now they do. The Northwestern research team of seismologists and statisticians has developed an earthquake probability model that is more comprehensive and realistic than what is currently available. Instead of just using the average time between past earthquakes to forecast the next one, the new model considers the specific order and timing of previous earthquakes. It helps explain the puzzling fact that earthquakes sometimes come in clusters — groups with relatively short times between them, separated by longer times without earthquakes.

“Considering the full earthquake history, rather than just the average over time and the time since the last one, will help us a lot in forecasting when future earthquakes will happen,” said Seth Stein, William Deering Professor of Earth and Planetary Sciences in the Weinberg College of Arts and Sciences. “When you’re trying to figure out a team’s chances of winning a ball game, you don’t want to look only at the last game and the long-term average. Looking back over additional recent games can also be helpful. We now can do a similar thing for earthquakes.”

The study, titled “A More Realistic Earthquake Probability Model Using Long-Term Fault Memory,” was published recently in the Bulletin of the Seismological Society of America. Authors of the study are Stein, Northwestern professor Bruce D. Spencer and recent Ph.D. graduates James S. Neely and Leah Salditch. Stein is a faculty associate of Northwestern’s Institute for Policy Research (IPR), and Spencer is an IPR faculty fellow.

"Earthquakes behave like an unreliable bus,” said Neely, now at the University of Chicago. “The bus might be scheduled to arrive every 30 minutes, but sometimes it’s very late, other times it’s too early. Seismologists have assumed that even when a quake is late, the next one is no more likely to arrive early. Instead, in our model if it’s late, it’s now more likely to come soon. And the later the bus is, the sooner the next one will come after it.” 

Are faults smarter than seismologists assumed?

The traditional model, used since a large earthquake in 1906 destroyed San Francisco, assumes that slow motions across the fault build up strain, all of which is released in a big earthquake. In other words, a fault has only short-term memory — it "remembers" only the last earthquake and has "forgotten" all the previous ones. This assumption goes into forecasting when future earthquakes will happen and then into hazard maps that predict the level of shaking for which earthquake-resistant buildings should be designed. 
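In code, that short-term-memory assumption amounts to a renewal calculation: the probability of the next quake depends only on the time elapsed since the last one. The sketch below is a minimal illustration assuming a lognormal recurrence-time distribution (a common choice in hazard modeling; the paper’s exact formulation may differ), with all parameter values invented for the example.

```python
import math

def renewal_probability(t_since_last, horizon, mean_interval, cov=0.5):
    """Conditional probability of a quake within `horizon` years, given
    `t_since_last` years since the previous one, under a short-term-memory
    (renewal) model.

    The lognormal distribution and the cov value are illustrative
    assumptions, not parameters from the study.
    """
    # Convert the mean and coefficient of variation of the recurrence
    # time into the lognormal mu/sigma parameters.
    sigma = math.sqrt(math.log(1.0 + cov ** 2))
    mu = math.log(mean_interval) - 0.5 * sigma ** 2

    def cdf(t):  # lognormal CDF
        if t <= 0:
            return 0.0
        return 0.5 * (1.0 + math.erf((math.log(t) - mu) / (sigma * math.sqrt(2.0))))

    survival = 1.0 - cdf(t_since_last)  # chance the quake hasn't happened yet
    if survival <= 0.0:
        return 1.0
    # Only t_since_last matters: all earlier quakes are "forgotten."
    return (cdf(t_since_last + horizon) - cdf(t_since_last)) / survival

# A fault averaging 135 years between quakes, 120 years after the last one:
print(renewal_probability(t_since_last=120, horizon=30, mean_interval=135))
```

Note that nothing before the most recent quake enters the calculation; that is precisely the short memory the new study questions.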

However, “large earthquakes don’t occur like clockwork,” Neely said. “Sometimes we see several large earthquakes occur over relatively short time frames and then long periods when nothing happens. The traditional models can’t handle this behavior.”

In contrast, the new model assumes that earthquake faults are smarter, with longer-term memory, than seismologists traditionally believed. The long-term fault memory comes from the fact that an earthquake sometimes doesn’t release all the strain that built up on the fault over time, so some remains after a big earthquake and can help cause another. This leftover strain explains why earthquakes sometimes come in clusters.
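As a rough sketch of how partial strain release produces clusters (an illustration of the concept only, not the authors’ published model), the toy simulation below lets strain build steadily, makes a quake more likely the more strain has accumulated, and has each quake release only part of that strain.

```python
import random

def simulate_fault(n_years=3000, loading_rate=1.0, hazard_scale=5000.0,
                   min_release=0.5, seed=1):
    """Toy fault with long-term memory.

    Strain accumulates at `loading_rate` units per year. Each year the
    chance of a quake is strain / (strain + hazard_scale), so the longer
    strain has built up, the likelier a quake becomes. A quake releases
    only a random fraction (between min_release and 1.0) of the strain;
    the leftover strain keeps the quake probability elevated, which is
    what produces clusters.

    All functional forms and parameter values here are illustrative
    assumptions, not taken from the study.
    """
    random.seed(seed)
    strain, quake_years = 0.0, []
    for year in range(n_years):
        strain += loading_rate
        if random.random() < strain / (strain + hazard_scale):
            quake_years.append(year)
            strain *= 1.0 - random.uniform(min_release, 1.0)  # partial release
    return quake_years

years = simulate_fault()
print([b - a for a, b in zip(years, years[1:])])  # irregular gaps, some short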

"Earthquake clusters imply that faults have long-term memory,” said Salditch, now at the U.S. Geological Survey. “If it's been a long time since a large earthquake, then even after another happens, the fault's ‘memory’ sometimes isn't erased by the earthquake, leaving left-over strain and an increased chance of having another. Our new model calculates earthquake probabilities this way."

For example, although large earthquakes on the Mojave section of the San Andreas fault occur on average every 135 years, the most recent one occurred in 1857, only 45 years after one in 1812. Although this wouldn’t have been expected using the traditional model, the new model shows that because the 1812 earthquake occurred after a 304-year gap since the previous earthquake in 1508, the leftover strain caused a sooner-than-average quake in 1857. 
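The intervals in that example are easy to verify (the 135-year average comes from the fuller earthquake record, not just these three events):

```python
quakes = [1508, 1812, 1857]  # large Mojave-section earthquakes cited above
gaps = [b - a for a, b in zip(quakes, quakes[1:])]
print(gaps)  # [304, 45]: one long gap, then a much-sooner-than-average quake
```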

"It makes sense that the specific order and timing of past earthquakes matters," said Spencer, a professor of statistics. "Many systems' behavior depends on their history over a long time. For example, your risk of spraining an ankle depends not just on the last sprain you had, but also on previous ones."