Faster Computing Helps in Predicting Earthquake Damage to Infrastructure

Published 1 November 2019

Researchers at Berkeley Lab are using high-performance computing systems to better predict how structures will respond to an earthquake along one of the Bay Area’s most dangerous faults.

David McCallen is the principal investigator of a team of Berkeley Lab and Lawrence Livermore National Laboratory researchers working with these systems to model a magnitude 7.0 earthquake along the Hayward Fault under different rupture scenarios. A major focus of the project is preparing to take full advantage of emerging Department of Energy exascale computing systems, expected to be available in 2022 or 2023. These systems will be capable of on the order of 50 times more scientific throughput than current high-performance computing systems, allowing higher-fidelity simulations and making it dramatically quicker to model different scenarios. The team’s goal is to learn how these different earthquake scenarios would affect structures throughout the San Francisco Bay Area.

McCallen answered LBL News’ questions.

LBL News: What is this new capability?
David McCallen: Historically, the only way that scientists and engineers could try to predict future ground motions was to look to past earthquake records and then extrapolate those to the conditions at a structure. This approach doesn’t always work well because ground motions are very site-specific. This is the first time that we can use what we refer to as physics-based models to predict regional-scale ground motions and the variability of those ground motions. It’s really a three-step process where we model all the steps: rupture, propagation through the Earth, and interaction between the waves and the structure at the site.
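To make those three steps concrete, here is a minimal sketch of the pipeline in Python. It is purely illustrative, not the team’s regional-scale code: a simple source time function stands in for the fault rupture, a delay-and-attenuate step stands in for wave propagation through the Earth, and a single-degree-of-freedom oscillator stands in for the structure at the site. All function names and parameter values are hypothetical.

```python
import numpy as np

# --- Step 1: fault rupture -> a simple source time function -----------------
def source_time_function(t, f0=1.0):
    """Ricker wavelet as a stand-in for the slip-rate history at the source."""
    a = (np.pi * f0 * (t - 1.0 / f0)) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

# --- Step 2: propagation through the Earth -----------------------------------
def propagate(signal, dt, distance_km, vs_km_s=3.0, atten_per_km=0.02):
    """Delay and attenuate the source signal (a stand-in for a 3-D wave solver)."""
    delay = int(distance_km / vs_km_s / dt)            # travel-time shift in steps
    attenuated = signal * np.exp(-atten_per_km * distance_km)
    ground_motion = np.zeros_like(signal)
    ground_motion[delay:] = attenuated[: len(signal) - delay]
    return ground_motion

# --- Step 3: response of the structure at the site ----------------------------
def sdof_response(ground_accel, dt, period_s=0.5, damping=0.05):
    """Single-degree-of-freedom oscillator driven by the incoming ground motion."""
    omega = 2.0 * np.pi / period_s
    u = np.zeros(len(ground_accel))    # relative displacement of the structure
    v = 0.0
    for i in range(1, len(ground_accel)):
        accel = -ground_accel[i] - 2.0 * damping * omega * v - omega**2 * u[i - 1]
        v += accel * dt
        u[i] = u[i - 1] + v * dt
    return u

dt = 0.005
t = np.arange(0.0, 40.0, dt)
ground = propagate(source_time_function(t), dt, distance_km=20.0)
drift = sdof_response(ground, dt)
print(f"peak structural displacement (arbitrary units): {np.abs(drift).max():.3f}")
```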

LBL News: How could the technology be used practically?
McCallen: There’s a lot of uncertainty in predicting future earthquake motions and what particular facilities would be subjected to. And you really need to understand those motions, because if you understand the input to the structure, you can then model the structural response and understand the potential for damage.

LBL News: What have you achieved so far?
McCallen: Because of computer limitations, we could previously resolve ground motions to only about one or two hertz: ground motions that vibrate back and forth about one or two times per second. To do accurate engineering evaluations, we need to get all the way up to eight to 10 hertz. We’ve been able to do five- to 10-hertz simulations with the highest-speed computers now, but those take a long time, like 20 to 30 hours.
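Why the jump from a couple of hertz to 10 hertz is so expensive follows from a generic scaling argument for 3-D wave simulation: if the grid keeps a fixed number of points per wavelength, the number of cells grows with the cube of the highest resolved frequency and the stable time step shrinks in proportion to it, so total work grows roughly with the fourth power of frequency. The snippet below is just that back-of-the-envelope estimate; the assumption is generic to this class of simulation, not a statement about the team’s specific code.

```python
# Back-of-the-envelope cost scaling for 3-D wave-propagation simulation,
# assuming a fixed number of grid points per wavelength (a generic assumption,
# not the project's exact numerics): cells ~ f^3 and time steps ~ f, so total
# work grows roughly as f^4.
def relative_cost(f_target_hz: float, f_baseline_hz: float) -> float:
    return (f_target_hz / f_baseline_hz) ** 4

print(f"5 Hz vs 2 Hz:  ~{relative_cost(5.0, 2.0):.0f}x the work")
print(f"10 Hz vs 2 Hz: ~{relative_cost(10.0, 2.0):.0f}x the work")
```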

LBL News: What will exascale computing enable?
McCallen: We’re looking forward to getting a lot of speedup with these new advanced machines, so that we can resolve very high frequencies but do it in maybe three to five hours. We need that speed because we need a lot of simulations to account for the uncertainty and variability in earthquake parameters. An earthquake on the Hayward Fault is overdue. We don’t know precisely how the fault will rupture, so we have to look at different rupture scenarios to fully understand the potential risk.
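To give a flavor of what such a scenario sweep looks like computationally, the toy example below varies two hypothetical rupture parameters (hypocenter position along the fault and rupture speed) and records a crude shaking-intensity proxy for each combination. In the real workflow each entry would be a full regional ground-motion simulation; every name and number here is made up for illustration.

```python
import itertools

# Toy ensemble over rupture scenarios (all names and numbers are hypothetical).
# A one-line proxy stands in for the full simulation so the sweep is easy to see.
def toy_intensity(hypocenter_km: float, rupture_speed_km_s: float,
                  site_km: float = 30.0) -> float:
    distance = abs(site_km - hypocenter_km)
    # Crude directivity proxy: sites ahead of the rupture shake harder.
    directivity = 1.5 if site_km > hypocenter_km else 0.7
    return directivity * rupture_speed_km_s / (1.0 + distance)

scenarios = itertools.product([10.0, 30.0, 50.0],   # hypocenter along fault (km)
                              [2.0, 2.5, 3.0])      # rupture speed (km/s)
results = {(h, v): toy_intensity(h, v) for h, v in scenarios}
worst = max(results, key=results.get)
print(f"worst toy scenario: hypocenter={worst[0]} km, "
      f"rupture speed={worst[1]} km/s, intensity={results[worst]:.2f}")
```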