Fighting Biological Threats

“Getting access to the data, landing on the right parameters and evolving the best fit of information that is timely and relevant to decision makers are among the biggest problems we face,” said Charles “Chick” Macal, chief scientist for Argonne’s Decision and Infrastructure Sciences division and its social, behavioral and decision science group leader. “A large component of the work that we do as part of the biopreparedness project is to develop high performance computing workflows to improve the computational techniques, to make them more efficient.”

The parameters and factors have to predict real-world targets that researchers want to understand, like the number of hospitalizations, deaths and vaccinations. To get it right, researchers run simulations over and over — sometimes hundreds of thousands of times — adjusting parameters until the model mimics what the data is telling them.
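
In code terms, that loop amounts to a search over parameter space. The sketch below is a minimal illustration of the idea, not the project’s actual workflow: the simulator run_abm, its two parameters and the target numbers are all hypothetical stand-ins.

```python
import random

def run_abm(params):
    """Hypothetical stand-in for one agent-based model run: maps a
    candidate parameter set to the real-world targets the article
    mentions (here, hospitalizations and deaths)."""
    scale = params["spread_rate"] * params["contact_rate"]
    return {"hospitalizations": 120 * scale, "deaths": 8 * scale}

def distance(predicted, observed):
    # How far the simulated targets sit from the surveillance data.
    return sum((predicted[k] - observed[k]) ** 2 for k in observed)

def calibrate(observed, n_trials=100_000):
    # Brute-force search: run the model over and over, keeping
    # whichever parameter set best mimics the data.
    best_params, best_err = None, float("inf")
    for _ in range(n_trials):
        params = {
            "spread_rate": random.uniform(0.1, 5.0),
            "contact_rate": random.uniform(1.0, 20.0),
        }
        err = distance(run_abm(params), observed)
        if err < best_err:
            best_params, best_err = params, err
    return best_params

observed = {"hospitalizations": 950.0, "deaths": 60.0}
print(calibrate(observed, n_trials=10_000))
```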

Making sure these components align, and reducing the number and run-time of the simulations, is where Sandia’s efforts come into play. Like Argonne, Sandia already had roots in computational epidemiology, starting with the anthrax scare of 2001. Later work focused on smallpox and, most recently, COVID-19.

For this project, Argonne’s computing infrastructure will be adapted to automatically apply Sandia-developed calibration algorithms as the epidemiological models’ parameters change or new ones come to light.

“We are the guys who search out those parameters,” said Sandia’s Jaideep Ray, principal investigator of the project.

One of the most important parameters is the unknown spread rate of the disease, according to Ray. By optimizing the spread rate over several weeks of simulation runs and data collection, researchers calibrate the model’s predictions against real data, much of it from Chicago and New Mexico COVID-19 surveillance and other public health sources. Once calibrated, the model can forecast future case counts in an epidemic.
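
Ray’s team applies Sandia-developed calibration algorithms to a full agent-based model; as a rough illustration of the underlying idea only, the sketch below fits the spread rate of a simple SIR compartmental model to a synthetic case-count series and then forecasts ahead. The population, recovery rate and surveillance data are all invented for the example.

```python
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import minimize_scalar

GAMMA, N = 1 / 7, 2_700_000  # assumed recovery rate and population

def sir(y, t, beta):
    # Classic susceptible-infectious-recovered dynamics; beta is the
    # unknown spread rate we want to calibrate.
    s, i, _ = y
    new_infections = beta * s * i / N
    return [-new_infections, new_infections - GAMMA * i, GAMMA * i]

def simulate_cases(beta, days):
    # Daily infectious counts serve as a proxy for reported cases.
    y0 = [N - 100, 100, 0]
    return odeint(sir, y0, np.arange(days), args=(beta,))[:, 1]

def fit_spread_rate(observed):
    # Optimize beta so simulated counts match several weeks of data.
    loss = lambda b: np.sum((simulate_cases(b, len(observed)) - observed) ** 2)
    return minimize_scalar(loss, bounds=(0.01, 2.0), method="bounded").x

# Synthetic six-week "surveillance" series, for illustration only.
rng = np.random.default_rng(0)
observed = simulate_cases(0.35, 42) * rng.normal(1.0, 0.05, 42)

beta_hat = fit_spread_rate(observed)
forecast = simulate_cases(beta_hat, 42 + 14)[42:]  # two-week forecast
print(f"estimated spread rate: {beta_hat:.3f}")
```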

“If the forecasts are right, then we know we have the right set of parameters,” said Ray.

When it comes to infectious disease epidemics, time is of the essence. Naïve calibration, which requires running an agent-based model (ABM) thousands of times, is neither efficient nor practical. By using an artificial intelligence method called machine learning, researchers can construct and train a metamodel, a fast surrogate of their ABM, that runs in seconds. Its results can then be used to “learn” the spread rate from epidemic data and make forecasts.
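
A loose sketch of that surrogate idea, assuming a stand-in simulator and a Gaussian-process metamodel rather than the team’s actual method: the expensive model is run at a handful of training points, the metamodel learns the map from spread rate to epidemic curve, and the fast metamodel is then searched densely for the spread rate that best matches observed data.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def slow_simulation(rate, days=28):
    # Stand-in for an expensive ABM run that returns a case-count
    # trajectory; the real model could take hours per run.
    t = np.arange(days)
    return 100.0 * np.exp(np.clip((rate - 0.15) * t, None, 10.0))

# 1. Run the expensive model at a handful of training points ...
train_rates = np.linspace(0.05, 0.6, 12)
train_curves = np.array([slow_simulation(r) for r in train_rates])

# 2. ... and train a metamodel mapping spread rate -> trajectory.
surrogate = GaussianProcessRegressor(normalize_y=True)
surrogate.fit(train_rates[:, None], train_curves)

# 3. The metamodel answers in milliseconds, so the spread rate can
#    be "learned" from epidemic data by dense search instead of
#    thousands of fresh ABM runs.
observed = slow_simulation(0.22)
candidates = np.linspace(0.05, 0.6, 2000)
predicted = surrogate.predict(candidates[:, None])
errors = ((predicted - observed) ** 2).sum(axis=1)
print(f"surrogate-estimated spread rate: {candidates[np.argmin(errors)]:.3f}")
```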

While the process may sacrifice the accuracy of long-term forecasts, it could generate faster short-term forecasts that reduce computational expense and set mitigation efforts in motion more quickly.

“Our whole point in doing this type of work is to make the process routine, more akin to weather forecasting or other domains where a large computational infrastructure is dedicated to continuously adjusting the models automatically as new data is obtained,” said Ozik. “We can then provide short-term forecasts and the ability to run longer-term scenarios that answer specific stakeholder questions.”

While Chicago served as the testbed for this model, the team expects to generalize its methods for application anywhere in the world.