Who Was Better at Predicting the Course of the Pandemic – Experts or the Public?

For each question, we also asked everyone to pick two numbers that they were 75 percent confident the true outcome would fall between. For example, someone might be 75 percent confident that between 100,000 and 1,000,000 U.K. residents would be infected by the end of the year. Someone who selects a narrower range – say, being 75 percent sure that between 200,000 and 250,000 people will be infected – is more confident about their prediction. Someone who selects a wider range is indicating that they are more uncertain.

If you are 75 percent sure that the true outcome will fall within the range you selected, you might reasonably hope to be correct 75 percent of the time. Unfortunately, our participants weren’t. Actual outcomes fell within laypeople’s ranges only between 8 percent and 20 percent of the time, depending on the question. For experts, actual outcomes fell within their ranges between 36 percent and 57 percent of the time.
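The calibration check described above boils down to a simple calculation: count how often the true outcome lands inside each forecaster's stated range. A minimal sketch, using invented numbers rather than the study's actual data:

```python
# Hypothetical illustration of interval calibration: the figures below
# are invented for demonstration and are not the study's data.

def interval_coverage(intervals, outcomes):
    """Fraction of true outcomes that fall inside the stated (low, high) ranges."""
    hits = sum(lo <= outcome <= hi
               for (lo, hi), outcome in zip(intervals, outcomes))
    return hits / len(outcomes)

# Each forecaster gives a range they are 75 percent sure contains the truth.
intervals = [(100_000, 1_000_000),  # wide range: less confident
             (200_000, 250_000),    # narrow range: more confident
             (50_000, 500_000),
             (300_000, 400_000)]

true_outcome = 450_000  # the same realized value, judged against every range

coverage = interval_coverage(intervals, [true_outcome] * len(intervals))
print(f"Coverage: {coverage:.0%}")  # prints "Coverage: 50%"
```

A group of well-calibrated forecasters would see coverage close to 75 percent; the study's laypeople came in far lower (8 to 20 percent), which is the signature of overconfidence, i.e. ranges drawn too narrow.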

In other words, experts were more accurate and less overconfident than laypeople, but still less accurate and more overconfident than we might hope.

Some notes of caution: our experts were individuals who held one of the occupations described at the beginning of this article and who responded to an announcement on social media. They aren’t necessarily representative of experts who spent the most time talking to the media or advising governments.

And our laypeople certainly weren’t practiced in forecasting, unlike the experienced predictors on websites such as the Good Judgment Project and Metaculus, who may well have outperformed experts. Our lay sample matched the U.K. population in age and gender, but may have differed in other ways. However, even when we restricted the comparison to those laypeople who scored well on a math test, experts were still much more accurate and less overconfident.

Perhaps it’s not surprising that most people’s best guesses about the number of deaths and infections were off: predictions about emerging diseases are hard, and none of us has a crystal ball. We found that even experts weren’t particularly good at predicting the pandemic’s ultimate course and impact. But our level of confidence about our predictions is within our control – and the evidence suggests that most of us could stand to be a bit more humble.

For experts, this suggests that extra caution is warranted around making confident public predictions, so as to avoid prediction “reversals” that may undermine public trust in science. And for the public: when faced with predictions of how future disease outbreaks will unfold, we should not be surprised if the true situation turns out to be better or worse than predicted – particularly if those predictions come from non-experts.

Unfortunately, the continued threat of pandemics means that this research may continue to be relevant in the future. The annual risk of a serious natural pandemic has been estimated at between 1 percent and 5 percent, and the risk of engineered pandemics may grow as synthetic biology improves. Long-term investments in general-purpose disease surveillance and response technologies therefore seem likely to come in handy eventually. In the meantime, we must all learn to live with the fact that we don’t know how the future is going to unfold, and that no one can tell us for sure.

Gabriel Recchia is Research Associate, Winton Centre for Risk and Evidence Communication, University of Cambridge. This article is published courtesy of The Conversation.