Putting a consistent value on experts' uncertainty about climate change models

Oppenheimer and his co-authors use a technique known as “structured expert judgment” to put an actual value on the uncertainty that scientists studying climate change have about a particular model’s prediction of future events such as sea-level rise. Each expert is “weighted” for his or her ability to quantify uncertainty about the situation at hand, gauged from his or her demonstrated knowledge of the field. More consideration is given to experts with higher statistical accuracy and informativeness. Another technique, called probabilistic inversion, would adjust a climate model’s projections to reflect the probabilities those experts assign to its outcomes.
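The weighting idea can be sketched in code. The following is a simplified illustration in the spirit of performance-weighted expert judgment (Cooke’s “classical model”), not the authors’ implementation: the bin masses, the seed data, and the `exp(-stat/2)` calibration proxy are all illustrative assumptions. In the full method, calibration is scored with a chi-squared tail probability and informativeness with relative entropy against a background measure.

```python
import math

QUANTILE_MASS = [0.05, 0.45, 0.45, 0.05]  # mass implied by 5/50/95 quantiles

def bin_counts(assessments, realizations):
    """Count how many known true values fall in each inter-quantile bin."""
    counts = [0, 0, 0, 0]
    for (q5, q50, q95), x in zip(assessments, realizations):
        if x < q5:
            counts[0] += 1
        elif x < q50:
            counts[1] += 1
        elif x < q95:
            counts[2] += 1
        else:
            counts[3] += 1
    return counts

def calibration_score(assessments, realizations):
    """Statistical accuracy: 2n * KL(empirical bin frequencies || expected),
    mapped to (0, 1] via exp(-stat/2) -- a simplified proxy (assumption) for
    the chi-squared tail probability used in the full classical model."""
    n = len(realizations)
    counts = bin_counts(assessments, realizations)
    stat = 0.0
    for c, p in zip(counts, QUANTILE_MASS):
        if c > 0:
            s = c / n
            stat += 2.0 * n * s * math.log(s / p)
    return math.exp(-stat / 2.0)

def information_score(assessments, background_width):
    """Informativeness: narrower 90% intervals, relative to a background
    range, earn a higher score (a crude stand-in for relative entropy)."""
    return sum(math.log(background_width / (q95 - q5))
               for q5, _, q95 in assessments) / len(assessments)

def performance_weights(experts, realizations, background_width):
    """Weight = calibration x information, normalized across experts."""
    raw = {name: calibration_score(a, realizations) *
                 information_score(a, background_width)
           for name, a in experts.items()}
    total = sum(raw.values())
    return {name: w / total for name, w in raw.items()}

# Five hypothetical seed questions with known answers: each expert states
# (5th, 50th, 95th) percentiles before the true value is revealed.
realizations = [10, 20, 30, 40, 50]
experts = {
    "well_calibrated": [(5, 12, 15), (15, 22, 25), (25, 28, 35),
                        (35, 38, 45), (45, 48, 55)],
    "overconfident":   [(0, 1, 2), (0, 1, 2), (0, 1, 2),
                        (0, 1, 2), (0, 1, 2)],
}
weights = performance_weights(experts, realizations, background_width=100)
```

In this toy run, the overconfident expert, whose narrow intervals always miss the true value, ends up with a near-zero weight, so the pooled judgment is dominated by the well-calibrated expert.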

Structured expert judgment has been used for decades in fields where scenarios have high degrees of uncertainty, most notably nuclear-energy generation, Oppenheimer explained. Similar to climate change, nuclear energy presents serious risks, the likelihood and consequences of which — short of just waiting for them to occur — need to be accurately assessed.

When it comes to climate change, however, the procedure by which experts assess the accuracy of models projecting potentially ruinous outcomes for the planet and society is surprisingly informal, Oppenheimer said.

When the Intergovernmental Panel on Climate Change (IPCC) — an organization under the auspices of the United Nations that periodically evaluates the effects of climate change — tried to determine the ice loss from Antarctica for its Fourth Assessment Report, released in 2007, discussion by the authors largely occurred behind closed doors, said Oppenheimer, who has long been involved with the IPCC and served as an author of its Assessment Reports.

In the end, the panel decided there was too much uncertainty in the Antarctic models to say how much ice the continent would lose over this century. But there was no traceable and consistent procedure that led to that conclusion, Oppenheimer said. As models improved, the Fifth Assessment Report, released in 2013, was able to provide numerical estimates of future ice loss, but these were still based on the informal judgment of a limited number of participants.

Claudia Tebaldi, a project scientist at the National Center for Atmospheric Research, said that the researchers propose a much more robust method for evaluating the growing volume of climate-change data than having experts come up with “a ballpark estimate based on their own judgments.”

“Almost every problem out there would benefit from some approach like this, especially when you get to the point of producing something like the IPCC report where you’re looking at a number of studies and you have to reconcile them,” said Tebaldi, who is familiar with the research but had no role in it. “It would be more satisfying to do it in a more formal way like this article proposes.”

The implementation of the researchers’ technique, however, might be complicated, she said. Large bodies such as the IPCC, and even individual groups authoring papers, would need a collaborator with the skills to carry it out. But, she said, if individual research groups adopt the method and demonstrate its value, it could eventually be taken up in the IPCC Assessment Reports.

For policymakers and the public, a more transparent and consistent measurement of how scientists perceive the accuracy of climate models could help instill more confidence in climate projections as a whole, said Sander van der Linden, a postdoctoral researcher and lecturer in public affairs, and director of Princeton’s Social and Environmental Decision-Making (SED) Lab, who studies public policy from a behavioral-science perspective. With no insight into how climate projections are judged, the public could conclude from situations such as the IPCC’s uncertain 2007 finding about Antarctica that the problems of climate change are inconsequential, or that scientists do not know enough to justify the effort (and possible expense) of a public-policy response, he said.

“Systematic uncertainties are actually forms of knowledge in themselves, yet most people outside of science don’t think about uncertainty this way,” said van der Linden, who is familiar with the research but had no role in it. “We as scientists need to do a better job at promoting public understanding of uncertainty. Thus, in my opinion, greater transparency about uncertainty in climate models needs to be paired with a concerted effort to improve the way we communicate with the public about uncertainty and risk.”

Princeton notes that Oppenheimer worked with co-author Christopher Little, a climate scientist at Atmospheric and Environmental Research Inc. in Massachusetts, and a former postdoctoral research associate in the Program in Science, Technology and Environmental Policy in Princeton’s Woodrow Wilson School of Public and International Affairs; and Roger Cooke, a professor at the University of Strathclyde in Scotland and Resources for the Future in Washington, renowned for his research on structured expert judgment.

The latest paper stems from research Oppenheimer and Little published in 2013 in Nature Climate Change and the Proceedings of the National Academy of Sciences. These earlier papers proposed methods for more consistently integrating ice loss from Antarctica and Greenland into sea-level-rise projections.

— Read more in Michael Oppenheimer et al., “Expert judgement and uncertainty quantification for climate change,” Nature Climate Change 6, no. 5 (27 April 2016): 445 (DOI: 10.1038/nclimate2959)