
On the psychology behind climate models

Sometimes it’s really interesting to look into the minds of climate modellers. When scenarios are designed around a set of subjective beliefs, it’s possible to reverse-engineer them to uncover the informal models used by their designers.

Here’s a paper decoding integrated assessment models (IAMs), the tools used to explore plausible future climate scenarios. Specifically, the paper explains how IAMs deployed by the Network for Greening the Financial System – the club of climate-focused central banks and supervisors – can produce different outputs even when fed the same data and objectives.

The paper is technically quite brilliant. I would, however, describe its premise as decidedly postmodern.

A reader skimming it might think it is an empirical analysis of climate economics as expressed by the IAMs studied. These are models, originally proposed by William Nordhaus in the 1970s, that fold the physical environment and sustainability concerns into the standard set of models used by economists. The IAM approach to scenario generation has been criticized lately, but it remains the most common method for exploring the impact of climate change on the economy.

But really the paper is about something else – the nature of published scenarios themselves.

Let me explain.

A bunch of people in an NGO somewhere hold a series of committee meetings to determine the contours of a particular climate scenario. They may use some statistical modeling to do this, but a healthy dose of speculative, subjective opinion is also applied to develop the final curves. Why? Because the mathematical models, if they are empirically based, will very likely fail to produce scenarios that are sufficiently spicy to gather much attention from the industry. The publishers of the scenarios therefore apply subjective overlays to “correct” the scenarios – sex them up – before they are released. (In some cases models are not used in the scenario-building process at all; the projections are instead based on pure subjective reasoning.)

Using more pejorative language, we could describe the process as committee members drawing squiggles on white boards. They compile and publish the squiggles and send them to banks and government agencies who use them to conduct august, detailed scenario analyses of a range of important portfolios.

Subsequently, in another location, a group of talented academics read the white papers describing the scenarios and find them interesting in and of themselves.  They set out to analyze the nature of the committee's squiggles.  They use well-known, cutting-edge techniques to reverse engineer the sensitivities implicitly used by the committee to construct the squiggles.
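
To make that concrete, here is a minimal sketch of what “reverse engineering the implied sensitivities” could look like in the simplest possible case: regress a published output squiggle on a published driver squiggle and read the fitted coefficient as the sensitivity the committee implicitly assumed. The numbers and the functional form below are invented for illustration; the paper’s actual machinery is far more sophisticated.

```python
# Illustrative sketch only: the data are made up and the log-linear form is an
# assumption, not the paper's specification.
import numpy as np

# Hypothetical published squiggles, five-year steps from 2025 to 2050:
carbon_price = np.array([50, 75, 110, 160, 230, 320], dtype=float)  # $/tCO2
gdp_loss_pct = np.array([0.2, 0.5, 0.9, 1.6, 2.4, 3.5])             # % of GDP

# Fit gdp_loss = alpha + beta * log(carbon_price). The slope beta is the
# sensitivity the committee baked into the curves, written down or not.
X = np.column_stack([np.ones_like(carbon_price), np.log(carbon_price)])
alpha, beta = np.linalg.lstsq(X, gdp_loss_pct, rcond=None)[0]

print(f"implied sensitivity: {beta:.2f} pp of GDP per log-unit of carbon price")
```

Run across many published curves at once, this kind of fit lets you back out the informal model the committee was (perhaps unknowingly) carrying around in its collective head.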

Seen this way, the paper is closer to a study of the original NGO folks’ psychology than a critique of scientific methods. In other words, the true subject of analysis is the collective mindset of the people involved in constructing the scenarios. 

As a psych paper, I think this is really powerful research. It makes sense to translate a set of predictions produced, in whole or in part, using subjective add factors (aka fudge factors) into something more formal and rigorous. Work like this can help us understand the informal models committee members use to construct their squiggles. In turn, this allows us to quantify the gap between the musings of experts and reality.

There is a vein of research like this in monetary economics. Academics try to understand the reasoning of monetary policy committee members by analyzing their votes, public comments, and inflation forecasts against the backdrop of market forces and economic data. This research is interesting because fallible mammals (us) really are part of the process of interest rate determination. If you want to understand interest rate dynamics, you have to understand human psychology.

Are the scenarios built by NGOs determinative of climate risk outcomes? That seems like quite a stretch. I’m personally of the opinion that scenario analysis, by its very nature, is close to useless. Does it really matter if the squiggles indicate a 10% sensitivity rather than a 20% sensitivity? What aspect of our struggle for net zero is affected if we mistake one for the other?

So while the paper is clever, uses cool techniques, and tells us some interesting things about the psychology of scenario builders, it teaches us approximately nothing about climate risk itself.

Alas.