Quantum theory, climate risk, and the limits of simulations
During my holiday travels, I came across this short piece by Dr Ron Dembo on LinkedIn. I think it’s fair to say that Dr Dembo is a fierce critic of the use of traditional statistical methods when applied to climate risk. He argues instead for a thorough application of simulation methods and forward-looking techniques when modeling the risk posed by climate-related perils.
In his latest article, Dr Dembo likens the uncertainty of future predictions of climate change to phenomena identified by physicists in the quantum world – specifically Heisenberg’s Uncertainty Principle. I’m an economist, not a physicist, and my understanding of quantum mechanics is likely on par with that of the average taxi driver.
That said, quantum theory has been said to apply to virtually every question that science has been unable to answer, up to and including the very existence of God. It has been used to explain the perceived efficacy of things like homeopathic medicine and reiki.
I can’t even begin to comprehend how quantum theory drives the macro world via the laws of attraction, so you’ll have to wade through that link all by yourself.
Science can’t definitively prove the impossibility of psychokinesis, or that an intelligent being was not responsible for the Big Bang. Perhaps quantum mechanics really does act on water molecules to give them healing properties? I frankly doubt it; it seems much more likely that people are trying to justify their beliefs by appealing to a phenomenon that is widely misunderstood by the general public. Unless a connection to the quantum world can be demonstrated, assertions like these should be rejected.
Dr Dembo, though, is not trying to claim that climate uncertainty is somehow related to quantum uncertainty; he is merely using the two concepts to draw an analogy. It’s a strange theme to choose, though, given the way quantum theory has been misapplied in the past by these other actors.
So is the analogy valid? The article states:
“In a climate change world, we need to understand how hazards may impact physical infrastructure at some location at some point in the future. For a given point in the future, in some fixed location, the hazards are uncertain. Alternatively, for a given point in the future, if the hazard is known with certainty, the location could not be known with certainty. This means that both the hazard level and the location cannot simultaneously be known with absolute certainty.”
But I suspect that this is just a wordy way of saying that the future is inherently uncertain. Unpacking this passage, it sounds like the ideal is for us to be able to predict the specific occurrence of a climate event for a given time and location – a bit like the Connecticut Yankee predicting the eclipse in King Arthur’s Court.
This would be some feat if it could ever be accomplished. Absent time travel, it just isn’t possible. The perils faced by a particular location can be expressed as probabilities that can only be estimated, imperfectly, by extrapolating historical data. If these estimates could be made more accurate – and if the improvements could be demonstrated – that’s about the best we could hope for. One improvement I would like to see is for more attention to be paid to locations that are currently unoccupied.
The flipside of the climate uncertainty principle is that for a given peril, the precise location cannot be known with certainty. I can’t see how this form of uncertainty is distinct from the other. There are no quantum superpositions – if a map is overlaid with the path of a storm, it doesn’t matter whether you move the top page or the bottom to demonstrate the randomness of the path of destruction.
Stripped back, it seems to me that the main point of Dr Dembo’s article is to argue that the deterministic approaches preferred by financial regulators are badly flawed. On this point, we wholeheartedly agree.
In their place, Dr Dembo argues for a much wider application of stochastic simulation methods – a technique that I have also advocated.
But simulations are not a panacea. Recognizing that the tails of probability distributions – where low-probability, high-impact outcomes lie – are important does not tell the modeler what the correct shape of those tails is. After all, tail outcomes are, by definition, rarely observed and therefore very difficult to capture. Even a small tweak to a simulation’s parameters can have a profound effect on its final predictions.
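To see how sensitive tail estimates are, here is a minimal sketch with entirely hypothetical numbers: annual losses drawn from a lognormal distribution, where a 10% tweak to the shape parameter barely moves the median but swings the 1-in-200 outcome dramatically.

```python
import numpy as np

# Hypothetical illustration: losses follow a lognormal distribution.
# Nudging the shape parameter sigma leaves the median essentially
# unchanged but inflates the far tail by a large margin.
rng = np.random.default_rng(42)
n = 1_000_000
tails = {}

for sigma in (2.0, 2.2):  # a 10% tweak to a single parameter
    losses = rng.lognormal(mean=0.0, sigma=sigma, size=n)
    tails[sigma] = np.percentile(losses, 99.5)  # 1-in-200 outcome
    print(f"sigma={sigma}: median={np.median(losses):.2f}, "
          f"99.5th percentile={tails[sigma]:.1f}")
```

For a lognormal, the median is exp(mu) regardless of sigma, so both runs report a median near 1, while the 99.5th percentile jumps by roughly two-thirds – exactly the kind of sensitivity that makes tail calibration treacherous.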
And, of course, these predictions still require validation. If you declare there’s a 1% chance that flood losses in a particular town will exceed $10mn in 2030, you need a way to demonstrate the veracity of your claim. This process is far from straightforward.
Choosing to use stochastic methods is a worthwhile first step, but it must still be backed by diligent empirical research if the approach is to be trusted.
In other words, model users should be made aware that modeling tail behavior is, like, really very difficult.
The whole quantum thing is great clickbait but that’s about it. I don’t think Heisenberg’s principle commonly applies in the macro world.
If you want to use stochastic methods, go right ahead, but don’t stop trying to understand what’s going on in the tails. There’s still a huge amount of work to do.