Last week I argued that so-called climate “event attribution studies” should meet three criteria to enhance their scientific quality. A very astute reader responded in the comments,
“I know that SOMEBODY should check to see if the study leading to the report should have verified your three rules, but I don't believe there are any methods by which I can do so.”
This observation takes us right to a central paradox of expertise: To evaluate claims by experts requires substantial expertise in the same area as that held by the experts making the claims.
Some have proposed shortcuts to try to avoid the paradox, such as looking at the social or scientific prestige of the publication where the claims are made or perhaps the academic pedigree of the expert. Others have appointed themselves as fact-checkers or misinformation police, asserting a degree of apparent omniscience in the evaluation of expert claims.
But are there actually reliable shortcuts to evaluating claims by experts?
According to Dr. Erica Thompson in her wonderful new book, Escape From Model Land: How Mathematical Models Can Lead Us Astray and What We Can Do About It, the answer to this question is usually “no” — there are not really many shortcuts to evaluating the claims of experts who employ mathematical models to help us better understand the world.
Thompson sums up the response to her book’s title as follows:
Almost all of the time, our escape from Model Land must use the qualitative exit: expert judgment. This necessary reliance on expert judgment presents us with challenges: who is an expert? Why should we trust them? Are they making assumptions that we can agree with? And if models are, as I have argued, very much engines of scientific knowledge and social decision-making rather than simple prediction tools, we also have to consider how they interact with politics and the ways in which we delegate some people to make decisions on behalf of others.
“Model Land,” according to Thompson, refers to the “frameworks we use to interpret data” in order “to try to predict how we can take more effective actions in future in support of some overall goal.” Often, but not always, such frameworks are formalized as mathematical models. Thompson devotes a chapter each to economic, atmospheric and pandemic modeling, but her analysis is broad enough to cover any form of mathematical modeling.
Escape From Model Land is a valuable book for the curious layperson as well as (and perhaps especially) for the seasoned quantitative modeler. Thompson imparts not just a lot of knowledge, but a considerable amount of wisdom as well.
For Thompson, the central challenge of interpreting and using models is to navigate between two extremes: one where we take models too literally and the other where we throw them out completely. She identifies a few “fundamental questions” that she’d like to help her readers navigate:
How are models constructed? To whom do they deliver power? How should we regulate them? How can we use them responsibly?
These are really big questions — with nuanced, contested and complicated answers.
Thompson is a skilled writer and largely delivers on her aim to provide some insight into how these questions might start to be addressed. But writing about how to better understand and use models is a lot like writing about how to play better golf or piano. Sure, you can explain to readers a lot about models, golf and playing the piano — and one can achieve deep appreciation as a result. But I’m pretty sure no one has become a skilled golfer or entertaining piano player simply by reading. And it’s the same with the use of models. That’s part of the paradox of expertise.
A positive side of the paradox of expertise is that Thompson’s analyses offer considerable value to practicing modelers who wish to better understand the broader social and cultural enterprise that they are helping to build and of which they are a part. It is easy for modelers to live their professional lives entirely in “Model Land.” Thompson provides some tools through which they might escape.
At times, Thompson approaches territory that some might find heretical, as all of us who work in this field eventually do. For instance, she argues that “explainability” — the ability to place model outcomes into the context of a sense-making narrative — is not a sufficient basis for evaluating claims that arise from model outcomes. She illustrates this with an example from climate science: climate model studies that seek to project the future can produce both increasing and decreasing winter storm activity, and both outcomes can be explained as plausible based on different, contradictory narratives.
Thompson then asks:
But if we are relying on explainability as a warrant of confidence, it begs a methodological question: how do we know that we are explaining and not just constructing a post-hoc rationalization?
I have long argued, in various contexts, that the universe of plausible modeled outcomes is so vast that it allows us to select our preferred model outcomes not based on their scientific quality or prognostic skill, but rather by the narrative we prefer them to support, or even just at random.
Want to show that Pakistan’s flooding was made much more likely by climate change? Sure, there is a model to support that narrative. Want to show that Pakistan’s flooding was made much less likely by climate change? Well, sure, there is a model for that narrative also! Which would you prefer?
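To make that cherry-picking mechanism concrete, here is a minimal, purely hypothetical sketch in Python. The toy “ensemble” of 20 model variants and all of its numbers are invented for illustration, not drawn from any actual attribution study; the point is only that a wide spread of plausible outcomes lets you cite a model variant for whichever narrative you prefer.

```python
import random

random.seed(1)  # reproducible toy example

# Hypothetical ensemble: each "model variant" projects a percent change
# in flood likelihood under climate change. Invented numbers, centered
# near +10% with a wide spread, so both signs of change appear.
ensemble = [random.gauss(10, 40) for _ in range(20)]

more_likely = [x for x in ensemble if x > 0]  # supports "made more likely"
less_likely = [x for x in ensemble if x < 0]  # supports "made less likely"

print(f"{len(more_likely)} of {len(ensemble)} variants support 'more likely'")
print(f"{len(less_likely)} of {len(ensemble)} variants support 'less likely'")

# Narrative-driven selection: whichever story you prefer, there is a
# model variant to cite for it.
print(f"Most alarming variant:   {max(ensemble):+.0f}% change")
print(f"Most reassuring variant: {min(ensemble):+.0f}% change")
```

With a spread that wide, “there is a model for that” is almost always true, which is exactly why the choice of which model outcome to report needs a justification independent of the narrative it supports.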
Referring to Rudyard Kipling’s children’s tale “The Elephant’s Child,” which tells how the elephant got its trunk (spoiler: a crocodile pulled its nose), Thompson asks, approaching but not committing heresy:
“Do we know that we always have more direct intuition about a situation that we try to describe than a five-year-old does about the evolution of animal characteristics?”
Many experts will have encountered journalists or other self-appointed fact-checkers who believe that their faith in a narrative outweighs the expert’s knowledge and experience. For instance, I’ve seen many non-experts wielding simplistic narratives dismiss or denigrate world-leading experts on topics such as the plausibility of a research-related origin of Covid-19, or the fact that hurricane landfalls in the US have not increased since 1900. Experts in the use and misuse of models and their outputs can sure be a buzzkill for those promoting simple narratives.
Still, Thompson asks her readers to become more sophisticated users and interpreters of models and their outputs. She concludes with a list of 18 questions to ask of models and modelers, such as,
“Given that the model is not perfect, whose judgment has been used to translate from Model Land into the real world?”
This and the other 17 questions are no doubt important. But they are also deeply unsatisfying, because they reveal the paradox of expertise: one has to have expertise to reliably evaluate expertise. There are no consistent shortcuts to expert judgment.
If there were, well, we wouldn’t really need experts, would we?
Few people have the time or interest to obtain a PhD in science and technology studies with parallel high-level expertise in quantitative modeling in order to learn how to reliably interpret claims grounded in quantitative models in areas such as economics, climate or health. I’ve been doing this for decades, and I am still learning every day. Think golf or piano — expertise in and about quantitative modeling is pretty similar.
Fortunately, there is a huge middle ground between a five-year-old’s animal tales and a doctorate in science studies. The strength of Thompson’s book is that she enters this middle ground and provides her readers with a valuable and comprehensible education on the qualitative interpretation and use of quantitative models.
The knowledge and wisdom that she shares is not always easy going, not least because it can challenge preconceptions and favored narratives. But it is well worth taking on, whether you are new to the topic or a career professional.
For paid subscribers, below are PDFs of three of my many dozens of papers on how we might think better about models in decision making:
Saltelli, A., et al. (2020). Five ways to ensure that models serve society: a manifesto. Nature.
Pielke, Jr., R. A. (2003). The role of models in prediction for decision. Pp. 111-135 in Models in Ecosystem Science.
Pielke, Jr., R. A., D. Sarewitz, and R. Byerly (2000). Decision making and the future of nature: Understanding, using, and producing predictions. Pp. 361-387 in Prediction: Science, Decision Making and the Future of Nature, Eds. D. Sarewitz, R. A. Pielke, Jr., and R. Byerly. Island Press.
I appreciate your support. I am embarking on an experiment to see if a new type of scholarship is possible. I am looking to make a break from traditional academia and its many pathologies, and this Substack is how I’m trying to make that break. I am well on my way. Please consider a subscription at any level, and sharing is most appreciated. Independent expert voices are going to be a key part of our media ecosystem going forward and I am thrilled to be playing a part. You make that possible.