Climate Science Whiplash
Why we misinterpret climate science — Weather Attribution Alchemy, Part 4
Today’s post is Part 4 in the THB series Weather Attribution Alchemy. The previous installments are:
Part 1: Weather Attribution Alchemy
Part 3: Tricks of the Trade
Did you know that climate change is making the San Francisco region more foggy?
The Bay Area just had its foggiest May in 50 years. And thanks to global warming, it's about to get even foggier.
Did you also know that climate change is making the San Francisco area less foggy?
Declining fog cover on California's coast could leave the state's famous redwoods high and dry, a new study says. Among the tallest and longest-lived trees on Earth, redwoods depend on summertime's moisture-rich fog to replenish their water reserves. But climate change may be reducing this crucial fog cover.
The two news stories with opposite claims were published just a few months apart. Together, the contradicting claims illustrate climate science whiplash,1 a common dynamic in reporting of the science of climate change.
The thing that just happened — a hurricane, a fire, more fog, less fog, whatever — can inevitably be associated with some legitimate study in the peer-reviewed literature that appears to explain why the event happened and what it portends for the future.
In his classic paper on “How Science Makes Environmental Controversies Worse,” science policy scholar Dan Sarewitz referred to such circumstances as an “excess of objectivity,” which he characterized as:
. . . not a lack of scientific knowledge so much as the contrary—a huge body of knowledge whose components can be legitimately assembled and interpreted in different ways to yield competing views of the “problem” and of how society should respond. Put simply, for a given value-based position in an environmental controversy, it is often possible to compile a supporting set of scientifically legitimated facts.2
Today, I suggest a general dynamic that underlies much of media reporting on climate change. This dynamic helps us to understand why public discussions of extreme weather so often contradict the assessments of the IPCC.
I have called this dynamic: The Guaranteed Winner Scam Meets the Hot Hand Fallacy.
More than 15 years ago I published a paper showing that the final score of England’s FA Cup Championship match was well correlated with economic damage in the subsequent hurricane season:
Years in which the FA Cup championship game has a total of three or more goals have an average of 1.8 landfalling hurricanes and $11.7 billion in damage, whereas championships with a total of one or two goals have had an average of only 1.3 storms and $6.7 billion in damage.
No one is going to believe that there is a causal relationship at work here. But if instead of the FA Cup Championship final score as the causal agent I substituted concepts like the Clausius-Clapeyron relationship or ocean heat content — or even ink blots like global heating or weather whiplash — and provided a citation to a study, many would find the claimed causality to be eminently plausible, even if it was just as tenuous as the final score of a football match.
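To see how easily such a relationship can be manufactured, consider a minimal sketch in Python (my own illustration, with invented series names, not the analysis from the paper): generate enough unrelated random series and at least one of them will correlate impressively with whatever record you choose.

```python
import random
import statistics

random.seed(0)

def corr(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Thirty years of made-up "hurricane damage" and 1,000 unrelated
# random series standing in for candidate explanations.
damage = [random.gauss(10, 4) for _ in range(30)]
candidates = {f"series_{i}": [random.gauss(0, 1) for _ in range(30)]
              for i in range(1000)}

# Search for the candidate that best "explains" the damage record.
best = max(candidates, key=lambda k: abs(corr(candidates[k], damage)))
print(best, f"r = {corr(candidates[best], damage):+.2f}")
```

The winning series is pure noise, yet the search reliably turns up a strong-looking correlation — the FA Cup result in miniature.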
What is going on here is backwards reasoning: Instead of using science to inform understandings of the thing that just happened, we use the thing that just happened to cherry pick which subset of science we decide is relevant.
To explain this dynamic, let’s start with the guaranteed winner scam.
It works like this: select 65,536 people and tell them that you have developed a methodology that allows for 100 per cent accurate prediction of the winner of next weekend’s big football game. You split the group of 65,536 into equal halves and send one half a guaranteed prediction of victory for one team, and the other half a guaranteed prediction of victory for the other team.
You have ensured that your prediction will be viewed as correct by 32,768 people. Each week you can proceed in this fashion. By the time eight weeks have gone by there will be 256 people anxiously waiting for your next week’s selection because you have demonstrated remarkable predictive capabilities, having provided them with eight perfect picks. Presumably they will now be ready to pay a handsome price for the prediction you offer in week nine.
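The arithmetic is easy to check. Here is a minimal sketch in Python (my own illustration):

```python
# Minimal sketch of the guaranteed winner scam: each week, half of the
# remaining recipients receive "Team A wins" and half receive "Team B
# wins," so half of them are guaranteed to see a correct prediction.
recipients = 65_536

for week in range(1, 9):
    recipients //= 2  # only the correctly-predicted half stays on the list
    print(f"After week {week}: {recipients:,} people have seen {week} perfect picks")
```

After week eight, 256 people remain, each having witnessed eight consecutive perfect predictions produced with no skill at all.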
Investors will be very familiar with this dynamic. You might think that investing in a managed mutual fund that recently outperformed its peers is an effective strategy for achieving higher returns, on the belief that the outperformance reflects greater investment skill. Almost always, it does not — however, many investors have a difficult time accepting this fact. Such performance chasing is another version of the guaranteed winner scam.
Instead of football matches or mutual funds, think of publications in the peer-reviewed literature that project how weather and climate variables might change due to human influences on the climate system. There are millions of such studies, and if you tell me an outcome — for instance, more hurricanes, fewer hurricanes, fewer but stronger hurricanes, more but weaker hurricanes, wetter, drier, faster, slower — I can assuredly find you a peer-reviewed study that projects such an outcome due to human influences.
Scientific assessments are so crucial because they force us to consider an entire body of research, not just a single paper or a seemingly relevant tiny subset. Assessment is difficult to do well, not just because of the enormity of the scientific literature, but because the “excess of objectivity” supports multiple legitimate interpretations of that literature.
Complicating the issue even further is the “hot hand fallacy,” a term coined to describe how people misinterpret random sequences, based originally on how fans perceive the tendency of basketball players to be “streak shooters” or to have the “hot hand.”
The “hot hand fallacy” is the mistaken belief that, in a random process, the probability of a “hit” (e.g., a made basket, a heads on a coin flip, or a successful seasonal hurricane forecast) is higher immediately following a “hit” than the baseline probability. In other words, people often see patterns in random sequences and then incorrectly conclude that those patterns give them additional reliable information about the future.
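A quick simulation shows why streaks in pure noise feel so meaningful. In the sketch below (my own illustration, not a replication of the original basketball studies), the chance of a hit after a hit stays at the baseline, yet long streaks still appear:

```python
import random

random.seed(42)
flips = [random.random() < 0.5 for _ in range(100_000)]  # fair 50/50 "shots"

# Empirical probability of a hit immediately following a hit.
after_hit = [flips[i + 1] for i in range(len(flips) - 1) if flips[i]]
print(f"P(hit)             = {sum(flips) / len(flips):.3f}")
print(f"P(hit | prior hit) = {sum(after_hit) / len(after_hit):.3f}")

# Longest run of consecutive hits: streaks arise even in pure randomness.
longest = run = 0
for f in flips:
    run = run + 1 if f else 0
    longest = max(longest, run)
print(f"Longest 'hot' streak: {longest}")
```

Both probabilities come out essentially equal to 0.5, while the longest streak typically runs well past a dozen consecutive hits — exactly the kind of run that feels anything but random.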
The “hot hand fallacy” can manifest itself in several ways with respect to climate projections.
First, because the wide range of available predictions and projections essentially spans the range of possibilities, some will seem to have anticipated the thing that just happened and to suggest more to come. Even if a projection’s apparent relevance is simply a fortunate correspondence with the randomness of the underlying system (e.g., internal climate variability), people will tend to gravitate to that particular projection.
Second, a defining feature of climatology is persistence, suggesting that nature does sometimes have a “hot hand” — it is not always a fallacy in the context of climate.3 However, distinguishing a true “hot hand” from a false one is not simple. For instance, after the United States experienced record-breaking major hurricanes in 2004 and 2005, many suggested that we were in a “new normal” for hurricane activity and damages. The climate did not play along, and there was no major U.S. (continental) hurricane landfall until 2017, the longest such stretch on record.
Identifying a true “hot hand” can only occur over a long time period, and even then it can be challenging. The best example of such evaluations can be found in weather forecasting, where extensive efforts have been devoted to distinguishing forecasting skill from random noise. As a result, we have the ability to reliably anticipate the evolution of weather days and even weeks into the future. Without that commitment to scientific rigor, weather forecasting would not have achieved its incredible successes.
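One standard tool for this kind of evaluation is to score probabilistic forecasts against a climatological baseline, for example with a Brier skill score. The sketch below (my own illustration with made-up numbers, not any forecast center’s verification system) shows the basic calculation:

```python
import statistics

def brier(forecasts, outcomes):
    """Mean squared error of probability forecasts (0 is perfect)."""
    return statistics.fmean((f - o) ** 2 for f, o in zip(forecasts, outcomes))

# Did it rain? (1 = yes), next to a forecaster's stated probabilities.
outcomes  = [1, 0, 0, 1, 1, 0, 0, 0, 1, 0]
forecasts = [0.9, 0.2, 0.1, 0.7, 0.8, 0.3, 0.2, 0.1, 0.6, 0.2]

# The "no skill" baseline always forecasts the climatological frequency.
climatology = statistics.fmean(outcomes)
baseline = [climatology] * len(outcomes)

# Skill > 0 means the forecaster beats climatology; with only ten
# cases, though, an apparent edge could still easily be luck.
skill = 1 - brier(forecasts, outcomes) / brier(baseline, outcomes)
print(f"Brier skill score vs. climatology: {skill:.2f}")
```

Separating a genuinely skillful forecaster from a lucky one requires running exactly this kind of comparison over many years of forecasts, not a handful.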
One reason why the IPCC framework sets a high bar for the detection of changes in climate variables, and for the attribution of detected changes to causes, is that it is so very easy for us to see patterns in randomness and ascribe causes that do not actually exist or are not particularly significant. Frustration with the scientific rigor of the IPCC is one factor underlying the rise of far less rigorous approaches to detection and attribution.4
Thus, when the IPCC concludes that the climate has warmed and this is due to human influences, we can have considerable confidence in the conclusion, even as science continues. At the same time, when the IPCC fails to detect changes in most (not all) metrics of extreme weather we should have the same degree of confidence in this result because the IPCC is applying the same scientific standards as it does to global temperature change.
Some accept the IPCC’s findings for changes in global temperature but reject those related to changes in extreme weather, preferring instead the guaranteed winner scam. A few even accept the IPCC’s findings on extreme events and reject its top line findings. Both perspectives reveal a degree of inconsistency.
The “excess of objectivity” does not imply that we know nothing; rather, it reflects the fact that we know so very much. What it does tell us is that sorting out what is what can be challenging and requires a lot of work. This, more than anything else, is why we should all work to maintain (and improve) the integrity of scientific assessment bodies like the IPCC. Cherries are delicious, but you can’t build reliable understanding by picking them — stick to the whole pie.
The easiest thing you can do to support THB is to click that “♡ Like” button. The more likes this post gets, the higher it rises in Substack feeds, and the more readers THB gets in front of!
THB is reader-engaged and reader-supported. THB’s aim is to highlight data, analyses, and commentary missing from public discussions of science, policy, and politics. A subscription costs ~$1.50 per week and keeps THB running so I can deliver delightful posts like this to your inbox several times a week. If you value THB and are able, please do support!
1. I first heard the concept of media “whiplash” from Andy Revkin way back in 2008. Check out Andy at Sustain What here on Substack.
2. Sarewitz is worth quoting at length: “When global warming is considered in terms of its specific potential social consequences, however, the availability of competing facts and scientific perspectives quickly spirals out of control. Consider the following chain of logic: human greenhouse gas emissions are causing global warming; global warming will lead to increased frequency and severity of extreme weather events; reducing greenhouse emissions can thus help reduce the impacts of extreme weather events. Each link in this chain is saturated with the potential for competing, fact-based perspectives. For example, climate models and knowledge of atmospheric dynamics suggest that increased warming may contribute to a rising incidence and magnitude of extreme weather events (Houghton et al., 2001, p. 575); but observations of weather patterns over the past century do not show clear evidence of such increases, while model results are still ambiguous, and “data continue to be lacking to make conclusive cases” (Houghton et al., 2001, p. 774). While economists can show how tradable permit schemes combined with mandated emissions targets can reduce greenhouse gas emissions (Chichilnisky and Heal, 1995), they cannot agree on plausible future rates of emissions increase (The Economist Print Edition, 2003). Furthermore, perspectives on the history and economics of innovation suggest that decarbonization is likely to depend primarily on technology evolution and diffusion, not policies governing consumption (Ausubel, 1991, Nakicenovic, 1996). Social science research on natural hazards suggests that socioeconomic factors (such as land use patterns, population density, and economic growth), rather than changing magnitude or frequency of hazards, are responsible for increasing societal losses from extreme events (Pielke et al., 2003, Changnon et al., 2001). And in any case, climate scientists disagree about the extent to which greenhouse gases are responsible for warming trends, given that other phenomena, such as land use patterns, may also strongly influence global climate (e.g., Marland et al., 2003). Finally, climate models that as yet have no capacity to accurately predict regional variability in extreme events are thus even further from providing useful information about how greenhouse gas emissions reductions might influence future incidence and magnitude of extreme events. Each level of analysis is not only associated with its own competing bodies of contestable knowledge and facts, but is also dependent on how one views the other levels of analysis. Facts can be assembled to support entirely different interpretations of what is going on, and entirely different courses of action for how to address what is going on.”
3. There is also a fun and lively debate in the literature on whether or not basketball players actually have a hot hand. Having played a lot of basketball in my life, I can report that I have had games in which I thought I had a “hot hand.” At the same time, I really cannot say whether that was because I was particularly “on” that day, or whether it was just a random stretch among the tens of thousands of shots I’ve taken over the past 50 years!
4. It remains an open question whether the IPCC itself will give in to calls to lower its scientific standards. Watch this space.