How to be a smart consumer of climate attribution claims
Three rules for making sense of "event attribution" studies
Recent years have seen a proliferation of single “event attribution” claims that are quickly churned out in the aftermath of notable extreme weather events. These analyses typically lead with strong claims of a connection between climate change and the event that just happened.
Last month I explained a bit about such claims:
Single-event attribution uses climate models to calculate the odds that a particular extreme event was made more likely as a direct and attributable consequence of human-caused climate change. Such studies generally look at two scenarios, one a counterfactual based on no increase in greenhouse gas concentrations in the atmosphere and the other with observed increased concentrations. Then, models run under the two different scenarios are compared to see if the probability of extreme events similar to the one in question became more likely in the model runs with more greenhouse gases.
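The comparison described above can be sketched numerically. The following is a minimal illustration, not any real study's method: the ensembles, threshold, and distributions below are entirely synthetic stand-ins for model output.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ensembles of modeled storm rainfall totals (mm): one set of runs
# with observed greenhouse gas concentrations ("factual") and one without the
# human-caused increase ("counterfactual"). Illustrative numbers only.
factual = rng.normal(loc=520, scale=60, size=10_000)
counterfactual = rng.normal(loc=480, scale=60, size=10_000)

threshold = 600  # rainfall total defining an "event at least this extreme" (mm)

# Probability of exceeding the threshold in each simulated world
p1 = np.mean(factual >= threshold)
p0 = np.mean(counterfactual >= threshold)

# Two summary statistics commonly reported in event attribution work:
probability_ratio = p1 / p0   # how much more likely the event became
far = 1 - p0 / p1             # "fraction of attributable risk"

print(f"P(event | factual)        = {p1:.4f}")
print(f"P(event | counterfactual) = {p0:.4f}")
print(f"Probability ratio         = {probability_ratio:.2f}")
print(f"FAR                       = {far:.2f}")
```

Note how sensitive the result is to the analyst's choices: moving the threshold or reshaping either distribution changes the probability ratio substantially, which is one reason methodological choices matter so much in these studies.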
Today, I offer three rules for accepting such claims from a scientific perspective consistent with the work of the Intergovernmental Panel on Climate Change (IPCC). Event attribution claims are worth scrutiny because their underlying methodology was developed explicitly to support climate lawsuits, promote climate advocacy and attract media attention. You can read more about the politics of such claims here.
It is troubling that this needs saying out loud: we should not allow the political significance of a topic to overshadow scientific rigor.
Consider the case of Hurricane Florence. In September 2018, as Hurricane Florence was heading towards a landfall in North Carolina, a team of researchers announced that the storm would be 80 kilometers larger and drop 50% more rainfall due to “human induced climate change.”
The announced connection to climate change, predictably, generated headlines across the legacy media around the world. Many news outlets ran with sensational stories. For instance, the Guardian proclaimed: “Climate change means Hurricane Florence will dump 50% more rain.” Newsweek announced: “How Global Warming Is Turbocharging Monster Storms Like Hurricane Florence.”
One of the scientists who performed the initial Hurricane Florence analysis, Michael Wehner of Lawrence Berkeley National Laboratory, openly expressed his desire to get the initial analysis in the news.
“Wehner admitted that he and his colleagues are sticking their necks out in making an estimate of the effect of climate change before the storm makes landfall. But he said that it’s important to provide answers when a hurricane is in the news, not months later when most people are thinking about other issues.”
The political overlay was not subtle. Wehner was contrasted with President Trump by the Center for American Progress and he promoted (emphasis in original) an overtly political message:
“The most important message from this (and previous) analyses is that “Dangerous climate change is here now!” It is not a distant threat in the future but today’s reality”
Long after the event and the passing of the Hurricane Florence media cycle, the researchers who secured all that media attention published a major revision of their initial analysis — this time in the peer-reviewed literature and not just in a press release.
In the study, published more than a year later, the researchers shared that their initial numbers were wildly off base. Whoopsy.
“The quantitative aspects of our forecasted attribution statements fall outside broad confidence intervals of our hindcasted statements and are quite different from the hindcasted best estimates.”
In plain English that means: “We were really, really wrong.”
Scott Johnson, editor of Climate Feedback, explained the significance of a mistake in the research team’s initial analysis — it was not a small error:
“Rather than something like 50 percent of the rainfall being the result of a warmer world, the models actually show about five percent (and that's ±5%). And rather than a storm that is 80 kilometers wider because of climate change, it was about nine kilometers (±6km) wider.”
Johnson says the rush to get the initial analysis out likely contributed to the flawed numbers, and he questioned whether the trade-off between speed and scientific accuracy is worthwhile: “Whether there's sufficient value in getting a less reliable answer faster is another question.”
We saw a similar situation last fall as massive flooding occurred in Pakistan. Again, event attribution claims raced out far ahead of what the underlying analyses could actually support, and poor reporting compounded the problem.
Event attribution analyses are increasingly easy to do, extremely media-friendly, and easy to misinterpret or overstate. They are now a fixture of media reporting and advocacy on climate, so we should be smart in how we interpret them. Here I offer three rules to guide the production and interpretation of such event attribution studies.
Before proceeding, it is necessary to state the obvious: human-caused climate change is real and has undoubtedly influenced all global weather events. The world has evolved differently than it otherwise would have due to significant human influence, most notably the emission of carbon dioxide from the burning of fossil fuels, but also through other influences, such as changes to the land surface.
Apart from media sensationalism and efforts to shape public and policymaker opinion on climate change, the attribution of extreme weather to causal factors (including the emission of greenhouse gases) is important for actual decision making related to disaster planning and climate adaptation. In such contexts, science should be more than just a symbol used to generate headlines and underpin advocacy and lawsuits. Scientific quality actually matters.
To ensure rigor in its work, the IPCC has employed a statistical framework for concluding that extreme weather phenomena have actually increased (or decreased) and for identifying the factors responsible for such changes. Detection requires quantifying a change in the statistics of weather extremes over climate time scales of 30 years or even longer. Once detection is achieved, scientists then seek to attribute those changes to particular causes, including the accumulation of carbon dioxide in the atmosphere.
When it comes to many types of extreme events the IPCC has for decades been unable to conclusively detect changes in their frequency or intensity. For instance, the IPCC has reported increases in heat waves and in heavy precipitation, but not tropical cyclones (including hurricanes), floods, tornadoes or drought.
The rise of “event attribution” studies aids climate advocacy by allowing science-like claims of a linkage between specific extreme events and climate change. It is not clear, however, that such studies offer much in the way of empirical rigor, particularly as compared to the conventional IPCC detection and attribution framework.
As you encounter event attribution claims, here are three rules for accepting “event attribution” studies as useful contributions to scientific understanding.
Rule Number One: Any model used in an event attribution study to quantify a claimed linkage between climate change and a specific extreme event should also produce accurate historical climate trends associated with the relevant phenomena. The claim that rainfall from Hurricane Florence was boosted 50% by climate change should have raised immediate doubts because observations have not shown an increase in rainfall related to landfalling hurricanes. Any event attribution study that cannot accurately replicate historical trends using the same model and methods is fatally flawed. A comparison of observations and modeled climate history with respect to the extreme weather phenomena under study should always be included in event attribution results.
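A minimal version of the consistency check this rule calls for might look like the following sketch, where entirely synthetic series stand in for observed and modeled annual hurricane rainfall:

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1970, 2020)

# Synthetic annual rainfall series (illustrative, not real data):
# the "observed" record has no trend; the "modeled" record trends upward.
observed = 100 + rng.normal(0, 10, years.size)
modeled = 100 + 0.8 * (years - years[0]) + rng.normal(0, 10, years.size)

# Least-squares linear trend in each series (units per year)
obs_trend = np.polyfit(years, observed, 1)[0]
mod_trend = np.polyfit(years, modeled, 1)[0]

print(f"observed trend: {obs_trend:+.2f} per year")
print(f"modeled trend:  {mod_trend:+.2f} per year")
# A large mismatch between modeled and observed historical trends, as here,
# is a red flag for using that model to attribute a single event.
```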
Rule Number Two: All event attribution studies should be preregistered, which means “committing to analytic steps without advance knowledge of the research outcomes.” All methodological choices should be made transparent in advance of any event attribution study and submitted to an independent registry (there are many examples). All analyses should be subsequently published, including null- and non-findings. Such preregistration can improve the rigor of research. As one event attribution study concluded: “any event attribution statement can—and will—critically depend on the researcher’s decision regarding the framing of the attribution analysis, in particular with respect to the choice of model, counterfactual climate, and boundary conditions.” Preregistration will make such choices transparent. Any event attribution study conducted in the absence of preregistration is of questionable value.
Rule Number Three: All event attribution studies should integrate their findings with the traditional approach to detection and attribution of the IPCC. Event attribution studies often result in what is called “attribution without detection.” This means linking a specific extreme event with climate change in the absence of detecting any increase in the relevant characteristics of such events – as with attributing Hurricane Florence rainfall (or some fraction of it) to climate change, but without detecting a corresponding long-term increase in rainfall in the climatological record of U.S. hurricanes. Event attribution and the conventional IPCC approach can be integrated by calculating the emergence timescale of trends in the characteristics of the extreme weather event in question, using the same model and methods of the event attribution study. For instance, any event attribution study of a single hurricane’s rainfall should always be accompanied by a quantitative estimate of when changes over time across all hurricanes should be detectable under the conventional IPCC framework. In this way, event attribution studies can be made fully consistent with the IPCC approach.
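One simple way to estimate an emergence timescale is a signal-to-noise calculation: given an assumed trend and year-to-year variability, how long a record is needed before a least-squares trend becomes statistically distinguishable from zero? The sketch below makes strong simplifying assumptions (independent Gaussian interannual noise, a purely linear trend) and uses illustrative numbers, not values from any actual study.

```python
import math

def emergence_years(trend_per_year: float, sigma: float, z: float = 2.0) -> int:
    """Smallest record length (in years) at which a linear trend of the given
    size becomes detectable at roughly the z-sigma level, assuming independent
    Gaussian year-to-year variability with standard deviation sigma."""
    for n in range(3, 1000):
        # Standard error of an OLS slope fit to n unit-spaced years with noise sigma
        se_slope = sigma * math.sqrt(12.0 / (n * (n * n - 1)))
        if trend_per_year / se_slope >= z:
            return n
    raise ValueError("trend not detectable within 1000 years")

# Illustrative: a 0.5-unit-per-year trend against interannual noise of 10 units
print(emergence_years(trend_per_year=0.5, sigma=10.0))
```

Under these assumptions, a modest trend buried in large interannual variability takes decades to emerge, which is consistent with the IPCC's use of 30-year-plus climate timescales for detection.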
Individual event attribution studies are here to stay. They fill a strong demand in advocacy and in politics. Meeting such demand should be fully compatible with basic standards of scientific quality.
For event attribution studies to be conducted with the highest degree of rigor they should (1) demonstrate consistency with historical observations, (2) be the product of preregistered studies, and (3) be fully integrated with the conventional methodologies of the IPCC. Until event attribution studies meet these basic rules, they will serve the purposes of advocacy better than those of science.
Note: This post updates an analysis I first presented at Forbes in 2020.