“Data are the foundation on which scientific, engineering, and medical knowledge is built.” National Academy of Sciences 2009.
A new paper is just out claiming that climate change is increasing the damage associated with U.S. hurricanes:
“US hurricane damage, normalized for changes of inflation, population, and wealth, increases approximately 1% per year. For 1900–2022, 1% per year is equivalent to a factor of >3 increase, substantially but not entirely, attributable to climate change.”
As they say — Big if true.
Alas, it is not true. Stop me if you’ve heard this before: some researchers found a fake Excel “dataset” online and decided to use it in a peer-reviewed paper, favoring the fake dataset over data in the peer-reviewed literature of known provenance and subject to decades of scrutiny.
The authors of the new paper assert of their work with the fake dataset:
“These results contradict the previously published work”
Indeed.
“Researchers have a fundamental obligation to their colleagues, to the public, and to themselves to ensure the integrity of research data.” National Academy of Sciences 2009.
I have described in detail and with receipts the origins of the fake dataset of U.S. hurricane losses. I know all about it because I helped to set up the original not-fake version of the dataset in the late 2000s with one of my former students. In partnership with an insurance company, ICAT, we thought it would be useful to create an online tool to show historical, normalized U.S. hurricane losses based on our peer-reviewed research. So we created the ICAT Damage Estimator, shown with a fuzzy old screenshot below.
It was a very cool tool. It took data from Pielke et al. 2008 — unaltered — and displayed historical hurricanes and their normalized damage in relation to currently active storms in the Atlantic. The ICAT Damage Estimator was at the time (circa 2011) a solid scientific tool and appropriately got a lot of attention.
Time moved along: ICAT was sold, my students moved on to their careers, and I lost touch with whoever was overseeing the Damage Estimator, which was eventually and appropriately scuttled. Over these years, however, the data underlying the tool was modified, supplemented, and extended in ways that diverged from our or anyone’s peer-reviewed research. By the time the Damage Estimator was taken offline (~2017) the “dataset” had morphed into something decidedly unscientific.
Most notably, at some point, data in the tool on hurricane losses post-1980 was replaced with the tabulation from the NOAA Billion Dollar loss dataset. Uh-oh.
The splicing of two very different datasets together is a recipe for problems. The NOAA Billion Dollar dataset takes a maximalist approach to loss estimation, which might be fine for its purposes, but official hurricane loss data from 1900–1979 was not tabulated using such a maximal approach, meaning that the spliced dataset has 80 years of apples followed by >40 years of oranges.
To take just one obvious and undeniable example: the post-1980 maximalist data includes costs incurred by the National Flood Insurance Program (NFIP) for flooding associated with hurricanes. Such costs can run into many tens of billions of dollars for the most damaging hurricanes. These are real costs.
However, the NFIP was only created in 1968 and has been systematically expanded in its implementation over the past half century. That means there were no NFIP costs prior to 1968, so comparing a hurricane’s damage in 1938 with one in 2008 would be problematic if the latter includes NFIP losses and the former does not. Such details are what make normalization a bit tricky and are among the many methodological issues we have often dealt with in our research.
The problems with the “ICAT Dataset” run much deeper than just the inclusion of flood insurance. Consider that from 1900 to 2017 there are 197 official loss estimates for hurricanes. The “ICAT Dataset” has 247 loss estimates over the same period, which means that 50 loss estimates were at some point added to our time series. No one knows where those estimates came from or how they were created.
To illustrate the problems with dataset splicing, I downloaded a version of the “ICAT Dataset” and simply tallied its base damage estimates (i.e., estimates prior to normalization or inflation adjustments) over different time periods. I then compared these tallies to the official loss estimates of the National Hurricane Center for the same time periods. Here are the ratios of total ICAT to NHC base damage estimates:
1900 to 1979: 112%
1980 to 2017: 141%
The ICAT tallies are slightly larger than those of the NHC for 1900 to 1979 because of the inclusion of many of the 50 extra storms; most individual storm estimates are, however, identical over this period. From 1980 to 2017 the ICAT total is much larger than the official NHC estimates because the methods for aggregating losses differ across the splice.
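For the curious, here is a minimal sketch of that tally in Python. It is an illustration only: the file names and column names (icat_losses.csv, nhc_losses.csv, year, base_damage) are hypothetical placeholders, and “base damage” means the reported loss before any normalization or inflation adjustment.

```python
# Minimal sketch of the period-by-period tally described above.
# File and column names are hypothetical placeholders, not the actual data files.
import pandas as pd

icat = pd.read_csv("icat_losses.csv")  # one row per "ICAT Dataset" loss estimate
nhc = pd.read_csv("nhc_losses.csv")    # one row per official NHC loss estimate

def period_total(df, start, end):
    """Sum base damage (pre-normalization losses) for storms in [start, end]."""
    in_period = df["year"].between(start, end)
    return df.loc[in_period, "base_damage"].sum()

for start, end in [(1900, 1979), (1980, 2017)]:
    ratio = period_total(icat, start, end) / period_total(nhc, start, end)
    print(f"{start}-{end}: ICAT/NHC = {ratio:.0%}")
```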
The data discontinuity resulting from splicing together two different datasets fully explains the resulting trend, not climate change.
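To see the splice artifact concretely, here is a toy, entirely synthetic illustration (not a reanalysis of the actual data): a loss series with no trend at all, tallied under a roughly 40% broader accounting from 1980 onward, picks up an apparent upward trend when fit with a simple log-linear model.

```python
# Toy illustration of the splice artifact using made-up numbers.
# The underlying losses never change; only the accounting method does in 1980.
import numpy as np

years = np.arange(1900, 2018)
losses = np.ones(years.size)  # perfectly flat losses: no real trend
# Mimic the switch to a ~40% broader accounting of losses from 1980 onward
spliced = np.where(years >= 1980, 1.4 * losses, losses)

# Fit a log-linear trend; exp(slope) - 1 is the implied percent change per year
slope = np.polyfit(years, np.log(spliced), 1)[0]
print(f"Apparent trend from the splice alone: {100 * (np.exp(slope) - 1):.2f}% per year")
```

The size of the artifact depends on how big the accounting difference is and where the splice falls in the record; the point is the mechanism, not the magnitude. A method change alone, with no change in the underlying losses, produces a trend.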
Despite its dodgy history, the fake “ICAT dataset” might be preferred by some researchers because it gives results that are contrary to the broader peer-reviewed literature.
We should question such claims of attributing damages to a changing climate for a simple reason: the climate data on hurricanes does not support attribution. Don’t take it from me. Here is what NOAA GFDL explained earlier this month, while recognizing the reality of human-caused climate change:
. . . it is premature to conclude with high confidence that human-caused increases in greenhouse gases have caused a change in past Atlantic basin hurricane activity that is outside the range of natural variability . . .
Apparently there is such a felt need to connect hurricanes (and their damage) to climate change that researchers will even use a fake “dataset” they found online back in 2017, peer reviewers will not question it, and a respected scientific journal will publish it.
Over the weekend I emailed the lead author of the new paper and the journal editor to share the unwelcome news that they have written and published an analysis using a fatally flawed dataset. The lead author responded and he was not happy. He invited a “spirited discussion.” Good, that is in the wheelhouse of THB. This post kicks off what I hope is such a discussion, out in public where it belongs.
To that end, I am happy to invite the authors of the new paper to write up a defense of the “ICAT Database,” to explain its provenance, and to explain why it should be preferred over the official records and methods of the dataset collected by the U.S. National Hurricane Center, which have been subject to peer review in many papers over the past 25+ years.
More generally, the climate science community needs to stop publishing studies that use the fake “ICAT Dataset,” which has been the basis for dozens of peer-reviewed papers. However, zombie data is proving difficult to kill off.
Official, peer-reviewed hurricane loss data and a normalization using that data can be found at the link below.
Weinkle, J., Landsea, C., Collins, D., Musulin, R., Crompton, R. P., Klotzbach, P. J., & Pielke Jr., R. (2018). Normalized hurricane damage in the continental United States 1900–2017. Nature Sustainability, 1(12), 808–813.
THB exists because of the support of readers like you. You can still get 30% off a paid subscription, forever, by becoming a THB paid subscriber, but just for a few more days!
I’m confused; it seems clear the dataset should not be used to support the authors’ conclusion and they should retract the paper. What is the spirited debate?
Great article as always.
I was wondering whether you would consider writing an article listing all the dubious assumptions of RCP 8.5? I have seen them mentioned in a few places (including your articles), but to the best of my knowledge, no one has listed them all out in one place. You would be the perfect person to write the article. Thanks.