Discussion about this post

Roger Pielke Jr.:

PNAS just reminded me that I submitted a letter on this in 2019. In the submitted letter I wrote:

"the paper uses an online dataset from ICAT that has appended data from the recently developed NOAA NCEI “billion dollar disaster” dataset to a version of Pielke et al. 2008 (2, P08). This hybrid dataset introduces a major discontinuity in loss estimates, starting in 1980 when the NCEI dataset begins. NCEI and P08 employ dramatically different loss estimation techniques for bass data loss estimates. Using data included with G19 to calculate the effects, this bias averages ~33% for storms post-1980 versus those pre-1940. Thus, under no circumstances should the current version of the ICAT dataset be used in research."

I received three reviews back of my letter, and on this point they said:

Reviewer 1: "We also observe that the most damaging storms have the largest increases in frequency (Fig S3c,d). This is consistent with the results from the analysis of the longer records from ICAT and Weinkle et al. (1,4)"

Not relevant.

Reviewer 2: "the NCEI dataset 1980 onward is going to be MUCH more accurate for economic effects than either Pielke 2008, ICAT or Weinkle because it doesn't just look at wind or insured loss, it tries to estimate all economic losses. However, it is harder to compare with the older figures."

Yes, that's why you can't splice it!

Reviewer 3: "The author claims that the current version of ICAT should never be used in research. I find two things lacking in this critique: 1) the author provides no reference for this critique of the current ICAT dataset and 2) ignores that G19 include a robustness analysis based on two other datasets (Weinkle et al., 2018; NCEI, 2019)."

This reviewer refuses to believe that ICAT = Weinkle + NCEI, and that I helped to create ICAT. (Screams into the void!)

I did not resubmit the letter, as PNAS limits letters to 500 words and I suppose that I saw no point in arguing about the unreality of the "ICAT database."

At the time, I did not quantitatively explore the consequences for G19's results of using the mashed-up dataset. I should have: had I done so, I would have called for retraction then, rather than just writing a letter.

None of this matters. The fact is that G19 uses a dataset the authors found online, one that is flawed in ways that determine their results. That dataset does not actually exist. On that basis alone, G19 should be retracted.

Barry Butterfield:

"Ostracism of experts who created the data you are seeking to use has consequences."

The first question I would ask of G19 is how they could ignore the divergent conclusions. How could they not ask themselves why their results differ from everyone else's? All the other datasets show no trend; theirs does. Why is that? That's a rookie, grad-student mistake. I would then ask the same question of the original reviewers of the PNAS paper. Yes, it is very sloppy work from the grad student, but it is inexcusable from the reviewers.

The second question I would ask is about the reviewer responses from PNAS. Your Reviewer 2 states that "the NCEI dataset 1980 onward is going to be MUCH more accurate for economic effects..." Do you agree with that conclusion? I certainly don't. Total economic loss is an indirect quantity: it cannot be measured accurately, only estimated subjectively. Therefore you cannot conclude that the result is more accurate. It is different, that's all.

I think you'll get a response from Grinsted et al. WAY before you get any acknowledgement of error from PNAS. Please keep us posted.
