42 Comments

The most upsetting issue for me is the NSF's selection of this one report. The next issue I have is the US scientific community giving credibility to the IPCC, whose quality control and assurance standards are so far out of alignment with what is expected of our government's science-based agencies.

As a retired engineer, I can't help but see the standard peer review process as a very low level of quality control and assurance. In researching what quality standards government agencies apply to their work, I came across the Office of Management and Budget's "Guidelines for Ensuring and Maximizing the Quality, Objectivity, Utility, and Integrity of Information Disseminated by Federal Agencies."

This document outlines the requirement for each agency to develop its own quality standards and to establish mechanisms for responding to persons seeking corrections.

The NSF standards are at https://www.nsf.gov/policies/infoqual.jsp and include a link to the NSF Information Correction Form.

My belief is that knowledgeable persons should be inundating agencies that promote bias and outright misinformation in service of an internal perspective that lacks objectivity and ignores valid science raising uncertainty about their positions, e.g., UHI, model-versus-observation discrepancies, and unsupported claims of extreme weather.


Lies, damn lies, and statistics. Thanks for the piece, sir!


I seem to recall Roger writing that the new head of the IPCC, a scientist, would improve things. I guess this is his test.

I'm betting nothing changes.

Why not have him make a very public statement that RCP8.5 is invalid and that all "science" referencing it, all 45,000 examples, shall be discounted and retracted?

But of course, that eliminates the best source of decision-based evidence-making and so that also will not happen.

At least we will have confirmation that the Mannian destruction of climate science is not changing.


Roger, perhaps you should have mentioned that Chris Landsea left the IPCC due to all their shenanigans. Otherwise, excellent work, as usual.

What will happen now?

1. First of all, the IPCC camp will ignore the writings of Roger Pielke Jr. until it is no longer possible to do so. Then they will continue ignoring Roger until AR7 is written, when we might or might not see a small admission of error in the form of a footnote.

2. Then someone will lean on PNAS to prevent a retraction. If that fails, point 1 is applied again, with someone else duplicating the retracted research so the IPCC can continue referring to it.

This is political climate science, business as usual.


This is indeed a real test of integrity. I am curious to see if it gets attention in the legacy media and in what form.

My gut feeling: you will be ignored by the IPCC and PNAS.

The question is: what will the members of the IPCC panel who review this section do?


Was it error, or just propaganda?


I just finished viewing https://www.youtube.com/watch?v=NoOgDwhWXYk on the UK redefining how excess deaths are calculated. At least they had the "integrity" to go back and recalculate all of the past years, although they worked very hard, using "backwards math," to come up with an excess deaths calculation that made 2023 look good. G18 and ICAT should have at least pretended to apply the same definition of losses across all years.


Nice post!

"...today — in draw-dropping fashion..."

Maybe should be "jaw-dropping"... ?


This kind of merging of data sets appears to be common in global warming “science”.

IIRC, at least one of the hockey stick graphs was obtained in part by effectively grafting post-1900 temperature data onto proxy data from prior centuries, but a few minutes of searching Climate Audit has failed to turn it up, and the closest Climategate email I can find is this one: https://www.di2.nu/foia/1254108338.txt


If I understand correctly, an insurance company created a dataset that said climate change was responsible for increasing damage. This seems to me to always be a conflict of interest when coupled with requests to regulators to increase premiums based on climate change. Am I missing something here?


Bravo, Roger.


An interesting experiment. Given PNAS's previous behaviors, my expectations are not high. Still, I hope...


The bottom line here is unclear to me. Are you saying the "billion dollar disasters" data set from NOAA is inaccurate even though it comes from NOAA? Or are you saying it's accurate but mixes direct and indirect costs, while earlier observations include only direct costs, so that later observations are inflated relative to earlier ones due to the inclusion of additional costs?

Is there any way you can strip out the indirect costs from the "billion dollar disasters" data set for comparison purposes? I think if you can document that in a very short piece, it will be more effective than a long personal history of the dispute like this.


Thanks

On the BDD, yes, you can read my pre-print evaluating that dataset here; it is indeed flawed, non-transparent, and misleading.

Happy to hear comments as it is still out for review and I have time to incorporate them:

https://osf.io/preprints/socarxiv/3yf7b

On the Grinsted et al. paper, there is no personal dispute here; I've never met them. However, for better or worse, they are conducting research on a dataset I've helped to develop over the past 25 years, so perhaps I know too much ... understanding that history is important for anyone to grasp how egregious a failure of scientific integrity this is. It is not just an obscure paper either -- it is the single paper held up by the IPCC and the USNCA to dismiss an entire literature (which they shouldn't be doing anyway).

I know the journey into the sausage factory is never any fun ;-)


It is very depressing that certain scientists are repeatedly biasing data in furtherance of their politics. This seems to be most prevalent in climate science, where the peer review process has been abused and long-term historic temperature datasets "adjusted". This politicisation of the science by the UN and IPCC has undoubtedly encouraged much of the unrest in the world at the moment, albeit indirectly. Thank you, Roger, for highlighting it yet again and so forcefully.


Roger, I think Grinsted et al. would stick to their original conclusion. The main focus of G18 was a new normalization methodology, but it also introduced a new damage dataset. That means there are 4 possible combinations of new/old normalization methodology and new/old dataset. The main paper already reported that the new methodology-new dataset combination was not statistically significant; e.g., Figure 2B showed p=0.2 and Table 2 shows the NDICAT OLS trend was -10 to 26. The Supplemental Information showed that the new methodology-old dataset combination was also not statistically significant; e.g., Figure S2b showed p=0.1 and Table S1 showed the ATDW OLS trend was 0-14 to 17.

G18's argument was to downplay linear trends because the data was lumpy and skewed. To help reveal a claimed "emergent" trend that was allegedly hidden in the data, G18 introduced a new measure: the decadal frequency of damage events over a threshold. G18 took the 50th, 60th, 70th, 80th, and 90th damage percentiles, calculated a 10-year smooth of the frequency (count) of storms exceeding each threshold, and then fit a Poisson regression to estimate the trend. It is this new measure that served as the basis of the paper's claim of robustness. So now there's another factor to consider: a new trend measure. G18 showed that in all 4 methodology-dataset combinations, this new measure indicated that the strongest storms had become far more common. The new trend of the new methodology-new dataset combination showed that the top 10% of storms increased 3.3x, the top 20% 2.9x, and the top 30% 1.9x. The new trend of the old methodology-old dataset combination showed a similar increase of 2.6x for the top 10%, 2.0x for the top 20%, and 1.6x for the top 30%. In all cases, "the frequency of the largest damage events" increased over the last 120 years, which was the core claim of the paper. Hence, I think Grinsted, the IPCC, etc. won't really be disturbed by the fact that G18's "new dataset" was distorted. They will still claim that Grinsted's results are "robust" regardless of normalization methodology and dataset.
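To make the mechanics concrete, here is a rough Python sketch of that kind of threshold-count measure. This is not G18's code; the synthetic per-storm damages, the column names, and the smoothing window are illustrative assumptions only:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical per-storm damage records (an illustrative stand-in for a normalized loss dataset).
rng = np.random.default_rng(0)
events = pd.DataFrame({
    "year": rng.integers(1900, 2020, size=300),
    "damage": rng.lognormal(mean=20, sigma=2, size=300),
})

def threshold_count_trend(events, percentile, window=10):
    """Yearly count of storms above a damage percentile, smoothed over
    `window` years, with a Poisson-family trend fit against time."""
    threshold = np.percentile(events["damage"], percentile)
    counts = (
        events[events["damage"] > threshold]
        .groupby("year").size()
        .reindex(range(1900, 2020), fill_value=0)
    )
    smoothed = counts.rolling(window, center=True).mean().dropna()
    X = sm.add_constant(smoothed.index.to_numpy(dtype=float))
    fit = sm.GLM(smoothed.to_numpy(), X, family=sm.families.Poisson()).fit()
    return fit.params[1], fit.pvalues[1]

for pct in (50, 60, 70, 80, 90):
    slope, p = threshold_count_trend(events, pct)
    print(f">{pct}th percentile: Poisson trend = {slope:+.4f}/yr, p = {p:.3f}")
```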

Of course, what this really means is that Grinsted's conclusions ultimately do not rely on the new methodology or on the new dataset, but rather on the new mechanism for calculating the trend. That new mechanism was able to take the old methodology/old dataset combination (from Team Pielke, including Weinkle, Landsea, etc.) that showed no trend (flat as a pancake with p=0.5) and turn it into an increasing trend for the most damaging storms. That increase can be seen in Figure S2d: the whites and yellows in that graph, indicating the most damaging storms, do appear to be increasing. The statistics of how that is done are over my head, but this doesn't seem valid. I suspect that there may be some manipulation taking place; it may have something to do with using counts over a threshold rather than values, and/or with running a trend on data that has already been smoothed (isn't that processing the data twice?).
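As a generic illustration of that last worry (not a re-analysis of G18, which used a Poisson regression), here is a toy simulation: trendless synthetic counts are generated, and an ordinary least-squares trend test is applied once to the raw counts and once to a 10-year running mean of them. The pre-smoothed version reports a "significant" trend far more often than the nominal 5%, because smoothing leaves the series strongly autocorrelated while the test assumes independence:

```python
import numpy as np
import statsmodels.api as sm

# Toy check of the "double processing" worry: fit a linear trend to trendless
# counts, and to a 10-year running mean of the same counts.
rng = np.random.default_rng(1)
years = np.arange(1900, 2020, dtype=float)
n_sims, window, alpha = 500, 10, 0.05
false_raw = false_smoothed = 0

for _ in range(n_sims):
    counts = rng.poisson(lam=2.0, size=years.size)  # synthetic counts, no trend

    raw_fit = sm.OLS(counts, sm.add_constant(years)).fit()
    false_raw += raw_fit.pvalues[1] < alpha

    smoothed = np.convolve(counts, np.ones(window) / window, mode="valid")
    smooth_fit = sm.OLS(smoothed, sm.add_constant(years[:smoothed.size])).fit()
    false_smoothed += smooth_fit.pvalues[1] < alpha

print(f"false-positive rate on raw counts:      {false_raw / n_sims:.2f}")      # near 0.05
print(f"false-positive rate on smoothed counts: {false_smoothed / n_sims:.2f}") # much higher
```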


Greg,

This is false:

"The main focus of G18 was a new normalization methodology, but it also introduced a new damage dataset"

As I explain in this post, there is no "new dataset"

What dataset is it that you believe was new?

And with respect to trends in the strongest events, it is appropriate to use climate data for that analysis, not economic data. Here are the trends in hurricanes and major hurricanes since 1900:

https://rogerpielkejr.substack.com/p/us-hurricane-overview-2023

Even a stats whiz is going to have trouble turning these into uptrends.

One reason why we don't use economic loss as a proxy for storm strength can be seen in Hurricane Idalia last year. It made landfall as a category 3 storm but will result in less than $1 billion in insured losses; had it hit Tampa, that might have been $25B. Sandy in 2012 was a massive loss event and was not even at hurricane strength upon landfall.

Bottom line: Don't use economic losses as a proxy for climate data.


This makes me wonder how many other conclusions in the IPCC report are based on a highly selective choice of papers. I believe you mentioned another example, Roger, perhaps on hurricanes?
