Nice discussion, Roger
It would seem Sarewitz is mildly full of it, or he has unintentionally shown us the problem: studies with far too wide a scope, combined with the hubris of the scientist who deduces "facts" from an inordinately large number of studies that have an overwhelming number of confounding factors and/or misplaced attribution due to lack of knowledge. If you can make two "legitimate" opposing claims from a body of study, you and the other guys pumping out the facts are in the wrong business.
Humility, narrower focus, and more humility. That's a winning hand every single damned time.
Great piece, love cherry pie.
And I’m thankful you have clarified what the best data (so far) says according to the IPCC. Like all outsiders, I was previously aware only of the summaries, which are nonsense in total opposition to the data, and so I thought the IPCC was totally useless; it turns out to be only partially useless.
Some great nuggets in here.
“Put simply, for a given value-based position in an environmental controversy, it is often possible to compile a supporting set of scientifically legitimated facts.”
A good working definition for “decision based evidence making”.
And here is yet another:
“Instead of using science to inform understandings of the thing that just happened, we use the thing that just happened to cherry pick which subset of science we decide is relevant.”
Both definitions fit, both are functions of narrative control.
I thought "the science was settled", that the hot hand is not a
fallacy. Is there something more recent than Miller & Sanjuro 2019?
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2450479
Excellent piece. This is yet another manifestation of confirmation bias.
Thanks again, Roger, but I don't get the cartoon's relevance. What am I missing?
Just one thing in that post I don't fully agree with: I think the IPCC doesn't state that all of the recent (last 150 years) rise in temperature is due to human causes.
The IPCC charter only looks for human causes.
I find this, again, unsatisfactory in that it does not point to how media SHOULD be relating "the science" to an event.
There are a series of questions to be addressed.
1) Does the event reinforce or contradict the model linking CO2 (and other GHG) to harmful geophysical changes?
2) According to those same best models, what is the difference in the probability distribution (mean and variance) of the specific damaging weather event as a result of the CO2 (and other GHG) accumulation since some previous reference period? That difference is the "attribution" of the event (a toy sketch of this follows the list).
3) Given the probability distribution actually faced, had optimal adaptation taken place? Were incentives in place to lead decision-making entities to have taken the optimal degree of adaptation?
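On question 2, a minimal sketch of what a shifted event distribution does to event probabilities may help. Everything in it (the Gaussian form, the means, the threshold) is a hypothetical placeholder, not any actual attribution model:

```python
# Toy illustration of question 2: how a shift in the climate distribution
# changes the probability of a damaging event. All numbers are hypothetical.
from scipy import stats

baseline_mean, sd = 30.0, 3.0       # hypothetical reference-period summer-max temps (deg C)
shifted_mean = baseline_mean + 1.2  # hypothetical mean shift from GHG accumulation
threshold = 38.0                    # hypothetical damage threshold (heatwave)

# Exceedance probabilities under each climate (sf = 1 - cdf)
p1 = stats.norm.sf(threshold, loc=baseline_mean, scale=sd)  # no GHG change
p2 = stats.norm.sf(threshold, loc=shifted_mean, scale=sd)   # actual GHG change

print(f"p1 = {p1:.5f}, p2 = {p2:.5f}")
print(f"attribution as a difference: p2 - p1 = {p2 - p1:.5f}")
print(f"risk ratio: p2 / p1 = {p2 / p1:.2f}")
```

The difference (p2 - p1) and the ratio p2 / p1 are two common ways of summarizing the same distributional shift.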
Here are my suggestions
https://rogerpielkejr.substack.com/p/how-to-be-a-smart-consumer-of-climate
Roger,
Rule 1.
Clearly a _reader_ cannot know whether the model used for attribution is consistent with past data. Is there a way that it could not be, without it just being a badly estimated model?
Rule 2.
Reasonable, and it clearly reduces confidence in instant analyses, as a researcher may have a bias toward one result or another and consciously or unconsciously follow estimating procedures to obtain that result.
Rule 3.
It is not clear how Rule 3 fits with Rules 1 and 2. Is it that
1) following Rules 1 and 2 produces a point estimate, the event attribution being (p2 - p1), with p1 the probability with no change in CO2 concentration and p2 the probability with the actual change in CO2 concentration?
2) the traditional detection and attribution approach asks, can we reject the hypothesis that (p2 - p1) is zero? (A toy numerical version of this test follows below.)
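For concreteness, here is a minimal sketch of that detection-style test, assuming invented ensemble sizes and event counts (nothing below comes from a real attribution study):

```python
# Toy version of the detection question in (2): given finite model ensembles,
# can we reject H0: p2 - p1 = 0? Counts below are invented for illustration.
import math
from scipy import stats

n1, k1 = 500, 12   # hypothetical counterfactual ensemble: 12 exceedances in 500 runs
n2, k2 = 500, 31   # hypothetical factual ensemble: 31 exceedances in 500 runs

p1_hat, p2_hat = k1 / n1, k2 / n2
p_pool = (k1 + k2) / (n1 + n2)                         # pooled rate under H0
se = math.sqrt(p_pool * (1 - p_pool) * (1/n1 + 1/n2))  # standard error under H0
z = (p2_hat - p1_hat) / se
p_value = 2 * stats.norm.sf(abs(z))                    # two-sided test

print(f"point estimate p2 - p1 = {p2_hat - p1_hat:.4f}")
print(f"z = {z:.2f}, two-sided p-value = {p_value:.4f}")
```

A small p-value lets one reject (p2 - p1) = 0; a large one means the ensembles cannot distinguish the two climates for this event.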
How do these rules translate into what researchers should do and how journalists should interrogate an attribution claim?
Researchers
1. Show how well model results compare with observations
2. Preregister
3. Calculate model's emergence time scale and present it in comparison to IPCC doing the same
Journalists
1. Ask how well model results compare to observations
2. Ask if study is preregistered
3. Ask how model's emergence time scale compares to IPCC and request an explanation for differences
Any study that fails to follow these rules can be understood as exploratory, not great science, or marketing.
1. Meaning the confidence limits of p1 and p2?
2. Yes, as best practice. But if asked before any registration has been done, should the answer always be "I have not done the full analysis"?
3. I guess I still do not understand Rule 3. [Not necessarily your fault. :)]
👏👏👏👏👏 As most statisticians understand, with enough data torturing you can almost always produce a correlation which someone will misinterpret as causation.
Anyone who still blindly “follows the science” after the Covid debacle/tragedy/scam utilizing that cover has not been paying attention to how frequently that admonition is misused, sometimes naively but often in an intentionally misleading manner.
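The data-torturing point is easy to demonstrate. A minimal sketch, using nothing but unrelated random walks (the series count and length are arbitrary):

```python
# Toy demonstration of "data torturing": search enough random series and a
# strong-looking correlation appears by chance alone. Nothing here is causal.
import numpy as np

rng = np.random.default_rng(0)
n_series, n_points = 200, 50
walks = rng.normal(size=(n_series, n_points)).cumsum(axis=1)  # unrelated random walks

best_r, best_pair = 0.0, (0, 0)
for i in range(n_series):
    for j in range(i + 1, n_series):
        r = np.corrcoef(walks[i], walks[j])[0, 1]
        if abs(r) > abs(best_r):
            best_r, best_pair = r, (i, j)

# With 200 series there are 19,900 pairs; very high |r| shows up routinely.
print(f"best |r| among {n_series} unrelated series: {best_r:.3f} (pair {best_pair})")
```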
Thank you.
Hi Roger--Manchester United fan here. I'd be curious to learn about the ramifications of this year's FA Cup third-round ties on hurricane damage. I guess we will have to wait and find out!
Ha! Thanks for helping clear the fixture jam for the Gunners to win the Champions League ;-)
A United fan, now there is a modern form of self-flagellation.
These days it’s a hard position to be in.
I still like my Giggs jersey but that’s about it.
You always provide valuable insights.
I had a childhood friend who cold called for Lehman Bros. in the 1980s. He would call a list of people and make an aggressive investment prognostication. Then he would call another list and make the OPPOSITE forecast. One of the forecasts was bound to be correct, and he would call back that list and sign them up as clients based on their confirmed perception of his “insights”. Unethical, yes. But it worked.
A more sophisticated approach was that used by Bernie Madoff. Falsify data and present it as fact. In recent years, unfortunately, a few influential scientists have used this latter approach.
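The arithmetic behind the cold-call trick is worth spelling out. A toy sketch, with a made-up list size and number of calls:

```python
# Toy arithmetic for the cold-call trick: with binary up/down calls, halving the
# list each round leaves a residue of prospects who saw only correct forecasts.
prospects, rounds = 1024, 6   # hypothetical starting list and number of calls
for k in range(1, rounds + 1):
    prospects //= 2           # keep only the half that received the "right" call
    print(f"after call {k}: {prospects} prospects have seen {k} perfect forecasts")
# After 6 calls, 16 people believe they have found a flawless forecaster.
```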
Absolutely, often utilized. A variant is a service selling an investment methodology that offers to furnish their brokerage statement to show you how it worked in real time. Of course, they really set up multiple brokerage accounts with small sums of money and then use the ones with the winning results to sell the service.
A really unsophisticated one I caught was a service selling low-price stock recommendations and giving examples of several low-price stocks that had become huge winners. But I was familiar with several of the companies and knew that they had never traded at the prices quoted. In fact, they were often successful companies whose stocks had done so well that they had split the stock several times to keep the price attractive to small investors (a much more frequent practice then than it is today, for multiple reasons). The copywriter for the advertisements was too financially illiterate (or assumed the potential customers were too dumb) to realize that the historical price data they accessed was split-adjusted, and the stocks had never traded at those prices.
Wow, a real guaranteed winner scam! I would guess this ploy is common.
Mr. Pielke, you may be interested in a website titled Spurious Correlations at https://tylervigen.com/spurious-correlations. They have come up with 5,901 such correlations, the latest titled "US Household Spending on Fresh Fruit correlates with Canadian National Railway Stock Price" with an r² of 0.972. They prove your point 5,901 times.
I made use of your material some years ago to make this point to some of my professional colleagues - great stuff…
Thanks, but it's not my stuff. It's tylervigen's, if that's the name. It is a truly useful site, particularly these days.
Ha! I'll check it out, thx