40 Comments

Email me a copy of anything you would like proofed. I don't guarantee I'll get to it before you have to post, but I'm on my PC for several hours every day, and might well see it immediately.

... another, in order to word towards the achievement of desired outcomes.

should be:

another, in order to work towards the achievement of desired outcomes.

This one had an unusually large number of typos, maybe 5, apologies -- I was a bit under the weather when I wrote it (that's my excuse;-). I'm glad the other readers left one uncaught for you.

Thanks again to all for the eagle eyes.

Be exceeding precise

Should be:

Be exceedingly precise

The worst example of not using real-world variables is the number we get for the average surface temperature of the entire Earth over a whole year. That number is meaningless in physical terms; it's a statistical result that comes from manipulating huge amounts of data.
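
To make that concrete, here is a minimal sketch (with an invented grid and temperature field, not real data) of how such a figure is produced: it is an area-weighted statistical aggregate of a field of numbers, not something any single thermometer reads.

```python
import numpy as np

# Minimal sketch: a "global average temperature" is an area-weighted
# statistical aggregate of gridded values. The grid and the temperature
# field below are invented purely for illustration.
lats = np.arange(-87.5, 90, 5)   # 5-degree latitude band centers
lons = np.arange(2.5, 360, 5)    # 5-degree longitude band centers
rng = np.random.default_rng(0)

# Hypothetical annual-mean temperature field (deg C): warm at the equator,
# cold at the poles, plus noise standing in for real observations.
temps = (30 * np.cos(np.radians(lats))[:, None]
         - 10 + rng.normal(0, 2, (lats.size, lons.size)))

# Cells near the poles cover less area, so each cell is weighted by
# cos(latitude) before averaging.
weights = np.cos(np.radians(lats))[:, None] * np.ones_like(temps)
global_mean = np.average(temps, weights=weights)

print(f"Area-weighted 'global mean': {global_mean:.2f} C")
print(f"Unweighted mean (wrong):     {temps.mean():.2f} C")
```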

Excellent post, sir, thank you.

I was surprised in reading the comments that no one discussed your very first principle, "use real-world variables." Unless I'm missing the obvious (which is entirely possible), it seems that terms like "global average temperature" or "atmospheric concentration of carbon dioxide" are representative examples of this principle.

Regarding your principle #9, I was reminded of an observation made by Dwight Eisenhower: Farming looks mighty easy when your plow is a pencil, and you’re a thousand miles from the corn field.

What a great discussion! And I was relieved that your "10" didn't actually include specific mathematical techniques. Whew!

Much of the discussion pertains to how researchers handle data, about which there can be reasonable disagreement. It reminded me of a 2022 PNAS study in which the identical dataset was sent to 73 research groups. No one here will be surprised that there were remarkably different conclusions amongst the groups. A 2013 editorial called it "The Garden of Forking Paths".

www.pnas.org/doi/10.1073/pnas.2203150119
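
As an illustration of how such divergence can arise even in good faith, here is a small simulated sketch (invented data, not the PNAS study) in which several defensible analysis choices applied to one dataset yield noticeably different estimates of the same effect:

```python
import numpy as np

# Illustrative sketch only: the same dataset analyzed under many defensible
# specifications can support different conclusions (the "forking paths").
rng = np.random.default_rng(42)
n = 200
x = rng.normal(size=n)                        # predictor of interest
z = rng.normal(size=n)                        # optional covariate
y = 0.05 * x + 0.5 * z + rng.normal(size=n)   # weak true effect of x

def slope(y, cols):
    """OLS coefficient on the first column in `cols`."""
    X = np.column_stack([np.ones(len(y))] + list(cols))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

keep = np.abs(x) < 2   # one analyst's "outlier" rule
specs = {
    "raw, no covariate":      slope(y, [x]),
    "raw, with covariate":    slope(y, [x, z]),
    "outliers trimmed":       slope(y[keep], [x[keep]]),
    "subgroup z > 0 only":    slope(y[z > 0], [x[z > 0]]),
    "transformed outcome":    slope(np.sign(y) * np.log1p(np.abs(y)), [x]),
}
for name, b in specs.items():
    print(f"{name:22s} estimated effect of x: {b:+.3f}")
```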

The hard sciences (theoretical physics) do not seem immune to the fundamental problem of mathiness; witness the critiques of string theory as 'not even wrong' (i.e. producing some elegant mathematical results but incapable of producing observable predictions).

I hadn't realised that you worked with Ross Tucker (I'm a relative newbie) - to my (admittedly inexpert) eye he always seemed like one of the good guys.

6. Replication of analyses used in policy is as important as novel studies

Principle 6 has a corollary: studies that do not release the methods and data needed to replicate them are inherently suspect. Take, for example, numerous notorious climate science papers or (in the news just now) Claudine Gay's dissertation and follow-up paper.

Your point, of course, is spot on, but I was momentarily waylaid by your decision to use a graphic with three separate graphs. I spent some time wondering whether the same underlying phenomenon — drought area — might lead to differing trends depending on how it's aggregated. That may or may not be the case but it wasn't the point you were trying to illustrate. I think there's an implied principle but I haven't yet figured out how to articulate it.
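
One way to articulate the implied principle: an "area in drought" series depends on aggregation choices such as the severity threshold. A purely invented sketch (no real drought data) showing how two defensible definitions can trend in opposite directions:

```python
import numpy as np

# Invented example: the fraction of area "in drought" depends on which
# severity threshold defines drought, and the trends can differ in sign.
rng = np.random.default_rng(1)
years = np.arange(1950, 2021)
cells = 1000  # grid cells covering the region

# Hypothetical per-cell drought probabilities: "any drought" drifts up
# while "severe drought" drifts down (by construction, for illustration).
p_any = 0.30 + 0.001 * (years - years[0])
p_severe = 0.10 - 0.0008 * (years - years[0])

area_any = np.array([rng.binomial(cells, p) / cells for p in p_any])
area_severe = np.array([rng.binomial(cells, p) / cells for p in p_severe])

def trend_per_decade(series):
    """Least-squares linear trend, expressed per decade."""
    return 10 * np.polyfit(years, series, 1)[0]

print(f"Trend, area in any drought:    {trend_per_decade(area_any):+.3f} per decade")
print(f"Trend, area in severe drought: {trend_per_decade(area_severe):+.3f} per decade")
```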

Another fav: "a simple falsehood will always triumph over a complex truth." The original quote is attributed to Alexis de Tocqueville: "It is easier for the world to accept a simple lie than a complex truth."

I think a good example is using wildfire acreage as a dependent variable. If we want to know what causes “bad” fires as opposed to “good” fires, we can’t put them all together. We want to increase "good fires" on the landscape and reduce negative impacts of "bad" fires.

But scientists can't study bad fires as opposed to good ones unless someone puts them into those categories. That categorization is not really scientific, but it seems like the people impacted would be very clear about it (and if we can't agree, that's another issue). Here's how it appears to me.

Funding is in climate and (bad) wildfires

There is no data on good vs. bad

We could interview fire suppression folks as to how the bad effects of bad fires came about, but that doesn't employ models and satellites, so… not a preferred scientific tool. It seems a bit like the "streetlight effect" for science. https://en.wikipedia.org/wiki/Streetlight_effect

The question is: where are the institutions that would take people's questions and convert them into actually helpful studies? Science for the People!

I have a PhD in the study of wildfire history. Before that I spent more than 20 years as a reforestation contractor whose crews successfully carried out prescribed burns on nearly 20,000 acres.

A "good fire" is one with intended positive outcomes and a planned ignition; a "bad fire" is one with an unplanned ignition or arson. The federal government says different, of course.

When using acreage to characterize a fire's size, it is important to consider the fuels involved: grasslands, shrublands, deciduous woodlands, conifer forestlands, and human structures are good divisions. Simple acreage totals are usually misleading or even meaningless.
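
A toy calculation with invented numbers illustrates why a single acreage total can mislead: the headline figure can move one way while the categories that matter (planned vs. unplanned ignition, or fuel type) move another.

```python
# Toy illustration (invented numbers, not real fire statistics): a single
# total-acreage figure can hide opposite movements in the categories that
# actually matter, such as planned vs. unplanned ignitions or fuel type.
fires_2010 = [
    {"ignition": "planned",   "fuel": "grassland", "acres": 40_000},
    {"ignition": "unplanned", "fuel": "conifer",   "acres": 60_000},
]
fires_2020 = [
    {"ignition": "planned",   "fuel": "grassland", "acres": 90_000},
    {"ignition": "unplanned", "fuel": "conifer",   "acres": 30_000},
]

def total_by(fires, key):
    """Sum acres for each value of `key` ('ignition' or 'fuel')."""
    out = {}
    for f in fires:
        out[f[key]] = out.get(f[key], 0) + f["acres"]
    return out

for year, fires in [("2010", fires_2010), ("2020", fires_2020)]:
    print(year, "total acres:", sum(f["acres"] for f in fires),
          "| by ignition:", total_by(fires, "ignition"))
# Total acreage rises from 100k to 120k, yet unplanned ("bad") fire acreage
# fell by half while planned ("good") fire acreage more than doubled.
```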

I believe the full quote is "If you torture the data long enough, it will confess to anything." I understand it is attributed to Ronald Coase, "How Should Economists Choose?"
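
A small simulated sketch of what that torture can look like in practice: with pure noise and enough subgroup comparisons, something will "confess" at p < 0.05 by chance alone.

```python
import numpy as np
from scipy import stats

# Illustrative sketch of "torturing the data": there is no real effect
# anywhere, yet searching enough arbitrary subgroups produces
# "significant" differences at p < 0.05 by chance alone.
rng = np.random.default_rng(7)
n = 500
outcome = rng.normal(size=n)  # pure noise, no true effect
subgroups = {f"subgroup_{i}": rng.integers(0, 2, size=n).astype(bool)
             for i in range(100)}  # 100 arbitrary ways to split the sample

confessions = []
for name, mask in subgroups.items():
    t_stat, p_value = stats.ttest_ind(outcome[mask], outcome[~mask])
    if p_value < 0.05:
        confessions.append((name, p_value))

print(f"'Significant' findings from pure noise: {len(confessions)} of {len(subgroups)}")
for name, p_value in confessions:
    print(f"  {name}: p = {p_value:.3f}")
```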

Fantastic! 🙏

And I'm reminded of my favorite math joke.

A mathematician, an economist, and a statistician are asked, "What is 2+2?"

The mathematician answers, "4."

The economist answers, "It's between 3 and 5."

The statistician answers, "What do you want it to be?"

That's the whole story to me.

I thought it was an auditor for a big five accounting firm who said "What do you want it to be?"

Could very well be the genesis of it; I'm just aware of the joke, which of course I didn't make up.

My opinion remains that much of "science" today, as outlined above, is dedicated to decision-based evidence making. The corruption of standards is everywhere; you have outlined some of it here on your Substack.

The response to Claudine Gay among so many in academia and media is the clearest possible example of just how far everything has fallen.

But calling it out is the only path forward and through, so please continue what you are doing.

Looking forward to 2024, no matter how depressing it gets.

I really liked these two pieces on 'mathiness' (wonderful term!). Roger - the 'mapping' you write about exists. I learned about systems mapping, also called causal loop diagrams, quite a while ago. The more of them I saw, however, the more discouraged I became because what should be a great tool for reaching clarity -- at least regarding your own thinking -- was being lost in poor/no definitions and rules that were not helpful.

A number of years ago I set a goal to invent a step-by-step approach to systems 'mapping' (using a terrific platform called Kumu). I am still at work on that and keep getting sidetracked as I follow curiosity to sources that I think ultimately might help. Anyway, I made the map at this link a while ago: https://kumu.io/graciemorgan/the-odd-approach#the-odd-approach. It is not complete - some of the elements have no description, which is, of course, one of my main complaints about most maps but, alas, that's the way it is.

If you follow this link (my maps are all public), you can find a couple of others that have more detail about the methodology that exists mostly in my own head. If the link doesn't work for some reason, just go to kumu.io, make a free account and search for graciemorgan. You will find all of my maps, including some of the old ones I learned from. I hope to return to filling out this project in the new year.
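
For readers unfamiliar with the technique, a causal loop diagram is essentially a signed directed graph, and each feedback loop can be classified as reinforcing or balancing from its link polarities. A tiny sketch with hypothetical variables (not drawn from the linked Kumu map):

```python
# Minimal sketch of a causal loop diagram as data. The variables here are
# hypothetical, not taken from the Kumu map linked above. Each edge carries
# a polarity: +1 means "same direction," -1 means "opposite direction."
edges = {
    ("population", "births"): +1,
    ("births", "population"): +1,   # forms a reinforcing loop (R)
    ("population", "deaths"): +1,
    ("deaths", "population"): -1,   # forms a balancing loop (B)
}

def loop_polarity(loop):
    """A loop is reinforcing if it contains an even number of negative
    links, balancing if it contains an odd number."""
    sign = 1
    for i, node in enumerate(loop):
        sign *= edges[(node, loop[(i + 1) % len(loop)])]
    return "reinforcing" if sign > 0 else "balancing"

print(loop_polarity(["population", "births"]))   # -> reinforcing
print(loop_polarity(["population", "deaths"]))   # -> balancing
```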

Worth noting that, in my admittedly biased information ecosystem at the time, the claim was made that the debt-growth correlation didn't fundamentally change after adding the missing countries.

I find a credible-looking 2019 study, accounting for both public and private debt, that finds a transient loss to GDP growth within a year of the debt shock which dissipates over time. I'm not sure whether that means the lost growth is recovered or the rate of growth recovers with the loss remaining.

https://www.sciencedirect.com/science/article/abs/pii/S037842661930086X
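
That ambiguity is exactly the level-versus-rate distinction, and a bit of toy arithmetic (invented numbers, not taken from the cited paper) makes it concrete: if only the growth rate recovers, the level of GDP stays permanently below the no-shock path; the level recovers only if growth later runs above trend.

```python
# Toy arithmetic (invented numbers, not drawn from the cited study) to
# separate the two readings of a "transient" loss to GDP growth after a
# debt shock: does only the growth *rate* recover, or the *level* of GDP too?
trend = 0.02        # 2% trend growth per year
years = 10

def path(rates, start=1.0):
    """GDP levels implied by a sequence of year-over-year growth rates."""
    levels = [start]
    for r in rates:
        levels.append(levels[-1] * (1 + r))
    return levels

no_shock = path([trend] * years)

# Reading 1: growth drops to 0% for one year, then the rate returns to
# trend. The level of GDP stays permanently below the no-shock path.
rate_recovers = path([0.0] + [trend] * (years - 1))

# Reading 2: the lost year is made up by a year of above-trend growth,
# so the level catches back up to the no-shock path.
level_recovers = path([0.0, (1 + trend) ** 2 - 1] + [trend] * (years - 2))

print(f"Year {years} GDP, no shock:            {no_shock[-1]:.3f}")
print(f"Year {years} GDP, rate recovers only:  {rate_recovers[-1]:.3f}")
print(f"Year {years} GDP, level also recovers: {level_recovers[-1]:.3f}")
```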
