Yesterday, Science magazine reported that the University of Delaware “found one of its star scientists guilty of research misconduct.” This is a big deal. Science reports that the university
has accepted an investigative panel’s conclusion that marine ecologist Danielle Dixson committed fabrication and falsification in work on fish behavior and coral reefs. The university is seeking the retraction of three of Dixson’s papers and “has notified the appropriate federal agencies,” a spokesperson says.
In response, Science retracted one of the three papers, which had been placed under an “editorial expression of concern” last February. The University of Delaware report and its associated call to retract three papers may be just the tip of the iceberg, as dozens of other studies may be implicated. For her part, Prof. Dixson’s lawyer says, “Dr. Dixson adamantly denies any and all allegations of wrongdoing, and will vigorously appeal any finding of research misconduct.”
In a nutshell, the controversy here involves research into the supposed effects of increased carbon dioxide levels on the behavior of tropical fish — and yes, that means there is a direct climate change connection. Professor Dixson and her collaborators, including her PhD supervisor Philip Munday (now retired) of James Cook University in Townsville, Australia, have published dozens of papers suggesting very large and ecologically harmful effects of increasing carbon dioxide on fish behavior. Not surprisingly, this research has been published in major journals, has been cited widely in the media, and has resulted in considerable public funding of subsequent studies.
Several years ago, a separate group of researchers led by Timothy Clark of Deakin University in Australia expressed concerns about the integrity of this research. Clark and colleagues documented their concerns in a 2020 paper that sought to replicate the reported finding that increased carbon dioxide in the ocean, known as “ocean acidification,” has a significant effect on fish behavior (Prof. Munday’s response can be found here in PDF). Their replication failed to reproduce the original findings. (For a deeper look at the background to this story, see the excellent reporting of Martin Enserink here, here, here and here.)
The scientific integrity issues here are not subtle. I spent some time last year looking at this case and the underlying data issues, and quickly identified some concerning problems in one of the datasets in question (see this long Twitter thread), confirming some of the excellent work of independent analyst Nick Brown.
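For readers curious about what this kind of data screening looks like in practice, below is a minimal sketch in Python, using hypothetical file and column names rather than the actual data. It is not the analysis Nick Brown or I ran; it simply illustrates one common check, namely scanning a column of behavioral measurements for blocks of values that repeat verbatim, something that should be vanishingly rare in genuinely independent observations.

```python
# Minimal sketch of a duplicate-block screen for a behavioral dataset.
# The file name and column name below are hypothetical placeholders,
# not the actual data files from this case.
from collections import defaultdict

import pandas as pd


def find_repeated_blocks(values, block_len=10):
    """Return a dict mapping each block of `block_len` consecutive values
    that appears more than once to the row indices where it starts."""
    seen = defaultdict(list)
    for start in range(len(values) - block_len + 1):
        key = tuple(values[start:start + block_len])
        seen[key].append(start)
    return {key: starts for key, starts in seen.items() if len(starts) > 1}


if __name__ == "__main__":
    df = pd.read_csv("fish_choice_trials.csv")                    # hypothetical file
    obs = df["time_in_treated_water_pct"].round(2).tolist()       # hypothetical column
    for block, starts in find_repeated_blocks(obs, block_len=10).items():
        print(f"Repeated block of {len(block)} observations starting at rows {starts}")
```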
Here I suggest three lessons from this case, which should prompt some deep soul-searching in the science community and, hopefully, some open discussion. They are: Bad science often has powerful defenders; Bad science overshadows good science; and, ending on a positive note, Science is working.
Bad science often has powerful defenders
One might think that uncovering and exposing scientific misconduct would be rewarded in the scientific community. Sometimes it is, but in many cases scientists themselves oppose the exposure of bad science. Such resistance is often political — including the small politics of academia and the big politics of how research plays out in the real world. In my experience, bringing climate change (or any hyper-politicized issue) into the picture dramatically increases the stakes and the magnitude of the opposition.
Consider how Hans-Otto Pörtner of the Alfred Wegener Institute in Bremerhaven, Germany — a co-chair of the Intergovernmental Panel on Climate Change — responded to allegations of misconduct in this case:
“Building a career on judging what other people did is not right. If such a controversy gets outside of the community, it's harmful because the whole community loses credibility.”
This statement gives me chills every time I read it.
And Pörtner is not an outlier. Just recently I was told by another high-ranking IPCC official that they strongly agree with our recent peer-reviewed critiques of out-of-date climate scenarios, but “Of course, I can never say that in public.” Oh the stories I can tell. Another time.
Individual scientists are often quick to take sides in debates over research integrity, and at times invoke factors well outside the scientific dispute. Have a look at the Tweet below, sent by an influential professor at the University of North Carolina to Fredrik Jutfelt, a co-author of the Clark et al. paper that failed to replicate the original research.
The solutions here are simple to state but difficult to implement.
Leading institutions and their leaders should be honest brokers, and not advocates for their friends, peers or favorite scientific or political conclusions. If those at the helm can’t serve as an honest broker, they should not be at the helm.
Those identifying research misconduct or even just bad research should be professionally encouraged and rewarded. The incentives in science and academia typically discourage such recognition.
Those seeking to shout down or shame legitimate scientific inquiry — even if that inquiry is politically or professionally uncomfortable — should be called out for impeding research. Science is tribal, like many areas of human activity, but because we occupy an authoritative and privileged position in society, we should expect more from our community.
Bad science overshadows good science
The “decline effect” refers to a common dynamic: a study reports large and significant results (for example, that carbon dioxide affects fish behavior) and the reported effect shrinks in subsequent studies. The “decline effect” has been observed across many areas of science and is associated with issues in research design, publication incentives and, occasionally, research misconduct.

In Clements et al. 2022 — another paper by the group of researchers exploring research integrity on this issue — the authors explicitly look at the “decline effect” in research linking carbon dioxide and fish behavior. Clever. Not surprisingly, given what we know now, they find an extreme decline effect in the studies published after the initial ones.

What I find absolutely amazing is that when they remove the work of the research group under scrutiny here, the decline effect across the rest of the literature disappears. The request made of the journal editors in the Tweet below clearly indicates some degree of suspicion about research integrity. Bad research is rarely a secret.
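To make the decline-effect test concrete, here is a minimal sketch of the basic calculation, using made-up numbers rather than the real literature (it is not Clements et al.’s code): regress the magnitude of each year’s average effect size on publication year, then repeat the regression with one research group’s studies excluded and compare the slopes.

```python
# Minimal sketch of a decline-effect check: does effect size magnitude
# shrink with publication year, and does the trend depend on one group?
# All numbers below are illustrative, not real data from the literature.
import numpy as np

years = np.arange(2009, 2021)
# Hypothetical mean |effect sizes| per year: large early, small later.
effects_all = np.array([2.8, 2.5, 2.3, 1.6, 1.2, 0.9, 0.6, 0.5, 0.4, 0.3, 0.3, 0.2])
# The same series with one (hypothetical) group's studies removed.
effects_excl = np.array([0.40, 0.35, 0.30, 0.35, 0.30, 0.25, 0.30, 0.30, 0.25, 0.30, 0.25, 0.30])


def yearly_slope(years, effects):
    """Ordinary least-squares slope of |effect size| against year."""
    slope, _intercept = np.polyfit(years, effects, deg=1)
    return slope


print(f"Slope, all studies:        {yearly_slope(years, effects_all):+.3f} per year")
print(f"Slope, one group excluded: {yearly_slope(years, effects_excl):+.3f} per year")
# A strongly negative slope in the first case that flattens in the second
# is the pattern Clements et al. describe.
```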
Clements et al. also “found that large effect size magnitudes tend to be published in high impact journals & continue to have a stronger influence on this field in terms of citations.” Of course they do. Scientific journals like studies with “Big Wow!” results. These have a better chance of being published in “high impact” journals. That makes them more readily accessible to the media, which also likes “Big Wow!”
University press offices love them as well. So too do researchers looking for grants and promotion. Consider that after the initial series of publications and global attention to her research, Danielle Dixson was awarded a prestigious NSF CAREER grant. Incentives result in the “Big Wow!” — it is not complicated.
An interesting side note here is that the allegations of fish research misconduct extend to James Cook University (from which Prof. Dixson’s PhD supervisor recently retired). You may have heard of James Cook University in another controversy — the one involving marine researcher Peter Ridd, who lost his university job as a consequence of questioning the scientific merit of research suggesting that the Great Barrier Reef was in terminal decline (see this Twitter thread for more details). The Australian government just announced that the Great Barrier Reef has seen a record recent recovery. People can argue over whether Ridd was right or wrong, but there can be no doubt that his views were legitimate. For expressing them, he was pushed out of the academy.
Here as well, it is much easier to describe what should be done than it is to figure out how to do it.
All of us, but especially experts, need to exhibit a much greater degree of single-study skepticism. The IPCC is generally pretty boring for most people because it assembles a vast literature and thus avoids the single-study hyperbole and biases that are endemic to media coverage of climate research.
Replication and robustness must become more central to research with policy or political implications. It is ridiculously easy to cherry-pick single studies in support of a favorite policy or political party. But that is not how robust science is produced.
And once again, we in the scientific community need to realign our professional and career incentive structures to support good scientific practices and to discourage the bad. We are arguably far from that right now.
Science is working
Following Fredrik Jutfelt in the Tweet above, I’ll end on a positive note.
This entire episode, as sordid and shocking as it is, should give us some hope that the mechanisms of self-correction in science — that which makes the scientific endeavor so special — are in fact working. I spend a lot of time at the messy intersection of science and politics and I am optimistic.
Are mechanisms of self-correction too slow, too incomplete, too inefficient? Absolutely.
But they are working.
It is up to all of us in the expert community, as well as in the media and among the attentive public, to work harder to ensure that good science rises to the top and that bad science is identified and sinks to the bottom. Many of the steps that might be taken to improve self-correction in science involve improving our institutions and the incentives that we build into them.
But these steps also involve human behavior in the context of political and societal conflicts. And that is exactly what makes this such a challenging issue.