In 1999, Brookhaven National Laboratory in New York was preparing to start experiments with its new high-energy physics collider called the Relativistic Heavy Ion Collider (RHIC). But before experiments began, a physicist at Brown University published a preprint arguing that the experiments planned at RHIC could lead to the creation of a black hole that would destroy the Earth. As they say, big if true!
It did not take long for the potential RHIC black hole apocalypse to reach the British and U.S. media, prompting the director of Brookhaven, Jack Marburger1, to convene an expert committee to evaluate the risks of catastrophe. That committee concluded that the possibility of catastrophe could be “firmly excluded.” The experiments got underway and the world was not sucked into a black hole.2
Today, we can smile at the RHIC episode, but it does tell us some important things. Among them: experts can disagree on the risks (and benefits) of proposed research; the sustainability of research depends upon its public legitimacy; and trust is furthered by accountability and transparency.
I thought of the RHIC yesterday while attending a fantastic symposium at the American Enterprise Institute in Washington, DC, titled Regulating Risky Research: The Science and Governance of Pathogens of Pandemic Potential.3 You can watch the recording of the symposium on YouTube. In this post I share some of my takeaways from the day, based on my discussions with scientists, agency officials, and policymakers.
First, the regulation of risky research is not new in science and technology policy. The discussions and debates go back more than 50 years, and today include topics as varied as geoengineering, artificial intelligence, CRISPR, nuclear technologies, human subjects research, and the subject of yesterday’s discussions: research on potential pandemic pathogens.
Attention is increasingly focused on the risks of research with pathogens because there is a chance — a good chance in my opinion — that COVID-19 was the result of a research-related incident involving U.S.-China research collaborations, something I’ve written about frequently here at THB. However, COVID-19 origins aside, the issues of regulating risky pathogen research stand on their own.
Commonly, but not entirely accurately, characterized as “gain of function” research, this work actually spans three areas of concern, according to Harvard’s Marc Lipsitch, who spoke yesterday. One is research on existing pandemic agents, like smallpox, which is heavily regulated. A second is research that seeks to enhance the transmissibility or virulence of agents without pandemic potential, turning them into agents with pandemic potential. The third is research that may pose pandemic risks, but whose risks are unknown when the research is conducted, and may in fact be unknowable.
Perhaps not surprisingly, researchers who work with potentially risky viruses do not always agree on what research is or is not risky. Such lack of agreement makes it difficult to identify what would fall under stricter research regulations and what would not. There are lots of opinions on this.
One reason for the lack of agreement on the risks and benefits of future research is that, as in most areas of research, knowing the outcomes of research with any degree of certainty is difficult and often impossible. If we could accurately anticipate the results of proposed research, well, we wouldn’t need to do the research. The word “research” derives from a Middle French word, recherche, meaning to go about searching. There is no shortcut to certainty about risks and benefits in such research.
Some who advocate for conducting more risky research rely on the notion of “basic research”: since we don’t know where research will take us and which research will pay off in societal benefits, we therefore need to do as much research as we can to maximize potential benefits. After all, if risky research pays off in a vaccine that can be developed before Disease X appears sometime in the future, we could save millions and millions of lives.
Others who favor stronger regulations often assert the reality of human fallibility or malevolence: simple probability tells us that over time, a research-related incident that leads to a pandemic is inevitable. As well, knowledge gained through such research could be weaponized by bad actors, also leading to a pandemic. By simply not conducting such research, we could save millions and millions of lives.
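That probabilistic claim is easy to make concrete. As a back-of-the-envelope sketch (the constant, independent annual incident probability $p$ is my illustrative assumption, not a figure from the symposium), the chance of at least one research-related incident over $n$ years is

$$P(\text{at least one incident}) = 1 - (1 - p)^n \longrightarrow 1 \quad \text{as } n \to \infty.$$

Even a seemingly small annual risk compounds: $p = 0.002$ per year gives $1 - (0.998)^{100} \approx 18\%$ over a century.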
There is simply no empirical basis on which to reconcile these two perspectives. Consequently, discussion of risks and benefits is often about other factors, usually unstated — like the professional self-interests of the researchers who would be working with dangerous viruses, the perceived importance of such research by governments to national security, and the economic interests of all involved.
My emerging perspective on research with dangerous pathogens is that there should be an exceedingly high bar for the conduct of research on potential pandemic pathogens. The conduct of such research anywhere in the world would have to meet, at minimum, the following criteria:
A clear case for the benefits of the research must be made, and there must be a path to benefits. Curiosity or exploration do not offer such a path.
It must be shown that there are no risk-free alternative research methods or designs that would obtain the desired knowledge.
The research should be preregistered, with its location publicly identified, prior to the research being conducted.
The research should be indemnified by the host organization via insurance, to some significant amount (e.g., perhaps the first $500 billion of damages), should a research-related incident occur.4
The research could only be conducted in a laboratory with a specified level of biosafety, such as BSL-4.
Implementing such principles would require their institutionalization in some form.5 Of course, not all countries may agree to follow such guidelines, but there is precedent in global agreements covering research on smallpox, polio, human subjects, chemical weapons, and nuclear technologies. These precedents offer models that might be useful in securing a global consensus on norms and regulations related to risky research.
In the case of the RHIC experiments at Brookhaven, the risk-benefit calculus was simple — any non-zero probability of apocalypse meant that the expected costs of the experiment would be infinite. Further, the research itself was not obviously necessary to perform. Research on potential pandemic pathogens is far more complex — the risks are exceedingly large, but there is also the possibility of large benefits, and both might actually be unknowable.
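To put the RHIC calculus in expected-value terms (my rendering of the logic, not the review committee’s actual analysis): for any probability $p > 0$ of a catastrophe whose cost $C$ is effectively unbounded,

$$E[\text{cost}] = p \cdot C \longrightarrow \infty \quad \text{as } C \to \infty,$$

so no finite benefit can tip the balance. The only way to justify proceeding is to argue that $p$ is zero for all practical purposes, which is what the committee’s “firmly excluded” conclusion amounted to.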
As science and technology advance, we are inevitably going to see more issues for which we need to govern research with uncertain or unknowable risks and benefits, and do so in a way that furthers democratic accountability. We have our work cut out for us.
Thanks for reading! I welcome your comments, perspectives and critique. Thanks for your support.
1. Jack Marburger, a Democrat, went on to become President George W. Bush’s science advisor and was pilloried by the scientific community for his willingness to work for the Republican Administration. Marburger was the first science advisor that I interviewed in our University of Colorado science advisor project, in 2005. He became a collaborator until his death at 70 in 2011. Here is a speech Marburger gave upon becoming president of Stony Brook University in 1981: The Trap of Thinking We Know it All.
2. Some days I wonder if perhaps we were sucked into a black hole and that explains the craziness of American politics over the past decade. I’m not sure; more research is needed.
3. Bravo to Tony Mills and his team at AEI.
4. Indemnification by private insurance would also motivate an evaluation of the potential risks of the research that is independent of both governments and the research community.
5. Perhaps the emerging global Pandemic Agreement offers an umbrella.
"Simple probability tells us that over time, a research-related incident that leads to a pandemic is inevitable."
As occurred in 2019. Not sure why you use qualifiers, the released messages showing Fauci directing the creation of the "Proximal Origins" paper regarding covid 19 should put that to bed. As noted, the researchers who produced this paper had research grants before Fauci and they internally thought lab leak was most likely.
Its just all so dirty.
This is an outstanding article and I agree 100% with your recommended criteria for allowing such research. The current situtation is indefensible. We allow the scientists themselves to decide what experiments are acceptable, and allow them to be done in BSL-2 or BSL-3 containment. That more or less guarantees a future pandemic like COVID-19, or worse.