The Covid Vaccine and Learning to Love the Technological Fix
Our best hope for moving beyond the pandemic is vaccination, but not all problems can be addressed through technology
Exactly one year ago, we saw the first reports of an unknown influenza-like disease in Wuhan, China. Twelve months later, the world continues to struggle with the consequences of the coronavirus pandemic, which has been at once a global public health crisis and a global economic crisis.
There will be many lessons to learn from the pandemic and the policy failures and successes that have accompanied it. But one lesson is already crystal clear — in the 21st century global health and prosperity depend crucially on science and technology and their governance.
The coronavirus vaccines that are now being administered around the world certainly offer the best hope for moving beyond the global coronavirus pandemic. Imagine if the only tools in the policy toolbox were shutdowns, masks, social distancing and other behavioral approaches that have proven to be highly divisive and politicized.
Of course, vaccination is also political and vaccine hesitancy is a problem, not just for coronavirus but for other diseases as well. But even so, the evidence for the effectiveness of vaccination in public health is undeniable. In 2017, the World Health Organization estimated that vaccination against diphtheria, tetanus, whooping cough and measles prevents 2-3 million deaths every year. In 1963, measles killed more than 2.6 million people around the world. Thanks to the measles vaccine, by 2017 that number had dropped to 95,000.
Vaccines work.
They are a perfect example of what has been called a “technological fix” — a phrase popularized in 1966 by physicist Alvin Weinberg. He recognized that problems involving societal behavior were “more complex” than engineering challenges like rocket science. Weinberg explained that typically, “to solve social problems one must induce social change – one must persuade many people to behave differently than they have behaved in the past.” In the Covid pandemic we are all now very familiar with what efforts to implement social change look like — mask mandates, lockdowns, gathering limits, travel restrictions and so on. The pandemic has illustrated that social change is difficult to implement as policy, and sometimes practically impossible.
In contrast, Weinberg observed that technological development was comparatively much simpler, pointing to the Manhattan Project to develop a nuclear bomb as an example. Given this stark difference between the difficulties of behavioral change and the relative ease of technological development, Weinberg asked a provocative question: “To what extent can social problems be circumvented by reducing them to technological problems?”
The appeal of the technological fix is ubiquitous and shapes our everyday lives across the range of human experience. For instance, if you have imperfect eyesight you might wear glasses or undergo laser eye surgery. If our cities do not have enough readily available water, we dam rivers. If infectious diseases threaten children’s lives, we vaccinate. We could adapt our behaviors and communities to poor eyesight, uncertain water availability and extensive childhood mortality due to disease, but we typically choose not to, preferring to employ a technological fix in each of these cases.
In some circumstances — eyeglasses for example — technological fixes are uncontroversial, but in others — such as nuclear power — technologies are deeply contested. How can we tell the difference between circumstances that may be amenable to a technological fix and those that are better addressed through behavioral change? How can we answer Weinberg’s challenge to identify circumstances where technology can help us to circumvent social problems?
One important answer to these questions can be found in a seminal 2008 article in Nature, in which Daniel Sarewitz and Richard Nelson offered three rules by which to distinguish “problems amenable to technological fixes from those that are not.”
Rule #1: The technology must largely embody the cause-effect relationship connecting problem to solution.
A technology should do what it is advertised to do. A vaccine should prevent disease, laser eye surgery should improve eyesight, a dam should reliably store water. Invariably, technologies that work also have downsides. Vaccines can have unwanted side effects, laser surgery doesn’t always improve eyesight and dams result in environmental impacts and sometimes fail. Consequently, careful assessments of the positives and negatives associated with technologies must accompany their proposed deployment. Of course, people will disagree on costs, benefits and what is even to be valued in the first place. So understanding cause-and-effect relationships doesn’t determine decisions about technologies, but it better informs them.
Rule #2: The effects of the technological fix must be assessable using relatively unambiguous or uncontroversial criteria.
Once a technology is deployed, its effects should be identifiable. The technological fix should actually fix the problem. When vaccination became widespread during my parents’ generation, the effects were easy to see — in 1952 the United States saw almost 58,000 cases of polio and more than 3,000 deaths. By 2000, the U.S. had no cases of polio. Ironically, the widespread success of vaccination may actually undercut this criterion of observable effectiveness, as people of my generation have likely never seen anyone with polio. Overcoming vaccine hesitancy today may depend more on trust in public health than on observable effects.
Rule #3: Research and development is most likely to contribute decisively to solving a social problem when it focuses on improving a standardized technical core that already exists.
Science is most practical in supporting technology when it works from an established base of knowledge for which there is much experience. Sarewitz and Nelson explain that vaccine development builds on more than two centuries of experience, which contributes to vaccines’ effectiveness, in stark contrast to, for instance, the more uncertain, contested and theoretical knowledge of how people learn to read. One reason why the Covid-19 vaccines have been developed so quickly is that researchers have been studying the underlying messenger RNA technology on which they are based “for decades.” Correspondingly, we should not often expect technological fixes that require fundamentally novel breakthroughs. Instead, technological fixes typically result from the advancement of technologies that we already have.
The Sarewitz/Nelson rules help us to understand the widespread acceptance of eyeglasses — they improve eyesight, obviously, and have been around for centuries — and the trepidation when it comes to technologies like self-driving cars or artificial intelligence — it is not clear what problem they are solving or whether they work as expected, and we have no experience with them. Technologies are never neutral, so assessment of their impacts on society is essential to effective governance.
Sarewitz and Nelson caution, “one of the key elements of a successful technological fix is that it helps to solve the problem while allowing people to maintain the diversity of values and interests that impede other paths to effective action.” As such, “technological fixes do not offer a path to moral absolution, but to technical resolution.” If a goal of policy implementation is in fact societal change, then a technological fix that allows existing patterns of behavior to remain in place won’t be seen as a solution. We have seen such conflicts play out over technologies of contraception, which are viewed by some as a solution to unwanted pregnancies and by others as enabling immoral behavior.
Decades and centuries from now the rapid development and deployment of the Covid-19 vaccines will no doubt be looked back upon as a triumph of science and policy. In the near term however, there remain significant challenges to securing sufficient global vaccination coverage and restoring public health and the economy. We are not yet past the pandemic, but thanks to the rapid development of vaccines, we can now see a brighter future.
An important lesson that we can already draw from our experiences over the past year is that it is never enough to “follow the science.” Instead, we must understand both societal and scientific contexts to help us determine collectively when technologies offer the prospect for policy remedies, and when they don’t. In short, we must learn to better “lead the science” to help us get to where we want to go.