The high-consequence / low-probability nature of atomic risks inevitably puts nuclear skeptics at a disadvantage.
Imagine, for instance, a hypothetical engineer who, in the early 1970s, studies the risk analyses of the Fukushima Daiichi nuclear plant and correctly identifies the various shortcomings that will one day bring it down. She understands that the risks of earthquakes and tsunamis are higher than envisaged, and that the plant is ill-designed to accommodate them. This leads her to presciently imagine a natural disaster that causes a chain of meltdowns that, at their peak, legitimately threaten the future of Tokyo: home to 35 million people and a cornerstone of the global economy.
Now imagine her future. It is comforting to believe that her insight would have led to a reconsideration of the plant – a slew of changes that would have undone Fukushima’s fate and protected Japan in March 2011. But, alas, this narrative isn’t very credible. It neglects the fact that all through the development of that plant – and, indeed, all nuclear plants – there were credible critics who made such warnings and were ignored.
Engineers like to imagine that truth ‘shines by its own lights’ but this profoundly misrepresents technical discourse. Every technological finding is actually an ‘argument’ with contestable uncertainties and interpretations. And where powerful interests are at stake – as they have been throughout the history of nuclear technologies – unwelcome technical arguments are always vigorously contested. ‘Truth’ is always negotiated, because although veracity does shine to a degree, it rarely shines brightly enough to be distinguishable on its own.
So it is that our engineer’s prescient arguments are met with counterclaims and doubts. ‘The evidence is incomplete,’ she is told. ‘Your calculation is built on an uncertain variable.’ ‘Our studies reach different conclusions.’ ‘The plant is safe.’
In most engineering circumstances she could look forward to her vindication, terrible though it might be. Eventually her predictions would come to pass, and her critics would be proven wrong. Many technical and scientific reputations have been built on unpopular claims that are subsequently borne out: journeys in the wilderness endured for their eventual glories. Karl Popper saw this dynamic as exemplary of the scientific method.
The nature of nuclear risks makes vindication unrealistic, however. We know today that our engineer’s fears will be realized in time, but it would have been unrealistic for her to expect this or to build a career around it. Nuclear risks have timeframes of hundreds and sometimes thousands of years (millions, in the case of waste storage). These timeframes are vital, given the gravity of the hazards involved, but they fit awkwardly into human discourse. This is to say that even if Fukushima’s actual risks were orders of magnitude higher than the industry’s calculations promised, any critic who understood the real risk and argued for it could expect to go an entire lifetime without seeing their claims validated. (Even if our engineer had seen the future, she might still have been deterred. Construction on Fukushima began in July 1967; the plant did not fail until almost 44 years later – the length of an entire career.)
With no realistic hope of empirical vindication, our engineer is forced to slug it out in an exhausting crossfire of claims and counterclaims. It is a losing battle. Every year that passes without the disaster she foresees subtly undermines her credibility. This is unfair as, statistically speaking, even decades of safe operation prove very little about a reactor that is expected to melt down no more than once every 100,000 years. But humans are not statistical beings. Our imaginations and intuitions are honed for numbers at a human scale. Forty years without a catastrophe intuitively feels like compelling evidence of a reactor’s safety, just as fifty years without an atomic holocaust seems like compelling evidence that “deterrence works” (as I outline in a previous post).
Our engineer’s antagonists fuel and leverage this misperception of safety, and wield enormous resources with which to do so. Billions of dollars are invested in the notion that nuclear is safe, as are an array of expert careers and widespread notions of national security: a pervasive network of economic, military and professional interests. So it is that she comes to be marginalized by a vast ‘nucleocracy’ that controls the official publication channels, the professional bodies, and, through them, the public perception of ‘orthodox’ knowledge.
The inevitable result is a tragic career. For like Cassandra, the mythical Greek beauty who angered the gods, she is cursed with terrible foresights that nobody will heed. She struggles to publish. Her work is vilified by aggressive and well-funded lobbyists. She is appointed to few, if any, official committees. In short, she receives few of the accolades through which modern societies confer expert legitimacy: the ‘right to speak’ on the esoteric nuclear matters on which our lives and livelihoods depend. She publishes in the ‘alternative’ media – mostly online – which offers a platform from which to speak, even as it further drains her credibility.
Such is the fate of the nuclear naysayer.
This vignette is hyperbole to some extent, of course, but there is far more truth to it than most people realize. For a compelling glimpse into the fate of even well-established scientists who defy the ‘nucleocracy’, see Gayle Greene’s fascinating (2011) article: “Richard Doll and Alice Stewart: reputation and the shaping of scientific ‘truth’” in Perspectives in Biology and Medicine, 54 (4): 504–31. Or look instead to the various critical studies of Fukushima’s safety that were marginalized before the accident and rediscovered afterwards. Also, for an account of why our engineer’s ‘vindication’ is likely to be hollow even when it comes, see this working paper by yours truly.