The Slow Death of Nuclear Power and the Rise of Renewables
The Chairman of the U.S. Atomic Energy Commission Lewis Strauss noted in a 1954 speech to the National Association of Science Writers that the splitting of the atom and the dawn of the Atomic Age heralded, in his view, a coming era of electrical power that for consumers would be “too cheap to meter.” Soon, he said, “it would not be too much to expect that our children”—meaning, of course, us—would know of things like famine, limited range of travel and nearly every other human malady only from reading about them in history books.
Born on a great tide of technological progress released by the atom, he intoned, mankind would collectively face a new age of prosperity the likes of which the world had never known before. At the time it was widely assumed that Strauss—a pivotal figure in America’s early years of nuclear experimentation and tinkering—was talking about fission power, as he had just days earlier spoken of industry having at its command vast amounts of “electrical power from atomic furnaces,” at the groundbreaking of the Shippingport Atomic Power Station, the world’s first full-scale, civilian nuclear power reactor, located outside of Pittsburgh.
In fact, Strauss was actually talking about fusion power, which, at the time, was a top secret, Cold War concern of the American government. However, the supposition of the country’s technocratic elite was that just as fission research had led to both the atomic bomb and plants like the one at Shippingport, fusion breakthroughs would soon lead to controlled fusion reactions and reactors that would herald the coming age of plenty that Strauss predicted.
From too-cheap-to-meter marvels to state-supported dinosaurs
History didn’t turn out as Strauss had envisioned, of course. A controlled fusion reaction—as opposed to the uncontrolled variety, which scientists easily pulled off and turned into ever more powerful nuclear weapons—proved devilishly difficult to produce. It meant capturing the process that fuels the sun and placing it at the center of an apparatus that could both contain and sustain the reaction while drawing from it enough energy to provide all that electricity Strauss said would be too plentiful to bother metering.
While we are much closer today to the dream of fusion power than we were in 1954, it is still some ways off, and in the meantime, the type of plant represented by the Shippingport Power Station—dirty, dangerous and relatively inefficient fission-based reactors—became the standard-bearer for nuclear energy. Thus, a stopgap technology meant to be merely the first section of a fusion-based bridge to the future, one we were never supposed to come to rest upon, became what humanity adopted and got stuck with. Instead of being too cheap to meter, the technology known as nuclear power today is a lumbering giant utterly dependent on state-based largesse for its existence.
To see why, one need only look at how fission-based reactor plants have been adopted since 1954. In the West, reactors became the preserve of heavy industry that worked hand-in-glove with the state to ensure that reactors got built, and while the safety record of the West’s reactors is rather remarkable, it came at huge cost. Put simply: The inherent danger represented by fission reactors—as seen at Three Mile Island, Chernobyl, and now Fukushima—represents a potential cost so high that the state became the only entity large enough to bear it.
Because of this danger, Western plants—from America’s privately-built and -operated reactors, to France’s gleaming fleet of power stations—are all required to be too big to fail, and rightly so. After all, if a major accident were to occur, the damage inflicted would be incalculable. But being required to be too big to fail often means fission reactors are too expensive to build privately and, thus, require state intervention to succeed. In the U.S., this intervention comes in the form of massive subsidies, both direct and indirect, which lower costs for corporate actors at every stage of the business—from the mining of uranium, to the storage of radioactive waste. Indeed, nuclear subsidies in the U.S. are so large that their value often exceeds that of the energy actually produced.
To drive home the point of just how coddled the nuclear power industry is in America, consider the Price–Anderson Nuclear Industries Indemnity Act of 1957—since renewed several times. It established a no-fault, insurance-like system in which the industry is collectively liable for roughly the first $12.6 billion in damages from an accident. Past the $12.6 billion mark, the federal government would effectively be responsible for covering losses incurred by the public. As should be obvious, this is a laughably small amount, and it means that if a major accident were ever to occur in the U.S., Uncle Sam—not the local power company—would be responsible for covering your now-glowing losses.
Meanwhile, in France—the Western country that has had perhaps the best experience with fission-based nuclear power—a state-controlled company runs the country’s nuclear power plants and a state-coddled national champion builds them. As a result, even more in France than in the U.S., the economic cost of nuclear power is borne by the taxpayer. While France has as a result managed to create a very large, very safe nuclear power industry that exports large amounts of electricity to neighboring European countries, it is not a private one by any stretch of the imagination. It is the French state—not the market—that is behind the success of France’s nuclear energy industry.
What’s more, no one has come up with a viable scheme to take care of the immense amounts of radioactive toxic waste produced by fission reactors, and there’s certainly no scheme that would do so for the millennia required to keep such material out of harm’s way. Current plans bandied about in both Europe and the U.S. envision entombing the waste deep underground for over 10,000 years, but so far, few communities have proven willing to host the giant waste repositories anywhere near them. This has led to the stalling of U.S. plans for long-term storage, while development of similar plans in Europe is behind schedule, over budget and proceeding piecemeal, at best. Regardless of what is eventually done with the waste, it is guaranteed to be something that taxpayers will ultimately be on the hook for in one way or another for decades—if not centuries—to come.
Fission’s extinction is on the horizon
Thus coddled, it should come as no surprise that nuclear power became limited in scale and scope as the 20th century wore on. Indeed, nuclear energy’s share of civilian electricity production has remained relatively stagnant for years, and outside of China—another centrally-planned system of government support that those running French and American reactors might find familiar and comforting—the so-called nuclear renaissance once touted by industry supporters has shown little sign of actually materializing. Indeed, commercial fission-reactor power plants in the U.S.—just like coal—are being driven to extinction by competition from gas, wind and now solar power.
Furthermore, this is a worldwide trend, as a recent report issued by Germany’s Green-leaning Heinrich Böll Foundation points out. Nuclear power’s share of worldwide electricity production peaked in the mid-1990s at around 17.6 percent; since then it has steadily declined, falling to 10.8 percent of all electricity produced in 2013. And it isn’t just nuclear’s share of the total that is falling—it’s absolute production, too. Globally, output from nuclear reactors hit an all-time high of 2,660 terawatt-hours (TWh) in 2006 before falling to 2,359 TWh in 2013. (1 TWh is enough to power roughly 90,000 homes for a year.)
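The figures above are easy to sanity-check. A minimal sketch, assuming an average home uses about 11,000 kWh of electricity per year (an assumption roughly in line with U.S. averages, not a figure from the report):

```python
# Rough sanity check of the nuclear-output figures cited above.
KWH_PER_TWH = 1_000_000_000       # 1 TWh = 1 billion kWh
KWH_PER_HOME_PER_YEAR = 11_000    # assumption: rough U.S. average household use

homes_per_twh = KWH_PER_TWH / KWH_PER_HOME_PER_YEAR
print(f"1 TWh powers about {homes_per_twh:,.0f} homes for a year")  # ~90,000

# Global nuclear output: 2006 peak vs. 2013
peak_twh, recent_twh = 2660, 2359
decline_pct = (peak_twh - recent_twh) / peak_twh * 100
print(f"Decline from peak: {decline_pct:.1f}%")  # ~11.3%
```

In other words, between 2006 and 2013 the world lost roughly 300 TWh of annual nuclear output, enough to have powered tens of millions of homes.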
All this is driven by the simple fact that whereas up until the late 1980s, many more nuclear power plants were being brought online in a given year than were shuttered or mothballed, that trend has flipped. Now, more nuclear power plants are being closed than opened in an average year, and starting in 2000, existing plants have generally operated below capacity due to maintenance issues or cost competition from other sources of power. Notably, gas, wind and solar have been the major beneficiaries of this decline, since while nuclear plants have begun to consistently operate below capacity, these cheaper alternatives have simultaneously experienced tremendous growth.
One saving grace that the industry might tout as evidence of its non-obsolescence is that the number of active, new-build construction projects is higher now than it has been since 1987. While technically true, this “build up” is deceptive: a molehill next to the mountain of new builds that came in the 1970s. Further, many projects listed as “active” are little more than paper projects, having been technically active for years with little to show for it. Indeed, according to the German Greens’ report:
Eight reactors have been listed as “under construction” for more than 20 years and have continually seen delays and setbacks. Of these eight, just two are likely to be hooked up to the grid in the coming years;
One Indian reactor has been similarly under construction for 12 years with no hook-up date in sight;
In Taiwan, two reactor units under construction for 15 years were halted this past April due to political opposition;
At least 50 of the units listed as “under construction” have encountered construction delays—delays lasting from several months to several years;
In China, ground zero for the so-called nuclear renaissance, 21 of the 28 units under construction are experiencing delays lasting between several months and more than two years;
Of the 17 remaining projects, a few have come online but many have yet to reach a targeted start-up date, and may or may not face delays or cancellations in the future.
The future is renewable
Any way you want to slice it, the report issued by the German Greens is an impressive, devastating indictment of the grim state of the global nuclear power industry. If not yet dead, the industry is nonetheless so gravely ill that leaving it to its own devices would surely lead to its death. Without even more state support, in other words, the production of electricity from nuclear power plants is increasingly going to become something that is a very small, very expensive part of the global energy complex.
This is because reactors are being retired faster than they are being completed, existing fleets are aging and becoming targets for shuttering, and the global nuclear power fleet has been operating well below capacity for many years now—all while new-build projects face delays, cost run-ups and regulatory and market uncertainty going forward. This is happening everywhere, not just in one or two countries. Given this, it should come as no surprise that nuclear is fast being replaced by renewables.
Again, according to the report issued by the German Greens, in 2013, 32 gigawatts (GW) of wind and 37 GW of solar were added to the world’s power grids—capacity equivalent to several nuclear power plants. By the end of last year, China had a total of 91 GW of wind power and 18 GW of solar capacity installed, with solar exceeding operating nuclear generating capacity for the first time. Indeed, China added four times more solar than nuclear capacity in the past year alone, and in 2013 it generated more electricity from wind than from nuclear power.
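Nameplate gigawatts overstate wind and solar relative to nuclear, because renewables run at lower capacity factors. A minimal sketch converting the 2013 additions cited above into rough average-output terms (the capacity factors are assumptions for illustration—roughly 25 percent for wind, 15 percent for solar and 90 percent for nuclear—not figures from the report):

```python
# Convert nameplate capacity additions into rough average-output equivalents.
# Capacity factors below are illustrative assumptions, not reported figures.
CAPACITY_FACTOR = {"wind": 0.25, "solar": 0.15, "nuclear": 0.90}
additions_gw = {"wind": 32, "solar": 37}  # 2013 worldwide grid additions

for source, gw in additions_gw.items():
    avg_gw = gw * CAPACITY_FACTOR[source]
    # Express as equivalents of a typical 1-GW nuclear unit at 90% uptime
    nuclear_equiv = avg_gw / (1.0 * CAPACITY_FACTOR["nuclear"])
    print(f"{source}: {gw} GW nameplate = {avg_gw:.1f} GW average "
          f"= {nuclear_equiv:.1f} one-GW nuclear units")
```

Even after that discount, a single year of wind and solar additions works out to roughly fifteen one-gigawatt reactors’ worth of average output—which is why “equivalent to several nuclear power plants” is, if anything, conservative.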
Meanwhile, Spain generated more power from wind than from any other source, outpacing nuclear and all other competitors for the first time—and marking the first time in any country that wind has been the largest electricity-generating source over an entire year. Spain has thus joined the list of countries that possess nuclear power yet produce more electricity from renewables—excluding large hydro-power—than from nuclear power. (This is no small group, as it includes not just the abovementioned China, but Brazil, Germany and Japan, too.)
This trend also isn’t going away. Solar is entering an exponential growth path—mimicking the route electronics has taken—while wind continues to grow not just in the U.S., but in the rest of the world as well. Renewables are currently the second largest source of electricity for the European Union and are on a path to overtake fossil fuels—which are declining—in the coming decades. Solar and wind as individual projects are cheaper, quicker to bring to market, more flexible once on the market and carry none of the devastating liability issues that nuclear does. Worse for nuclear, though, is that renewables actively drive down wholesale electricity prices in ways devastating to nuclear—and, indeed, to all centralized hub-and-spoke utility models—which depend on large volumes of steadily-priced power to remain competitive.
Nuclear, then, is entering a long twilight period of decline, and it’s difficult to see how it will emerge as a viable industry. Like the dinosaurs of old that did not realize their days were numbered, the nuclear power industry is a slow-to-adapt sector requiring a very specialized operating environment in the form of hugely expensive subsidies and an exquisitely calibrated—some would say rigged—market to exist. Since those two pillars of the industry are crumbling, the future of fission-based nuclear power plants may very well become something only talked about—as Lewis Strauss predicted long ago—in the history books.