One Month After West Virginia Chemical Spill Major Data Gaps and Uncertainties Remain
Yesterday marked exactly a month since what is now said to be 10,000 gallons of “crude MCHM”—mixed with what was later found to have included other chemicals—spilled into West Virginia’s Elk River, contaminated 1,700 miles of piping in the water distribution system for nine counties, and disrupted the lives of hundreds of thousands of the state’s residents.
Despite declining levels of the chemical in the water being fed into the distribution system, late this past week five area schools were closed due to detection of the distinctive licorice-like odor of MCHM and multiple reports of symptoms such as eye irritation, nausea and dizziness among students and staff.
The latest sampling data (for Feb. 7 and 8) at locations such as area fire hydrants and hospitals and at schools shows that MCHM is at non-detect levels (<10 parts per billion) in most samples, but the chemical is still being detected in a minority of the samples despite extensive flushing. Despite repeated calls to do so, officials appear to have yet to conduct any sampling of taps in residents’ homes.
This past week also featured a press conference by state and federal officials seeking to explain their response to the spill (a video of the entire press conference is available in four parts here; it’s worth watching).
Yesterday’s Charleston Gazette features the latest in a long series of outstanding front-line reports by Ken Ward, Jr., and his colleagues, who have closely followed every twist and turn of both the spill and the government’s response to it. Yesterday’s article makes clear the extent to which federal officials were winging it in the hours and days after the spill was discovered as they rushed to set a “safe” level for MCHM in tap water.
In this post I’ll delve a little deeper into CDC’s rush to set the “safe” level and the many ways in which CDC inadequately accounted for major data gaps and uncertainties. I’ll end by saying what I think CDC should have done instead.
CDC’s Rush to Set A “Safe” Level
On full display in last week’s press conference was CDC’s remarkable effort to claim that every new data point and every new source of uncertainty that has arisen since it set what Ken Ward calls the “magic number” of 1 part per million (1 ppm) had already been taken into account. The Charleston Gazette piece makes clear that this “safe” level was first derived by CDC late in the evening of Jan. 9, the day the spill was first discovered.
Bear in mind CDC set the 1 ppm level:
- before CDC had any data other than a single median lethal dose (LD50) value cited in Eastman Chemical’s 2005 Material Safety Data Sheet for crude MCHM.
- before CDC had knowledge of the existence of Eastman’s studies.
- before Eastman provided any of those studies to CDC or state officials.
- before CDC decided to switch from its indefensible reliance on the LD50 value to use a “no observed effect level” asserted by Eastman in a 1990 study of “pure MCHM,” a different test substance than that which actually spilled.
- before CDC recommended that the state consider advising pregnant women not to drink the water until MCHM could not be detected, and the state did so.
- before the announcement that a second chemical (actually a mixture called “PPH, stripped”) was present in and had leaked from the tank.
- before concerns were raised about CDC’s reliance on a study that only examined MCHM’s toxicity by oral ingestion, in light of its claim that the water would be safe for all uses, including showering and bathing that would involve exposures through inhalation and dermal contact.
A CDC official, when asked about these and other concerns at last week’s press conference, stated unequivocally: “I can tell you: you can use your water however you like, you can drink it, you can bathe in it, you can use it however you like.” In support of that statement, she repeatedly invoked the third of what she called the three 10-fold “safety protection factors” used in CDC’s calculation as sufficient to account for each and every “lack of information on specific questions.”
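The arithmetic behind a screening level of this kind is standard: divide an animal no-effect dose by the product of the uncertainty factors, then convert the resulting reference dose into a water concentration. The sketch below illustrates that calculation; the NOAEL, body weight, and water-intake values are illustrative assumptions chosen to reproduce a 1 ppm result, not CDC's published inputs.

```python
# Sketch of the screening-level calculation that the "three 10-fold
# safety protection factors" describe. All numeric inputs below are
# illustrative assumptions, not CDC's published derivation.

def screening_level_ppm(noael_mg_per_kg_day, body_weight_kg,
                        water_intake_l_per_day, uncertainty_factors):
    """Derive a drinking-water screening level (ppm) from an animal NOAEL."""
    total_uf = 1
    for uf in uncertainty_factors:
        total_uf *= uf
    reference_dose = noael_mg_per_kg_day / total_uf  # mg/kg-day
    # Convert dose to a water concentration; mg/L is numerically equal to ppm.
    return reference_dose * body_weight_kg / water_intake_l_per_day

# Three 10-fold factors: animal-to-human extrapolation, human
# variability, and the database uncertainty factor discussed below.
level = screening_level_ppm(
    noael_mg_per_kg_day=100,      # assumed no-effect level from a rat study
    body_weight_kg=10,            # assumed small-child body weight
    water_intake_l_per_day=1.0,   # assumed daily drinking-water intake
    uncertainty_factors=[10, 10, 10],
)
print(level)  # 1.0 ppm
```

The point of the sketch is that the entire 1000-fold divisor comes from just three fixed factors, so every data gap not covered by the first two must be absorbed by the third.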
Overloading the Third 10x Factor
The third 10-fold factor is what is known as a “database uncertainty factor.” It is intended to be applied where, for example, one has only a single study of a chemical’s toxicity, or studies done only on adult animals, or studies done in only one species of animal. In the present case, all three limitations apply—more than justifying use of the third 10-fold database uncertainty factor.
But in this case, we also have other major gaps and uncertainties:
- The Eastman study that CDC relied on looked only at short-term exposures and effects (it exposed rats and looked for effects over only a 28-day period).
- Whether the study actually identified a no-effect level—as claimed by Eastman and apparently accepted at face value by CDC—has been questioned, with some arguing that effects were seen at the lowest dose tested (which would all by itself justify reducing the “safe” level by a factor of 40).
- CDC appears never to have gained access to the full study report and underlying data, which is necessary to ascertain whether or not Eastman’s interpretation of the data is correct.
- The study, conducted in 1990, used an old protocol dating back to 1981. That protocol has been significantly upgraded at least twice in the decades since to address deficiencies in the original protocol and to include important health endpoints such as neurotoxicity, immunotoxicity and endocrine disruption that were clearly not examined in the Eastman study.
- The test substance used in the study was pure MCHM, which contained only one (albeit the major one) of the six synthetic chemicals present in the crude MCHM that actually spilled.
- The study obviously did not examine the toxicity of what we now know was a second mixture of chemicals—“PPH, stripped”—that was also present in the tank, which contains four additional chemicals.
- The study considered toxicity by only one route of exposure—oral ingestion—despite the obvious potential for exposure by other routes.
- The effectiveness of the flushing procedures being employed for MCHM is unstudied and hence unknown. Both sampling data and reports of residual odor indicate the chemical is still present in parts of the distribution system a month after the spill and despite extensive flushing. This situation—coupled with the lack of sampling data at the actual point of exposure, i.e., in residents’ homes—calls into question what assumptions should be used as to the levels and durations of exposure.
To use, as CDC did, a single 10-fold factor to account for all of these myriad data gaps and uncertainties is wholly inadequate. It also deviates from standard risk assessment practice. For example:
- The U.S. Environmental Protection Agency (EPA)’s risk assessment guidance calls for use of an additional 10-fold uncertainty factor to extrapolate from acute or short-term effects and exposures to longer-term ones.
- Where the only studies available find effects at the lowest dose of a chemical tested, the EPA typically uses a 10-fold factor to account for starting with a lowest-observable-adverse-effect level (LOAEL) instead of a no-observable-adverse-effect level (NOAEL); see pp. 46-7 here.
- Finally, rather than try to cram the uncertainty due to lack of data on toxicity by other routes of exposure into the same 10-fold database uncertainty factor—as CDC did—the EPA often applies a separate “relative source contribution” factor (or RSC) to account for the additive nature of other sources of exposure to a given chemical. In the absence of data to the contrary, the EPA typically assumes that the exposure route for which data exist—in this case oral ingestion—accounts for 20 percent of total exposure. That amounts to another five-fold factor.
While the EPA does not always apply the RSC in calculations for shorter-term exposures, it does so in setting Provisional Health Advisory values “developed to provide information in response to an urgent or rapidly developing situation;” see the EPA’s Provisional Health Advisories for Perfluorooctanoic Acid (PFOA) and Perfluorooctane Sulfonate (PFOS) (p. 3). Clearly, CDC could and should have done so in this case.
CDC Still Erroneously Maintains Its 1000-Fold “Blanket of Protection” is Highly Conservative
In last week’s press conference, a CDC official doubled down on CDC’s claim that its calculations of the “safe” level were highly conservative. Consider these statements made by the official:
“The levels are really 1000-fold more protective than those that were shown to cause harm.”
“The blanket of protection we have put of 1000 is an extremely strong blanket of protection and would prevent any harm at the levels.”
As I have noted in an earlier post, these statements indicate a fundamental misunderstanding or misrepresentation of the nature of, and the intent behind applying, the various uncertainty factors—which CDC continues to erroneously refer to as “safety factors.” I will repeat here what the National Academy of Sciences, in its 2009 report, Science and Decisions: Advancing Risk Assessment has to say on this subject (p. 132, emphases in original):
Another problem … is that the term uncertainty factors is applied to the adjustments made to calculate the RfD [reference dose, derived from, e.g., a no-effect level] to address species differences, human variability, data gaps, study duration and other issues. The term engenders misunderstanding: groups unfamiliar with the underlying logic and science of RfD derivation can take it to mean that the factors are simply added on for safety or because of a lack of knowledge or confidence in the process. That may lead some to think that the true behavior of the phenomenon being described may be best reflected in the unadjusted value and that these factors create an RfD that is highly conservative. But the factors are used to adjust for differences in individual human sensitivities, for humans’ generally greater sensitivity than test animals’ on a milligrams-per-kilogram basis, for the fact that chemicals typically induce harm at lower doses with longer exposures, and so on. At times, the factors have been termed safety factors, which is especially problematic given that they cover variability and uncertainty and are not meant as a guarantee of safety.
What Should CDC Have Done?
Yesterday’s article in the Charleston Gazette quotes me saying that state and federal officials “were faced with the clearly daunting task of having exceedingly limited information to go on as they tried to assess and communicate about the risks of a chemical contaminating the tap water of hundreds of thousands of people.”
I believe that. It was a thankless situation. I also believe that, in the face of the extremely limited data available and the enormous uncertainties involved, CDC should have refused to recommend a “safe” level and made clear there was no scientific basis for setting one. Instead, CDC and the state of West Virginia should have told affected residents to avoid contact with the water until the chemical could not be detected—something they did for pregnant women a week into the spill.
As a practical matter, that decision would have effectively applied a 100-fold actual safety factor, because the limit of detection for MCHM in the sampling officials have conducted is at 10 parts per billion, a level 100-fold lower than the 1 ppm level. It appears that, with some exceptions, that no-detect level is being achieved in most of the distribution system—with the major caveat that the lack of sampling in residents’ homes is a remaining large unknown.
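The 100-fold figure is pure unit conversion, no toxicology required, as this one-liner shows:

```python
# The effective safety margin implied by a "non-detect" standard:
# compare the 1 ppm level to the reported 10 ppb limit of detection.

cdc_level_ppb = 1.0 * 1000   # 1 ppm expressed in parts per billion
detection_limit_ppb = 10     # reported limit of detection for MCHM

effective_factor = cdc_level_ppb / detection_limit_ppb
print(effective_factor)  # 100.0
```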
While this idea of CDC refusing to set a level may sound extreme to some, it would have actually reflected the available science. It would also have avoided many of the ensuing missteps by government officials, and much of the resulting mistrust of them, that we’ve witnessed.
In an interview Thursday, Rafael Moure-Eraso, the Chairman and Chief Executive of the U.S. Chemical Safety Board, which is investigating the spill, put it succinctly: “There should be no MCHM in drinking water … period. There is no safe level.”