One Month After West Virginia Chemical Spill Major Data Gaps and Uncertainties Remain
Yesterday marked exactly a month since what is now said to be 10,000 gallons of “crude MCHM”—mixed with what was later found to have included other chemicals—spilled into West Virginia’s Elk River, contaminated 1,700 miles of piping in the water distribution system for nine counties, and disrupted the lives of hundreds of thousands of the state’s residents.
Despite declining levels of the chemical in the water being fed into the distribution system, late this past week five area schools were closed due to detection of the distinctive licorice-like odor of MCHM and multiple reports of symptoms such as eye irritation, nausea and dizziness among students and staff.
The latest sampling data (for Feb. 7 and 8) at locations such as area fire hydrants and hospitals and at schools shows that MCHM is at non-detect levels (<10 parts per billion) in most samples, but the chemical is still being detected in a minority of the samples despite extensive flushing. Despite repeated calls to do so, officials appear to have yet to conduct any sampling of taps in residents’ homes.
This past week also featured a press conference by state and federal officials seeking to explain their response to the spill (a video of the entire press conference is available in four parts here; it’s worth watching).
Yesterday’s Charleston Gazette features the latest in a long series of outstanding front-line reports by Ken Ward, Jr., and his colleagues, who have closely followed every twist and turn of both the spill and the government’s response to it. Yesterday’s article makes clear the extent to which federal officials were winging it in the hours and days after the spill was discovered as they rushed to set a “safe” level for MCHM in tap water.
In this post I’ll delve a little deeper into CDC’s rush to set the “safe” level and the many ways in which CDC inadequately accounted for major data gaps and uncertainties. I’ll end by saying what I think CDC should have done instead.
CDC’s Rush to Set A “Safe” Level
On full display in last week’s press conference was CDC’s remarkable effort to claim that every new data point and every new source of uncertainty that has arisen since it set what Ken Ward calls the “magic number” of 1 part per million (1 ppm) had already been taken into account. The Charleston Gazette piece makes clear that this “safe” level was first derived by CDC late in the evening of Jan. 9, the day the spill was first discovered.
Bear in mind CDC set the 1 ppm level:
- before CDC had any data other than a single median lethal dose (LD50) value cited in Eastman Chemical’s 2005 Material Safety Data Sheet for crude MCHM.
- before CDC had knowledge of the existence of Eastman’s studies.
- before Eastman provided any of those studies to CDC or state officials.
- before CDC decided to switch from its indefensible reliance on the LD50 value to use a “no observed effect level” asserted by Eastman in a 1990 study of “pure MCHM,” a different test substance from the one that actually spilled.
- before CDC recommended that the state consider advising pregnant women not to drink the water until MCHM could not be detected, and the state did so.
- before the announcement that a second chemical (actually a mixture called “PPH, stripped”) was present in and had leaked from the tank.
- before concerns were raised about CDC’s reliance on a study that only examined MCHM’s toxicity by oral ingestion, in light of its claim that the water would be safe for all uses, including showering and bathing that would involve exposures through inhalation and dermal contact.
A CDC official, when asked about these and other concerns at last week’s press conference, stated unequivocally: “I can tell you: you can use your water however you like, you can drink it, you can bathe in it, you can use it however you like.” In support of that statement, she repeatedly invoked the third of what she called the three 10-fold “safety protection factors” used in CDC’s calculation as sufficient to account for each and every “lack of information on specific questions.”
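To make the arithmetic concrete, here is a rough sketch of how a drinking-water screening level is typically derived from an animal no-effect level using three 10-fold factors of the kind CDC described. The NOAEL, body-weight, and water-intake values below are illustrative assumptions, not CDC’s published worksheet:

```python
# Illustrative derivation of a drinking-water screening level from an
# animal no-observed-effect level (NOAEL), divided by three 10-fold
# uncertainty factors. All input values are assumptions for illustration.

noael_mg_per_kg_day = 100.0   # assumed animal NOAEL (mg per kg body weight per day)

uf_interspecies = 10.0        # animal-to-human extrapolation
uf_intraspecies = 10.0        # variability among humans
uf_database = 10.0            # the "database uncertainty factor" discussed below

# Reference dose: 100 / (10 * 10 * 10) = 0.1 mg/kg/day
reference_dose = noael_mg_per_kg_day / (uf_interspecies * uf_intraspecies * uf_database)

body_weight_kg = 10.0         # assumed small child
water_intake_l_per_day = 1.0  # assumed daily drinking-water intake

# Screening level: 0.1 * 10 / 1 = 1.0 mg/L, i.e., 1 ppm
screening_level_mg_per_l = reference_dose * body_weight_kg / water_intake_l_per_day
print(screening_level_mg_per_l)  # 1.0
```

Note that under this standard structure, each 10-fold factor is earmarked for a specific extrapolation; the question raised below is whether one of them can also absorb every other data gap.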
Overloading the Third 10x Factor
The third 10-fold factor is what is known as a “database uncertainty factor.” It is intended to be applied where, for example, one has only a single study of a chemical’s toxicity, or studies done only on adult animals, or studies done in only one species of animal. In the present case, all three limitations apply—more than justifying use of the third 10-fold database uncertainty factor.
But in this case, we also have other major gaps and uncertainties:
- The Eastman study that CDC relied on looked only at short-term exposures and effects (it exposed rats and looked for effects over only a 28-day period).
- Whether the study actually identified a no-effect level—as claimed by Eastman and apparently accepted at face value by CDC—has been questioned, with some arguing that effects were seen at the lowest dose tested (which would all by itself justify reducing the “safe” level by a factor of 40).
- CDC appears never to have gained access to the full study report and underlying data, which is necessary to ascertain whether or not Eastman’s interpretation of the data is correct.
- The study, conducted in 1990, used an old protocol dating back to 1981. That protocol has been significantly upgraded at least twice in the intervening two decades to address deficiencies in the original protocol and include important health endpoints such as neurotoxicity, immunotoxicity and endocrine disruption that were clearly not examined in the Eastman study.
- The test substance used in the study was pure MCHM, which contained only one (albeit the major one) of the six synthetic chemicals present in the crude MCHM, the substance that actually spilled.
- The study obviously did not examine the toxicity of what we now know was a second mixture of chemicals—“PPH, stripped”—that was also present in the tank, which contains four additional chemicals.
- The study considered toxicity by only one route of exposure—oral ingestion—despite the obvious potential for exposure by other routes.
- The effectiveness of the flushing procedures being employed for MCHM is unstudied and hence unknown. Both sampling data and reports of residual odor indicate the chemical is still present in parts of the distribution system a month after the spill and despite extensive flushing. This situation—coupled with the lack of sampling data at the actual point of exposure, i.e., in residents’ homes—calls into question what assumptions should be used as to the levels and durations of exposure.
To use, as CDC did, a single 10-fold factor to account for all of these myriad data gaps and uncertainties is wholly inadequate. It also deviates from standard risk assessment practice. For example:
- The U.S. Environmental Protection Agency (EPA)’s risk assessment guidance calls for use of an additional 10-fold uncertainty factor to extrapolate from acute or short-term effects and exposures to longer-term ones.
- Where the only studies available find effects at the lowest dose of a chemical tested, the EPA typically uses a 10-fold factor to account for starting with a lowest-observable-adverse-effect level (LOAEL) instead of a no-observable-adverse-effect level (NOAEL); see pp. 46-7 here.
- Finally, rather than try to cram the uncertainty due to lack of data on toxicity by other routes of exposure into the same 10-fold database uncertainty factor—as CDC did—the EPA often applies a separate “relative source contribution” factor (RSC) to account for the additive nature of other sources of exposure to a given chemical. In the absence of data to the contrary, the EPA typically assumes that the exposure route for which data exist—in this case oral ingestion—accounts for 20 percent of total exposure. That amounts to another five-fold factor.
While the EPA does not always apply the RSC in calculations for shorter-term exposures, it does so in setting Provisional Health Advisory values “developed to provide information in response to an urgent or rapidly developing situation”; see the EPA’s Provisional Health Advisories for Perfluorooctanoic Acid (PFOA) and Perfluorooctane Sulfonate (PFOS) (p. 3). Clearly, CDC could and should have done so in this case.
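To show the cumulative effect, here is a rough sketch of how stacking the EPA-style adjustments discussed above would lower the 1 ppm value. The choice and combination of factors is my illustration of the logic, not an official EPA or CDC calculation:

```python
# Sketch: applying additional EPA-style adjustments on top of the 1 ppm
# screening level. Factor choices follow the discussion in the text and
# are illustrative, not an official calculation.

cdc_level_ppm = 1.0

uf_duration = 10.0  # extrapolating a 28-day study to longer-term exposure
uf_loael = 10.0     # if effects occurred at the lowest dose tested (LOAEL, not NOAEL)
rsc = 0.2           # relative source contribution: oral route assumed to be 20% of exposure

# 1 / 10 / 10 * 0.2 = 0.002 ppm, i.e., 2 parts per billion
adjusted_ppm = cdc_level_ppm / uf_duration / uf_loael * rsc
print(adjusted_ppm)  # 0.002
```

Under these illustrative assumptions, the adjusted level lands 500-fold below the 1 ppm value, and below the 10 ppb detection limit used in the sampling program.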
CDC Still Erroneously Maintains Its 1000-Fold “Blanket of Protection” Is Highly Conservative
In last week’s press conference, a CDC official doubled down on CDC’s claim that its calculations of the “safe” level were highly conservative. Consider these statements made by the official:
“The levels are really 1000-fold more protective than those that were shown to cause harm.”
“The blanket of protection we have put of 1000 is an extremely strong blanket of protection and would prevent any harm at the levels.”
As I have noted in an earlier post, these statements indicate a fundamental misunderstanding or misrepresentation of the nature of, and the intent behind applying, the various uncertainty factors—which CDC continues to erroneously refer to as “safety factors.” I will repeat here what the National Academy of Sciences, in its 2009 report, Science and Decisions: Advancing Risk Assessment, has to say on this subject (p. 132, emphases in original):
Another problem … is that the term uncertainty factors is applied to the adjustments made to calculate the RfD [reference dose, derived from, e.g., a no-effect level] to address species differences, human variability, data gaps, study duration and other issues. The term engenders misunderstanding: groups unfamiliar with the underlying logic and science of RfD derivation can take it to mean that the factors are simply added on for safety or because of a lack of knowledge or confidence in the process. That may lead some to think that the true behavior of the phenomenon being described may be best reflected in the unadjusted value and that these factors create an RfD that is highly conservative. But the factors are used to adjust for differences in individual human sensitivities, for humans’ generally greater sensitivity than test animals’ on a milligrams-per-kilogram basis, for the fact that chemicals typically induce harm at lower doses with longer exposures, and so on. At times, the factors have been termed safety factors, which is especially problematic given that they cover variability and uncertainty and are not meant as a guarantee of safety.
What Should CDC Have Done?
Yesterday’s article in the Charleston Gazette quotes me saying that state and federal officials “were faced with the clearly daunting task of having exceedingly limited information to go on as they tried to assess and communicate about the risks of a chemical contaminating the tap water of hundreds of thousands of people.”
I believe that. It was a thankless situation. I also believe that, in the face of the extremely limited data available and the enormous uncertainties involved, CDC should have refused to recommend a “safe” level and made clear there was no scientific basis for setting one. Instead, CDC and the state of West Virginia should have told affected residents to avoid contact with the water until the chemical could not be detected—something they did for pregnant women a week into the spill.
As a practical matter, that decision would have effectively applied a 100-fold actual safety factor, because the limit of detection for MCHM in the sampling officials have conducted is at 10 parts per billion, a level 100-fold lower than the 1 ppm level. It appears that, with some exceptions, that no-detect level is being achieved in most of the distribution system—with the major caveat that the lack of sampling in residents’ homes is a remaining large unknown.
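The arithmetic behind that effective 100-fold margin is straightforward:

```python
# "Non-detect" as an effective safety factor: the detection limit for
# MCHM (10 ppb) relative to the 1 ppm screening level (1000 ppb).

screening_level_ppb = 1000.0  # 1 ppm expressed in parts per billion
detection_limit_ppb = 10.0    # reported limit of detection for MCHM

effective_safety_factor = screening_level_ppb / detection_limit_ppb
print(effective_safety_factor)  # 100.0
```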
While this idea of CDC refusing to set a level may sound extreme to some, it would have actually reflected the available science. It would also have avoided many of the ensuing missteps by, and much of the mistrust of, government officials that we’ve witnessed.
In an interview Thursday, Rafael Moure-Eraso, the Chairman and Chief Executive of the U.S. Chemical Safety Board, which is investigating the spill, put it succinctly: “There should be no MCHM in drinking water … period. There is no safe level.”