I recently spoke in Ottawa at the Institute for Science, Society and Policy about my research on the politics of Canada's regulation of BPA. That prompted me to do something I've been meaning to do for some time: examine the publicly available material to assess whether Canada's Chemicals Management Plan was too cautious when it got started. In short, I think it was.
Since 1999, Health and Environment Canada have been undertaking a monumental task to retroactively assess the risks associated with approximately 24,000 chemicals (natural and synthetic) that had been circulating through the Canadian economy and society in the 1980s. The point of this exercise was to act in the face of increasing public concern about the health and environmental effects of chemicals (think DDT) after unfettered industrialization had released thousands into the economy and ecosystems without any safety testing.
That is all well and good; however, 24,000 chemicals is a lot of chemicals. If government experts were to conduct a full assessment of the scientific knowledge of how each one works and whether we are exposed to it, Health and Environment Canada would never get around to other important ways of keeping us safe (think monitoring air quality or food-processing facilities). We might spend so much money examining chemicals that we end up shortchanging our health in other areas.
Interest groups, parliamentarians and bureaucrats were well aware of this challenge when they devised amendments to the Canadian Environmental Protection Act (CEPA) in the mid-1990s to manage this process efficiently. They decided to enshrine, in law, two criteria that governed whether a substance on the Domestic Substances List (DSL) would get a full, comprehensive risk assessment by bureaucrats in Health and Environment Canada. Those two criteria were (paraphrased from CEPA, http://laws-lois.justice.gc.ca/eng/acts/c-15.31/page-9.html#h-31):
- Those substances that presented the greatest potential for exposure (GPE); or,
- Those substances that were "inherently toxic" to either humans or the environment and either persisted in the ecosystem or bio-accumulated up the food chain.
I'm going to zero in on the first criterion because it is the more conservative, more precautionary of the two. It guaranteed that a lot of chemicals would be subjected to a full risk assessment. When the categorization was complete, just shy of 4,000 chemicals were caught, just under one-sixth of the entire list. At the time, people were shocked the list was so long. Health Canada developed several measures to operationalize potential for exposure, two of which were numerical measures of how much each substance circulated (i.e. the number of companies manufacturing it and an estimate of the quantity in circulation). But right away, the list caused problems for Health Canada, because the GPE criterion on its own caught everything from carbon, silicon, zinc and iron to gelatins, starch, olive oil and, yes, water. Water? Sure. It was on the DSL, and there certainly is a great potential for exposure to it. Very quickly everybody realized that what matters for risk management is not the potential for exposure but the actual exposure, and that the government was headed down the road of spending a lot of money and time for not very much return.
So, Health Canada found a way not to pursue further work on some of the obvious substances (no, water did not get a full risk assessment), even though the legislation is fairly clear that it should have gone further. But that still left just shy of 900 substances to which Health Canada and Environment Canada dedicated enormous resources.
To further organize their work, Health and Environment Canada prioritized 193 of these substances into 12 high-priority batches that got screened right away, starting in 2006. You could consider these to be the worst of the worst, the ones most likely to present a threat to human health or the ecosystem. All the relevant data associated with these 193 risk assessments are right here. Perhaps most usefully, the data tell you which substances were classed as "toxic" and subject to regulation, and which were not.
I always thought that tabulating what proportion of substances actually got classed as toxic would give us a measure of how well parliamentarians, interest groups and bureaucrats did when they set up the two criteria that trigger a full risk assessment. If they got the criteria right, then the full risk assessments would declare close to 100% of substances, or perhaps a little lower, say 75%, to be toxic. If they were too cautious and overzealous, the proportion would be much lower.
Here, I indulged my side interest in data scraping, computer programming and R to develop a short script that automatically downloaded the material in each table. The script is available here and can be sourced with the commands below.
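The actual script is in R and linked above; as a rough, language-neutral illustration of what the tabulation step involves, here is a minimal Python sketch that pulls substance/conclusion pairs out of an HTML results table and tallies the "toxic" findings. The table layout and contents below are invented placeholders, not the actual CMP pages:

```python
from html.parser import HTMLParser

# Hypothetical, simplified stand-in for a CMP batch results table:
# each row lists a substance and its assessment conclusion.
SAMPLE_HTML = """
<table>
  <tr><th>Substance</th><th>Conclusion</th></tr>
  <tr><td>Substance A</td><td>Toxic</td></tr>
  <tr><td>Substance B</td><td>Not toxic</td></tr>
  <tr><td>Substance C</td><td>Toxic</td></tr>
</table>
"""

class TableScraper(HTMLParser):
    """Collects the text of each <td> cell, row by row (header <th> cells are skipped)."""
    def __init__(self):
        super().__init__()
        self.rows, self._row, self._in_td = [], [], False

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag == "td":
            self._in_td = True

    def handle_endtag(self, tag):
        if tag == "td":
            self._in_td = False
        elif tag == "tr" and self._row:
            self.rows.append(self._row)

    def handle_data(self, data):
        if self._in_td:
            self._row.append(data.strip())

parser = TableScraper()
parser.feed(SAMPLE_HTML)
toxic = sum(1 for _, conclusion in parser.rows if conclusion.lower() == "toxic")
share = toxic / len(parser.rows)
print(f"{toxic} of {len(parser.rows)} substances classed toxic ({share:.0%})")
```

Against the real assessment pages, one would first fetch each batch's table with an HTTP client; the counting logic stays the same.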
What are the results? Only 22% of the 193 highest-priority substances in the Chemicals Management Plan met the criteria for toxicity in the Canadian Environmental Protection Act and so were subject to regulation. That strikes me as low, and it suggests that the initial criteria embedded in the legislation were unnecessarily broad.
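For scale (my arithmetic, not a figure from the assessments): the exact count of toxic findings isn't stated above, but 22% of 193 corresponds to roughly 42 substances, against the roughly 145 that the hypothetical 75% hit rate would have implied:

```python
# Back-of-the-envelope counts implied by different "hit rates" among the
# 193 highest-priority substances: 75% is the hypothetical benchmark for
# well-chosen criteria, 22% is the reported outcome.
total = 193
for rate in (0.75, 0.22):
    print(f"{rate:.0%} of {total} is about {round(rate * total)} substances")
```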
Someone might say that it's better to be safe than sorry. And that is true to a point. But the challenge associated with risk management is to balance the costs associated with assessing and managing risks with the risks themselves.
When the Chemicals Management Plan was first established in CEPA (of course, it wasn't called that at the time), Prof. William Leiss wrote an interesting book about the process and pointed out many of these problems. In particular, he was concerned that what he called the excessive list-making exercises in the legislation might be detracting from any discussion of the costs of regulating potentially risky substances in light of their benefits.
> What I mean is this: the protracted process of listing, assessing, and (possibly) regulating substances or sets of substances goes on entirely in the absence of any calculus of where the best trade-offs may be found (Leiss, 2001: 211).
That only 22% of the first, highest-priority substances were actually subjected to regulation years after the legislation, and that the rate of regulation will almost certainly be lower among the remaining substances to be analyzed, may have proven him right. The process set up in 1999 was too cautious and probably wasted a lot of important scientific resources.
This article has been cross-posted to http://policyoptions.irpp.org/author/skiss/