Sunday, February 28, 2016

In defense of the chemophobe

Today's post is by guest blogger Beth Seacord.

A few months ago a meme was circulating in social media. It went something like this:

Dihydrogen Monoxide. It is found in every lake, river and ocean. It is used in nuclear reactors, corrodes metal, burns human skin, and causes thousands of deaths each year.

People responded with outrage and concern, calling for a ban on dihydrogen monoxide. Only later was it revealed that di-hydrogen (two hydrogen atoms) mon-oxide (one oxygen atom) is just plain water. This ‘chemophobe’ meme implies that the ordinary person is gullible, scientifically illiterate, and, because of this, overly sensitive and irrationally fearful of pesticides, artificial food additives, genetically modified organisms (GMOs), and other industrial byproducts that make their way into the environment and food chain.

Here is a second example of the ‘chemophobe’ meme. This time, the message is more explicit:

“Nearly every chemical constituent will, in certain concentrations, kill children and adults....Don’t be alarmed by words you don’t understand.…Read. Understand. Reduce the stupid. Science. It reduces the stupid.”

Attacks on the public’s aversion to industrial chemicals are not limited to social media. Frustrated scientists, industry representatives and risk assessors contend that if only the public understood the mathematics involved in calculating risk probabilities, they would realize there is no reason to fear ‘life-bettering’ bio-technology and chemistry.

This criticism rests on two fundamental mistakes. The first is to assume that rational aversion to death and other harms is a linear function of the probability of death and other harms (see Daniel Kahneman et al.). In other words, it is a mistake to assume that because a particular risk probability is small, the value of avoiding the risk is also small. However, people often have very good reasons for viewing risks of fatality or serious injury in a non-linear way. Here are just two:

First, the value of accepting or avoiding a given risk depends upon the benefits associated with the risk. A person may rationally decide to expose themselves to a greater risk while at the same time choosing to avoid a lower-probability risk because of the benefits associated with each. For instance, I might choose to drive because I need to get to work even though I know that the leading cause of death or serious injury for those in my age group is automobile accidents. The same person who drives to work every day may rationally decide to avoid the lower-probability risk of BPA by using BPA-free water bottles. This would be rational, for example, if there were nothing to be gained by exposing oneself to the low-probability risk. Consider the following example given by philosopher of science Kristin Shrader-Frechette: You are asked to play a game of Russian roulette where the chances of death are one in ten thousand. It is rational to refuse to play even though the chances of death are small because there is nothing to be gained by playing. Analogously, a person may rationally decide to avoid products with endocrine disruptors like BPA, either because these products are not necessary or because there are safer alternatives available.

Second, it is sometimes rational to view risks non-linearly if the risks are not morally equivalent. It is morally important how one comes to be exposed to a given risk. In other words, there is an important moral difference between risks that are imposed on us and risks that are voluntarily chosen. It seems perfectly rational to be much more averse to risks that are imposed on us without our consent than to risks that are voluntarily accepted. Citizens who are concerned about pesticide residue on produce, artificial food additives and unlabeled GMO ingredients, or emission levels from the local factory are concerned about risks that have, in many cases, been imposed on them. It seems rational to have a non-linear risk aversion favoring chosen over unchosen risks (see, e.g., Kristin Shrader-Frechette's Risk Analysis and Scientific Method).

The second mistake behind the ‘chemophobe’ meme is the assumption that the probability of harm from industrial pollution and other toxins is low. In fact, it is not clear that it is. While lifetime exposure to each individual toxin might be within acceptable limits, the combined effect of hundreds of environmental toxins may put us at significant risk. For instance, the Centers for Disease Control and Prevention estimates that about 144,000 cancer deaths per year can be traced to non-tobacco-related environmental pollution. This accounts for about 30% of all deaths from cancer.

Finally, there are vulnerable people in our population who are more susceptible to pollution than others. Women, children, workers, the elderly, the disabled and the poor are either more vulnerable to pollution or experience greater exposure. One need look no further than our current headlines. In Flint, Michigan the concerns of citizens about the water quality in the city were dismissed, belittled and mocked for months before journalists revealed corruption among the city’s officials. The citizens in Flint were told they were being paranoid.

So am I an “irrational, bored, stupid, alarmist” who is overly fearful about toxins in our water, air and food? On behalf of the 8,000 children in Flint, Michigan who have suffered irreversible damage to their brains and nervous systems…on behalf of those in cities like Flint, I am a ‘chemophobe.’

Beth Seacord
Department of Philosophy
Grand Valley State University


  1. Thanks for this interesting post. You seem to have pitted yourself against the "chemophobe" memes, but I think the authors of those memes would agree with most or all of what you have to say. Here are a few claims I take you and them to share:
    1. Believing all chemicals to be dangerous (without considerations of dose etc.) is irrational.
    2. When faced with potential risks, it's often best to seek more information, and then make informed cost-benefit analyses.
    3. Science, especially "good" science, is the best source of that information.

    It seems to me that your real opponent is the person who argues that everything commercially available must be OK to consume (as directed on the packet) because scientists and the government would not permit even mildly poisonous products to be sold without warning labels.

    1. Hi Dan, one of the problems with tweets, texts and social media memes like these is that it is hard to know what the author or those re-posting the memes have in mind, because the context is limited.

      I think that very few people would make the fine-grained distinctions you do here (which, for the most part, I agree with). I read the meme as making fun of anyone who says things like "babies are born with x concentration of chemicals in their bodies." The thrust of the meme is to downplay the concerns that people have about toxins and to portray them as over-reacting. I think we aren't concerned enough given incidents like Flint as well as other epidemics (e.g. increasing asthma rates, increasing severity of allergies, increasing cancer rates and drastically increasing autism rates).

  2. Hi Beth, thanks for this interesting post on a really important topic.

    I agree with what Dan said above. But perhaps you could also clarify two other things for me.

    1. I'm not sure I see the discussion of your first point as evidence that rational aversion to harm is not a linear function of the probability of that harm. You seem to discuss cases in which there are simply other harms and benefits to be factored into the calculation. I'm not asserting that it is a linear function, since that point can also be made with respect to diminishing marginal utility or, conceivably, prospect theory.

    2. You say it seems rational to be more averse to harms that are imposed than those that are chosen. I wonder if you could say more about the rationality claim. I would say that it is understandable, but not rational if your aim is to maximize personal utility. I mean, you can certainly factor the resentment you feel from an imposed moral harm (and the benefit of relieving it) into an expected value equation, but that doesn't seem to me to be a consideration about moral harms themselves. And when you do that routinely, it makes all sorts of actions rational, such as spending enormous amounts of money on insurance policies for "peace of mind." It also tends to support designing a penal system in accord with what is required to satisfy the resentment we feel toward criminals, rather than what would ultimately benefit society. Again, I think there are perspectives from which your claim might be defensible, but I'm not sure which one you are appealing to.

    Finally, are you saying that you think that chemophobic attitudes typically can be seen as rational by reference to these and perhaps other considerations? If so, I'm pretty suspicious of that at a personal level. This is mostly because the people I know who fall under this description typically subscribe to a panoply of demonstrably irrational beliefs of an empirical nature, e.g. the effectiveness of homeopathic remedies, alleged dangers of vaccination, etc. And some of these are actually supported in some way by their chemophobia. To be clear, though, I would in no way suggest that people who are not chemophobic are typically more rational. The main personal value of becoming scientifically literate is the ability to know when empirical claims are supported by the evidence, regardless of their origin.


    1. Hi Randy, It was great meeting you yesterday. (Now I know what all the Randy-hype was about!)

      I was trying to defend the position that it can be rational to regard equal risks or harms to health unequally; that is, it is rational to be more averse to harms that are imposed than to those that are chosen.

      Your second point is an excellent objection to this: If I explain the extra dis-value accrued by imposed risks/harms in terms of resentment (or peace of mind), then I open the door to all kinds of irrational behavior--A person who has an overly large fear of spiders (me) might invest large amounts of money in having an anti-spider task force come to the house on a weekly basis (not me). The disvalue a person who is irrationally afraid of spiders places on being exposed to a spider would make all sorts of things ‘rational’ for them to do in order to avoid the imaginary risk of spider-exposure. So even if all the steps someone took eased their peace of mind and made them better off in this regard, calling this rational behavior is a mistake. Or, as you mention, someone might spend obscene amounts of money to insure themselves against an imagined danger like the zombie apocalypse. (I take it that this is the point you were making, because insuring your life, your health and your property for its replacement value is seen as rational behavior.) And it would not be rational to lock people up for life (or kill them) just because it makes us feel better.
      But these examples are importantly different from the desire to avoid harms (or the risk of harm) from exposure to toxins. My spider example is different because we know that the spiders in question are harmless (and the evidence for this is readily available). In many cases the effects of certain chemicals and additives are known and have been demonstrated to be harmful even in small doses (dose-response assessments show that there is no safe dose of some substances like BPA and mercury). If the arachnophobe had evidence that the spiders in her house were brown recluses or black widows, then we would no longer regard her as irrational for bringing in the anti-spider swat team.
      In other cases, dangers are unknown because there are conflicting scientific reports about a substance's safety (e.g. GMOs) or because the substance has not been adequately studied (e.g. nano-particles). So if the risk from the spiders were unknown, it would seem rational to err on the safe side and get rid of them, provided doing so were not too costly or difficult.

      The prison example is also importantly different. The example is one of a public policy and not a personal policy. The peace of mind a person may get from imprisoning or executing another person is not adequate moral justification for imprisoning or executing others.

      As far as your first point, are you saying that if I factored in benefits along with the risks, then I may still have a linear relation between risk and the value of aversion? My target here is those who fail to consider benefits and only look at risk. So on social media people will make fun of chemophobes who buy organic vegetables but commute through Los Angeles traffic. For example, Cohen and Lee, in their article, “rank 54 health risks solely according to their decreasing probability” and “claim that the ordering in their table should be society’s order of priority for health and safety” (Shrader-Frechette, Risk Analysis and Scientific Method, p. 161). Cohen and Lee fail to take social benefits into consideration.

      You are right that it is possible to include benefits with the risk to get an expected value and this does not show that rational aversion is not a linear function of expected value.

  3. Beth,

    Thanks for this provocative post.

    While I, like Randy, agree with Dan’s initial reply above, what caught my eye most was your second point about the moral difference between imposed and consented-to risks.

    However, unlike Randy’s speculation, I suspect your answer to this worry will not involve the concept of resentment.

    Just to track where my suspicion enters in, let me repeat the first four sentences of your second point:

    “…it is sometimes rational to view risks non-linearly if the risks are not morally equivalent.”

    So far, so good.

    “It is morally important how one comes to be exposed to a given risk.”

    Again, so far, so good.

    “In other words, there is an important moral difference between risks that are imposed on us and risks that are voluntarily chosen.”

    OK, this is where I am slightly tempted to get off the bus.

    “It seems perfectly rational to be much more averse to risks that are imposed on us without our consent than other risks that are voluntarily accepted.”

    OK, this is where I am strongly tempted to get off the bus.

    But there are ways I might be induced to stay on the bus. Randy suggested “resentment”. But here is another one: The volenti maxim. And I quote:

    “Here, Mill appears to endorse the maxim volenti non fit injuria, which he glosses in Utilitarianism as the doctrine “that is not unjust which is done with the consent of the person who is supposed to be hurt by it”…As this gloss makes clear, it is not that one cannot be hurt by something one has consented to or freely risked. Rather, when one has knowingly and willingly risked something harmful, one cannot legitimately complain when that harm comes home to roost.” (David Brink, Mill’s Progressive Principles, Oxford University Press, 2013, page 175.)

    Perhaps something like the volenti maxim explains why large waves are not feared in the same way by fishermen who endure them and professional surfers who seek them out on purpose.

    Similarly (?), perhaps you view the risk of a person growing up in Flint with no knowledge of their water problems differently than a risk-seeking person who decides to move to Flint precisely to see what happens when they ingest a lot of its tainted water.

    But here’s a lingering worry about bringing in volenti here: volenti does not really address the riskiness of the risk, does it? As the last two sentences I quoted from Brink make clear, volenti has to do with the moral badness of the risk.

    But wait: that just proves the point you were trying to make originally, right?

    So…maybe I’ve just talked myself into staying on your bus after all.

    1. Hi Russell, I love the fishing/surfing analogy, or, even more apt, this one: Maverick's surfer vs. tsunami victim (the tsunami victim hasn't even implicitly consented to the dangers posed by waves). The question that you, Matt, and Randy raise is: is it rational to regard equal harms unequally?

      So let’s say someone moves to Flint on purpose in order to drink the tainted water. She suffers health effects that are physically equivalent to those of someone who lives in Flint and was harmed unknowingly. Is it rational for these people to assess the effect the pollution had on their wellbeing differently? I’m inclined to think that even if you adjust for the effect of depression, rage and helplessness in the person who was harmed without consent, there is a violation of autonomy, or a violation of the person's rights as a citizen, that is hard to include in the well-being calculus.

  4. Hi Beth. Thanks for the interesting post. Lots of good stuff here.

    I think I agree that there can be some reasonable circumstances under which to adopt a non-linear risk function. I'm with Randy though in being skeptical that this is what's happening with many people who express these fears. There is a kind of risk mistake that many people are guilty of making in these cases that you don't seem to consider. There are many cases where people apply different, inconsistent thresholds of acceptable risk to cases that should be treated similarly. So in your terms, for example, imagine a case where two equivalent risks are imposed on someone without their consent, as they arguably are with unlabeled GMOs, but people refuse one risk while accepting the other.

    We know from research like Kahneman's that you mention, that the recency, vividness, and ease of recall of a case disproportionately affect the amount of weight we give to a consideration when they shouldn't. As you say, "it is a mistake to assume that because a particular risk probability is small, the value of avoiding the risk is also small," but it would be a mistake, in two cases where the risk is small and the value of avoiding the risk is comparable, to act to avoid one of the risks and not act comparably to avoid the other. Such an agent is being internally inconsistent, and cases like this are common enough to warrant some skepticism about those agents' risk-aversion policies.

    Great, thought-provoking post.

    1. This is a good point, Matt. I used to work at a cosmetics shop (L'Occitane) in the mall, and I would make fun of the people who came in and asked about the parabens (used as preservatives) in our products while at the same time sipping a huge soda. Whether that soda was diet or regular, its detriments were far worse than anything the parabens in the products we sold could cause. I think we would both agree that this is irrational behavior.

  5. Interesting post, Beth! This is definitely the sort of issue I wish philosophers would publicly engage with more. I'd love to hear your thoughts on another factor which may be at play in the risk assessments of so-called chemophobes, namely aretaic concern (I'm drawing here on work by people like Dan Hicks, Alvin Goldman, Paul Thompson, and Whyte & Crease's 2010 paper).

    The argument runs like this: various experts present chemicals in certain quantities and combinations as safe, either explicitly or by virtue of their being legal for sale. Many people in our society are unable to understand the science behind particular chemicals' safety or lack thereof, and are unable to assess the putative experts' technical proficiency or lack thereof. Given these, those members of the public must assess the trustworthiness of the experts and their claims through ordinary means of character evaluation. For some people in our society, those experts are not seen as trustworthy by those ordinary standards. This could be due to a perceived conflict of interest (e.g. they're paid by the people whose products they're evaluating), or due to previous betrayals by experts in the past (e.g. the low level of trust by some members of the African American community of public health officials due to past abuses, see Thomas & Quinn 1991), or any number of other good and bad reasons. Due to this low trust, they view the assurances of those experts with deep suspicion, and either disregard them when assessing the safety of particular products, or have them actively count against a product's likely safety. Without those experts, many so-called chemophobes base their decisions on people they do find trustworthy for ordinary reasons -- mothers might trust other mothers who seem to be doing well with their children, religious people might trust religious leaders in their community, people living in a tightly knit community might trust conventional wisdom, and so on.

    If aretaic concerns about the trustworthiness of experts are part of the issue, then we have a powerful argument against the colorful meme you bring up in your article. Tricking people into thinking something is dangerous, then laughing at them for being too stupid to realize it was just water or human blood, is hardly a good way to improve people's poor opinion of your character.

    Again, really interesting post!

  6. Ian, thanks. Your comments cut to the heart of the issue. I believe that what is going on under the surface is exactly as you say: The chemophobe is suspicious of special-interest science and of the power of lobbyists to influence the FDA and USDA (e.g. the fact that a dairy group is still pictured in the USDA's food guide is clearly the influence of the dairy lobby and not of the best research in nutrition). The chemophobe doesn't trust that the long-term effects of allowable levels of pollution, pesticide residue, preservatives and other chemicals will be benign. The chemophobe meme makes fun of these people for their distrust of the scientist and makes this distrust out to be a sign of ignorance and an anti-science stance.
    Your point that there are segments of the population who have reason to distrust the experts is a good one. It is unfortunate that some experts in the pocket of special interests (e.g. UMass toxicologist Ed Calabrese, who takes hundreds of thousands of dollars from chemical companies and finds through his research that certain doses of these chemicals are good for you, or the Smithsonian Institution’s climate change denier Willie Soon, who was discovered to have taken $1.25 million from the Koch brothers) have undermined the public’s trust in science. When trust is broken, people react by throwing "the baby out with the bathwater": they unfortunately distrust all science and turn to other mothers or their religious leaders for (what has turned out to be bad) advice on vaccinating their children, for example.
    The chemophobia meme only deepens this distrust, as it exploits norms of honest communication. There is good science and there are good scientists, and they need to win the public back. As a recent article in Scientific American put it, we should not let “chemophobia-phobia” poison the public's perception of science.