In last week's post Garret Merriam argued that the famous brain-in-a-vat thought experiment is incoherent. In this post I argue that many popular moral thought experiments are flawed as well. I won't argue that they are incoherent; rather, I claim that they tend to presume and promote a flawed understanding of human decision-making.
So first a few words about that:
Human beings are social animals. We have learned to cooperate with one another in order to acquire goods that we cannot easily secure in isolation. In every human society adults are expected to do two things: (1) manage their personal affairs, and (2) respect the rules that make the benefits of cooperation possible.
On any given day we make thousands of almost entirely self-interested decisions. Most are trivial, such as which word I should use to finish this sentence. Some are more significant, such as whether to head for the beach or the mountains on Sunday. In each case I am just doing my best to figure out which of two options will deliver the greatest personal utility. (I am not saying that these actions have no moral significance, only that we do not typically weigh moral considerations when deciding whether to perform them.) We also, though less commonly, make decisions that are almost entirely moral in nature. For example, I may be completely committed to helping you move to a new apartment, deliberating only on how I can be of greatest assistance.
But the more interesting decisions occur when both types of considerations are salient. The magic of well-organized societies is that the two tend to support the same conclusion. When my alarm rings in the morning I haul my butt out of bed and drive to work. This is because it would be bad for me not to, and wrong of me as well. Sometimes, however, these considerations support different decisions. It might be morally better to help you on move day; still, it is shaping up to be beautiful outside and I would much rather go for a hike. In situations like this I have to decide whether to do what is right, or to do what I like.
When doing moral philosophy we sometimes wrongly suppose that whenever considerations of morality and self-interest come into conflict, we ought to do the morally right thing. But this is incorrect. Of course it is tautological that morally we ought to, but we are not always expected to sacrifice our own interests for the benefit of others. Rather, when decisions like this arise, we weigh what we ought to do morally against what we ought to do prudentially, and make the best decision we can. This is easier said than done, especially since these two types of value are not obviously fungible. But it is our task, nonetheless.
Now for the problem with moral thought experiments:
Most moral thought experiments are intended to bring out a conflict between different ways of thinking about morality, typically between a utilitarian and a deontological approach. In the trolley problem, e.g., it is first established that most people judge that one ought to pull a switch that would divert a runaway trolley so that it kills the fewest possible people. Later we see that most of us also judge that one ought not to push a fat man off a bridge to precisely the same effect. Some philosophers argue that this shows that we are prone to making inconsistent moral judgments. Others claim that we must be detecting morally relevant differences between the two cases.
I don't think either of these conclusions is warranted. This experiment, and others like it, are flawed.
The flaw is that the hypothetical situations described in thought experiments like these are presented as if they constitute purely moral decisions. As noted above, such decisions do occur in everyday life, but scenarios like the trolley problem don't approximate them. Rather, they present a decision in which considerations of self-interest and morality are both salient.
This is easily seen in the trolley problem. In each case there is a nontrivial question concerning what is best for society as well as what is best for me. In the switch-pulling version, considerations of morality and self-interest more or less coincide. I calculate that pulling the switch is the best outcome for society and also the result I can live with personally. In the fat man version, these considerations collide. Sure, pushing the man off the bridge will save lives. But I suspect that in the future I will suffer nightmares too intense to bear.
Some may respond impatiently: This is just the familiar sophomoric complaint that the thought experiment is unrealistic. All thought experiments are unrealistic; that is why they are thought experiments rather than real ones. Philosophers know that considerations of self-interest play a role in real life, but we ask that you do your best to bracket these considerations in an effort to develop a clearer understanding of morality.
That is not good enough.
Just how are we supposed to bracket considerations of self-interest in this case? Are we asked to disregard our moral emotions altogether? It is these, after all, that predict a future I wish to avoid. But to do that is to squelch one of our main sources of moral evidence as well. Alternatively, should we allow ourselves to pay attention to the moral emotions, but only for the purpose of moral judgment, taking care (a) not to let considerations of self-interest infect these judgments, and (b) not to confuse the best decision with the morally correct one?
Wow. I have never heard the trolley problem presented like that. It is not at all clear that we have this ability. But if it could somehow be trained up, I'm betting we would end up with a very different data set.
G. Randolph Mayes
Department of Philosophy