Sunday, June 22, 2014

A dilemma for Utilitarians

How do Utilitarians understand the judgments they derive from the Utilitarian calculus? Let’s work with an example:
G: It is morally right (required) to give 10% of your income to people who are less fortunate than you are (because doing so would maximize utility).
I’m not interested in whether G is true. Let’s assume it is. I’m interested in how Utilitarians understand G.

One way they might understand G is to say, well, G is a moral judgment. Moral judgments are (by definition) normative. They are claims or directives asserting that there are quite strong reasons for acting. G, then, can be roughly translated as
G1 “You have significantly weighty reason to give 10% of your income to the less fortunate.” Or,
G2 “(You must) give 10% of your income to the less fortunate.”
More: if these claims are genuinely moral, truly justified, then they can’t legitimately be simply shrugged off. So you can add to the translation above by saying
G3 “If you don’t give 10% of your income to the less fortunate, then you’re blameworthy.” And,
G4 “If you don’t give 10% of your income to the less fortunate, then you should feel guilty.”
What if I don’t agree that I have this significantly weighty reason? (I reflect and introspect about my reasons, and that one just ain’t there. How does the Utilitarian know better than me what reasons for action I have?) What if, rather, I think it would be really nice for me to do it, but it isn’t required, and so guilt and blame would be inappropriate? I can understand how nice it would be, but I don’t understand anyone getting angry at me if I don’t. I mean, I know (we’re supposing) that doing this will maximize utility, but why must I do that?

But on this first way of understanding G for a Utilitarian (where G implies G1-G4), my view about what reasons I have doesn’t matter. Only the Utilitarian calculus does (or the Utilitarian’s calculation of net aggregate happiness does). Utilitarians don’t care about my view of what I have reason to do. They don’t care whether G1-G4 make sense from my considered point of view. But that seems oppressive. It seems less like a justified pronouncement of moral authority and more like authoritarianism: Utilitarians just bossing me around, or using moral language to manipulate me into doing what they want me to do.

I have Utilitarian friends (well, can Utilitarians actually be friends?) who will deny that G means G1-G4. Instead they say nothing practical straightforwardly follows from G. It’s good, in some sense, when it happens that utility is maximized, but it turns out that moral judgments aren’t actually judgments about what people have reason to do. “Giving 10% of your income to the less fortunate is ‘right’” these Utilitarians would be saying, “but I don’t know what to tell you to do.” So, people who fail to give 10% of their income to the less fortunate would be failing to maximize utility, but that doesn’t mean they’re blameworthy or that they acted against really strong reasons for action. Therefore, the judgment isn’t authoritative; G is just a claim about what would maximize utility and so, according to Utilitarianism, the “right” action. But you might not have particularly strong reason to do the “right” action.

But if G is understood in this way, instead of being oppressive, it’s simply inert. These ‘moral’ judgments seem like abstract theoretical claims and don’t even claim normativity for themselves. This option is at least odd because morality is typically thought to be normative, playing an important practical role in human relations.

Either way, Utilitarianism seems like a pretty revisionist view, and not in a good way.

Kyle Swan
Department of Philosophy
Sacramento State

Monday, June 9, 2014

The explanatory reductio

“It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so.”

~ Mark Twain

One simple way of identifying the defining characteristic of an explanation is to distinguish it from an argument. Whereas an argument provides reasons we should believe something, an explanation provides reasons why something we already believe actually occurs. In rhyme:
  • An argument says how we know. 
  • An explanation says why it is so.
This is an excellent general-purpose way to think about the nature of explanation (and argument) and I recommend tattooing it somewhere special.

But it isn't the whole story. To appreciate why, let's begin with Twain's lovely remark: Sometimes what we know just ain't so.  Of course, if you are accustomed to philosophical usage, you'll see that this is paradoxical: knowledge implies truth. So, in more quotidian terms, Twain is observing that we are often utterly convinced of things that turn out to be false.

I doubt any reader of this blog will need to be convinced of Twain's fundamental point: that passionate commitment to falsehoods can cause far greater harm than simple ignorance. My point is that what we know that ain't so also helps us appreciate that explanation has a larger role than simply accounting for the facts.

Consider an everyday example. I get a check in the mail saying that I have just won 10 million dollars. Do I even bother to open it? Nope. I can partly defend this by appeal to probability and expected value: it is so fantastically unlikely that 10 million dollars would simply drop out of the sky that the time it would take to inquire is far more trouble than it's worth. But the other, equally important way of accounting for it is explanatory in nature: why the hell would anyone just give me 10 million dollars? If there is no plausible explanation, maybe that's because they didn't.
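The expected-value half of that defense can be made concrete. The probability and cost figures below are my own illustrative assumptions, not anything from the example itself; they just show the shape of the calculation.

```python
# A sketch of the expected-value reasoning, with hypothetical numbers.
# Assumed: the chance the check is genuine is one in ten million, and
# opening and investigating it costs five minutes of time valued at
# $0.50 per minute.

p_genuine = 1e-7          # assumed probability the prize is real
prize = 10_000_000        # dollars, as in the example
cost_to_inquire = 2.50    # assumed: 5 minutes at $0.50/minute

# Expected gain from opening the envelope: probability-weighted payoff
# minus the certain cost of inquiring.
expected_gain = p_genuine * prize - cost_to_inquire
print(expected_gain)      # negative, so opening it isn't worth the trouble
```

On these assumptions the expected gain is about negative $1.50: even a 10-million-dollar prize can't outweigh a small certain cost once the probability is tiny enough.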

Now, what exactly am I doing when I put the matter this way?  Am I accepting it as actual fact that I won 10 million dollars and proceeding to explain said fact?  No, rather, I am engaging in what I will call an explanatory reductio ad absurdum.  

You are familiar with the standard reductio:  We accept a claim for the sake of argument, and show that it implies an absurdity.  In the explanatory reductio we accept something for the sake of explanation, and show that it rests on an absurd understanding of the world.
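The shape of the standard reductio is precise enough to be written down formally. As an illustration of my own (not part of the original point), here is how a proof assistant such as Lean encodes it, where refuting a proposition P just is deriving an absurdity (False) from the assumption of P:

```lean
-- Standard reductio ad absurdum: to establish ¬P, assume P and derive
-- an absurdity. In Lean, ¬P is by definition P → False.
-- Illustration: if P implies Q, and Q is absurd (¬Q), then ¬P.
example (P Q : Prop) (h : P → Q) (hq : ¬Q) : ¬P :=
  fun hp => hq (h hp)
```

The explanatory reductio swaps the roles: instead of a formal contradiction, the "absurdity" is an understanding of the world that nothing else we believe can support.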

Here is one of my favorite New Yorker cartoons, which makes the point beautifully.

So our powers of explanation do not, as our initial definition suggests, exist simply to help us understand independently established facts. Rather, we often engage in explanation in order to determine whether we've got hold of the right ones. 

Some other examples:

Did you see the movie A Beautiful Mind? There is a poignant moment in which John Nash, a (real-life) brilliant mathematician and game theoretician suffering from schizophrenia, uses the power of pure reason to break the grip of his mental illness and convince himself that a young girl who has been appearing to him over a long period of time is not real.

Nash uses an explanatory reductio:  If she is real, why doesn't she get older?

Another. I recently read Philip Roth's book American Pastoral. It is predicated on a sensationally unlikely event: a teenage girl raised in an affluent New Jersey family by two devoted and loving parents (allegedly) bombs a local post office, killing a local man, in an (apparently) loco act of protest against the Vietnam War, and then (unquestionably) disappears. Almost the entire book is an act of excruciating soul searching in which the girl's father attempts to understand how a child he raised could have performed such an abominable act. There is just no explanation for it compatible with his understanding of the world. Consequently, he often confidently concludes (only to reverse himself a moment later) that she simply could not have done it.

Or consider an example from science. You are probably familiar with the Alvarez hypothesis (named after Walter Alvarez and his famous dad, Luis), which claims that the massive extinction at the end of the Cretaceous period (think dinosaurs) was caused by an enormous asteroid impact. Although widely accepted today, it was initially met with scorn by the scientific community, especially by biologists who were deeply committed to the view that evolution necessarily occurs gradually. The gradualists employed an explanatory reductio: the very fact, Professors Alvarez, that you must appeal to Biblical scenarios like this one to explain this sudden massive extinction of the dinosaurs is a very good reason for thinking that no such extinction event ever occurred (i.e., that it is just an illusion created by an incomplete fossil record).

The famous theoretical physicist Richard Feynman characterized the explanatory reductio about as elegantly as one can in this brief clip:

OK, enough; by now you are coming up with examples of your own. They're everywhere. So what is the subtler account of explanation that emerges here?

Try this: Explanation is fundamentally an attempt to improve our understanding of the world. Sometimes accepted facts will challenge our limited understanding and we are forced to develop better theories to account for them. Other times a better understanding of the world will challenge our 'facts', and we are forced to consider the possibility that what we know just ain't so. On those occasions, our understanding will be improved by explaining how we came to be convinced of a falsehood.  As I'll explain in a future post, a very large number of pivotal explanatory episodes in the history of science can be understood in this way, not as the explanation of accepted facts, but as the explanation of universal illusions.

G. Randolph Mayes
Department of Philosophy
Sacramento State