Sunday, December 8, 2013

How to frame a philosopher

Almost all human beings are susceptible to what psychologists call framing effects. This means that people will reverse their preferences and choices based simply on the way information is presented. For example, a doctor who recommends surgery to a patient based on learning that the surgery has a 90% survival rate may well have cautioned against it if she had learned instead that 1 out of every 10 patients will die. 

In this example the effect is partly due to differential vividness. Information about probabilities and percentages does not typically cause strong emotional reactions in normal people. But when the doctor reads that 1 out of every 10 patients dies, she vividly imagines the death of her own patient.

Another cause is at least as potent. The first frame focused the doctor’s mind on the potential gain, whereas the second frame emphasized the potential loss.  Human beings appear to be loss averse, which means that it hurts a great deal more to lose something than it feels good to acquire it.

You probably know that most people are naturally risk averse, but loss aversion is an entirely distinct (and slightly more contested) phenomenon. People show their risk aversion when they opt for a guaranteed gain rather than an uncertain one of higher expected value. For example, most people would choose a guaranteed 100 dollars over an 80% chance of 150. (The expected value of the latter is 0.8 × 150 = 120.) The insurance industry is entirely dependent on our aversion to risk. Most of us will pay significantly more than the expected value of an insurance policy for a guaranteed outcome of lesser value.

Loss aversion, on the other hand, can actually cause people to become risk seeking. For example, if you've just lost 200 dollars you may be more than normally attracted to an opportunity to bet 50 dollars on a 10% chance to win 250. This is a bet you'd almost certainly pass over in other contexts, and rightly so, since its expected value is a loss of 20 dollars (0.1 × 250 − 0.9 × 50 = −20). What's at work here is our basic inability to simply ignore sunk costs and make decisions strictly on the basis of their value for the future.
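These expected-value claims are easy to verify. A minimal sketch (the probabilities and dollar amounts are just the ones from the two gambles above):

```python
def expected_value(outcomes):
    """Expected value of a gamble given as (probability, payoff) pairs."""
    return sum(p * payoff for p, payoff in outcomes)

# Risk aversion: a guaranteed $100 vs. an 80% chance of $150.
sure_thing = expected_value([(1.0, 100)])             # 100.0
risky_gain = expected_value([(0.8, 150), (0.2, 0)])   # 120.0

# Loss aversion: bet $50 on a 10% chance to win $250.
# Win: gain 250; lose: forfeit the 50-dollar stake.
bad_bet = expected_value([(0.1, 250), (0.9, -50)])    # -20.0
```

The guaranteed 100 is worth less in expectation than the risky 120, yet most of us take it; the bet after a loss has an expected value of −20, yet many of us take that too.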

Obviously people who are susceptible to framing effects are easily exploited. What I'm curious about, though, is how susceptible philosophers are to framing. I'd love to believe we are generally less so, but it's an empirical question, and one that would not be too hard to study. For example, you could take 500 members of the APA and divide them into two groups. Offer one group a discount for early payment of conference fees and inform the other that a penalty will be assessed for late payment. If philosophers were not at all susceptible to framing, then we would see roughly the same proportion of early to late payments in each of these situations.

But until an enterprising X-phi doctoral student does the work, we're free to speculate. The obvious argument for thinking philosophers would show some resistance to framing effects is that we're supposed to be pretty good at detecting both logical equivalences and logical and performative inconsistencies. When philosophers wrestle with thought experiments and paradoxes (the Trolley Problem, Qualia Inversion, the Chinese Room, Twin Earth, the Gettier problem, the Raven Paradox) two of our central activities are determining whether descriptions of situations and outcomes are (a) logically equivalent and (b) logically coherent.

On the other hand, there are some features of the philosophical mind that could militate against this happy outcome. The one that stands out most for me is our continuing obsession with certainty. We officially denounced certainty as a criterion of knowledge in the early 20th century, but as a group we still pine for it. We primarily speak the language of proof and necessity, not evidence and probability. Almost all professional philosophers have taken formal logic at some point in their career, but comparatively few have studied induction in a serious way. This suggests that we might be even more prone than similarly educated people to risk-based preference reversal.

I am also inclined to agree with Justin Smith that contemporary philosophers are not the most curious people in the world.  The X-phi movement may be a harbinger of change, but philosophy still seems to attract a lot of intellectual floogie birds, more interested in the comfort of justification than the thrill of discovery. Mad reasoning skills won't help with framing if your basic instinct is to keep circling until your intuitions are fully fortified. It will do the opposite.

If you are curious about your own sensitivity to framing, consider this example, which bounced off my forehead the first time I read it in Daniel Kahneman's book Thinking, Fast and Slow. The example is from the economist Thomas Schelling, and it shows how our strong moral intuitions can interfere with our ability to think clearly.

Schelling's example is this: Consider the U.S. federal tax exemption for families with dependent children. If you are even a slightly liberal-minded person you probably agree that this is a terrible idea: the rich are given a larger exemption than the poor.

Fine, bad idea. But now consider that (just as with our discount vs. penalty example above) the tax code's language is arbitrary. We can state an equivalent policy as a surcharge to be paid for each child you have short of some stipulated number. (If you don't immediately see this, just consider what it is like to be a childless taxpayer not getting the exemption. You are literally being charged for not having children.)

Now consider this proposal for a system of surcharges. Rich people will pay a smaller surcharge than poor people. Well, that's obviously just as terrible an idea.  These are both policies that transparently favor the rich over the poor.  They are just stated differently.
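Schelling's equivalence can be made concrete with numbers. In this sketch the base tax, the per-child amount, and the reference family size are all made up for illustration; the point is only that the two framings produce identical bills:

```python
def tax_with_exemption(base_tax, children, exemption_per_child):
    """Frame 1: each dependent child reduces your tax bill."""
    return base_tax - children * exemption_per_child

def tax_with_surcharge(base_tax, children, surcharge_per_child, reference_children):
    """Frame 2: a lower baseline tax, plus a surcharge for each
    child you have fewer than the reference number."""
    lower_base = base_tax - reference_children * surcharge_per_child
    return lower_base + (reference_children - children) * surcharge_per_child

# With a hypothetical $10,000 base tax, a $1,000 per-child amount,
# and a reference family of 3 children, the bills are the same
# for every family size.
for kids in range(4):
    assert tax_with_exemption(10_000, kids, 1_000) == \
           tax_with_surcharge(10_000, kids, 1_000, 3)
```

Whether the per-child amount is larger for the rich or smaller for the poor changes nothing about who pays what; only the description changes.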

If this point is perfectly obvious to you, then congratulations! You've just been framed.

G. Randolph Mayes
Professor, Department of Philosophy
Sacramento State University


  1. I don't feel framed, Randy, but he who frames the argument/decision wins. So the best way to get people to do X is to emphasize the disadvantages of not doing X, rather than emphasizing the advantages of doing X? I will try this with students, say, when I want them to take an online quiz, which historically 20% of them skip. Instead of telling them that they can earn 10 points, or 5% of the total points available for the course, if they complete the quiz, I will tell them that they can lose 10 points, or 5% of the total points available, if they do not complete the quiz. Am I using the framing effect correctly? Let's see how well that experiment goes.

  2. Scott, yes, that's about right. But more precisely, every advantage that one can gain by doing x can be framed as a disadvantage that one suffers for not doing x. So choose the latter frame.

    I've considered going further than that and adopting a grading scheme in which they begin with all the points available in the class and then just lose them each time they turn in an imperfect assignment. My main hesitation with that (and it would apply to a lesser extent to your proposal as well) is that it would make the course, on the whole, a good deal less pleasurable for the students. I'm not sure I would enjoy it as much either.

  3. R, you're most likely familiar with the phenomenon that when it comes to moral and cognitive tasks people aren't motivated by reward and punishment. I wonder what the interaction effect would be between the framing effect and the above phenomenon. We can prime people to take more risks or remain more cautious. But, what's actually going on (and what will actually happen) with the quality of their thought (moral/cog) when the task requires something more than just choosing to roll or not?

  4. Vadim, I'm not sure I understand the question. Interaction effects require three variables right? So is the question: does framing affect our ability to assess risk differently depending on the level of punishment/ reward associated with the outcomes? Could you give me an example?

    1. R, that's correct. Here the two independent variables would be framing x external motivation (we can also add in internal motivation); but the dependent variable would not be how we assess risk; rather, it would be the quality of student work. The reason I bring this up is that the exchange between you and Scott seemed to be about framing in the classroom, and I worry that framing combined with a focus on external motivation could impact the quality of student work.

  5. Randy, Thanks for the really clear explanation of the phenomenon, and for some suggestions for how to exploit it... for good, of course! I use this method with grading by starting all students off with an F, 0/100. So they spend the best part of the semester seeing an F in their gradebook as they gradually complete assignments and the grade slowly inches its way upward through the letters. By the end of the semester, they're happy with the C or B or A- they clearly worked hard to get. Averaging the grades, on the other hand, frames the grades as more fluid and dynamic... and, for students who are mathematically challenged, it's too much frustration watching their grade go up, then down, then stall. I've had students say in class how glad they were to finally see that F turn to a D- around mid-semester. This kind of framing can be really helpful in managing frustration, expectations, and, interestingly, achievement.

  6. Chris, that's a very interesting approach. It reminds me of the example Daniel Kahneman uses to explain why Daniel Bernoulli was wrong to think that a person's happiness with an outcome is a function of his final state of wealth. He uses this simple example:

    "Today Jack and Jill each have a wealth of 5 million. Yesterday, Jack had 1 million and Jill had 9 million. Are they equally happy? (Do they have the same utility?)"

    Of course, on Bernoulli's theory their outcomes have the same utility, but in fact Jack is elated and Jill is miserable. And you're applying that insight to grades. What's interesting to think about, though, is what the actual effect of the policy is on motivation. According to Kahneman, Jill is actually more miserable than Jack is elated. So there is a prima facie case to be made that, from an achievement point of view, it would be better to start them with an A than an F. But achievement just isn't the only thing we care about. We also want students to like philosophy enough to take another class.
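Kahneman's point can be put numerically. Under Bernoulli's assumption, utility depends only on final wealth, so Jack and Jill come out identical; under a reference-dependent value function they don't. The log utility and the exponent and loss-aversion coefficient below are illustrative assumptions (the latter two from Tversky and Kahneman's published estimates), not anything in the post:

```python
import math

def bernoulli_utility(wealth_millions):
    # Bernoulli: utility is a function of final wealth alone.
    return math.log(wealth_millions)

def prospect_value(change_millions, loss_aversion=2.25):
    # Reference-dependent: outcomes are valued as gains or losses
    # from yesterday's wealth, and losses loom larger. The exponent
    # 0.88 and coefficient 2.25 are Tversky & Kahneman's estimates,
    # used here purely for illustration.
    if change_millions >= 0:
        return change_millions ** 0.88
    return -loss_aversion * (-change_millions) ** 0.88

# Jack: 1M -> 5M (a gain of 4). Jill: 9M -> 5M (a loss of 4).
same_wealth = bernoulli_utility(5) == bernoulli_utility(5)  # True: identical
jack = prospect_value(4)    # positive (elated)
jill = prospect_value(-4)   # negative, and larger in magnitude (miserable)
```

On these numbers Jill's misery outweighs Jack's elation, which is the prima facie case for starting students with an A rather than an F.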