Another cause is at least as potent. The first frame focused the doctor’s mind on the potential gain, whereas the second frame emphasized the potential loss. Human beings appear to be loss averse, which means that it hurts a great deal more to lose something than it feels good to acquire it.
Loss aversion, on the other hand, can actually cause people to become risk seeking. For example, if you've just lost 200 dollars, you may be more than normally attracted to an opportunity to bet 50 dollars on a 10% chance to win 250. This is a bet you'd almost certainly pass up in other contexts, and rightly so, since its expected value is a loss of 20 dollars. What's at work here is our basic inability to ignore sunk costs and make decisions strictly on the basis of their value for the future.
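For readers who want the arithmetic spelled out, here is a minimal sketch of that expected value calculation. It reads "win 250" as a net gain of 250 dollars, which is the reading the stated loss of 20 dollars assumes:

```python
# Expected value of the bet described above:
# stake 50 dollars on a 10% chance to win 250 (net).
p_win = 0.10
net_win = 250   # net gain on the 10% of outcomes where the bet pays off
stake = 50      # amount lost on the other 90% of outcomes

expected_value = p_win * net_win - (1 - p_win) * stake
print(expected_value)  # -20.0
```

On the alternative reading, where 250 is the total returned including the stake, the bet is even worse (an expected loss of 25 dollars), so the point stands either way.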
But until an enterprising X-phi doctoral student does the work, we're free to speculate. The obvious argument for thinking philosophers would show some resistance to framing effects is that we're supposed to be pretty good at detecting logical equivalences, as well as logical and performative inconsistencies. When philosophers wrestle with thought experiments and paradoxes (the Trolley Problem, Qualia Inversion, the Chinese Room, Twin Earth, the Gettier problem, the Raven Paradox), two of our central activities are determining whether descriptions of situations and outcomes are (a) logically equivalent and (b) logically coherent.
On the other hand, there are some features of the philosophical mind that could militate against this happy outcome. The one that stands out most for me is our continuing obsession with certainty. We officially denounced certainty as a criterion of knowledge in the early 20th century, but as a group we still pine for it. We primarily speak the language of proof and necessity, not of evidence and probability. Almost all professional philosophers have taken formal logic at some point in their careers, but comparatively few have studied induction in a serious way. This suggests that we might be even more prone than similarly educated people to risk-based preference reversal.
I am also inclined to agree with Justin Smith that contemporary philosophers are not the most curious people in the world. The X-phi movement may be a harbinger of change, but philosophy still seems to attract a lot of intellectual floogie birds, more interested in the comfort of justification than the thrill of discovery. Mad reasoning skills won't help with framing if your basic instinct is to keep circling until your intuitions are fully fortified. They will do the opposite.
If you are curious about your own sensitivity to framing, consider this example, which bounced off my forehead the first time I read it in Daniel Kahneman's book Thinking, Fast and Slow. The example is from the economist Thomas Schelling, and it shows how our strong moral intuitions can interfere with our ability to think clearly.
Schelling's example is this: Consider the U.S. federal tax exemption for families with dependent children, together with the proposal that the rich be given a larger exemption than the poor. If you are even a slightly liberal-minded person, you probably agree that this is a terrible idea.
Fine, bad idea. But now consider that (just as with our discount vs. penalty example above) the tax code language is arbitrary. We can state an equivalent policy by expressing it as a surcharge that must be paid for each dependent child, up to a certain number, that you lack. (If you don't immediately see this, just consider what it is like to be a childless taxpayer not getting the exemption. You are in effect being charged for not having children.)
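If you'd like to verify the equivalence with numbers rather than intuition, here is a small sketch. All the figures (a flat 10,000-dollar base tax, a 1,000-dollar exemption, a two-child cap) are invented for illustration and bear no relation to the actual tax code:

```python
# Illustrative figures only; none of these come from the real tax code.
BASE_TAX = 10_000    # tax owed before any child adjustment
PER_CHILD = 1_000    # value of the exemption for one dependent child
MAX_CHILDREN = 2     # number of children the adjustment covers

def tax_with_exemption(children):
    """Frame 1: everyone owes BASE_TAX; parents get a deduction per child."""
    return BASE_TAX - PER_CHILD * min(children, MAX_CHILDREN)

def tax_with_surcharge(children):
    """Frame 2: the 'default' taxpayer has MAX_CHILDREN children and owes
    a lower base; everyone else pays a surcharge per child they lack."""
    default_tax = BASE_TAX - PER_CHILD * MAX_CHILDREN
    missing = MAX_CHILDREN - min(children, MAX_CHILDREN)
    return default_tax + PER_CHILD * missing

# The two framings assign identical taxes to every family.
for kids in range(4):
    assert tax_with_exemption(kids) == tax_with_surcharge(kids)
    print(kids, tax_with_exemption(kids))
```

The only difference between the two functions is which taxpayer counts as the baseline; the tax each family actually pays is the same.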
Now consider this proposal for a system of surcharges. Rich people will pay a smaller surcharge than poor people. Well, that's obviously just as terrible an idea. These are both policies that transparently favor the rich over the poor. They are just stated differently.
If this point is perfectly obvious to you, then congratulations! You've just been framed.
G. Randolph Mayes
Professor, Department of Philosophy
Sacramento State University