Recent work in psychology and neuroscience has begun to show that emotion is not hostile to rational decision-making, but integral to it. One of the first to guess this correctly was the greatest philosopher of the modern period, David Hume. Hume, as you may know, argued that reason alone has no power to motivate, baiting his opposition with polemical pithiness like the now (in)famous:
"Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them."

and

"'Tis not contrary to reason to prefer the destruction of the whole world to the scratching of my finger."

Philosophers have since felled forests attempting to unpack the precise meaning of pronouncements like these, but the gist is clear enough. Hume advanced an empirical hypothesis, viz., that emotion (passion, desire) is what causes all human behavior. Hence, it is impossible for emotion and reason to be in opposition, and it is impossible to perform any action on the basis of reason alone.
Hume's hypothesis turns out to have been correct to some degree. For example, as a result of the work done by neuroscientists like Antonio Damasio we have learned that people who have certain forms of brain damage can reason well while having no ability to make rational decisions. This is roughly because the damage is to brain structures that process and transmit our emotions to those that produce reasoning. Afflicted people will reason expertly but not act, simply because they never get the emotional input that is required to pull the trigger.
On the other hand, Hume was wrong in thinking that emotions and reason cannot come into conflict, and this is because he did not grasp that our emotions are themselves a means of conveying the output of unconscious inferential processes. (To be fair, such a view would have been received as incoherent at the time; the unconscious mind was, at best, a paradoxical plaything of poets, not natural philosophers.) According to what we now call dual process theories of cognition, human beings have two systems for making inferences and producing behavior. System 1 is responsible for the rapid, intuitive, effortless, massively parallel and mostly unconscious inferences that we need to survive in the natural and social world. Think, for example, about the amazing amount of information you immediately infer from a fleeting shadow or facial expression. System 2 is the laborious, conscious, serial and highly flexible process of inference we associate with calculation and conscious reasoning.
Our emotions, intuitions, hunches and gut feelings are primarily associated with the informational outputs of System 1. System 2, by contrast, is a capacity, perhaps unique to humans, to monitor the outputs of System 1 and to try to correct them when they go awry. System 1 and System 2 are both prone to error, but for different reasons. System 1 errs because rapidity is achieved by imprecise methods like association, stereotyping, bias and instinct. System 2 errs primarily because it requires sustained attention and effort to perform properly.
Without the information supplied by the rapid and typically trustworthy calculations of System 1, reason would be swamped with work it is incompetent to perform. (This, for example, is the predicament of autistics, who often have very high IQs but have extraordinary difficulty processing language.) It is true that, like our digestive tract, our emotions sometimes run amok and genuinely interfere with the normally smooth functioning of the mind. It is also true that System 1 does not know its own limitations. (Hell, it does not even know it exists!) But without it we would be lost.
What does all this mean for philosophical practice? Here are a few suggestions.
First, philosophers need to finally and fully reject the rationalist conceit that the best of us are people who draw conclusions and make decisions on the basis of reason alone. Vulcans are no more physically possible than philosophical zombies. All rational inference ultimately depends on emotional feedback. When the conclusions we reach through careful ratiocination don't feel right, we philosophers have a strong inclination to reject them, just like anyone else. For too long we have obscured this by calling the feelings philosophers appeal to 'intuitions' housed in a mythical rational region of the philosophical mind called the 'intellect.'
Second, we need to try to stop being frustrated (an emotional reaction :-) when people don't change their minds and their behaviors in response to arguments (ours, of course) they cannot refute. This is just not the way the normal, well-functioning mind works. People are as prone to being misled by delusive reasoning as they are to being blinded by the strength of their feelings, and it is profoundly unwise to automatically privilege one over the other in any categorical sense.
Finally, we need to get comfortable with a vocabulary that explicitly grants emotional reports significance in the epistemic arena. My own view here is that we should learn to draw a clear distinction between our belief that a proposition is true and our feeling that it is true, and not just for the purpose of dismissing the latter from serious philosophical discussion. I would like to constrain the idea of believing that something is true in such a way that it indicates a System 2 inclination to assent to a proposition on the basis of careful consideration of explicitly formulated evidence. By contrast, I would constrain the idea of feeling that something is true to an inclination of System 1 to assent to a proposition on the basis of inferential processes and information, much of which may not be consciously available.
As I see it, one of the primary benefits of this way of speaking is that it reduces our incentive to misdescribe and rationalize our feelings as evidentially based beliefs, just to get them taken seriously by others. If we respect feelings of truth and falsity from the beginning, then we can conduct more constructive inquiries aimed at feeling what we believe and believing what we feel.
Perhaps surprisingly, our traditional conception of the rational agent does not suffer greatly from giving emotion its due. Recognizing that our feelings can be important indicators of a truth that has eluded our reasoning does not in any way give feelings a veto when a conflict arises. When we just don't feel right about a conclusion produced by reason, the proper response is more reason, not more feeling. Sometimes System 2 will discover that System 1 was indeed picking up on evidence lurking under the surface. But other times, especially as scientific knowledge of the evolutionary and neurological basis of System 1 grows, we will discover that the feeling is an illusion resulting from one of its intrinsic blindspots. In such cases, no matter how strong the feeling, rational inquiry must prevail.
G. Randolph Mayes
Department of Philosophy