Sunday, September 29, 2013

The very idea of Folk Psychology

by Thomas Pyne

Suppose that the phenomena historically associated with demonic possession can be explained as psychotic symptoms.  That would not tempt us in the slightest to adopt reductionist theoretical identities like:
  • Belial = Psychotic Condition X
  • Asmodeus = Psychotic Condition Y
Instead we would just say, “There are no demons.”  And eliminate them from our ontology.

What makes a sort of entity A a candidate for an eliminativist program rather than reduction via theoretical identification?  Two conditions:  (i) A does no work in a literal and complete account of Everything That Is So; (ii) A has dubious credentials.  That is, we have reason to suppose that our acceptance of A was based on some cognitive error, or confusion.  An eliminativist program then must account for the error by which we came, mistakenly, to think that there were A’s. 

Contemporary Eliminative Physicalism has been conscientious in its attempt to meet both conditions.  First condition: it claims that attributing mental states is not needed in a literal and complete account of the world. Rather, the place of those attributions will be taken, as Paul Churchland puts it, by the employment of the conceptual scheme of a matured neuroscience.  It’s not that, say, ‘believing’ will be revealed as a brain process; it’s that there is no such thing as believing.  With the science-based conceptual scheme we will be able to talk about what is really going on in the brain instead.   Adoption of the new scheme in place of the old will constitute a “quantum leap in self-apprehension.” (It will accomplish that, of course, only if our ordinary mental attributions really don’t do any work.)

Second condition:  our traditional attribution of mental states is consequent upon a conceptual scheme that embodies a mistaken and inadequate theory.  This conceptual scheme, “Folk Psychology,” is the same kind of error or confusion as invoking demons to explain the voices schizophrenics hear.

The two conditions on an eliminativist program are not independent. If it turns out that accepting a sort of entity A is not, after all, a cognitive error or confusion (which is required for meeting the second condition), this weakens our grounds for thinking that A will not figure in a literal and complete account of Everything That Is So.

This trope of characterizing our mental concepts as ‘Folk Psychology’ should be subjected to sterner questioning than it usually is. In particular we should question the assumption that our ordinary mental concepts form a theory. Just on the face of it, this is an implausible piece of historical revisionism, and I have thought so from the very first time I encountered the phrase.

In any language with which I am familiar, the common verbs for cognitive activity are of the same antiquity, and are as much semantic ‘roots,’ as the verbs for other common activities. Liddell & Scott’s Greek-English Lexicon thoughtfully prints semantic roots in caps. GEN (‘become’: the root of ordinary Greek epistemic terms, e.g. gnosis), BOL (‘desire’ or ‘intend’), and PEITHO (‘overcome,’ ‘persuade,’ or in the middle voice ‘believe’) are as basic to the language as EDO (‘eat’), PNEO (‘breathe’), BDEO (‘fart’), and BINEO (‘mate’). In Old English ‘think,’ ‘ween,’ and ‘deem’ are “four-letter” words: as ancient as ‘walk,’ ‘sleep,’ and ‘shit.’

The best abductive explanation of this fact is that mental terms, like the other terms, designate common ordinary human functions and actions.  There is no particular distinction made between ‘physical’ actions and functions and ‘mental’ ones.  Eating, sleeping, farting, thinking, believing, and desiring are all just Stuff People Do.

To describe someone as ‘believing that it will rain’ or ‘wanting to lie down’ is not to offer some sophisticated – though mistaken – explanation of what they’re doing; it’s simply to describe it. What is a piece of philosophical sophistication is distinguishing between the ‘mental’ and the ‘physical’ in a way that makes such attributions seem conceptually troublesome. But this philosophical sophistication doesn’t license our reading that distinction back into our ordinary conceptual scheme.

To use an analogy, ‘Zeus’s Spear’ is an explanatory concept (in ‘Folk Meteorology’) for a more basic phenomenon, lightning. ‘Lightning,’ however, is not a term of Folk Meteorology: it does not convey an explanation of anything. It names the phenomenon to be explained. Likewise, there is no more basic phenomenon that ‘believe’ serves to explain: it is the phenomenon. ‘Believe’ is like ‘lightning,’ not like ‘Zeus’s Spear.’

Eliminative physicalism regarding the mental became a popular strategy again in the ’80s and ’90s, when it grew increasingly clear that reductive physicalism was never going to work. But candidates for eliminativist strategies are entities with dubious credentials, our belief in which is based on confusions. Thinking, desiring, and believing hardly come with dubious credentials. They are common human functions, among the most obvious and humdrum features of our being in the world.

They are, when you stop to think about it, the least likely candidates imaginable for an eliminativist program. After all, there are no philosophers trying to eliminate ‘shit.’

That's because it makes an indispensable contribution to a literal and complete account of Everything That Is So.

Thomas F. Pyne
Department of Philosophy
Sacramento State

Sunday, September 22, 2013

In which I compare myself to God

by Kyle Swan

There are still many people, mostly outside the academy, who think that moral and political obligations are tied to divine commands. People should (not) do certain things because God says so. This would mean that God has practical authority over people. He makes it the case that people have obligations by simply issuing a command. Or, what I think would be roughly the same thing, God can create reasons for people to act, reasons they didn’t have before, by simply issuing a command.

For example, the ancient tribes of Israel presumably didn’t have normative reason to avoid eating bbq baby back ribs before God said not to eat them. But, according to this account of divine authority, they acquired such a reason when God declared pork unclean. Moral philosophers often talk about this kind of reason being external, because the source of the reason is external to the agent at whom the claim is directed, or because the claim is grounded in such a way that the motivational states of mind of that agent are irrelevant. Perhaps many of the ancient Israelites really liked bbq baby back ribs. Too bad.

Here’s another example: if you take a class from me you have to write an assigned paper. Say I assign a paper on Hobbes. You thereby acquire a reason to write a paper on Hobbes. If I instead assign a paper on Rawls, you acquire a reason to write a paper on Rawls. I have practical authority (within this relatively limited domain) over you. Much like God (!) I create a reason for you to act a certain way, a reason you didn’t have before, by simply requiring the assignment. You don’t want to write a paper on Hobbes? Too bad.

Maybe there’s a difference here between God and me. The practical authority I have over my students is contingent on their having signed up for the class. They have voluntarily placed themselves under my (relatively limited) authority. If I assigned a paper on Hobbes to my mail carrier, she wouldn’t thereby acquire any reason at all to write it. But those who review my syllabus, see that there will be paper assignments, and sign up for the class agree to submit to my determinations about the content of those assignments. They presumably do this because taking the class somehow connects up with goals they have or things they care about. So they have internal reason to do it. That seems like an important difference.

I’m not sure these cases really are conceptually different, though. Perhaps God’s authority is similarly contingent, and people’s reasons to comply with his rules similarly grounded in their motivational states. Here’s a section of the narrative where God hands down his law to the ancient Israelites:

Exodus 19:3 Then Moses went up to God, and the LORD called to him from the mountain and said, “This is what you are to say to the descendants of Jacob and what you are to tell the people of Israel: 4 ‘You yourselves have seen what I did to Egypt, and how I carried you on eagles’ wings and brought you to myself. 5 Now if you obey me fully and keep my covenant, then out of all nations you will be my treasured possession. Although the whole earth is mine, 6 you will be for me a kingdom of priests and a holy nation.’ These are the words you are to speak to the Israelites.” 7 So Moses went back and summoned the elders of the people and set before them all the words the LORD had commanded him to speak. 8 The people all responded together, “We will do everything the LORD has said.” So Moses brought their answer back to the LORD.

This looks a lot like a summary of a contract (or covenant). There’s a brief preamble, and then promises are made on both sides. The terms are reviewed and accepted, and at least appear to be contingent on that acceptance. So suppose the people of Israel in verse 8 had instead said something like, ‘Ummm… thanks for all that, and we really appreciate your offer, but no thanks’? Plausibly, in that case they wouldn’t have had normative reason to comply with all of God’s rules, and God wouldn’t have had the standing to demand compliance or to punish them for not complying. The same plausibly goes for surrounding nations that weren’t party to this covenant. The Edomites could eat all the bbq baby back ribs they wanted. It would have been puzzling for the Israelites to demand of the Edomites that they not eat bbq baby back ribs and to hold them accountable if they did. Just as puzzling, perhaps, as me demanding of my mail carrier that she write a paper about Hobbes and holding her accountable when she doesn’t.

I’m not a theologian (though sometimes I try to fake it) and I don’t have too much more to say about the ancient Israelites. But I think the narrative illustrates important things about the social contract tradition, current debates about the nature of practical reason and, perhaps most of all, just how difficult it can be for someone to come to have practical authority over another person. 

Kyle Swan
Assistant Professor
Department of Philosophy
Sacramento State

Monday, September 16, 2013

Ignoring the negative

by Matt McCormick

“Isn’t it weird how celebrity deaths always come in threes?”
“I’m telling you, Asians are bad drivers.” 
“I know it’s not politically correct, but it's true, women just aren’t good at science.”
“I swear I have special dreams.  I dreamt the night before that my mom was going to have a car wreck and she did.”

Confirmation Bias is the mistake of selecting evidence that corroborates a pet hypothesis while ignoring or neglecting evidence that would disprove it. It’s the mother of all fallacies. And the reason it persists is that it feels so right. When you’re making the mistake, the conclusion you’re drawing has that shiny aura of truthiness to it.

Humans are guilty of committing it in a wide range of circumstances. At the end of each semester, many students, including students in my (Prof. McCormick’s) Critical Thinking and Theory of Knowledge courses, where we study confirmation bias extensively, blunder into it. They get a grade for the course that is surprisingly low and send an email to their professor asking to know what happened. As far as they knew, they were doing great in the course. They recall getting an A on an assignment, doing pretty well on the midterm, and feeling pretty optimistic, so they can’t understand the low grade. Here are a couple of real emails:

Student Email 1: I just checked my grades for the Spring semester and was surprised to have earned an F. I completed the major assignments for the course and did well on the midterm (90%) and well on the final (85%). I know I didn't participate in the online forum as much as was required but I'm still confused about the grade. I took the class material seriously and did my best on every assignment assigned.

Professor McCormick (note the zero scores in the list below): Here are the grades I have for you. The syllabus gives the details about the grade structure. Check the math and check your returned assignments to make sure it's all right. If there's a clerical error, I'll fix it right away:

Question Sets: 0, 82, 0, 75, 0, 95 (6% each)
First paper: 78
Midterm: 90.5
Second paper: 85
Final: 85
Outside projects: 7/8
Google Group: 0/8
Attendance and participation: 0/8

So between the skipped question sets, the Group discussion and attendance, you gave up 34% of the grade. Even if you were making an A on everything else, that would put it down to a D.
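The arithmetic here can be checked in a few lines. This is just a sketch of the calculation above, using the weights stated in the grade list (6% per question set, 8 points each for the Google Group and participation categories):

```python
# Percentage of the course grade forfeited by the zero-score items above.
skipped_question_sets = 3 * 6   # three question sets skipped, at 6% each
google_group = 8                # 0/8 on the Google Group discussion
participation = 8               # 0/8 on attendance and participation

forfeited = skipped_question_sets + google_group + participation
print(forfeited)        # 34 -- the 34% of the grade given up
print(100 - forfeited)  # 66 -- even perfect work elsewhere tops out at a D
```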

Student 2: I'm emailing you in regards to my final grades. I was hoping you could provide a detailed summary of my grades for the semester so that I can understand how I received a D. I felt as though I did fairly well, particularly improving on the more major assignments, so I would just like to know how I still failed to pass. If you could email me a detailed summary of my grades, I would greatly appreciate it. Thank you.

Professor McCormick: Yeah, I was disappointed in your grades too. It seemed to me that you are capable of doing much better work, and being more responsible about turning stuff in. Here are the grades I have. Check the math with the grade structure on the syllabus and let me know if there is a clerical error asap: 

Question sets: 76, 0, 82, 0, 85, 56, 80 (6% each)
Evil paper 65
Midterm 68.5
Second paper: 75
Final exam: 85
Outside Projects: 6/8
Google Groups: 4/8
Attendance and participation: 6/8

So the skipped question sets took 12% off of your grade. You got a D on the first paper and didn't take the opportunity to rewrite it that I gave the class. You could have brought that up substantially. The Google Group points would have helped too since your overall score came out at 68%.

When we commit confirmation bias, we cherry pick the evidence that suits us. The student actively remembers the good grades, but missed assignments and low scores are forgotten.  Someone picks out the bad Asian driver, or the woman who does poorly at science, and then uses that to fortify their mistake. 

One more example: over 50% of people think they’ve had prescient dreams or premonitions. So suppose that you have 20 dreams a night, 365 days a year, for 10 years. That’s 73,000 dreams. Which ones are notable and remembered? The ones that seemed to have something to do with what happened the next day. The dream you had that seemed to anticipate your mother’s car wreck leaps out in your memory as an extraordinary coincidence. In China, there’s a saying, “No coincidence, no story.” But more importantly, there are 72,999 dreams that weren’t special or notable.
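The dream count is simple arithmetic. A minimal sketch, using the rate assumed above (20 dreams a night, every night, for a decade):

```python
# Total dreams over ten years at the assumed rate of 20 per night.
dreams_per_night = 20
nights_per_year = 365
years = 10

total_dreams = dreams_per_night * nights_per_year * years
print(total_dreams)  # 73000
```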

Clearly, having an accurate and objective grasp of the relevant evidence would serve us well. We don’t want to ignore evidence indicating something negative, disastrous, or dangerous because it doesn’t suit what we want to be true. Imagine if a doctor acquired a skewed view of the evidence concerning a potentially fatal disease this way. Suppose the Secretary of State ignored significant negative indicators in the behavior of an aggressive and hostile foreign country. Suppose a potential employer asked you how you did in your Critical Thinking course in college, and then she checked your transcripts against your distorted memory. Suppose you spend thousands of dollars over the years on losing lottery tickets because the occasional wins stick out in your mind so prominently, while the losses are forgotten. Suppose you spend time praying to God frequently, hundreds or thousands of times in your life, and on the rare occasion when something vaguely resembling what you prayed for came true, you count that as an answered prayer, while ignoring the thousands of misses.

Matt McCormick
Department of Philosophy
Sacramento State

(Note: A version of this piece was published on Matt's own blog Atheism: Proving the Negative on 6.4.13)

Sunday, September 8, 2013

Feeling and believing

by G. Randolph Mayes

Philosophers are the designated defenders of reason, and for millennia we have carried out this charge with the understanding that our arch enemy is emotion. This view is plausible and easily motivated. All of us can point to episodes of fear, lust, pride and disgust overwhelming our ability to think clearly and behave rationally. But to cast emotion as the enemy of rationality on this basis alone is like diagnosing the gut as the enemy of digestion because it sometimes produces cramps, dyspepsia, vomiting and diarrhea. We need to understand the function of emotion before we can know whether to treat it as a friend or foe of reason.

Recent work in psychology and neuroscience has begun to show that emotion is not hostile to rational decision-making, but integral to it. One of the first to guess this correctly was the greatest philosopher of the modern period, David Hume. Hume, as you may know, argued that reason alone has no power to motivate, baiting his opposition with pithy polemics like the now (in)famous:
Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them. 
'Tis not contrary to reason to prefer the destruction of the whole world to the scratching of my finger.
Philosophers have since felled forests attempting to unpack the precise meaning of pronouncements like these, but the gist is clear enough.  Hume advanced an empirical hypothesis, viz., that emotion (passion, desire) is what causes all human behavior. Hence, it is impossible for emotion and reason to be in opposition, and it is impossible to perform any action on the basis of reason alone.

Hume's hypothesis turns out to have been correct to some degree.  For example, as a result of the work done by neuroscientists like Antonio Damasio we have learned that people who have certain forms of brain damage can reason well while having no ability to make rational decisions. This is roughly because the damage is to brain structures that process and transmit our emotions to those that produce reasoning. Afflicted people will reason expertly but not act, simply because they never get the emotional input that is required to pull the trigger.

On the other hand, Hume was wrong in thinking that emotions and reason cannot come into conflict, and this is because he did not grasp that our emotions are themselves a means of conveying the output of unconscious inferential processes. (To be fair, such a view would have been received as incoherent at the time; the unconscious mind was, at best, a paradoxical plaything of poets, not natural philosophers.) According to what we now call dual process theories of cognition, human beings have two systems for making inferences and producing behavior. System 1 is responsible for the rapid, intuitive, effortless, massively parallel and mostly unconscious inferences that we need to survive in the natural and social world. Think, for example, about the amazing amount of information you immediately infer from a fleeting shadow or facial expression. System 2 is the laborious, conscious, serial and highly flexible process of inference we associate with calculation and conscious reasoning.

Our emotions, intuitions, hunches and gut feelings are primarily associated with the informational outputs of System 1. System 2, by contrast, is a capacity, perhaps unique to humans, to monitor the outputs of System 1 and to try to correct them when they go awry.  System 1 and System 2 are both prone to error, but for different reasons.  System 1 errs because rapidity is achieved by imprecise methods like association, stereotyping, bias and instinct.  System 2 errs primarily because it requires sustained attention and effort to perform properly.

Without the information supplied by the rapid and typically trustworthy calculations of System 1, reason would be swamped with work it is incompetent to perform. (This, for example, is the predicament of autistics, who often have very high IQs, but have extraordinary difficulty processing language.) It is true that, like our digestive tract, our emotions sometimes run amok and genuinely interfere with the normally smooth functioning of the mind. It is also true that System 1 does not know its own limitations. (Hell, it does not even know it exists!) But without it we would be lost.

What does all this mean for philosophical practice? Here are a few suggestions.

First, philosophers need to finally and fully reject the rationalist conceit that the best of us are people who draw conclusions and make decisions on the basis of reason alone. Vulcans are no more physically possible than philosophical zombies. All rational inference ultimately depends on emotional feedback. When the conclusions we reach through careful ratiocination don't feel right, we philosophers have a strong inclination to reject them, just like anyone else. For too long we have obscured this by calling the feelings philosophers appeal to 'intuitions' housed in a mythical rational region of the philosophical mind called the 'intellect.'

Second, we need to try to stop being frustrated (an emotional reaction :-) when people don't change their minds and their behaviors in response to arguments (ours, of course) they cannot refute. This is just not the way the normal well-functioning mind works. People are as prone to being misled by delusive reasoning as they are to being blinded by the strength of their feelings, and it is profoundly unwise to automatically privilege one over the other in any categorical sense.

Finally, we need to get comfortable with a vocabulary that explicitly grants emotional reports significance in the epistemic arena. My own view here is that we should learn to draw a clear distinction between our belief that a proposition is true and our feeling that it is true, and not just for the purpose of dismissing the latter from serious philosophical discussion. I would like to constrain the idea of believing that something is true in such a way that it indicates a System 2 inclination to assent to a proposition on the basis of careful consideration of explicitly formulated evidence. By contrast, I would constrain the idea of feeling that something is true to an inclination of System 1 to assent to a proposition on the basis of inferential processes and information, much of which may not be consciously available.

As I see it, one of the primary benefits of this way of speaking is that it reduces our incentive to misdescribe and rationalize our feelings as evidentially based beliefs, just to get them taken seriously by others.  If we respect feelings of truth and falsity from the beginning, then we can conduct more constructive inquiries aimed at feeling what we believe and believing what we feel.

Perhaps surprisingly, our traditional conception of the rational agent does not suffer greatly from giving emotion its due. Recognizing that our feelings can be important indicators of a truth that has eluded our reasoning does not in any way give feelings a veto when a conflict arises. When we just don't feel right about a conclusion produced by reason, the proper response is more reason, not more feeling. Sometimes System 2 will discover that System 1 was indeed picking up on evidence lurking under the surface. But other times, especially as scientific knowledge of the evolutionary and neurological basis of System 1 grows, we will discover that the feeling is an illusion resulting from one of its intrinsic blindspots. In such cases, no matter how strong the feeling, rational inquiry must prevail.

G. Randolph Mayes
Department of Philosophy
Sacramento State