Sunday, March 11, 2018

Why I love money

I’ve been a professional philosopher for a while now, and one thing I have noticed is that few of my kind think much of money. In one sense this is to be expected and, perhaps, admired. Philosophy has always been associated with a concern for something larger and more meaningful than the accumulation of material wealth.

But the sense I have in mind is neither expected nor admirable. What I mean is that we seem unduly unfascinated by money. It's as if our distaste for a life lived in pursuit of money inhibits our ability to appreciate its philosophical significance: what it is, how it works, what it suggests about human nature and society. This is unfortunate because money may be the most powerful invention, the most intriguing entity, and the greatest force for human cooperation this side of God. Every philosopher should try to understand why.

Neither Plato nor Aristotle was a huge admirer of money, but they thought about it enough to know that it emerged as a way of overcoming the limitations of barter. They knew the obvious limitation: In a barter economy Socrates may desire a massage from Epione, but Epione may desire no instruction in philosophy from Socrates. So, to get worked on, Socrates must fetch Alcibiades, who desires Socrates’ services, and who will gladly send several jugs of wine to Epione in return.

Another impediment, perhaps more dimly appreciated, is that even if Epione were desirous, she wouldn’t know how much philosophy to charge. Purveyors in a barter economy have to consider the exchange rate between their goods and every other thing they may be willing to accept in return.

Money solves both of these problems. First, money can be used to represent the value of every other good. In a money economy, Epione doesn’t have to compute the value of a massage in units of philosophy. She just needs to state her fee. Second, everyone accepts money as payment. As Yuval Harari points out, this is because “Money is a universal medium … that enables people to convert almost everything into almost anything else.”

If money had never been created, human societies would probably have remained small and commerce between them cautious, limited and infrequent. Money economies facilitated routine transactions, and hence growing levels of trust between complete strangers. They enabled societies to become vastly larger, more complex and capable of previously inconceivable levels of cooperation. As a result, money replicates itself, causing ever more wealth to be created. All this glorious complexity occurred because money vastly simplified the computational tasks individuals needed to perform to exchange goods and services.
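That computational saving is easy to make concrete. The following back-of-the-envelope sketch is my own illustration (the numbers are arbitrary): in a pure barter economy with n distinct goods, traders must keep track of an exchange rate for every pair of goods, whereas a money economy needs only one price per good.

    # Illustrative arithmetic (numbers chosen arbitrarily): barter requires an
    # exchange rate for every pair of goods; money requires one price per good.

    def barter_exchange_rates(n_goods):
        return n_goods * (n_goods - 1) // 2   # one rate per pair of goods

    def money_prices(n_goods):
        return n_goods                        # one money price per good

    for n in (10, 100, 1000):
        print(n, barter_exchange_rates(n), money_prices(n))
    # 10 goods:        45 rates vs.    10 prices
    # 100 goods:    4,950 rates vs.   100 prices
    # 1,000 goods: 499,500 rates vs. 1,000 prices

The gap grows quadratically, which is one way of cashing out the claim that money vastly simplified the computations traders must perform.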

Granted that money does all these things, the question is how. What makes us accept money as payment in the first place?

A simple answer would be that the stuff of which money is composed has independent utility. This is the correct answer for some forms of “commodity money” used in primitive money economies. Wheat, tea, candy, cigarettes, and cacao beans have all been used as money specifically because they have independent value to humans. The same explanation holds, tongue in cheek, for gold coins. As Cortés explained to Moctezuma, Spaniards suffer from a “certain disease of the heart” that only gold can cure.

But advanced economies don’t use commodity money; they use “fiat money.” To understand this, consider the scraps of paper we call dollar bills. It used to make sense to accept these as payment. They were essentially just government-issued IOUs. Theoretically, all of the currency in circulation was redeemable for gold.

Paper bills “store the value” of an existing and universally desired commodity, making it possible to exchange the commodity without having to transfer it physically. Of course, a system like this works only because those who participate in it believe the bills will be honored. (When the government bounces notes, all hell breaks loose.) So this variation on a system of commodity money both requires and fosters even greater levels of trust than before.

A system of fiat money emerges when this cord is cut; when paper, coins, and (now) bits of electronic data are no longer tethered to an existing commodity. This occurred in virtually all major economies during the 20th century. Governments still maintain reserves of gold, but it is officially just another good, not something that underwrites the value of their currency.

Almost all economists believe this was a positive development (though some politicians do not). It seems to be basically working. But why should it? In the past, money was clearly tied to a material reality. Now it is as if money exists only insofar as we believe that it does. Again Harari:
Money is [fundamentally] a system of mutual trust, and not just any system of mutual trust: money is the most universal and most efficient system of mutual trust ever devised. 
Money, then, is one of our most salient examples of an intersubjective reality, a set of entities, structures and processes whose existence and causal powers are palpable, but which would vanish into thin air in the absence of mutual trust and belief. Nations, cities, constitutions, corporations, schools, legal systems, rights, obligations, roles and privileges are all putative examples of such entities.

Intersubjective reality, first described by Kant, is to be distinguished from subjective reality (isolated in a single mind) and objective reality (mind independent). It is a philosophically intriguing category partly because it is difficult to decide whether its members (a) really exist in virtue of being believed to exist or (b) really do not exist, even though the mutual illusion that they do is useful in producing cooperative behavior.

The second interpretation seems clearly appropriate for some kinds of entities. Gods, for example, are imaginary entities whose value is best explained in this way. So are morals, at least to the extent that they are represented as the deliverances of gods.

But money seems distinctly different. It just seems crazy to deny the existence of something that makes the entire world go round.

G. Randolph Mayes
Department of Philosophy
Sacramento State

Tuesday, March 6, 2018

Learning Moral Rules

While evolutionary psychology has led to a proliferation of (often outlandish and essentializing) claims about innate human traits and tendencies, the view that human morality is innate has a long and reputable history. Indeed, broadly evolutionary accounts of morality go back to Darwin and his contemporaries. Views that posit innate cognitive mechanisms specific to the domain of morality (viz., moral nativism) are of more recent vintage. The most prominent contemporary defenders of moral nativism adopt a perspective called the “linguistic analogy” (LA), which uses concepts from the Chomskian program of generative linguistics to frame issues in the study of moral cognition.[1] Here, I present one of LA’s key data points, and propose an alternative, non-nativist explanation of it in terms of learning. 

The data point on which I’ll focus concerns the proposed explanation for certain observed patterns in people’s moral judgments, including in response to trolley cases. In the sidetrack case (fig. 1), most people judge that it would be permissible for a bystander to save five people by pulling a switch that would divert the trolley onto a sidetrack where one person would be struck and killed. However, in the footbridge case (fig. 2), most people judge that it would not be permissible for a bystander to save five people on the track by pushing someone bigger than himself off the bridge into the path of the train to stop it.

The results from cross-cultural studies of the trolley problems and similar dilemmas suggest that subjects’ judgments are sensitive to principled distinctions like the doctrine of double effect, where harms caused as a means to a good outcome are judged morally worse than equivalent harms that are mere side effects of an action aimed at bringing about a good outcome.[2]

To explain the acquisition of these implicit rules, LA invokes an argument from the poverty of moral stimulus. For example, Mikhail argues that to judge in accordance with the doctrine of double effect involves tracking complex properties like ends, means, side effects, and prima facie wrongs such as battery. It’s implausible that subjects’ sensitivity to these abstract properties is gained through instruction or learning. Rather, a more plausible explanation is that humans are endowed with an innate moral faculty that enables the acquisition of a moral grammar (which includes the set of these rules).[3]

I believe that other research from language acquisition and the cognitive sciences more broadly points to the availability of a different explanation of how these implicit rules could be acquired, via learning mechanisms not specific to the moral domain. Evidence suggests that children employ powerful probabilistic learning mechanisms early in their development.[4] With these mechanisms, children are able to form generalizations efficiently on the basis of what might otherwise appear to be sparse data.

Consider the following example from a study of word learning: 3- to 4-year-old subjects who heard a novel label applied to, for example, a single Dalmatian extended the label to dogs in general.[5] When the label was applied to three Dalmatians, subjects extended it to Dalmatians only. In the latter case, though the data are consistent with both candidate word meanings (dog, Dalmatian), the probability of observing three Dalmatians is higher on the narrower hypothesis.
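The inference pattern at work is sometimes called the size principle: examples are assumed to be sampled from the extension of the true meaning, so a narrower hypothesis assigns each consistent example a higher probability, and the advantage compounds with every additional example. Here is a minimal sketch of the idea; the extension sizes and the prior (which favors the basic-level category ‘dog’) are my own illustrative assumptions, not values from the study.

    # Minimal sketch of size-principle learning (illustrative numbers only).

    def posterior(hypotheses, n_examples):
        """Posterior over hypotheses after n_examples consistent with all of them.
        Each hypothesis is (name, extension_size, prior); each example is assumed
        to be drawn uniformly from the hypothesis's extension, so its likelihood
        is 1/extension_size."""
        unnormalized = {name: prior * (1.0 / size) ** n_examples
                        for name, size, prior in hypotheses}
        total = sum(unnormalized.values())
        return {name: p / total for name, p in unnormalized.items()}

    # Hypothetical numbers: 'dog' has a much larger extension than 'Dalmatian',
    # and the prior favors the basic-level category.
    hypotheses = [("dog", 100, 0.99), ("Dalmatian", 10, 0.01)]

    print(posterior(hypotheses, 1))   # one Dalmatian: 'dog' wins, roughly 0.91 to 0.09
    print(posterior(hypotheses, 3))   # three Dalmatians: 'Dalmatian' wins by about the same margin

On these toy numbers, a single labeled example leaves the broad hypothesis in front, but three examples are enough for the narrower hypothesis to dominate, mirroring the behavior of the children in the study.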

I propose that a similar process of inference could account for the acquisition of implicit moral rules. There may be sufficient information contained in the stimuli to which individuals typically are exposed in the course of their early development – including the reasoning and response patterns of adults and peers in their environment – to account for their ability to make such moral distinctions. Consider the act/omission distinction. Cushman et al. found that subjects judge in accordance with what they call the ‘action principle’, according to which harm caused by action is judged morally worse than equivalent harm caused by omission.[6] Children observe this distinction in action. A child may be chided more harshly for upsetting a peer by taking a cookie away from her than for upsetting a peer by failing to share his own cookies, for example. With the probabilistic learning mechanisms under consideration, it may take surprisingly few such observations for children to generalize to a more abstract form of this distinction. Observing the distinction at play in a few different types of scenario may be sufficient for a learner to generalize: to go beyond tracking the distinction in just the particular cases observed and infer a general model that could have given rise to the data they have encountered.

Of course, further investigation is needed to comparatively assess these two proposals. I’ll end by noting that the debate over moral nativism has both theoretical and practical implications. If the non-nativist account is right, this points to a view of our capacity for moral judgment as more malleable and amenable to intervention and improvement than the nativist account suggests. On the other hand, some (though not all) take the nativist account, if correct, to invite a skeptical view about morality.

Theresa Lopez
Department of Philosophy
University of Maryland

[1]Dwyer, S., Huebner, B., and Hauser, M. 2010: The linguistic analogy: motivations, results and speculations. Topics in Cognitive Science, 2, 486–510.
[2] Hauser, M., Young, L., and Cushman, F. 2008: Reviving Rawls’ linguistic analogy. In W. Sinnott-Armstrong (Ed.), Moral Psychology, Vol. 2, The Cognitive Science of Morality: Intuition and Diversity. Cambridge, MA: MIT Press, 107-144.
[3] Mikhail, J. 2011: Elements of Moral Cognition. Cambridge: Cambridge University Press.
[4] Xu, F., and Griffiths, T.L. 2011: Probabilistic models of cognitive development: Towards a rational constructivist approach to the study of learning and development. Cognition, 120, 299-301; Perfors, A., Tenenbaum, J. and Regier, T. 2011: The learnability of abstract syntactic principles. Cognition, 118, 306-338.
[5] Xu, F., and Tenenbaum, J. B. 2007: Word learning as Bayesian inference. Psychological Review, 114, 245–272.
[6] Cushman, F., Young, L. and Hauser, M. D. 2006: The role of conscious reasoning and intuition in moral judgment: testing three principles of harm. Psychological Science, 17, 1082-1089.

Sunday, March 4, 2018

Should the Washington Redskins Change Their Name?

A hot ethics topic in NFL football is whether or not the Washington Redskins should change their name in light of numerous requests to do so from groups such as the National Congress of American Indians and the tribal council of the Cherokee Nation of Oklahoma. Such groups consider ‘redskins’ to be a racial slur.

Records indicate that the first use of ‘redskins’ came in the mid-18th century, when Native Americans (NA) referred to themselves as ‘redskins’ in response to the frequent use of skin-color identification by colonials, who called themselves ‘white’ and their slaves ‘black.’ In 1863, an article in a Minnesota newspaper used the term in a pejorative sense: “The State reward for dead Indians has been increased to $200 for every red-skin sent to Purgatory. This sum is more than the dead bodies of all the Indians east of the Red River are worth.” In 1898, Webster’s dictionary labeled ‘redskin’ as “often contemptuous.” In 1933, the football team’s name was changed to ‘Redskins.’ Similarly, the Oxford Dictionary writes: “In the late 19th and early 20th centuries…use of the term redskin was associated with attitudes of contempt and condescension. By the 1960s, redskin had declined in use; because of heightened cultural sensitivities, it was perceived as offensive.”

The main argument used to support the use of this name relies on opinion polls. The 2004 Annenberg poll and the 2016 Washington Post poll found that 90% of NAs do not perceive the term to be offensive. These results have been used by the team owner, Dan Snyder, and the NFL commissioner to defend the name.

However, one problem is with the questions posed. For instance, similar to the Annenberg poll, The Washington Post asked, “As a Native American, do you find that name offensive, or doesn’t it bother you?” Notice that it still could be that NAs understand the name to be morally wrong or racist, yet don’t find it to be offensive or bothersome. The word ‘offensive’ doesn’t necessarily mean morally offensive. Perhaps NAs maintain a sticks-and-stones-can-break-my-bones-but-words-will-never-harm-me mentality. They are not bothered or, in other words, “offended” by the name, since words will never harm them, but they do find it to be morally reprehensible. The question on the survey needs to use terms like ‘morally offensive’ or ‘racist’ when asking about subjects’ attitudes toward the name. Without this, there are plausible alternate interpretations of the results, and any strong conclusion drawn from the study will be unwarranted.

Also, when uncovering someone’s moral viewpoint, it is important that subjects have all the facts relevant to the case if we want their real judgment. This is standard practice in ethics, where one should have the facts relevant to a situation before making an actual decision on it. Facts, as for a juror in a trial, can change one’s verdict. As noted above, ‘redskins’ is a dated term that has not been in common use since the 1960s due to its racist connotation. It could be the case that most NAs today are not familiar with its history. A more accurate survey attempting to discover this population’s real moral judgment on the use of this name should first provide an accurate and comprehensive history of the use of this word, including that it was used as a racist term to promote genocide against NAs, as indicated above. Once one makes sure that subjects know the relevant history of the issue, participants then should answer the question as to whether they find the use of this name to be morally wrong. As this has not been done, the conclusions in the above studies are not justified.

Additionally, a word that is rooted in hatred and genocide should not be used so trivially as the name of a sports team, regardless of what most NAs believe on the matter. Some acts, like genocide, are so utterly vile that the negative terms associated with them at the time, like ‘redskins,’ should not be used today in the same country for a sports team, proudly marked on fan gear, and uttered in cheers during games of entertainment. The same would hold if a German soccer team wanted to adopt the swastika as its symbol 100 years from now, even if most German Jewish people in that future were morally OK with it. It still would be wrong and should not be done.

Finally, the historical context of the intention behind giving a team such a name matters. Gilbert claims the team was so named in order to honor NAs in general and some NAs associated with the team. However, in a 1933 Associated Press interview, the team’s then owner said he changed the name simply to avoid using the city’s baseball team’s name. Given that the name was widely understood to be a derogatory term during this time, as noted above, I take it that an underlying intention of the use of the name, as with most instances when a team or university adopts a NA name, is to draw on a negative stereotype of NAs as something like savages that are wild, fearless, and warriorlike. They are cast as savage in the way the bears, lions, and other animals that occupy the names of other teams are. The intention is to use a racial stereotype. Whether one can foresee it or not, such a stereotype is harmful to NAs and can also limit what they’re perceived as being capable of, like being kind and intelligent. Hence, the name should be changed. Just as an intention to do good that unintentionally leads to bad consequences can at times be enough to absolve all blame, so too, when dubbing a team, the intention to use a racial stereotype that is in fact racist, whether one realizes it or not, can be all that is needed to affirm that the name should be changed.

John J. Park
Philosophy Department
Oakland University

Tuesday, February 27, 2018

Excuse Me?

“I never knew!” That was my go-to excuse when I was a kid. Whenever I was caught doing something I wasn’t supposed to be doing, I would try to absolve myself from blame by suggesting that I didn’t know that I was doing something wrong. I thought that I shouldn’t be blamed if I didn’t know any better. But excuses come in many shapes and sizes. And in this blog post, I’m interested in a different kind of purported excuse: “I was manipulated!” Is manipulation a legitimate excuse?

A number of philosophers have suggested that being manipulated can excuse one from blame or responsibility. But many of these discussions focus on bizarre thought experiments—involving evil neurosurgeons that can implant desires (Pereboom 2001) or omniscient demigods that can create an evil person by creating a particular zygote under deterministic conditions (Mele 2006). I tend to agree with the sentiment recently expressed by Prof. Merriam that we should be somewhat skeptical about what we can learn through such fanciful thought experiments, but the idea that manipulation diminishes or eliminates blameworthiness can be found in more realistic thought experiments.[1] One of Derk Pereboom’s cases (case 3 in his famous four-case argument against compatibilism) is not so outlandish:

       “[Plum] was determined by the rigorous training practices of his home and community so that he is often but not exclusively rationally egoistic… His training took place at too early an age for him to have had the ability to prevent or alter the practices that determined his character… He has the general ability to grasp, apply, and regulate his behavior by moral reasons, but in these circumstances, the egoistic reasons are very powerful, and hence the rigorous training practices of his upbringing… result in his act of murder. Nevertheless, he does not act because of an irresistible desire.”

Plum ends up committing murder; he kills White for selfish reasons. To make the case even more forceful, let’s stipulate that Plum’s manipulators’ intentions were nefarious. They purposefully raised and trained him this way because they wanted him to end up killing White.

Many seem to think that Plum is not fully blameworthy, or at least less blameworthy than he would have been had he not been intentionally manipulated by some other agents. For some reason, if an agent was influenced by an intentional manipulator then she seems less blameworthy than she would be sans manipulator.

Note that the difference in blameworthiness cannot be accounted for by a difference in the actual psychologies of the agents in question. Empirical tests suggest that people tend to judge X as less blameworthy than Y when X and Y have identical psychologies and perform identical action types, but differ only in their personal histories: X’s psychology was partially due to intentional manipulation and Y’s psychology was not (Phillips and Shaw 2015).

This raises an important question. How can two people with identical psychologies performing identical action types in identical contexts not be identically blameworthy? This is a difficult question for those who claim that being manipulated is a legitimate excuse. If two people have identical psychologies and perform identical action types, then it seems they should both be blameworthy to the same degree. However, if we accept this, then we must deny that manipulation is a legitimate excuse. And if we deny that manipulation is a legitimate excuse, then we have some explaining to do: if manipulated agents are blameworthy, why are we inclined to blame them less when we find out that they’ve been manipulated?

I think that manipulation is not a legitimate excuse. Plum is just as blameworthy for killing White as he would have been sans manipulation. And to meet the explanatory burden of why we’re tempted to think that manipulation diminishes responsibility, I have some suggestions.

First, I think that we downplay the blame of some agents in our search for ultimate blame. When someone is manipulated we take note that the manipulated agent becomes something like a pawn in the manipulator’s game. Suppose X manipulates Y into doing Z. When I ask whether Y is responsible for doing Z, I am tempted along this line of thinking: “It’s not really Y’s fault. X is the one to blame!” I think this line of thinking is misguided since it is possible for there to be plenty of blame and responsibility to go around—both X and Y can be blameworthy. But, when assigning blame, I am inclined to be most angry with the person that is ultimately responsible for whatever happened and I think that this clouds my judgment about the blameworthiness of the pawns.

Second, it’s important to note that the practice of blaming involves the moral emotions; it involves negative attitudes like resentment or indignation—what P. F. Strawson called reactive attitudes. To blame someone is not merely to have a belief about them; it is to take a negative affective stance toward them and regard them as deserving of some form of punishment. But there are other moral emotions, too. If someone is wronged, we feel sadness or compassion for them. And the manipulated agent occupies a strange place for our moral emotions. He is deserving of indignation and resentment, but he is also deserving of sadness or compassion; he is both victim (of manipulation) and victimizer (by committing the wrong that he was manipulated into doing). These moral emotions are in tension and I suspect that the compassionate attitude is inappropriately diminishing the indignation.

So if you don’t like what I’ve said here, you can blame me—even if I was manipulated into writing this.

Timothy Houk
Department of Philosophy
University of California, Davis

[1] Also, in criminal law a defendant’s adverse past is sometimes used as a kind of excuse to suggest that the defendant is less blameworthy or should get a more lenient sentence (Vuoso 1987). Although an adverse past is not exactly the same as being manipulated, I think these excuses share similar features.

Sunday, February 25, 2018

Analogies between Ethics and Epistemology

It’s increasingly common for epistemologists (both formal and traditional) to explore analogies between epistemic justification (rationality, warrant, etc.) and moral rightness.[1] These analogies highlight the normative character of epistemology; they’re also fun to think about.
This post is about a commonly discussed analogy between reliabilism about justification and rule consequentialism. I’ve started to think that reliabilists have good reason to reject this analogy. But I’m not sure how they should go about doing this. Let me explain.

Begin with reliabilism:

S’s belief that p is justified iff S’s belief that p is the output of a reliable belief-forming process.
A belief-forming process is reliable iff its immediate outputs tend—when employed in a suitable range of circumstances—to yield a balance of true to false belief that is greater than some threshold, T.

Compare this with satisficing hedonistic rule consequentialism:

S’s a-ing is right iff S’s a-ing conforms to a justified set of rules.
A set of rules is justified iff its internalization by most people would produce a balance of pleasure to pain that is greater than some threshold, T’.

The similarity in structure between these two theories speaks for itself. Further, it’s standard to assume that reliabilists endorse veritism, the claim that having true beliefs and not having false ones is the fundamental goal in epistemology. Reliabilism, then, might be said to be an instance of satisficing veritistic process consequentialism.

The starting point for many discussions of consequentialism about justification is a simple counterexample to a naïve consequentialist theory (e.g., Firth, Fumerton, Berker, among others). 
According to the naïve theory:

A belief is justified if, of the available options, it leads one to have the highest ratio of true to false beliefs.

Here’s the counterexample (originally inspired by Firth, 1981): [2]

I am an atheist seeking a research grant from a religious organization. The organization gives grants only to believers. I am a very bad liar. The only way for me to convince anyone that I believe in God is to form the belief that God exists. If I receive the grant, I will form many new true beliefs and revise many false ones. Lucky for me, I have a belief-pill. I take it and thereby form the belief that God exists.[3]

According to the naïve theory, my belief is justified. But my belief is obviously not justified. So much the worse for the naïve theory.

This brings me to my interest in—or puzzlement over—the reliabilism/consequentialism analogy. It’s clear that reliabilism renders the intuitively right result in the grant-seeking case, namely, the result that my belief is not justified. The belief-forming process that generated my belief in God—popping belief-pills—is not a reliable one, and for that reason my belief is not justified. So far, so good. But how can the reliabilist, qua veritistic consequentialist, say this? As far as I can tell, this question hasn’t really been discussed. And that seems strange to me. If reliabilists really are veritistic consequentialists, then shouldn’t they give my belief in God high marks?[4] And given the analogy—one that treats “justified” as analogous to “morally right”—wouldn’t this amount to saying the belief is justified?
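One way to sharpen the puzzle is to set the two modes of assessment side by side. Here is a toy sketch of my own (every number and the threshold are invented for illustration): the naïve theory scores the individual belief by the downstream ratio of true to false beliefs it produces, while reliabilism scores it by the track record of the process that produced it.

    # Toy contrast between direct and process-based assessment (all numbers invented).

    def naive_best_option(options):
        # Naive theory: the justified option is whichever leads to the highest
        # ratio of true to false beliefs.
        return max(options, key=lambda o: o["true"] / o["false"])

    def reliabilist_justified(process_true, process_false, threshold=0.75):
        # Reliabilism: a belief is justified iff the process that produced it
        # tends to yield a proportion of true outputs above the threshold.
        return process_true / (process_true + process_false) > threshold

    options = [
        {"name": "take the pill and believe", "true": 200, "false": 6},  # grant-funded research
        {"name": "decline the pill",          "true": 20,  "false": 5},
    ]
    print(naive_best_option(options)["name"])                      # taking the pill wins
    print(reliabilist_justified(process_true=1, process_false=9))  # False: pill-popping is unreliable

The worry is that a theory whose fundamental currency is true belief seems entitled, perhaps even obliged, to care about the first kind of assessment and not just the second.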

One might think that in asking this I’m ignoring an important feature of reliabilism and rule consequentialism, namely, the fact that they are instances of indirect consequentialism. Indirect consequentialists aren’t interested in directly assessing the consequences of individual actions or beliefs, the response goes. Rather, they assess actions, beliefs, etc. indirectly, by reference to the overall consequences of the rules, processes, etc. that generate them.

This point is well-taken. But the problem persists. Satisficing hedonistic rule consequentialism loses its appeal as a consequentialist theory if it doesn’t at least sometimes allow us to break certain general moral rules when complying with them is disastrous (see Brandt 1992, 87–8, 150–1, 156–7). Similarly for reliabilism, qua an instance of veritistic consequentialism, right? If the view doesn’t sometimes endorse jumping at an opportunity like the one presented in the grant case, it’s hard to see how it’s really committed to the idea that having true beliefs and not having false ones is the fundamental goal in epistemology.

So, I suspect the following: [5] if reliabilists are veritistic consequentialists, they must say something awkward about the grant-seeking case (or at least some case like it—maybe the demon possibility I mention in fn. 4). And I don’t think reliabilists should identify my belief in God as justified. Rather, I think they should push back on the reliabilist/consequentialist analogy itself. More specifically, they should deny—or maybe give a sophisticated reinterpretation of—at least one of the following:

      1. Epistemic justification is analogous to moral rightness
      2. Having true beliefs and not having false ones is the fundamental goal in epistemology
      3. If 1. and 2., then reliabilism is the epistemic analogue of satisficing hedonistic rule consequentialism
      4. If 3., then reliabilists have to say something awkward about the grant-seeking case (or some case like it).
And this is where I’m stuck. 1-4 seem quite reasonable to me. Thoughts?

Clinton Castro
Philosophy Department

[1] I’ve contributed to this trend myself, here (see especially section 4).
[2] This case is different from Firth’s; it is closer to Fumerton’s formulation.
[3] Berker thinks these cases can be generalized: “all interesting forms of epistemic consequentialism condone […] the epistemic analogue of cutting up one innocent person in order to use her organs to save the lives of five people. The difficult part is figuring out exactly what the epistemic analogue of cutting up the one to save the five consists in.”
[4] We can play with some details and make it epistemically disastrous to not take the pill—suppose that if I don’t get the grant the philosophy department will sic a Cartesian demon on me.
[5] I don’t think I’ve made an iron-clad case here!