Sunday, February 26, 2017

The trouble with moral thought experiments

In last week's post Garret Merriam argued that the famous brain-in-a-vat thought experiment is incoherent.  In this post I argue that many popular moral thought experiments are flawed as well. I won't argue that they are incoherent; rather, I claim that they tend to presume and promote a flawed understanding of human decision-making.

So first a few words about that:

Human beings are social animals. We have learned to cooperate with one another in order to acquire goods that we cannot easily secure in isolation. In every human society adults are expected to do two things: (1) manage their personal affairs, and (2) respect the rules that make the benefits of cooperation possible.

On any given day we make thousands of almost entirely self-interested decisions. Most are trivial, such as which word I should use to finish this taco. Some are more significant, such as whether to head for the beach or the mountains on Sunday. In each case I am just doing my best to figure out which of two options will deliver the greatest personal utility. (I am not saying that these actions have no moral significance, only that we do not typically weigh moral considerations when deciding whether to perform them.) We also, though less commonly, make decisions that are almost entirely moral in nature. For example, I may be completely committed to helping you move to a new apartment, deliberating only on how I can be of greatest assistance.

But the more interesting decisions occur when both types of considerations are salient. The magic of well-organized societies is that the two tend to support the same conclusion. When my alarm rings in the morning I haul my butt out of bed and drive to work. This is because it would be bad for me not to and wrong of me as well. Sometimes, however, these considerations support different decisions. It might be morally better to help you on move day; still, it is shaping up to be beautiful outside and I would much rather go for a hike. In situations like this I have to decide whether to do what is right or to do what I like.

When doing moral philosophy we sometimes wrongly suppose that whenever considerations of morality and self-interest come into conflict, we ought to do the morally right thing.  But this is incorrect. Of course it is tautological that morally we ought to, but we are not always expected to sacrifice our own interests for the benefit of others. Rather, when decisions like this arise, we weigh what we ought to do morally against what we ought to do prudentially, and make the best decision we can. This is easier said than done, especially since these two types of value are not obviously fungible. But it is our task, nonetheless.

Now for the problem with moral thought experiments:

Most moral thought experiments are intended to bring out a conflict between different ways of thinking about morality, typically between a utilitarian and a deontological approach. In the trolley problem, e.g., it is first established that most people judge that one ought to pull a switch that would divert a runaway trolley so that it kills the fewest possible people. Later we see that most of us also judge that one ought not to push a fat man off a bridge to precisely the same effect. Some philosophers argue that this shows that we are prone to making inconsistent moral judgments. Others claim that we must be detecting morally relevant differences between the two cases.

I don't think either of these conclusions is warranted. This experiment, and others like it, are flawed.

The flaw is that the hypothetical situations described in thought experiments like these are presented as if they constitute purely moral decisions. As noted above, such decisions do occur in everyday life, but scenarios like the trolley problem don't approximate them. Rather, they present a decision in which considerations of self-interest and morality are both salient.

This is easily seen in the trolley problem. In each case there is a nontrivial question concerning what is best for society as well as what is best for me. In the switch-pulling version, considerations of morality and self-interest more or less coincide. I calculate that pulling the switch is the best outcome for society and also the result I can live with personally. In the fat man version, these considerations collide. Sure, pushing the man off the bridge will save lives. But in the future I suspect I will suffer nightmares too intense to bear.

Some may respond impatiently: This is just the familiar sophomoric complaint that the thought experiment is unrealistic. All thought experiments are unrealistic; that is why they are thought experiments rather than real ones. Philosophers know that considerations of self-interest play a role in real life, but we ask that you do your best to bracket these considerations in an effort to develop a clearer understanding of morality.

That is not good enough.

Just how are we supposed to bracket considerations of self-interest in this case? Are we asked to disregard our moral emotions altogether?  It is these, after all, that predict a future I wish to avoid. But to do that is to squelch one of our main sources of moral evidence as well. Alternatively, should we allow ourselves to pay attention to the moral emotions, but only for the purpose of moral judgment, taking care (a) not to let considerations of self-interest infect these judgments, and (b) not to confuse the best decision with the morally correct one?

Wow. I have never heard the trolley problem presented like that. It is not at all clear that we have this ability. But if it could somehow be trained up, I'm betting we would end up with a very different data set.

G. Randolph Mayes
Sacramento State
Department of Philosophy

Sunday, February 19, 2017

How to build a brain in a vat

Philosophers love thought experiments. They're fun, memorable, engaging tools for getting us to think about perplexing intellectual or moral problems. When engineered well, thought experiments can shed light on obscure concepts, raise challenging questions for dominant modes of thought, or guide us to recognize a conflict between two deeply held intuitions. When engineered poorly, however, they can instill a false sense of understanding or create needless confusion in the guise of profundity. Sadly, many of the most famous thought experiments in philosophy are engineered poorly.

Consider one of the most famous modern examples of this problem, Gilbert Harman's "Brain-in-a-Vat" thought experiment.[1] A descendant of René Descartes' "evil demon" hypothesis[2], this thought experiment is designed to motivate general skepticism about sense perception and the external world. What if, we are asked to imagine, you're not really here right now, but instead you are just a disembodied brain, suspended in fluid, with a complex computer stimulating your brain in all the right places to artificially create the experiences you take yourself to be having? For example, the computer could send signals to your visual cortex making you think you’re looking at a blog post on The Dance of Reason, when in fact you’re looking at no such thing, because you have no eyes. Hypothetically, the thought experiment says, there would be no way to tell the difference between a reality where your brain is directly stimulated in this way and one where you actually have a body that interacts with the world at large. Given this indistinguishability, how can we ever really rely on our senses? How can we ever have any kind of empirical knowledge at all?



Many late-night hours have been spent trying to answer this skeptical riddle. As an intellectual puzzle, an amusing game to get us thinking, or a way to kick-start a conversation in an intro to philosophy course, it works just fine. But as a tool for trying to understand how humans know the world, it is deeply misleading.

The problem, in short, is that, neurologically speaking, conscious experience simply does not work the way this thought experiment presumes it does. The brain is a necessary, but not sufficient, condition for having experiences. This is not because, as Descartes argued, we have some nonphysical aspect to our mental lives, but rather because a disembodied brain is physiologically incapable of producing the panoply of experiences that we all have every day.

Consider, for example, emotions. While the processing of emotions takes place in the brain, the key ingredients that make up the neurocorrelates of emotions—hormones and neurotransmitters—are created by the endocrine system, the network of glands distributed throughout the body.[3] Without these glands you would never feel love, anger, sorrow, joy, lust, hunger or disgust. The absence of these feelings would be a dead giveaway that you were a disembodied brain in a vat.[4]

But it doesn’t stop there. In addition to an endocrine system, you would need circulatory and lymphatic systems to transport the hormones from the glands to the (very specific!) parts of the brain where they are needed in order to give rise to specific emotions. You would also need a digestive system to get the chemical precursors that fuel the endocrine system, while your integumentary system (skin, hair) is essential for flushing byproducts the other systems can’t use. Lastly, all those organs need to be supported by something, making a skeletal system indispensable as well.

In short, the only way to build a brain in a vat is to make the vat out of a human body.

I suspect two objections are occurring in your brain right now. First off, how do I know we need these systems to feel emotions? What if I only think that because the evil genius programming the computer controlling my brain has led me to believe this in the first place? Haven’t I failed to take the force of the skeptical argument seriously?

Okay, I reply, but how do we know we even need a brain in the first place? Why doesn’t the thought experiment work if it’s just a vat and a computer? For that matter, how do we know there are such things as vats or computers or evil geniuses at all? In order to be expressible in language the thought experiment has to be grounded in something, some kind of experience that explains how our experiences might be systematically misled. If the skeptic can help themselves to a host of experience-based ideas to fund their thought experiment, it seems disingenuous of them to object when I do the same to defund it.

The second objection charges me with taking the thought experiment too literally. The point of the thought experiment was to explore epistemology and the limits of our sense perception, not the neuroanatomical foundations of our emotions. We can acknowledge the facts about the physiological basis for hormones and still benefit from pondering fantastic hypotheticals such as these.

This objection precisely illustrates the problem with thought experiments I mentioned in the first paragraph. Epistemology is not bounded by the limits of our imaginations alone. Human beings come to know things by using our brains and bodies, and the empirical realities of those brains and bodies place constraints on what knowledge can be, how it can work, and how we can attain it. When we abstract away from real flesh-and-neuron human beings we are left with nothing human in our epistemology. Whatever is left over has little bearing on anything worth caring about.

Thought experiments that are accountable only to our imaginations are unlikely to provide us with insight into complex topics like the true nature of minds, morality or metaphysics. As Daniel Dennett says, “The utility of a thought experiment is inversely proportional to the size of its departures from reality.”[5] If we want to contemplate skepticism and the limits of sense perception, there are plenty of ways to engineer realistic thought experiments based on the real-world limitations of the human brain.

Garret Merriam
Department of Philosophy
University of Southern Indiana


[1] Harman, Gilbert (1973). Thought, p. 5. Princeton University Press.

[2] Descartes, René (1641). The Meditations Concerning First Philosophy (John Veitch, trans., The Online Library of Liberty, 1901), Meditation II, paragraph 2.

[3] Ironically, the endocrine system includes the pineal gland, which René Descartes speculated was the point of contact between our immaterial minds and our material brains. Rather than serving as a magic intermediary between two metaphysical planes, the pineal gland is part of what grounds the brain squarely within the body itself.

[4] It is only fair to mention that three parts of the endocrine system—the hypothalamus, the pituitary gland, and the pineal gland—are technically housed inside the brain. The supporter of the Brain-in-a-Vat argument could perhaps lay fair claim to these, as they would be included in the terms of the original thought experiment. Nonetheless, the other parts of the endocrine system (including the thyroid, the adrenal glands, the gonads, and other glands) are distributed throughout the body, placing them well out of play for the original thought experiment.

[5] Dennett, Daniel C. (2014). Intuition Pumps and Other Tools for Thinking, p. 183. W. W. Norton & Company.

Friday, February 10, 2017

The other “One Percent”

Let us pause and reflect on the following: those who hold PhD degrees are the Warren Buffetts of epistemic resources. They have been privileged with more educational experience and access to intellectual activities than 99 percent of living humans. Consider that simply having been awarded a bachelor’s degree puts one in the top 30% of educated persons in the United States, a master’s degree puts one in the top 7%, and a PhD in the top 1%. Worldwide, the statistics are much more striking.[1] Although there is plenty of criticism to direct at higher education, it is hard to argue against the following: those who hold college degrees have had an experience of great epistemic value that others have not. Notwithstanding, it is rarely, if ever, suggested that PhDs ought to share this intellectual wealth.[2] But why not?

Given the importance of epistemic resources to a life well-lived, it seems a bit odd that epistemic generosity is not morally expected, especially from those of noticeable intellectual wealth.[3] In various ways epistemic resources are as valuable as financial resources. So why wouldn’t epistemic 1%ers have as much of an obligation to share their epistemic wealth as the financial 1%ers have to share their monetary wealth? This post argues that epistemic 1%ers do have this moral responsibility and that those who fail to share their unique type of wealth are in fact failing to do what they ought. This moral “oversight” can be understood as a vicious character trait, i.e., many of the intellectually wealthy are epistemically greedy.

I will use the term “epistemic greed” as follows. Epistemic greed is greed for epistemic resources. “Epistemic resources” should be understood broadly. Examples include physical goods, epistemic services, cognitive states, and intellectual abilities that are specially related to knowledge, understanding, rationality, etc. Those who are epistemically greedy keep, take, acquire, or stockpile epistemic goods which they might otherwise share with the epistemically less advantaged. Here is a first shot at defining epistemic greed:
Epistemic Greed (EG): To hoard, acquire, or use an excessive amount of epistemic resources with insufficient concern for those who are less epistemically advantaged.
While the above definition is on the right track, I think too much is left vague by the expression “excessive.” Let us try a definition with more specificity:
Epistemic Greed (EG): Sharing comparatively little of one’s total epistemic resources with those who are less epistemically privileged than oneself.
In line with Aristotle’s notion of generosity, this second definition places a higher moral obligation on those who are epistemically wealthy. Let us helpfully recall that Aristotle argued the following:
“[I]n speaking of generosity we refer to what accords with one’s means. For what is generous does not depend on the quantity of what is given, but on the state [of character] of the giver, and the generous state gives in accord with one’s means. Hence one who gives less than another may still be more generous, if he has less to give” (2014, 51).
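In toy numbers (my illustration, not Aristotle’s or the post’s): if A shares 10 of 1,000 units of some resource while B shares 5 of 20, A gives more absolutely, but A’s gift is 10/1,000 = 1% of A’s means, while B’s is 5/20 = 25%. On Aristotle’s measure, B is the more generous giver despite giving less.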
This Aristotelian understanding seems to fit with our everyday, pre-theoretical understanding of the “non-epistemic” concept of greed. We expect, for example, those who are rich to give more than those who are not rich.[4] And just as monetary greed influences the egalitarian make-up (or lack thereof) of society, so does intellectual greed have an effect on the societal distribution of epistemic goods. If this much is correct, then the paucity of discussion on epistemic greed is a noteworthy philosophical oversight.

For too long moral and political discussions have focused primarily on economic inequalities while ignoring other types of morally weighty inequalities. One reason for this oversight might be another oversight: we have overlooked that just as an improvement in one’s economic means makes it easier to acquire epistemic resources, the converse is true as well: bettering one’s epistemic position makes it easier to improve one’s economic position. Intelligence can help one get a job, get accepted into college, and in various other ways provide means to a more satisfying life. Educational accomplishments, especially degree completion, are closely tied to lifelong income prospects. In such respects financial and epistemic resources are importantly similar. Both are effective means to a variety of ends helpful in achieving life goals.[5] Not all goods are of this kind. While I may very much enjoy my leather couch, it cannot help me achieve my dream life of an enjoyable career and a basic level of material comfort. Epistemic and financial goods, however, can indeed help me in this regard. Money and knowledge are general-purpose tools for a variety of life goals.

Discussing these ideas with academic friends and colleagues, I have heard many object that those with lower educational levels or poor analytic skills have little desire for epistemic goods. “I see your point,” they would protest, “But no one wants what we (academics) have to share.” To me such assertions suggest a disconnect between epistemic elites and their less privileged counterparts. Academics seem prone to mistaken assumptions about those who are epistemically underprivileged. While it may be true that many “ordinary people” dislike college classes and love The Kardashians, I would surmise that even Kardashian fans have some areas of epistemic interest in which some academics could be of help. Yes, often these epistemic interests are pragmatic. Hence helping the disadvantaged might require the epistemic 1%ers to step out of their comfort zone. While many people (university professors, for instance) are capable of helping persons improve their resumes and learn basic computer skills, few are familiar with this type of tutoring. This is no excuse, however, because it is quite easy to become so familiar. Learning what the epistemically disadvantaged desire and how to help requires dedication and open-mindedness, but not much more. Hence the decision not to share is inexcusable. It is simply a socially accepted form of greediness. Society should accept this vice no longer.

Maura Priest
The Humanities Institute
University of Connecticut, Storrs


References


Aristotle (2014). Nicomachean Ethics (C. D. C. Reeve, trans.). Indianapolis: Hackett Publishing Company.

Bailey, M. J., & Dynarski, S. M. (2011). Gains and gaps: Changing inequality in US college entry and completion (No. w17633). National Bureau of Economic Research.

Belley, P., & Lochner, L. (2007). The changing role of family income and ability in determining educational achievement (No. w13527). National Bureau of Economic Research.

Data Sources: Key Takeaways from the 2014 Survey of Earned Doctorates | Council of Graduate Schools. (n.d.). Retrieved from http://cgsnet.org/data-sources-key-takeaways-2014-survey-earned-doctorates-0

Mayer, S. E. (2002). The influence of parental income on children's outcomes. Wellington, New Zealand: Knowledge Management Group, Ministry of Social Development.

Footnotes

[1] See https://nces.ed.gov/programs/digest/d14/tables/dt14_104.20.asp, and https://www.census.gov/content/dam/Census/library/publications/2016/demo/p20-578.pdf, and http://cgsnet.org/data-sources-key-takeaways-2014-survey-earned-doctorates-0. Note that often the statistics are shown in terms of age-group.

[2] I will use the terms “epistemic” and “intellectual” interchangeably. While there are contexts in which this use would be inappropriate, this paper is not one of them.

[3] Long ago, when Aristotle discussed the virtue opposite greed (generosity) according to his specific virtue-theoretic framework, he had in mind a notion specifically associated with the giving of financial resources. Nonetheless, Aristotle’s opinion should not always be understood as the final word on virtue.

[4] One critical difference between the points I make in this post and many common discussions of distributive inequality is that I am not solely focused on governmental obligations and solutions. My focus, rather, is on the character of individual epistemic agents and how they ought to treat other epistemic agents. That said, this post in no way rules out either the possibility that the government might be obligated to rectify epistemic inequalities or the possibility that it simply might be prudent to use the government for egalitarian ends.

[5] While there has long been a connection between wealth and education, recent empirical studies suggest that the last few decades have seen this correlation get much stronger. For a few studies on this increasing divide and more general research into income and education, see Belley, P., & Lochner, L. (2007), Bailey, M. J., & Dynarski, S. M. (2011), and Mayer, S. E. (2002).

Sunday, February 5, 2017

The Washington Paradox

The absurdly great musical Hamilton includes the following line from President Washington’s farewell address:
Though, in reviewing the incidents of my administration, I am unconscious of intentional error, I am nevertheless too sensible of my defects not to think it probable that I may have committed many errors.
This seems like an admirably humble thing to say, but one of the philosophically interesting things about it is that it also seems like a reasonable thing to say. That is, Washington does not seem to be describing an unreasonable or irrational attitude about his decisions as president. It is often the case that when examining our actions or beliefs, no one of them seems to be a mistake, and yet we know that we are fallible beings who have likely made at least some mistakes.

The trouble is that certain ways of expressing this general idea lead to puzzling conclusions. Suppose Washington had said something slightly different:
Having carefully reviewed each decision I made as President, I believe of each one that it was not a mistake. Nevertheless, I know that I am not perfect, and so I believe that I must have made some mistakes as President.
This also seems like a reasonable thing to say. Having evaluated all of the consequences, obligations, and whatever other relevant factors, Washington might reasonably believe, for example, that appointing Jefferson as Secretary of State was not a mistake. He might then do the same for each other decision that he made until, for each decision he made, he reasonably believed that it was not a mistake. To see the puzzle more clearly, let’s assign a name to each of Washington’s decisions. We’ll call the first decision ‘D1’, the second ‘D2’, and so on. So, we can represent Washington’s beliefs about his decisions like this:
D1 was not a mistake.
D2 was not a mistake.
D3 was not a mistake.
…
Dn was not a mistake.
Given that Washington’s careful examination of each decision has left him with good reasons to think that it was not a mistake, it seems reasonable for him to believe each proposition on the list. However, it also seems reasonable for Washington, aware of his own imperfections, to believe that some of D1-Dn were mistakes.

But these beliefs cannot all be true. If the beliefs on the list are all true, then none of D1-Dn were mistakes, and so the belief that some of them were mistakes is false. On the other hand, if some of D1-Dn really were mistakes, then some of the beliefs on the list must be false. More than that, with a little reflection, it should be obvious to Washington that these beliefs cannot all be true, and as a result it does not seem reasonable for Washington to believe all of them. So, now we have a puzzle, a version of the Preface Paradox. Each of Washington’s beliefs seems reasonable, and yet it seems unreasonable to hold all of them together.
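A toy probabilistic gloss (my numbers, not the post’s) helps show how each attitude can be reasonable on its own. Suppose Washington made 500 decisions and, after careful review, each has a 99% chance of being correct. If the errors are roughly independent, the probability that every decision was correct is 0.99^500 ≈ 0.0066. So each individual belief “Di was not a mistake” is highly probable, while the belief “some of D1-Dn were mistakes” is itself better than 99% probable.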

And Washington is not alone here. You’re very likely in the same boat. Consider all of your beliefs about some topic—Biology, for example. Supposing you’re a good epistemic agent, each of those is a belief in a proposition that you have carefully considered the evidence for and concluded is true. So, each of those beliefs is reasonable. However, you know that you are imperfect. Sometimes, even after careful consideration, you misread the evidence and accidentally believe something false. So, you have good reason to believe that at least one of your many beliefs about Biology is false. And now you have obviously inconsistent beliefs, all of which seem reasonable. So, what should you do?

I think that you and Washington should keep all of your beliefs, even though you know that they are inconsistent. The trick is to explain why it is reasonable to maintain these particular inconsistent beliefs, even though it is generally unreasonable to have inconsistent beliefs. If I have just checked the color of a dozen swans, for example, and come to believe of each one that it is white, it would be unreasonable for me to believe that some of them were not white. So, what is it about Washington’s situation that makes it different from this swan case?

One interesting difference is that it is reasonable for me to think that if one of the swans had not been white, I would have some sign or evidence of that—if some of them were black, for example, I would have noticed. Washington, on the other hand, not only has good reason to think that he has made some mistakes, but also has good reason to think that he might not have noticed some mistakes in his evaluation of hundreds of complex decisions. But this fact does not seem to prevent him from believing that he would have noticed if, for example, Jefferson’s appointment had been a mistake. He might think, for example:
If appointing Jefferson had been a mistake, he would have been a poor Secretary of State, which is something I would notice. So, if it were a mistake, I would have noticed.
Given his careful inspection of all of his evidence about each decision, Washington could give a similar good reason for believing of each decision that he would have noticed if it were a mistake. In fact, the point of carefully inspecting the evidence about each decision seems to be that, in doing so, Washington would notice if it were a mistake.

So, even though, for any decision we pick, Washington has good reason to think he would have noticed if it were a mistake, he still has a good reason to think that he might not have noticed if some of his decisions were mistakes. Perhaps this is what makes it reasonable for him to believe that each particular decision was not a mistake while still believing that some of them were mistakes.

Brandon Carey
Department of Philosophy
Sacramento State

Sunday, January 29, 2017

Is Time Real?

There are four main reasons for saying time is not real: it is (a) subjective, (b) conventional, (c) inconsistent, and (d) emergent.

(a) Does time depend upon being represented by a mind? Without minds, nothing in the world would be surprising or beautiful or interesting. Can we add that nothing would be in time? Yes, said St. Augustine, who claimed time is nothing in reality but exists only in the mind’s apprehension of that reality.

(b) Philosophers generally agree that humans invented the concept of time, but some argue that time itself is an invention as well, a useful convention, as when we decide that a coin-shaped metal object has monetary value. Money is culturally real but not objectively real because it would disappear if human culture were to disappear, even if the coin-shaped objects did not disappear.

Although it would be inconvenient to do so, our society could eliminate money and return to barter transactions. In the article “Who Needs Time Anyway?”, Craig Callender said:

Time is a way to describe the pace of motion or change, such as the speed of a light wave, how fast a heart beats, or how frequently a planet spins…but these processes could be related directly to one another without making reference to time. Earth: 108,000 beats per rotation. Light: 240,000 kilometers per beat. Thus, some physicists argue that time is a common currency, making the world easier to describe but having no independent existence.
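The quoted numbers check out under one assumption of mine: a heart rate of 75 beats per minute, i.e., 1.25 beats per second. One rotation of the Earth takes 86,400 seconds, which is 86,400 × 1.25 = 108,000 beats; light travels about 300,000 kilometers per second, which is 300,000 ÷ 1.25 = 240,000 kilometers per beat. The pace of each process is expressed directly in terms of another, with no mention of time.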

(c) Bothered by the contradictions they claimed to find in our concept of time, Parmenides, Zeno, Plato, Spinoza, Hegel, and McTaggart said time is not real. McTaggart believed he had a convincing argument for why a single event is a future event, a present event and also a past event, and that since these are contrary properties, our concept of time is self-contradictory.

In the mid-twentieth century, Gödel argued for the unreality of time because the equations of general relativity allow for physically possible universes in which all events precede themselves. It shouldn't even be possible for time to be like this, Gödel believed, so whatever the theory of relativity is about, it is not about time.

(d) It also has been argued that time is not real because it is emergent. Leibniz argued it emerges from the order relations between pairs of events, and Minkowski argued it emerges from spacetime.

In 1994, Julian Barbour said, “I now believe that time does not exist at all, and that motion itself is pure illusion.” He argued that there does exist objectively an infinity of individual, instantaneous moments, but there is no objective happens-before ordering of them, no objective time order. There is just a vast, jumbled heap of moments. Each moment is an instantaneous configuration (relative to one observer's reference frame) of all the objects in space. If the universe is as he describes, then space (the relative spatial relationships within a configuration) is ontologically fundamental, but time is not, and neither is spacetime. In this way, time is removed from the foundations of physics and emerges as some measure of the differences among the existing spatial configurations.
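Barbour’s picture can be made concrete with a minimal sketch (my own illustration with made-up values, not Barbour’s actual machinery): the moments form an unordered collection, and anything time-like must be reconstructed from the intrinsic differences among them.

# Toy model (hypothetical values): Barbour-style "moments" as an unordered
# set of one-dimensional configurations. A time-like ordering is not given;
# it is recovered by chaining each moment to its most similar neighbor.
def recover_order(moments, start):
    remaining = set(moments) - {start}
    order = [start]
    while remaining:
        nearest = min(remaining, key=lambda m: abs(m - order[-1]))
        remaining.remove(nearest)
        order.append(nearest)
    return order

moments = {0.0, 0.4, 0.1, 0.2}
print(recover_order(moments, 0.0))  # [0.0, 0.1, 0.2, 0.4]

Note that the starting moment has to be supplied from outside; nothing in the heap itself privileges one endpoint, which echoes the claim that there is no objective time order.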

The above arguments are not trivial, but I would like to respond to them.

(a) Regarding subjectivity, notice first that our clock ticks in synchrony with other clocks even when no one is paying attention to the clocks. Second, notice the ability of the concept of time to help make such good sense of our evidence involving change, persistence, and succession of events. Consider succession. This is the order of events in time. If judgments of time order were subjective in the way judgments of being interesting vs. not-interesting are subjective, then it would be too miraculous that everyone can so easily agree on the temporal ordering of so many pairs of events.

(b) A good reason to believe time is not merely conventional is that our universe has so many periodic processes whose periods are constant multiples of each other over time. For example, the frequency of rotation of the Earth around its axis, relative to the "fixed" stars, is a constant multiple of the frequency of oscillation of a fixed-length pendulum, which in turn is a constant multiple of the frequency of a vibrating violin string. The existence of these sorts of relationships—which cannot be changed by convention—makes our system of physical laws much simpler than it otherwise would be, and it makes us more confident that there is something convention-free that we are referring to with the time-variable in those physical laws.
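For instance (a quick check with assumed round numbers of my own): a pendulum of length one meter has period T = 2π√(L/g) ≈ 2π√(1/9.81) ≈ 2.0 seconds, so it completes roughly 86,400 / 2.0 ≈ 43,000 swings per rotation of the Earth. That ratio stays fixed from day to day, no matter what units or conventions we adopt.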

(c) Regarding the inconsistencies in our concept of time that Zeno, McTaggart, Gödel, and others claim to have revealed, I suggest we say either that there is no inconsistency, or else that the complaint should be handled by revising the relevant concepts. For example, Zeno's paradoxes were treated by requiring time to be a linear continuum, very much like a segment of the real number line. Yes, the mathematicians changed important characteristics of Zeno’s concept of time, but the change was very fruitful and not ad hoc, and so cannot be accused of violating time’s very essence. Gödel's complaint can be treated by saying he should accept that time might possibly be circular; he needs to change his intuitions about what is essential to the concept.

(d) Suppose time does emerge from events, or spacetime, or even Barbour’s moments. Scientists once were very surprised to learn that water emerges from H2O molecules. But having learned that molecules are more fundamental than water, should we make the metaphysical leap to saying water is not real? Should we not say instead that now we more deeply understand what water is? If so, we can draw a similar conclusion for time.

So, let’s say that time is real, that it is objective rather than subjective, that it is not primarily conventional, that any inconsistency in its description is merely apparent or inessential, and that time is real regardless of whether it is emergent.

Brad Dowden
Department of Philosophy
Sacramento State

Monday, December 5, 2016

A Groundhog Day argument before Christmas

Last Sunday the advent sermon topic was how God’s foreknowledge might partly explain how he could reliably speak about the future, which raises the question: Is divine foreknowledge consistent with human freedom?


Here is what I will call a Groundhog Day Argument for the Compatibility of Divine Foreknowledge and Creaturely Freedom:

1. By the end of the movie Groundhog Day, it is possible for Bill Murray’s character to foreknow some of the free actions of others, without his foreknowledge in any way compromising the freedom of those actions. 
2. What’s possible for Bill Murray’s character is possible for God.
Therefore, 
3. It is possible for God to foreknow some of the free actions of others, without his foreknowledge in any way compromising the freedom of those actions.
A longer treatment would argue that the two premises are true and that the most promising objections to these premises are unsatisfactory.

This post will focus entirely on supporting the first premise. 

Groundhog Day is a comedy about a TV weatherman, played by actor Bill Murray, who has to visit a small town to cover their annual morning Groundhog Day festival. He finds himself unable to leave the town that evening because of a snowstorm. He spends the night in a local bed and breakfast. He wakes up the next morning only to discover that it is Groundhog Day again. However, there is a catch: whereas Bill Murray’s character remembers living through Groundhog Day yesterday, no one else in the town remembers this. Although it seems to Bill Murray’s character that he lived through a “first” Groundhog Day yesterday, and is living through a “second” Groundhog Day today, everyone else in the town perceives the present day to be just the regular old Groundhog Day, which comes around once—and only once—each year.

(This is not just an epistemological breakdown of the memory of the other characters. Rather, the sober but strange metaphysical fact of the matter is that everything in the town, with the exception of Bill Murray’s character’s memory, is, at the beginning of the “second” Groundhog Day, precisely as it was at the beginning of the “first” Groundhog Day. For example, if Bill Murray’s character and another character both had a huge greasy breakfast omelet on the “first” Groundhog Day, neither of them would have an increased cholesterol number on the “second” Groundhog Day, but Bill Murray’s character would remember having eaten the breakfast on the “first” Groundhog Day.)

Bill Murray’s character again tries to leave the town, the snowstorm again prevents him from leaving, he again spends the night in the bed and breakfast, and, to his dismay, he wakes up again the next morning only to discover that it is, once again, Groundhog Day. The rest of the movie chronicles his attempts to escape this predicament, to communicate it to other characters, and to cope with it in a number of different ways. The connection between this movie and the first premise is seen in one of the strategies Bill Murray’s character uses to cope with his predicament of being trapped in a cycle of re-living the same day over and over again.

He begins to learn what each of the other characters in the movie would do in a certain situation, and he begins to use what he learns in subsequent live-throughs of the same day.

For example, on the first “live-through” of Groundhog Day, he is greeted by an old acquaintance on the street corner, and this old acquaintance tries to sell him some life insurance. The exact same thing happens on the second live-through. On different live-throughs, Bill Murray’s character responds differently to the greeting of the life insurance salesman. The life insurance salesman, in turn, has a different comeback for each of Bill Murray’s different responses to his greeting. If Bill Murray’s character does A in response to the salesman’s greeting, the salesman comes back with action B. If Bill Murray’s character does C in response to the salesman’s greeting, the salesman comes back with action D. And so on. Bill Murray’s character, however, can remember how the life insurance salesman comes back to different responses to his original greeting.
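The setup amounts to a fixed lookup table, as in this toy sketch of mine (the letters follow the post’s A-D labels; nothing here is from the film):

# Because every live-through restarts from the same total state, the
# salesman's comeback is a fixed function of the character's response.
comebacks = {
    "A": "B",  # respond with A, the salesman comes back with B
    "C": "D",  # respond with C, the salesman comes back with D
}

def live_through(response):
    """One encounter on the looped day: same state, so same mapping every time."""
    return comebacks[response]

# Memory persists across live-throughs, so the table can be learned and then
# used to predict (foreknow) a comeback before the encounter happens:
assert live_through("A") == "B"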

In this way, Bill Murray’s character gains a very detailed knowledge of what different characters in the movie will do in different situations.

After a while, he uses this knowledge to take advantage of the other characters: for example, he knows exactly when and where a certain security guard is going to look away from a bagful of money, so he can easily slip in and steal the bag undetected. But by the end of the movie, he has learned to use his newfound knowledge to benefit others as well: for example, he knows exactly when and where a certain girl is going to fall from a tree in her neighborhood, so he can catch the girl and save her from injury.

It seems, then, that by a certain point in the movie, Bill Murray’s character has foreknowledge of some of the actions of others. For example, on what seems to him to be the 8th live-through of Groundhog Day, Bill Murray’s character foreknows what another character is going to do at 10:00 that morning. Bill Murray’s character also foreknows what that other character is going to do at 10:00 in the morning on the next live-through of Groundhog Day—what will seem to him to be the 9th live-through of Groundhog Day. And so on.

The peculiar features of the movie entail that the same types of actions done on the “first” Groundhog Day will be repeated on “subsequent” Groundhog Days.

What makes this argument for the compatibility of foreknowledge and freedom interesting is that it simply collapses the difference that usually exists between knowledge of the past and knowledge of the future. Bill Murray’s character can foreknow the future free actions of others because, in a sense, he’s already seen the other characters perform these actions in the past.


Russell DiSilvestro
Department of Philosophy
Sacramento State

Monday, November 28, 2016

Racism and a presumption of credibility

What I want to try to argue for here is a presumption of credibility for claims of racism.

For the purpose of this post, let’s define a presumption Px as a deliberative advantage in favor of a finding that x is true based on supporting justification for Px, such that, in a dispute involving x, the person trying to show that x is false (x-opponent) bears the burden of proof and, if x-opponent offers no evidence, the person trying to show that x is true (x-proponent) succeeds. A presumption is a deliberative advantage, but not a required conclusion; an initial deliberative advantage may be overcome by sufficient evidence to the contrary.
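The shape of this definition can be put as a simple decision rule (a toy formalization of my own, not the author’s):

# A presumption Px gives the x-proponent a deliberative advantage: if the
# x-opponent offers no evidence, x is found true; otherwise the opponent's
# evidence must be sufficient to overcome that advantage.
def finding_that_x(presumption_advantage, opponent_evidence):
    """Return True if x is found true, False if the presumption is defeated."""
    if opponent_evidence == 0:
        return True  # opponent offers nothing: x-proponent succeeds
    return opponent_evidence <= presumption_advantage  # defeasible, not conclusive

print(finding_that_x(0.6, 0.0))  # True: no contrary evidence
print(finding_that_x(0.6, 0.9))  # False: the presumption is overcome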

Presumptions usually are based on some supporting justification. A very common example is the presumption of innocence. This presumption operates in favor of a finding that the x-proponent, a criminal defendant, is innocent and the x-opponent, the prosecutor, bears the burden of proof. The supporting justification is the value in American jurisprudence that it is better for nine guilty men to go free than for one innocent man to be wrongly convicted.

Here my supporting justification for a presumption of credibility for claims of racism is that, because whether one recognizes racism depends crucially on one’s first-personal perspective, the person claiming racism should be deemed credible, absent sufficient evidence to the contrary.

Claims of racism generally present unique challenges for proof. Consider the new movie, directed by Jeff Nichols, based on Loving v. Virginia (1967).[1] The trailer for the movie (here) includes a scene where Virginia state police officers barge into the Lovings’ bedroom and, when Mildred Loving explains that she’s Richard Loving’s wife, one officer responds, “that’s no good here.” The scene portrays well the racist attitudes that may have been prevalent at that time in certain places.

One thing to clarify here is that Virginia’s law, and its enforcement in the Loving case, is a separate wrong from the officer’s attitude toward the Lovings in the scene mentioned above. The case involved Virginia’s miscegenation law, but my focus here is the officer’s attitude in the movie scene.

If we were examining a charge of racism against the officer, it would be difficult to prove. You often can’t find racism from a professor’s armchair or a judge’s bench. In real life, the scene is not reconstructed for us exactly as it was. Instead, you have the words or actions, the features of the surrounding circumstances, and the accused racist before you in his best Sunday clothes (see Bannon).

Even if the scene were reconstructed for you exactly as it was, from a detached third-personal perspective, unless you yourself have experienced similar attitudes, you may not see racism. We may see the identical scene, but our perspectives are shaped by our prior observations and experiences. It is as if we see the world through our own theoretical lens, shaped by our past, which picks up certain features and ignores others.

But, for the person experiencing racism, the attitude expressed and the wrong left in its wake are as real as real can be. It's not about the exact words spoken or any particular feature of the circumstance. If we examined the words or actions, they could be interpreted as innocent. If we examined the features of the circumstance, they may seem morally insignificant. If we saw the person again in a different setting, he may appear to be the farthest thing from what we would expect from a racist. From the first-person perspective of the target of racism, the attitude conveys to its target: you don't belong here or there is something deeply wrong with you. From those with other perspectives: you’re just making the whole thing up.

If this is accurate and racist attitudes are seen differently from first-personal and third-personal perspectives, then this seems to be a good reason to presume the credibility of a first-personal claim of racism.

Again, a presumption of credibility is not proof of credibility or proof of racism. A presumption is usually defeasible. It simply places the burden of proof on the other party to prove otherwise.

Three objections come to mind. I’ll take them in order from easiest to most difficult, leaving the third for further discussion in the comments.

First, what about false claims? A presumption is not proof and can be overridden with evidence to the contrary, including evidence that the claim was false.

Next, what about innocent “racists”? People are accused of racism all the time when they’re merely good people who were brought up to view the world a certain way; they don’t intend any harm. Following Shelby and others, I don’t think racism requires specific intent. Racial bias and prejudice may be “subtle, implicit, or even unconscious.”[2] Individual attitudes or behaviors usually are the product of institutional racism and, first and foremost, it is our institutions that require critical examination and reform.

Finally, the nature of racism and the fact that racist attitudes are seen differently from first-personal and third-personal perspectives may not, for some, seem like sufficient justification for a presumption. My response for now is this: against the backdrop of our nation’s history of slavery, segregation, internment, and dispossession, the fact that the targets of racism are in a unique position to see the wrongness of racism, which may not be visible to others, again seems a good reason to give them the benefit of the doubt.

Chong Choe-Smith
Department of Philosophy
Sacramento State



[1] Loving v. Virginia (1967) 388 U.S. 1.
[2] Tommie Shelby, “Race and Social Justice” (2004) 72 Fordham L. Rev. 1697, 1706.