Monday, March 13, 2017

A dilemma for elementalism

Teaching history of philosophy always produces in me a lively sense of contingency in intellectual history. One example stands out: Leucippus and Democritus proposed an atomic theory around 440 B.C. This striking proposal ran into the sand, however, and never produced a viable research program.

Why not? All you need, technically speaking, are devices for measuring and weighing small amounts of material fairly precisely. The ability to do that was not significantly greater in the early 1800s, when Dalton and Avogadro succeeded in reviving atomism.

I used to think that the Greek atomists just had the rotten luck to live at the same time as two of history’s greatest philosophers, Plato and Aristotle. Atomism, though right, lost the dialectical battle.

I now think that western philosophy tacks between two strategies for explaining What Goes On.

One strategy explains the properties and behavior of entities by appeal to composition: the properties and behavior of the fundamental objects that make the entity up. Call this strategy ‘Elementalism’.

The other strategy explains the properties and behavior of entities by appeal to their kind: a full understanding of the essential properties of the kind will explain what the entity does. Call this strategy ‘Formalism’.

Formalism dominated from 400 BC until 1400 AD. Our course in early modern philosophy is the story of its overthrow by elementalism in the Scientific Revolution.

Either strategy yields philosophical problems, problems which the other strategy tends to solve – at least for a time.

The main problem besetting any elementalism is composition.

Matthew hits a baseball toward the house and breaks a window. [1]

Elementalism explains this by the bonds which hold atoms together into molecules: the molecules forming the baseball (B) and the window (G). The velocity of B upon impact with G is sufficient to disrupt the molecular bonds of G.

This explanation makes no reference to the baseball, the window, or to Matthew. What matters for causal explanation is what goes on at the level of B and G. Baseballs, windows, and teenagers with poor judgment may as well not be there. Since we need not countenance them in order to make our best (elementalist) theories true, we can be anti-realists about macro-objects.

If we wish to believe in baseballs and grandchildren – and we do – we could suppose they supervene on the fundamental particles. But then they're epiphenomenal: their presence makes no difference in the world.

If they’re to make a difference, we seem to be committed to saying not only that “Events involving B cause events involving G,” but also “Events involving the baseball cause events involving the window.”

But that gives us two causal relations: double determination. What goes on is over-oomphed.

Indeed, since we have many levels operating:
  • atomic
  • molecular
  • macro- (baseballs and windows)
  • physiological (Matthew’s hitting the baseball)
  • neural (events in Matthew’s brain causing his motor movements)
  • mental (Matthew’s ill-considered decision to hit the ball toward the house)
what goes on is umptly over-oomphed.

Here’s the dilemma.

First horn: the higher-level events purportedly involving higher-level entities are not, strictly and literally, real events at all, or are at best only derivatively real. But then they would be entirely epiphenomenal – not really part of the causal story of the world.

Second horn: the higher-level events are real, with their own causal relations. But then ‘what goes on in the world’ is over-determined.

In short, the constant dilemma with elementalism is that it threatens to give us either too little reality (no grandchildren) or way too much. Indeed, since there may be no limit to the number of levels at which causal explanation can take place, unintelligibly too much.

But we thought elementalism was superior as an explanatory strategy.

Does formalism do any better? Arguably, yes.

One promising formalist strategy currently emerging is a revival of Aristotelian-style Hylomorphism. Robert Koons gives a nifty summary of the current state of this research project. [2]

On Koons’ version, the causal powers of a substance’s parts become powers of the whole substance, and so the over-determination problem doesn’t arise:

“(I)f I stand on a scale, is it I (as a whole) or my parts (collectively) that cause the pointer to move? If the powers associated with weight have migrated from my proper parts to me, my weight can be the unique and non-redundant cause of the scale’s response.” (Koons 2014, 8).

Tom Pyne
Department of Philosophy
Sacramento State

Sunday, March 5, 2017

Diagnosing the abortion debate

A report back in January showed that abortion rates have fallen to levels lower than in any year since 1973, the year of the Roe v. Wade decision, reflecting about a 50% decrease in the rate from its peak in 1981. The study, conducted by the Guttmacher Institute, which favors legal protections for a right to have abortions, cites as causal factors greater access to contraception as well as laws in many states that restrict clinics or require ultrasounds. Of course, pro-life groups still ultimately want to see Roe overturned. This would mean that individual states would determine what legal restrictions, if any, would apply to people seeking and providing abortions.

Abortion rights and restrictions pose a special challenge to the justificatory test for legitimate coercion in the law that I’ve defended in my work in political philosophy. That principle says, roughly, that for coercion to be legitimate, it must be publicly justified, or based on reasoning that is public in the requisite sense. For example, on Rawls’ view, public justification is grounded in reasons that all can reasonably accept. The test excludes reasons that are not reasonably acceptable to all from being able to do justificatory work and so prevents these reasons from being a legitimate basis for law.

For a variety of reasons I won’t rehearse, I favor a more recent specification of the principle by Jerry Gaus at the University of Arizona. His view is more permissive in that any of a citizen’s reasons may play a part in justifying laws. Laws can be justified to each citizen on the basis of their complete set of beliefs, values and commitments, so long as these are intelligible. Any intelligible reason can feature within the overall public justification of a law. However, people whose beliefs, values and commitments provide them with an intelligible rationale for rejecting it have a defeater for it and, when that’s true, it’s illegitimate to subject them to it. The law would lack authority from their point of view.

What does this view make of the abortion controversy? First, notice how easy things would be if this disagreement weren’t based on reasonable considerations. If it were flatly unreasonable to deny fetal personhood, then it would be much easier to justify a law against abortion. And if it were flatly unreasonable to ascribe personhood to fetuses, then there would be no accounting for such a law. But reasonable people disagree about fetal personhood.

More: both parties to this disagreement reasonably believe that the other side is involved in imposing serious harms on the interests of others. This means that abortion law will lack authority for pro-choicers if pro-lifers have their way politically. It’s relatively obvious how this is so: pro-choicers have an intelligible rationale grounded in their beliefs, values and commitments for rejecting various restrictions on abortion. These restrictions violate privacy or bodily autonomy. But abortion law will also lack authority for pro-lifers when pro-choicers have their way politically. The reason is that, since pro-lifers reasonably believe that fetuses are persons with a right not to be killed, they have adequate justification for protecting them by imposing coercive measures that increase the costs of people killing them. Pro-lifers have an intelligible rationale for rejecting laws that carve out space for women to kill their child.

This situation, then, describes something like a moral state of nature between the two sides. We’ve failed to achieve coordination. Pro-choicers know that, even after engaging in careful and respectable reflection on the relevant moral and empirical evidence, pro-lifers won’t acknowledge the right of women to have an abortion. Public reason has run out. But this doesn’t mean that they just let pro-lifers violate women’s bodily autonomy. Pro-choicers are left with only one option: taking up what P.F. Strawson called the objective attitude towards pro-lifers. They will see pro-lifers as a force to be reckoned with, managed and kept at bay as best they can as they go about their affairs, but that’s different than exercising genuinely normative authority over them.

In the same way, pro-lifers know that abortion-seekers won’t acknowledge the personhood of fetuses, even after careful and respectable reflection on the relevant moral and empirical evidence. Public reason has run out for them, too. But this doesn’t mean that pro-lifers just let women seeking abortions kill their children. From the pro-life perspective, women seeking abortions are doing something similar to driving a car towards a person in the street they can’t see. They have reason to stop her or make her swerve. In other words, pro-lifers similarly must treat abortion-seekers as mere objects of social policy rather than people with whom they are interacting on genuinely moral terms.

Disagreement doesn’t always lead to this kind of social breakdown of public reasoning and moral community. I can think of some other examples, but it’s relatively rare, which is a good thing. It also seems pretty isolated most of the time – thankfully, a disagreement and breakdown in this area hasn’t led to a more general breakdown of moral relations among people who are on opposite sides of the abortion issue. Most people even have friends who disagree with them about abortion.

In fact, I suspect that this lends some credibility to the explanation of the abortion debate that I’ve offered here. It’s a case where we are forced to take the objective attitude towards our opponents because it turns out that they aren’t true moral subjects of the proposed requirements. Strawson’s participant reactive attitudes wouldn’t be warranted since they suggest serious culpability for violating something everyone is in on and knows better than to do. Sure, one side calls the other “murderers” and the other calls the one “women-haters” but we don’t generally act like we regard each other as such.

Kyle Swan
Department of Philosophy
Sacramento State

Sunday, February 26, 2017

The trouble with moral thought experiments

In last week's post Garret Merriam argued that the famous brain-in-a-vat thought experiment is incoherent.  In this post I argue that many popular moral thought experiments are flawed as well. I won't argue that they are incoherent; rather, I claim that they tend to presume and promote a flawed understanding of human decision-making.

So first a few words about that:

Human beings are social animals. We have learned to cooperate with one another in order to acquire goods that we cannot easily secure in isolation. In every human society adults are expected to do two things: (1) manage their personal affairs, and (2) respect the rules that make the benefits of cooperation possible.

On any given day we make thousands of almost entirely self-interested decisions.  Most are trivial, such as which word I should use to finish this taco. Some are more significant, such as whether to head for the beach or the mountains on Sunday. In each case I am just doing my best to figure out which of two options will deliver the greatest personal utility. (I am not saying that these actions have no moral significance, only that we do not typically make moral considerations when deciding whether to perform them.) We also, though less commonly, make decisions that are almost entirely moral in nature.  For example, I may be completely committed to helping you move to a new apartment, deliberating only on how I can be of greatest assistance.

But the more interesting decisions occur when both types of considerations are salient. The magic of well-organized societies is that they tend to support the same conclusion. When my alarm rings in the morning I haul my butt out of bed and drive to work. This is because it would be bad for me not to and wrong of me as well. Sometimes, however, these considerations support different decisions.  It might be morally better to help you on move day; still, it is shaping up to be beautiful outside and I would much rather go for a hike. In situations like this I have to decide whether to do what is right, or to do what I like.

When doing moral philosophy we sometimes wrongly suppose that whenever considerations of morality and self-interest come into conflict, we ought to do the morally right thing.  But this is incorrect. Of course it is tautological that morally we ought to, but we are not always expected to sacrifice our own interests for the benefit of others. Rather, when decisions like this arise, we weigh what we ought to do morally against what we ought to do prudentially, and make the best decision we can. This is easier said than done, especially since these two types of value are not obviously fungible. But it is our task, nonetheless.

Now for the problem with moral thought experiments:

Most moral thought experiments are intended to bring out a conflict between different ways of thinking about morality, typically between a utilitarian and a deontological approach. In the  trolley problem, e.g., it is first established that most people judge that one ought to pull a switch that would divert a runaway trolley so that it kills the fewest possible people.  Later we see that most of us also judge that one ought not to push a fat man off a bridge to precisely the same effect. Some philosophers argue that this shows that we are prone to making inconsistent moral judgments. Others claim that we must be detecting morally relevant differences between the two cases.

I don't think either of these conclusions is warranted. This experiment, and others like it, are flawed.

The flaw is that the hypothetical situations described in thought experiments like these are presented as if they constitute purely moral decisions. As noted above, such decisions do occur in everyday life, but scenarios like the trolley problem don't approximate them. Rather, they present a decision in which considerations of self-interest and morality are both salient.

This is easily seen in the trolley problem. In each case there is a non-trivial question concerning what is best for society as well as what is best for me. In the switch-pulling version, considerations of morality and self-interest more or less coincide. I calculate that pulling the switch is the best outcome for society and also the result I can live with personally. In the fat man version, these considerations collide. Sure, pushing the man off the bridge will save lives. But in the future I suspect I will suffer nightmares too intense to bear.

Some may respond impatiently: This is just the familiar sophomoric complaint that the thought experiment is unrealistic. All thought experiments are unrealistic, that is why they are thought experiments rather than real ones. Philosophers know that considerations of self interest play a role in real life, but we ask that you do your best to bracket these considerations in an effort to develop a clearer understanding of morality.

That is not good enough.

Just how are we supposed to bracket considerations of self-interest in this case? Are we asked to disregard our moral emotions altogether?  It is these, after all, that predict a future I wish to avoid. But to do that is to squelch one of our main sources of moral evidence as well. Alternatively, should we allow ourselves to pay attention to the moral emotions, but only for the purpose of moral judgment, taking care (a) not to let considerations of self-interest infect these judgments, and (b) not to confuse the best decision with the morally correct one?

Wow. I have never heard the trolley problem presented like that. It is not at all clear that we have this ability. But if it could somehow be trained up, I'm betting we would end up with a very different data set.

G. Randolph Mayes
Sacramento State
Department of Philosophy

Sunday, February 19, 2017

How to build a brain in a vat

Philosophers love thought experiments. They're fun, memorable, engaging tools for getting us to think about perplexing intellectual or moral problems. When engineered well, thought experiments can shed light on obscure concepts, raise challenging questions for dominant modes of thought, or guide us to recognize a conflict between two deeply held intuitions. When engineered poorly, however, they can instill a false sense of understanding or create needless confusion in the guise of profundity. Sadly, many of the most famous thought experiments in philosophy are engineered poorly.

Consider one of the most famous modern examples of this problem, Gilbert Harman's "Brain-in-a-Vat" thought experiment.[1] A descendant of Rene Descartes' "evil demon" hypothesis[2], this thought experiment is designed to motivate general skepticism about sense perception and the external world. What if, we are asked to imagine, you're not really here right now, but instead you are just a disembodied brain, suspended in fluid, with a complex computer stimulating your brain in all the right places to artificially create the experiences you take yourself to be having. For example, the computer could send signals to your visual cortex making you think you’re looking at a blog post on The Dance of Reason, when in fact you’re looking at no such thing, because you have no eyes. Hypothetically, the thought experiment says, there would be no way to tell the difference between a reality where your brain is directly stimulated in this way, and one where you actually have a body that interacts with the world at large. Given this indistinguishability, how can we ever really rely on our senses? How can we ever have any kind of empirical knowledge at all?

Many late-night hours have been spent trying to answer this skeptical riddle. As an intellectual puzzle, an amusing game to get us thinking, or a way to kick-start a conversation in an intro to philosophy course, it works just fine. But as a tool for trying to understand how humans know the world, it is deeply misleading.

The problem, in short, is that neurologically speaking conscious experience simply does not work the way this thought experiment presumes it does. The brain is a necessary, but not sufficient condition for having experiences. This is not because, as Descartes argued, we have some nonphysical aspect to our mental lives, but rather because a disembodied brain is physiologically incapable of producing the panoply of experiences that we all have every day.

Consider, for example, emotions. While the processing of emotions takes place in the brain, the key ingredients that make up the neurocorrelates of emotions—hormones and neurotransmitters—are created by the endocrine system, the network of glands distributed throughout the body.[3] Without these glands you would never feel love, anger, sorrow, joy, lust, hunger or disgust. The absence of these feelings would be a dead giveaway that you were a disembodied brain in a vat.[4]

But it doesn’t stop there. In addition to an endocrine system, you would need circulatory and lymphatic systems to transport the hormones from the glands to the (very specific!) parts of the brain where they are needed in order to give rise to specific emotions. You would also need a digestive system to get the chemical precursors that fuel the endocrine system, while your integumentary system (skin, hair) is essential for flushing byproducts the other systems can’t use. Lastly, all those organs need to be supported by something, making a skeletal system indispensable as well.

In short, the only way to build a brain in a vat is to make the vat out of a human body.

I suspect two objections are occurring in your brain right now. First off, how do I know we need these systems to feel emotions? What if I only think that because the evil genius programming the computer controlling my brain has led me to believe this in the first place? Haven’t I failed to take the force of the skeptical argument seriously?

Okay, I reply, but how do we know we even need a brain in the first place? Why doesn’t the thought-experiment work if it’s just a vat and a computer? For that matter, how do we know there are such things as vats or computers or evil geniuses at all? In order to be expressible in language the thought experiment has to be grounded in something, some kind of experience that explains how our experiences might be systematically misled. If the skeptic can help themselves to a host of experience-based ideas to fund their thought experiment it seems disingenuous of them to object when I do the same to defund it.

The second objection charges me with taking the thought experiment too literally. The point of the thought experiment was to explore epistemology and the limits of our sense perception, not the neuroanatomical foundations of our emotions. We can acknowledge the facts about the physiological basis for hormones and still benefit from pondering fantastic hypotheticals such as these.

This objection precisely illustrates the problem with thought experiments I mentioned in the first paragraph. Epistemology is not bounded by the limits of our imaginations alone. Human beings come to know things by using our brains and bodies, and the empirical realities of those brains and bodies place constraints on what knowledge can be, how it can work, and how we can attain it. When we abstract away from real flesh-and-neuron human beings we are left with nothing human in our epistemology. Whatever is left over has little bearing on anything worth caring about.

Thought experiments that are accountable only to our imaginations are unlikely to provide us with insight into complex topics like the true nature of minds, morality or metaphysics. As Daniel Dennett says, “The utility of a thought experiment is inversely proportional to the size of its departures from reality.”[5] If we want to contemplate skepticism and the limits of sense perception, there are plenty of ways to engineer realistic thought experiments based on the real-world limitations of the human brain.

Garret Merriam
Department of Philosophy
University of Southern Indiana

[1] Harman, Gilbert (1973). Thought, p. 5. Princeton University Press.

[2] Descartes, René (1641), The Meditations Concerning First Philosophy (John Veitch, trans., The Online Library of Liberty, 1901), Meditation II, paragraph 2.

[3] Ironically, the endocrine system includes the pineal gland, which Rene Descartes speculated was the point of contact between our immaterial minds and our material brains. Rather than serving as a magic intermediary between two metaphysical planes, the pineal gland is part of what grounds the brain squarely within the body itself.

[4] It is only fair to mention that three parts of the endocrine system—the hypothalamus, the pituitary gland, and the pineal gland—are technically housed inside the brain. The supporter of the Brain-in-a-Vat argument could perhaps lay fair claim to these, as they would be included in the terms of the original thought experiment. Nonetheless, the other parts of the endocrine system (including the thyroid, the adrenal glands, the gonads, and other glands) are distributed throughout the body, placing them well out of play for the original thought experiment.

[5] Dennett, Daniel C. (2014), Intuition Pumps and Other Tools for Thinking, p. 183. W.W. Norton & Company, Inc.

Friday, February 10, 2017

The other “One Percent”

Let us pause and reflect on the following: those who hold PhD degrees are the Warren Buffetts of epistemic resources. They have been privileged with more educational experience and access to intellectual activities than 99% of living humans. Consider that simply having been awarded a bachelor’s degree puts one in the top 30% of educated persons in the United States, a Master’s degree will put one in the top 7%, and a PhD degree in the top 1%. Worldwide, the statistics are much more striking.[1] Although there is plenty of criticism to direct at higher education, it is hard to argue against the following: those who hold college degrees have had an experience of great epistemic value that others have not. Notwithstanding, it is rarely, if ever, suggested that PhD’s ought to share this intellectual wealth.[2] But why not?

Given the importance of epistemic resources to a life well-lived, it seems a bit odd that epistemic generosity is not morally expected, especially of those who are of noticeable intellectual wealth.[3] In various ways epistemic resources are as valuable as financial resources. So why wouldn’t epistemic 1%ers have as much of an obligation to share their epistemic wealth as the financial 1%ers have to share their monetary wealth? This post argues that epistemic 1%ers do have this moral responsibility and that those who fail to share their unique type of wealth are in fact failing to do what they ought. This moral “oversight” can be understood as a vicious character trait, i.e., many of the intellectually wealthy are epistemically greedy.

I will use the term “epistemic greed” as follows. Epistemic greed is greed for epistemic resources. “Epistemic resources” should be understood broadly; examples include physical goods, epistemic services, cognitive states, and intellectual abilities that are specially related to knowledge, understanding, rationality, etc. Those who are epistemically greedy keep, take, acquire, or stockpile epistemic goods which they might otherwise share with the epistemically less advantaged. Here is a first shot at defining epistemic greed:
Epistemic Greed (EG): To hoard, acquire, or use an excessive amount of epistemic resources with insufficient concern for those who are less epistemically advantaged.
While the above definition is on the right track, I think too much is left vague by the expression “excessive.” Let us try a definition with more specificity:
Epistemic Greed (EG): Sharing comparatively little of one’s total epistemic resources with those who are less epistemically privileged than oneself.
In line with Aristotle’s notion of generosity, this second definition places a higher moral obligation on those who are epistemically wealthy. Let us helpfully recall that Aristotle argued the following:
 “[I]n speaking of generosity we refer to what accords with one’s means. For what is generous does not depend on the quantity of what is given, but on the state [of character] of the giver, and the generous state gives in accord with one’s means. Hence one who gives less than another may still be more generous, if he has less to give” (2014, 51). 
This Aristotelian understanding seems to fit with our everyday, pre-theoretical understanding of the “non-epistemic” concept of greed. We expect, for example, those who are rich to give more than those who are not rich.[4] And just as monetary greed influences the egalitarian make-up of society (or lack thereof), so does intellectual greed have an effect on the societal distribution of epistemic goods. If this much is correct, then the paucity of discussion on epistemic greed is a noteworthy philosophical oversight.

For too long moral and political discussions have focused primarily on economic inequalities at the expense of ignoring other types of morally weighty inequalities. One reason for this oversight might be another oversight: we have overlooked that just as an improvement in one’s economic means makes it easier to acquire epistemic resources, the converse is true as well: bettering one’s epistemic position makes it easier to improve one’s economic position. Intelligence can help one get a job, get accepted into college, and in various other ways provide means to a more satisfying life. Educational accomplishments, especially degree completion, are closely tied to lifelong income prospects. In such respects financial and epistemic resources are importantly similar. Both are effective means to a variety of ends helpful in achieving life goals.[5] Not all goods are of this kind. While I may very much enjoy my leather couch, it cannot help me achieve my dream life of an enjoyable career and basic level of material comfort. Epistemic and financial goods, however, can indeed help me in this regard. Money and knowledge are general purpose tools for a variety of life goals.

Discussing these ideas with academic friends and colleagues, I have heard many object that those with lower educational levels or poor analytic skills have little desire for epistemic goods. “I see your point,” they would protest, “But no one wants what we (academics) have to share.” To me such assertions suggest a disconnect between epistemic elites and their less privileged counterparts. Academics seem prone to mistaken assumptions about those who are epistemically underprivileged. While it may be true that many “ordinary people” dislike college classes and love The Kardashians, I would surmise that even Kardashian fans have some areas of epistemic interest in which some academics could be of help. Yes, often these epistemic interests are pragmatic. Hence helping the disadvantaged might require the epistemic 1%ers to step out of their comfort zone. While many people (university professors, for instance) are capable of helping persons improve their resumes and learn basic computer skills, few are familiar with this type of tutoring. This is no excuse, however, because it is quite easy to become so familiar. Learning what the epistemically disadvantaged desire and how to help requires dedication and open-mindedness, but not much more. Hence the decision not to share is inexcusable. It is simply a socially accepted form of greediness. Society should accept this vice no longer.

Maura Priest
The Humanities Institute
University of Connecticut, Storrs


Aristotle. (2014). Nicomachean Ethics (C. D. C. Reeve, Trans.). Indianapolis: Hackett Publishing Company.

Bailey, M. J., & Dynarski, S. M. (2011). Gains and gaps: Changing inequality in US college entry and completion (No. w17633). National Bureau of Economic Research.

Belley, P., & Lochner, L. (2007). The changing role of family income and ability in determining educational achievement (No. w13527). National Bureau of Economic Research.

Data Sources: Key Takeaways from the 2014 Survey of Earned Doctorates | Council of Graduate Schools. (n.d.). Retrieved from

Mayer, S. E. (2002). The influence of parental income on children's outcomes. Wellington, New Zealand: Knowledge Management Group, Ministry of Social Development.


[1] Note that the statistics are often shown in terms of age group.

[2] I will use the terms “epistemic” and “intellectual” interchangeably. While there are contexts in which this use would be inappropriate, this paper is not one of those.

[3] Long ago, when Aristotle discussed the virtue opposite greed (generosity) within his specific virtue-theoretic framework, he had in mind a notion specifically associated with the giving of financial resources. Nonetheless, Aristotle’s opinion should not always be understood as the final word on virtue.

[4] One critical difference between the points I make in this post and many common discussions of distributive inequality is that I am not solely focused on governmental obligations and solutions. My focus, rather, is on the character of individual epistemic agents and how they ought to treat other epistemic agents. That said, this paper in no way rules out either the possibility that the government might be obligated to rectify epistemic inequalities or the possibility that it might simply be prudent to use the government for egalitarian ends.

[5] While there has long been a connection between wealth and education, recent empirical studies suggest that the last few decades have seen this correlation grow much stronger. For a few studies on this increasing divide, and for more general research on income and education, see Belley & Lochner (2007), Bailey & Dynarski (2011), and Mayer (2002).

Sunday, February 5, 2017

The Washington Paradox

The absurdly great musical Hamilton includes the following line from President Washington’s farewell address:
Though, in reviewing the incidents of my administration, I am unconscious of intentional error, I am nevertheless too sensible of my defects not to think it probable that I may have committed many errors.
This seems like an admirably humble thing to say, but one of the philosophically interesting things about it is that it also seems like a reasonable thing to say. That is, Washington does not seem to be describing an unreasonable or irrational attitude about his decisions as president. It is often the case that when examining our actions or beliefs, no one of them seems to be a mistake, and yet we know that we are fallible beings who have likely made at least some mistakes.

The trouble is that certain ways of expressing this general idea lead to puzzling conclusions. Suppose Washington had said something slightly different:
Having carefully reviewed each decision I made as President, I believe of each one that it was not a mistake. Nevertheless, I know that I am not perfect, and so I believe that I must have made some mistakes as President.
This also seems like a reasonable thing to say. Having evaluated all of the consequences, obligations, and whatever other relevant factors, Washington might reasonably believe, for example, that appointing Jefferson as Secretary of State was not a mistake. He might then do the same for each other decision that he made until, for each decision he made, he reasonably believed that it was not a mistake. To see the puzzle more clearly, let’s assign a name to each of Washington’s decisions. We’ll call the first decision ‘D1’, the second ‘D2’, and so on. So, we can represent Washington’s beliefs about his decisions like this:
D1 was not a mistake.
D2 was not a mistake.
D3 was not a mistake.
…
Dn was not a mistake.
Given that Washington’s careful examination of each decision has left him with good reasons to think that it was not a mistake, it seems reasonable for him to believe each proposition on the list. However, it also seems reasonable for Washington, aware of his own imperfections, to believe that some of D1-Dn were mistakes.

But these beliefs cannot all be true. If the beliefs on the list are all true, then none of D1-Dn were mistakes, and so the belief that some of them were mistakes is false. On the other hand, if some of D1-Dn really were mistakes, then some of the beliefs on the list must be false. More than that, with a little reflection, it should be obvious to Washington that these beliefs cannot all be true, and as a result it does not seem reasonable for Washington to believe all of them. So, now we have a puzzle, a version of the Preface Paradox. Each of Washington’s beliefs seems reasonable, and yet it seems unreasonable to hold all of them together.

And Washington is not alone here. You’re very likely in the same boat. Consider all of your beliefs about some topic—biology, for example. Supposing you’re a good epistemic agent, each of those is a belief in a proposition that you have carefully considered the evidence for and concluded is true. So, each of those beliefs is reasonable. However, you know that you are imperfect. Sometimes, even after careful consideration, you misread the evidence and accidentally believe something false. So, you have good reason to believe that at least one of your many beliefs about biology is false. And now you have obviously inconsistent beliefs, all of which seem reasonable. So, what should you do?

I think that you and Washington should keep all of your beliefs, even though you know that they are inconsistent. The trick is to explain why it is reasonable to maintain these particular inconsistent beliefs, even though it is generally unreasonable to have inconsistent beliefs. If I have just checked the color of a dozen swans, for example, and come to believe of each one that it is white, it would be unreasonable for me to believe that some of them were not white. So, what is it about Washington’s situation that makes it different from this swan case?

One interesting difference is that it is reasonable for me to think that if one of the swans had not been white, I would have some sign or evidence of that—if some of them were black, for example, I would have noticed. Washington, on the other hand, not only has good reason to think that he has made some mistakes, but also has good reason to think that he might not have noticed some mistakes in his evaluation of hundreds of complex decisions. But this fact does not seem to prevent him from believing that he would have noticed if, for example, Jefferson’s appointment had been a mistake. He might think, for example:
If appointing Jefferson had been a mistake, he would have been a poor Secretary of State, which is something I would notice. So, if it were a mistake, I would have noticed.
Given his careful inspection of all of his evidence about each decision, Washington could give a similar good reason for believing of each decision that he would have noticed if it were a mistake. In fact, the point of carefully inspecting the evidence about each decision seems to be that, in doing so, Washington would notice if it were a mistake.

So, even though, for any decision we pick, Washington has good reason to think he would have noticed if it were a mistake, he still has a good reason to think that he might not have noticed if some of his decisions were mistakes. Perhaps this is what makes it reasonable for him to believe that each particular decision was not a mistake while still believing that some of them were mistakes.

Brandon Carey
Department of Philosophy
Sacramento State

Sunday, January 29, 2017

Is Time Real?

There are four main reasons for saying time is not real: it is (a) subjective, (b) conventional, (c) inconsistent, and (d) emergent.

(a) Does time depend upon being represented by a mind? Without minds, nothing in the world would be surprising or beautiful or interesting. Can we add that nothing would be in time? Yes, said St. Augustine, who claimed time is nothing in reality but exists only in the mind’s apprehension of that reality.

(b) Philosophers generally agree that humans invented the concept of time, but some argue that time itself is invented as a useful convention, like when we decide that a coin-shaped metal object has monetary value. Money is culturally real but not objectively real because it would disappear if human culture were to disappear, even if the coin-shaped objects did not disappear.

Although it would be inconvenient to do so, our society could eliminate money and return to barter transactions. In the article “Who Needs Time Anyway?”, Craig Callender said:

Time is a way to describe the pace of motion or change, such as the speed of a light wave, how fast a heart beats, or how frequently a planet spins…but these processes could be related directly to one another without making reference to time. Earth: 108,000 beats per rotation. Light: 240,000 kilometers per beat. Thus, some physicists argue that time is a common currency, making the world easier to describe but having no independent existence.

(c) Bothered by the contradictions they claimed to find in our concept of time, Parmenides, Zeno, Plato, Spinoza, Hegel, and McTaggart said time is not real. McTaggart believed he had a convincing argument for why a single event is a future event, a present event and also a past event, and that since these are contrary properties, our concept of time is self-contradictory.

In the mid-twentieth century, Gödel argued for the unreality of time because the equations of general relativity allow for physically possible universes in which all events precede themselves. It shouldn't even be possible for time to be like this, Gödel believed, so whatever the theory of relativity is about, it is not about time.

(d) It also has been argued that time is not real because it is emergent. Leibniz argued it emerges from the order relations between pairs of events, and Minkowski argued it emerges from spacetime.

In 1994, Julian Barbour said, “I now believe that time does not exist at all, and that motion itself is pure illusion.” He argued that there does exist objectively an infinity of individual, instantaneous moments, but there is no objective happens-before ordering of them, no objective time order. There is just a vast, jumbled heap of moments. Each moment is an instantaneous configuration (relative to one observer's reference frame) of all the objects in space. If the universe is as he describes, then space (the relative spatial relationships within a configuration) is ontologically fundamental, but time is not, and neither is spacetime. In this way, time is removed from the foundations of physics and emerges as some measure of the differences among the existing spatial configurations.

The above arguments are not trivial, but I would like to respond to them.

(a) Regarding subjectivity, notice that our clock ticks in synchrony with other clocks even when no one is paying attention to the clocks. Second, notice the ability of the concept of time to help make such good sense of our evidence involving change, persistence, and succession of events. Consider succession. This is the order of events in time. If judgments of time order were subjective in the way judgments of being interesting vs. not-interesting are subjective, then it would be too miraculous that everyone can so easily agree on the temporal ordering of so many pairs of events.

(b) A good reason to believe time is not merely conventional is that our universe has so many periodic processes whose periods are constant multiples of each other over time. For example, the frequency of rotation of the Earth around its axis, relative to the "fixed" stars, is a constant multiple of the frequency of oscillation of a fixed-length pendulum, which in turn is a constant multiple of the frequency of a vibrating violin string. The existence of these sorts of relationships—which cannot be changed by convention—makes our system of physical laws much simpler than it otherwise would be, and it makes us more confident that there is something convention-free that we are referring to with the time-variable in those physical laws.

(c) Regarding the inconsistencies in our concept of time that Zeno, McTaggart, Gödel, and others claim to have revealed, I suggest we say either that there is no inconsistency or that the complaint should be handled by revising the relevant concepts. For example, Zeno's paradoxes were treated by requiring time to be a linear continuum, very much like a segment of the real number line. Yes, the mathematicians changed important characteristics of Zeno’s concept of time, but the change was very fruitful and not ad hoc, and so it cannot be accused of violating time’s very essence. Gödel's complaint can be treated by saying he should accept that time might possibly be circular; he needs to change his intuitions about what is essential to the concept.

(d) Suppose time does emerge from events, or spacetime, or even Barbour’s moments. Scientists once were very surprised to learn that water emerges from H2O molecules. But having learned that molecules are more fundamental than water, should we make the metaphysical leap to saying water is not real? Should we not say instead that now we more deeply understand what water is? If so, we can draw a similar conclusion for time.

So, let’s say that time is real, that it is objective rather than subjective, that it is not primarily conventional, that any inconsistency in its description is merely apparent or inessential, and that time is real regardless of whether it is emergent.

Brad Dowden
Department of Philosophy
Sacramento State