Monday, December 11, 2017

Evidence of sexual harassment and assault

The epistemology of testimony has received some attention lately, though not by that name. The epistemology of testimony is the part of philosophy in which we study when you should believe what people say (and why). Recent high-profile reports of sexual harassment and assault have resulted in a lot of discussion about whether and when we should believe these reports, which is really just a case of applied epistemology. So, if anyone ever tells you that philosophy doesn’t matter, here’s a good example of when it does.

I’m an evidentialist, which means that I think that what you should believe is determined entirely by your evidence. ‘Evidence’ is a tricky word, but as I use it, it refers to a specific kind of reason for believing something. We can have all kinds of reasons for believing things: I can believe that unicorns exist because it’s fun, believe what Maria says because she’s my friend, or believe that a job interview will go well because that will make me more confident. None of those reasons are evidence for those beliefs, though. Evidence consists of reasons that indicate the truth of something. Seeing a bunch of unicorns would give me evidence that unicorns exist, for example, because that experience would indicate that unicorns really do exist. Having experience with Maria being very reliable would give me evidence that what she says is true, but just the fact that she is my friend would not. And, sadly, if my last dozen job interviews have gone poorly, that gives me evidence that this next one will also go poorly. According to Evidentialism, you should believe something just in case you have these kinds of truth-indicating reasons for doing so.

There’s an argument for initially believing reports of sexual harassment and assault, though, that relies on a different kind of reason. It goes like this:

When we first hear a report of sexual harassment or assault, we must either believe it or disbelieve it. Disbelieving these reports has very bad consequences. Among other things, it sends the message that these reports are not taken seriously or treated as credible, which makes future victims less likely to report assault or harassment. So, we should initially believe reports of sexual harassment or assault.

The conclusion here does not require that we always believe reports of sexual assault or harassment, regardless of any other information we might discover. That would be unreasonable, as, on investigation, some (low) percentage of reports will turn out to be false. The conclusion is only that, when we first hear a report, prior to any further investigation, we should believe it.

If Evidentialism is true, though, this is a bad argument. The fact that believing or disbelieving something has very bad consequences is not evidence for or against that thing. The consequences of a belief do not indicate whether it is true. So, the premises here do not support the conclusion.

And even if Evidentialism is false, this is still a bad argument, because the first premise is false. On hearing a report, we aren’t forced to choose between believing it and disbelieving it. We also have the option of suspending judgment, of forming no belief either way, which does not obviously have the same bad consequences as disbelief. Disbelieving a report requires believing that the reporter is either lying or mistaken about their own experience, but suspending judgment does not. And suspending judgment while looking for further evidence seems to, at least in some important respects, take the report seriously.

As is often the case, though, this is a bad argument for a true conclusion. We should initially believe reports of sexual assault or harassment, but not because failing to do so would have bad consequences. We should believe them because a report of harassment or assault is evidence—it is a truth-indicating reason to believe that the harassment or assault occurred. It takes no special insight to know this, just the familiar principle that when someone says that something happened, that is evidence that it happened. We appeal to this principle all the time. It’s how I have evidence that Abraham Lincoln was shot, that Taylor Swift has received ten Grammys, and that my grandfather went to the gym last week. It’s also very often how we have our first evidence that any kind of crime has occurred.

Of course, this evidence can be defeated by further evidence. We might uncover a reason to doubt a particular reporter’s reliability or sincerity, or we might have a general reason to doubt the credibility of a certain kind of report. Absent this evidence, though, failing to initially believe reports of harassment or assault is failing to believe what is supported by our evidence. It is, also, believing that a report is not credible without any evidence for that belief—a further violation of Evidentialism and an (epistemic) injustice to the reporter. So, according to Evidentialism, we should believe reports of sexual harassment or assault, unless we have some other evidence to doubt them.

We might worry, though, that believing reports of sexual assault or harassment by default would also have bad consequences, raising another kind of challenge to Evidentialism. Those believed to have committed sexual assault or harassment face a range of possible consequences, including loss of employment and prison time, and when the reports are false, these consequences will be unjust. Shouldn’t we at most suspend judgment until a thorough investigation is completed, so as to avoid these unjust consequences?

But this worry confuses the evidence required for belief with the evidence required for action. It’s true that we shouldn’t terminate or imprison people without first conducting a thorough investigation, but it doesn’t follow that we shouldn’t believe the reports that lead to those investigations. If Evidentialism is true, then if you have good evidence to believe something, you should believe it. This is consistent with saying that you should seek more evidence before taking action.

Brandon Carey
Department of Philosophy
Sacramento State

Sunday, December 3, 2017

Upgrade to Dance of Reason Prime! Get a better version of this post.

Actually, my colleague Clifford Anderson addressed net neutrality on this blog here. The issue has been getting so much attention recently because of an upcoming Federal Communications Commission vote, which will likely reverse the Commission’s June 2015 reclassification of internet service providers as Title II (of the 1934 Communications Act) telecommunication common carriers.

In internet-speak, “edge providers” (like FAMGA -- Facebook, Apple, Microsoft, Google, Amazon) are contract carriers. They, like a typical market service, provide their content and apps subject to mutual agreement (either with us or, more usually, advertisers). From the beginning of the commercial internet until 2015, this was true of internet service providers, too. Their reclassification as common carriers, however, made ISPs subject to FCC regulation, like telecommunications companies and other public utilities are. The primary stated aim of this reclassification was to ensure access neutrality for online content. 

If the December 14th Commission vote goes the way most people expect it to go, ISPs will basically return to the status quo ex ante. Title II will no longer apply to them. Instead, ISPs will again be under the jurisdiction of the Federal Trade Commission, which will again be responsible for pursuing cases to protect consumer privacy and data security, including cases involving fraudulent, deceptive or otherwise unfair and anti-competitive business practices.

Notice, then, that this isn’t a move from public utility-style regulation to no regulation. Rather, the effect will be a shift in the regulatory modus operandi: from a set of prescriptive rules under the FCC, to a framework based on case-by-case enforcement by the FTC to ensure ISP transparency, including transparency about how ISPs handle customers under various service plans.

Which of these regulatory regimes best ensures that internet access is broadly available, and not subject to unreasonable restrictions or abuse of ISP market power in certain areas, is an empirical question. Unfortunately, it’s become a source of ideological hand-wringing, often giving rise to speculations about a dystopian internet future.

One consideration in examining the empirical question is the history of the development of the internet in the period from the 90s to the 2015 reclassification. The most significant tech policy for the early internet was the 1996 congressional update to Title 47 telecom law in Section 230, which carved out significant legal space for an environment of permissionless online innovation.

The early 90s internet was basically just a few big newspapers and a bit of porn (with pics like this taking upwards of 45 frustrating seconds to load). You might check your email or maybe buy a few books (just books!) on Amazon, too, but you’d shortly free up your telephone line and turn on whatever Must See TV (literally must, since streaming options didn’t exist) happened to be programmed viewing at that precise moment. But in this deregulated environment, in the span of only about 20 years, we have gained access to an amazingly creative, entertaining, dynamic and connected virtual world that now even extends into meatspace (with Uber, AirBnB, etc., and driverless cars just around the corner). All this happened, of course, without FCC-enforced net neutrality regulations.

Actually, the early 90s internet is a strikingly accurate picture of the worst dystopian fears of net neutrality advocates who want public utility regulation for ISPs. Then, we purchased access to the internet by purchasing access to content centers, like CompuServe or AOL. These ISPs only provided access to their associated content, forum sites and users. The rest of the WWW was blocked. Over time, though, and pretty quickly, ISPs came round to the current model, where they provide genuine access, and FAMGA, et al. provide content, apps, and other services.

Again, this happened without public utilities-style FCC prescriptive regulations. Are there reasons to think the pre-2015 environment was a bad basis for internet innovation and access to continue apace?

One worry I have already alluded to concerns limited ISP competition. Most customers have at least two wireline competitors, but some still only have one. ISPs will, if they can, abuse situations where they have market power. They have it, of course, largely because we’re still living with the structure left over from when local telephone and cable networks were public monopolies.

But this is why we have an FTC -- to address complaints about anti-competitive practices (and, by the way, the FTC doesn’t have legal enforcement authority over Title II public utilities). I think more should be done to promote ISP competition, but in addition to wireline services, there is also competitive pressure from cellular and satellite providers. As long as barriers to market entry are sufficiently low, I would expect this pressure to provide consumers with the internet they want.

Even if these worries are more serious than I’ve credited them, are there reasons to think FCC regulations would address them in ways better than FTC oversight? Would the FCC be a better guarantor of openness and access neutrality? I’m doubtful. After all, one of the FCC’s primary functions is to be a media regulator concerned with content and I’d rather not have the FCC anywhere near internet content. More, it turns out that the open internet rules the FCC devised based on its Title II authority expressly permit ISPs to block, filter and curate content. Finally, if the regulatory structure administered by the FCC is more costly for ISPs than what it takes to satisfy the FTC, then it’s possible companies will have less revenue to devote to infrastructure investments in areas currently underserved.

Look, I’m just a philosopher, not a tech analyst or economist. Some of this might be off (but I’m happy to have a go defending it). I more hope to have convinced you that this is one of those policy debates that’s not about ends, but means. Whatever side of this you’re on, it’s quite probable that the people you’re demonizing want the same things you want.

Kyle Swan
Department of Philosophy
Sacramento State

Sunday, November 26, 2017

Are you an Oughtist or a Noughtist?

People who have beliefs about the way the universe fundamentally is can be divided into two distinct groups.

The first, and by far the largest, consists of folks who believe that the universe is organized normatively. Roughly speaking, they believe that the most comprehensive true account of why things happen the way they do will make essential reference to the way things ought to be. Call them Oughtists. The second group is composed of those who deny this. They believe that the most comprehensive true account of why things happen the way they do will tell us the way things fundamentally are, not the way they ought to be. Call these folks Noughtists.

There are different sorts of normativity, the moral sort being the most familiar. Moral Oughtists believe that the universe is organized according to principles of right and wrong. Almost all religious people are moral Oughtists, as are many others who decline to describe themselves as religious but do believe in a moral order: fate, destiny, karma, etc. Traditional religious Oughtism rests on the belief that the universe was created by a supremely good deity. But you'd be no less an Oughtist for believing that it was created by a supremely evil one.

Occidentally speaking, Oughtism can be traced to Plato. Plato developed an account of the universe according to which everything aspires to the form of the Good. Noughtism is most commonly traced to Plato’s most famous student. Aristotle argued that, as Plato’s forms do not belong to this world, they can have no explanatory significance for this world.

But Aristotle was only slightly noughty. He subscribed, e.g., to fundamentally normative principles of motion. In particular he believed that the heavens are a place of perfection and that celestial bodies move uniformly in perfect circles for eternity. They don’t just happen to do this; they do it because this is the most perfect way. Aristotle’s Oughtism persisted for 2000 years, during which time human understanding of the universe increased very little.

It would be handy to say that the death of Oughtism coincided with the birth of science. But Oughtism is not dead, so this is clearly not true. What’s truer is that the birth of science resulted from an increasing inclination on the part of a very small number of very odd ducks to inquire into the world without judging it.

People like Galileo, Kepler and Newton remained Oughtists in the sense that they sincerely believed the universe to be of divine origin. But they took an unprecedentedly noughty turn in ceasing to believe that we could come to know how the universe works by thinking about how a divine being might go about building one. This peculiar mixture of hubris and humility lit the fuse that produced the epistemic explosion that, in a few short centuries, created the modern world.

The story of the growth of scientific understanding is the story of the full retreat of Oughtism. It slinked over the scientific horizon with the general acceptance of Darwin’s theory of evolution. Darwin expressed the vaguely oughty opinion that “there is grandeur in this view of life.” But ordinary folks see it for what it is: a ghastly story of nature “red in tooth and claw,” devoid of any overarching purpose or meaning. Indeed, it is so offensive to our moral intuitions that most moral Oughtists continue to reject it as an account of the true origin of people.

Modern scientists still sometimes speak of their theories in normative terms, especially aesthetic ones. Einstein, e.g., was not religious, but he insisted that “God does not shoot dice,” an oughty expression of his conviction that randomness is too ugly to be an essential feature of the way the world works. But Einstein didn’t arrive at the general theory of relativity by contemplating the nature of Beauty; nor did a single one of the experiments by which it was subsequently confirmed attempt to ascertain whether it is beautiful enough to be true.

So, epistemically speaking, we live in a pretty weird world. We owe it to the expulsion of Oughtism from the playground of science. If this had not occurred, we would all still believe oughty theories of reproduction, disease, poverty, war, social hierarchy, famine and natural disasters. We would still believe in witches and the efficacy of curses. We would know absolutely nothing of galaxies, germs, cells, molecules, atoms, electrons, radiation, radioactivity, mutation, meiosis, or genes. Quotidian items like light bulbs, cameras, watches, automobiles, airplanes, phones, radios, computers, vaccines and antibiotics would not even exist in our imaginations. Yet knowing all of this causes very few to reject Oughtism as a general worldview.

Why is an interesting question, and not one I mean to discuss.

I conclude with the following observation: Most philosophers, even those who believe themselves to be very noughty indeed, are Oughtists at heart. This is because almost all of us, even the most “analytic,” assume that our normative intuitions are a reliable guide to the nature of reality.

There are several reasons for this, but I think the most important one is that philosophers are naturally drawn to features of the world that are normatively non-neutral. This is obvious in the case of intrinsically normative concepts like justice, virtue, responsibility and reason. But it is also true of most other traditional philosophical topics: free will, personal identity, mind, meaning, causation, consciousness, knowledge, thought, intelligence, wisdom, love, life, liberty, autonomy, happiness. All of these carry a positive valence (and their opposites a negative one) that we presume to be essential to them. Hence, we confidently evaluate any proposed theory according to whether it causes us to experience the correct level of (dis)approbation.

This is why, for example, most of us instinctively recoil from theories that propose to reduce phenomena associated with life, mind and spirit to the “merely” physical. They do not have normative implications and therefore do not satisfy the Oughtist need to understand these phenomena as exalted states of being.

G. Randolph Mayes
Department of Philosophy
Sacramento State

Monday, November 13, 2017

It’s Time To Pull The Switch On The Trolley Problem

Back in August Germany became the first nation to institute federal guidelines for self-driving cars. These guidelines include criteria for what to do in the case of an impending accident when split-second decisions have to be made. Built into these criteria are a set of robust moral values, including mandating that self-driving cars will prioritize human lives over the lives of animals or property, and that the cars are not allowed to discriminate between humans on the basis of age, gender, race or disability.

Philosophers have an obvious interest in these sorts of laws and the moral values implicit in them. Yet in spite of the wide range of potentially interesting problems such technology and legislation pose, one perennial topic seems to dominate the discussion both amongst philosophers and the popular press: The Trolley problem. So electrifying has this particular problem become that I suspect I don’t need to rehash the details, but just in case, here is the basic scenario: a run-away trolley is hurtling down the tracks towards five people. You can save the five by throwing a switch diverting the trolley to a side track, where there is only one person. Should you throw the switch, saving the five and sacrificing the one, or should you do nothing, letting the one live and letting the five die?

Initially developed 50 years ago by Philippa Foot and modulated dozens of times in the intervening decades, the Trolley problem was long a staple of intro to ethics courses, good for kick-starting some reflection and conversation on the value of life and the nature of doing vs. allowing. Hence, you could practically feel philosophy departments all over the world jump for joy when they realized this abstract thought experiment had finally manifested itself in concrete, practical terms with the advent of self-driving cars.

This excitement fused with some genuinely fascinating work in the neuroscience of moral decision making. The work of scholars like Joshua Greene has provided genuine insight into what occurs in our brains when we have to make decisions in trolley-like situations. Out of the marriage of these two developments—along with some midwifery from psychology and economics—the field of ‘trolleyology’ was born. And it is my sincere hope that we can kill this nascent field in its crib.

Why should I, as an ethicist, have such a morbid wish for something that is clearly a boon to my discipline? Because, despite its superficial appeal, there is really not very much to it as a practical problem. It is marginally useful for eliciting conflicting (perhaps even logically incompatible) intuitions about how the value of life relates to human action, which is what makes it a useful tool for the aforementioned intro to ethics courses. But the Trolley problem does precious little to illuminate actual moral decision making, regardless of whether you’re in a lecture hall or an fMRI.

To see this, take a brief moment to reflect on your own life. How many times have you ever had to decide between the lives of a few against the lives of the many? For that matter, take ‘life’ out of the equation: how many times have you had to make a singular, binary decision between the significant interests of one person against the similar interests of multiple people? Actual human beings face real-world moral decisions every day, from the food choices we make and the products we purchase, to the ways we raise our children and how we respond to the needs of strangers. Almost none of these decisions share the forced binary, clear 1-vs.-5 structure of a trolley problem.[1]

What then of the self-driving car example I opened with? Does this not demonstrate the pragmatic value of fretting over the Trolley problem? Won’t the ‘right’ answer to the Trolley problem be crucial for the moral operation of billions of self-driving cars in the future? In short, no. Despite all the press it has gotten, there is no good reason to think the development of self-driving cars requires us to solve the Trolley problem any more than the development of actual trolleys required it almost 200 years ago. Again, check your own experience: how often behind the wheel of a car did you—or, for that matter, anyone you know, have met, or even read about—ever have to decide between veering left and killing one or veering right and killing five? If humans don’t encounter this problem when driving, why presume that machines will?

In fact, there’s very good reason to think self-driving cars will be far less likely to encounter this problem than humans have been. Self-driving cars have sensors that are vastly superior to human eyes—they encompass a 360-degree view of the car, never blink, tire or get distracted, and can penetrate some obstacles that are opaque to the human eye. Self-driving cars can also be networked with each other, meaning that what one car sees can be relayed to other cars in the area, vastly improving situational awareness. In the rare instances where a blind spot occurs, the self-driving car will be far more cognizant of the limitation and can take precautionary measures much more reliably than a human driver. Moreover, since accidents will be much rarer when humans are no longer behind the wheel, much of the safety apparatus that currently exists in cars can be retooled with a mind to avoiding situations where this kind of fatal trade-off occurs.[2]

Both human beings and autonomous machines face an array of serious, perplexing and difficult moral problems.[3] Few of them have the click-bait-friendly sex appeal of the trolley problem. It should be the responsibility of philosophers, psychologists, neuroscientists, A.I. researchers, and journalists to engage the public on how we ought to address those problems. But it is very hard to do that when trolleyology is steering their attention in the wrong direction.

Garret Merriam
Department of Philosophy
Sacramento State

[1] There are noteworthy exceptions, of course. During World War II, Winston Churchill learned of an impending attack on the town of Coventry and decided not to warn the populace, for fear of tipping off the Germans that their Enigma code had been cracked by the British. 176 people died in the bombing, but the tactical value of preserving access to German communications undoubtedly saved many more by helping the Allies to win the war. If you’re like most people, you can be thankful that you will never have to make a decision like this one.

[2] For example, much of the weight of the car comes from the steel body necessary to keep the passengers safe in the event of a pileup or a roll-over. As the likelihood of those kinds of accidents becomes statistically insignificant, this weight can be largely removed, lowering the inertia of the car and making it easier to stop quickly (and more fuel efficient, to boot), thus avoiding the necessity of trolley-type decisions.

[3] Take, for example, the ethics of autonomous drone warfare. Removing human command and control of drones and replacing it with machine intelligence might vastly reduce collateral damage as well as PTSD in drone pilots. At the same time, however, it lowers valuable inhibitions against the use of lethal force even further, and potentially creates a weapon that oppressive regimes—human controlled or otherwise—might use indiscriminately against civilian populations. Yet a Google search for “autonomous military drones” yields a mere 6,410 hits, while “autonomous car” + “trolley problem” yields 53,500.

Monday, November 6, 2017

What famous philosophical argument gets too much love?

This week we asked philosophy faculty the following question:
What famous philosophical argument (observation, distinction, view etc.) is given entirely too much attention or credit? Why?
Here's what they said:

Matt McCormick: Searle's Chinese room

Does a computer program that correctly answers thoughtful questions about a story actually understand it?

In Searle’s thought experiment, a human, playing the part of a CPU, follows instructions (the equivalent of computer code) for answering questions about a story written in Mandarin. The human doesn’t know Mandarin but, through the instructions, can, by hypothesis, answer the questions as if she understood the story.

Searle maintains that when we imagine ourselves in this position it is intuitively obvious that we don't understand the story in Mandarin. He concludes that this shows that machines accurately modeled by this process (i.e., Turing Machines) don't think or understand.

The thought experiment capitalizes on gross oversimplifications, misdirection, and a subtle equivocation. Several implicit assumptions are false once we draw them out:
  • My armchair imaginings about this caricatured scenario accurately capture what a sophisticated artificial neural net computer is doing.
  • My intuitions about what I would and wouldn't understand in this imaginary scenario are reliable indicators of the truth in reality.
  • People are reliable judges of when they do and don't understand.
  • If I were playing the role of a dumber part of a larger, smarter system, I would be apprised of whether or not the system itself understands.
Once we unpack what would comprise such a system, particularly with modern artificial neural networks trained with machine learning, then we realize how cartoonish Searle’s story is, and the intuition that these machines cannot understand evaporates.

Randy Mayes: The Euthyphro dilemma

The original form of this dilemma concerns piety, but in today’s ethics classes the word “good” is usually substituted for “pious,” and it is reformulated for monotheistic sensibilities: Is something good because God commands it, or does God command it because it is good?

If we choose the first horn, we must allow that it would be good to eat our children, assuming God willed us to do so. Choose the second and we admit that goodness is a standard to which God himself defers.

Almost always the lesson drawn is that morality is (a) objective and (b) something whose nature we may discover through rational inquiry, regardless of our religious beliefs. Which is just what traditional moral philosophy assumes and does. Hurrah!

It’s a lovely piece of sophistry.

Socrates has created a false dilemma that also begs the question against his opponent. Euthyphro has complied with Socrates’ request for a definition. A definition of P is a set of necessary and sufficient conditions, Q, for P. If the definition is correct, it is neither the case that P because Q nor that Q because P. Socrates’ question only makes sense if P and Q are simply presumed to be different.
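The point can be put in standard notation (a sketch; the formalization is mine, not the dialogue’s). If Q is offered as a definition of P, definiendum and definiens are necessarily equivalent, so an asymmetric explanatory claim between them is ill-formed:

```latex
% If Q defines P, the two are necessarily coextensive:
P =_{df} Q \;\Longrightarrow\; \Box\,\forall x\,(Px \leftrightarrow Qx)
% Asking "Px because Qx, or Qx because Px?" treats P and Q as distinct
% properties, one grounding the other -- precisely what the definition denies.
```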

The truth: it is fine to define the good as what a morally perfect being commands (or wills). However, the definition provides no insight into the content of such commands. It provides no reason to believe that such a being exists, or that we could recognize it or know its will if it did.

Tom Pyne: Determinism

Determinism is the source of much mischief in philosophy.

Thus Determinism:

For every event there is a cause such that, given that cause, no other event could have occurred.

The mischief stems from its early modern formulation. Peter van Inwagen’s is representative:
  • P0 = a proposition giving a complete state description of the universe at any time in the past.
  • L = all the laws of nature.
  • p = a proposition stating some event that occurs (Electron e’s passing through the left slit; Pyne’s walking home by way of D Street on November 6, 2017)
  • N = the operator ‘it is a natural necessity that’
Determinism is:
If P0 and L, then Np
It is impossible for e not to pass through the left slit.

It is impossible for Pyne to go home by F Street instead.
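In symbols, using the definitions above (my rendering, not van Inwagen’s exact notation):

```latex
% The early modern formulation of Determinism:
(P_0 \wedge L) \rightarrow N\,p
% Read: given the complete past state description P_0 together with the
% laws of nature L, each actual event p is naturally necessary.
```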

Now Determinism is true.

It’s this formulation that’s wrong.

Notice that it appeals to laws of nature, but nowhere to causes.

But are there laws of nature? Not literally. Scientific ‘laws’ are (heuristically valuable) idealizations of the causal powers of objects.

This consideration enables us to avoid the natural necessity of p. Which is just as well, since we are committed to denying it in the electron/slit case by quantum mechanics and in my case by everyday experience.

I have the causal power sufficient to go by D Street and the causal power sufficient to go by F Street. Determinism, properly understood, won’t rule this out. Whichever way I go, it’s not a miracle.

Garret Merriam: The emotion/reason distinction

The distinction between cold, calculating reason and hot-blooded emotion runs deep in Western thought. It has fueled heated debates in moral psychology and philosophy of mind. It strikes us as obvious that the faculty we engage when doing math is fundamentally different from the one we engage when reading love poetry. So obvious, we assume there’s no good reason to doubt the distinction.

There’s good reason to doubt the distinction.

For starters, the distinction is more prominent in Western thought than in Eastern. In classical Chinese philosophy the word xin refers not only to the physical heart and the seat of the emotions but also to the locus of perception, understanding and reason. The closest approximate translation in English is ‘heart-mind.’ When conceptual categories blur across geographical boundaries, that suggests the distinction might be a cultural artifact rather than a fundamental one.

Functional neuroanatomy also casts doubt. While it’s common to refer to (so-called) emotional vs. rational ‘centers’ of the brain, closer examination shows our brains are not so neatly parsed. For example, the amygdala (traditionally an emotional center) is active in certain ‘cognitive’ tasks, such as long-term memory consolidation, while the prefrontal cortex (traditionally the rational center) is active in more ‘emotional’ tasks, such as processing fear.

The line between thinking and feeling doesn’t cut cleanly across cultures or brains. Perhaps this is because, rather than two fundamentally different faculties, there is instead a vague set of overlapping clusters of faculties that, upon reflection, resist a simple dichotomous classification.

Kyle Swan: Property owning democracy

John Rawls argued against wealth inequalities on the grounds that they lead to political inequalities: the wealthy will use their excess wealth to influence political processes and game the system in their favor. Economists call this regulatory capture. To eliminate these political inequalities, the thought goes, eliminate economic inequalities.

But when we task the state to eliminate economic inequalities, we give it a lot of discretionary power to regulate our economic lives. This makes influence over political processes worth more to those who would game the system in their favor, giving them more incentive to capture it. The policies could backfire.

Rawlsians tend to invoke ideal theory here. They’re describing a regime where efforts to realize economic and political equality are implemented by cooperative actors who are in favorable conditions for compliance, so they can “abstract from...the political, economic, and social elements that determine effectiveness.” Policies don’t backfire in magical ideal-theory world.

Rawls can use idealizing assumptions if he wants, but he shouldn’t be so selective about it. For why do we need the state interventions associated with “liberal socialism” or a “property-owning democracy” in the first place? Well, remember, because the rich in “laissez-faire capitalism” and “welfare-state capitalism” use their wealth to game the system.

But this means that the idealizing assumptions have dropped out of his consideration of the disfavored regime-types. Otherwise, the wealthy there would be riding their unicorns to visit all the affordable housing they’ve built (or whatever), not trying to illicitly game the system in their favor.

Russell DiSilvestro: Intuition and inevitability 

“It seems to me,”
The man said slowly,
“Your intuition’s no good.”

He quickly added,
“Nor mine, nor anyone’s,”
As if that helped things.

For if no one’s intuitions are any good
Why should I
Or anyone
Care
How stuff seems
To you?

Perhaps his point was just that
And not that
P because it seems to me that P.

But then why say it?

Perhaps he was just
Being conventional
And pragmatic
And friendly.

But then why believe him?

After all
Nothing is more unbelievable than
At least the way he said it.

At least
That’s how
It seems
To me.

David Corner: Reason is slave of the passions 

In the Treatise, Book II, Part III, Sec III, Hume argues that
reason alone can never be a motive to any action of the will; and secondly, that it can never oppose passion in the direction of the will.
I will focus on the second claim.

As one of my seminar students observed this semester, Hume qualifies this claim by providing an exception: Sometimes our passions are founded on false suppositions. An example: I suppose this glass to contain beer, and so I desire to drink it. When I judge that the glass actually contains turpentine, this desire vanishes.

My desire to drink the contents of the glass is what T. M. Scanlon refers to as a “judgment-sensitive attitude.” Its judgment-sensitivity is like that of a belief; I revise my belief that the glass is filled with beer when I am given reasons for thinking that it is filled with turpentine. Indeed, my desire to drink the contents of the glass seems entirely dependent on a factual judgment about its contents. Nearly all of what Hume calls “passions” are actually judgment-sensitive attitudes. The exceptions Hume cites would appear to be the rule.

Hume fails to see that the suppositions that provide the basis for most of our passions are really judgments, and that these judgments motivate us by providing reasons for acting, i.e., my motivation for drinking this liquid depends on reasons for thinking it is beer. The distinction between reason and passion may be more tenuous than Hume realizes.

Monday, October 30, 2017

Don't worry, I won't eat you! An intentionally provocative defense of conscientious omnivorism

A couple weeks ago, Professor Saray Ayala-López wrote a post entitled The ethics of talking about the ethics of eating, to which I offered a somewhat tangential comment about the ethics of eating meat. So as not to take away from the main point there, I decided to develop this issue separately here. I’ll include some of our dialogue there to begin the discussion.

Here are a few different views on eating meat:
  • Veganism: no use of animal products
  • Vegetarianism: no eating of meat, but use of some animal products
  • Conscientious omnivorism: conscientious and selective eating of meat and use of animal products
Here's my initial defense of conscientious omnivorism. 

“…I apply a standard of justice that relies on a baseline of nonhuman animals in their natural habitats or species-appropriate environments. A violation of justice occurs when we intentionally do something that places nonhuman animals below the baseline. I also do not assume that death is itself a bad thing; there can be good and bad deaths….[M]y standard of justice is violated:

  (1) when we consume more meat than necessary or healthy; 

  (2) when we engage in practices that involve additional pain and suffering beyond what an animal would experience in its natural habitat, or 

  (3) when we contribute to conditions that:
  • (a) create dependency (e.g., captivity) [and] invoke additional duties (of care, including with respect to (2) above), and
  • (b) we violate these additional duties.
My view is motivated by the practices of some indigenous peoples, who also ate meat (and engaged in other practices involving nonhuman animals) in a way that avoided (1), (2), and (3). If a Native American hunted and killed a buffalo to feed his family, was this morally wrong? If a grizzly bear seeking food attacked and killed me, would that be morally wrong? What makes these acts *morally* wrong depends on an intentional violation of some standard of evaluation.”

Saray noted:

“...Some people would respond to you that if you can afford avoiding inflicting the pain, objectification, and/or death involved in meat eating, then you have good reasons to stop eating meat….” 

Here, I want to address the objections to eating meat on the grounds that it causes pain, death, and objectification.

As a deontologist, I don’t think the consequences alone are morally relevant. Pain and death in and of themselves are neither good nor bad (e.g., pain of a medical intervention that is necessary for health or death of a soldier who sacrifices his life to save his troop). What makes the infliction of pain or death morally wrong, as mentioned in my initial defense, is when a person causes pain beyond what nonhuman animals would experience in their natural habitat. 

Objectification, unlike pain and death, is not morally neutral. A deontologist may believe some principle P that a person ought to treat another consistently with the other’s species-specific capabilities. Objectification may be defined as a violation of P or, specifically, a violation of P in which a person treats another as less than appropriate given the other’s species-specific capabilities. If I step on a cockroach, I am not objectifying the cockroach because I am not treating it as less than appropriate given the cockroach’s species-specific capabilities. If I use a chimpanzee as a test dummy for testing vehicles (which involves isolation, captivity, and other physical and psychological harms), then I am treating another as less than appropriate given the other’s species-specific capabilities.

One can argue that objectification does not occur when we kill animals for food. Consider a world where there are extreme and isolated conditions (e.g., base camp of Mount Everest) and a small population of advanced intelligent animals, A1, A2, A3, A4, A5, etc. After the A's deplete all their other natural resources, they turn to each other for food. They start with the sick, weak, and elderly among them and, because their rate of consumption exceeds their rate of reproduction, the population eventually dies. If A1 hunted and killed A100, who was elderly, we may say that A1 objectified the elderly. But what about when A1 hunted and killed her equal, A2? A1 did what was necessary for her survival. Setting aside other possible moral offenses, she did not treat A2 as less than appropriate given A2’s species-specific capabilities. Indeed, A1 may have had to devise inventive traps knowing that A2 was her equal in intelligence and the typical traps used on the sick and elderly were useless. She also may have had to ensure a quick kill, not wanting A2 to experience any unnecessary physical or psychological harm.

An objector might say, "Well, this may be fine for your imaginary world of scarcity, but that’s not our world. Today we have sufficient plant-based sources of protein as well as new and improved synthetic sources of meat."

Here are two responses:

First, when the synthetic sources of meat become as accessible as (and qualitatively similar to) real meat, I think there is good reason to transition to these synthetic sources (over a long period of time, given our evolutionary preference for meat).

Second, whether we are primitive and small in number or advanced and 7.6 billion in number, the killing of another for food does not necessarily involve objectification. It can be viewed as involving a kind of competition: a survival of the fittest. When one competitor defeats another, the intent is not to objectify (i.e., treat the other as less than appropriate given the other’s species-specific capabilities), but to win the contest for survival. In the same way that A1 does not objectify A2, humans do not necessarily objectify other intelligent animals. While spears and open plains have been replaced with large-scale farms and ranches, what is morally wrong is not that animals are killed for food, but that we have cut corners to save money rather than doing what is right and, as a result, have placed animals in conditions that are inadequate given their species-specific capabilities.

Chong Choe-Smith
Department of Philosophy
Sacramento State

Monday, October 23, 2017

Why we won't fix health care

The American health care system is insanely complicated. It is dysfunctional and corrupt in many ways. But there is one simple reason that it is so much more expensive than the systems of similarly well-off countries, and that is that we lack a mechanism for controlling spending.

By and large U.S. health insurance companies pay for the interventions that doctors prescribe, and U.S. doctors prescribe pretty much everything that can be justified. This is partly because most doctors work on a fee for service basis: the more diagnostic tests and interventions they order, the better their take-home pay. But it is also partly because this is what patients demand. When we are broken we want our physicians to pull out all the stops in an effort to make us well again.

This is easy to sympathize with. Health is a big deal. The problem is that today, compared to even 50 years ago, doctors can do a heck of a lot. They will be able to do even more tomorrow. That is the main reason why our health care premiums have been rising and why they will continue to rise without a dramatic change in the way that health care is administered.

In a sense it is odd to call this a problem. We do not complain much about the fact that the money we spend on home entertainment and dining out has gone through the roof during the last 50 years. There is nothing wrong with paying a larger portion of our budget for X if what we want (or need) is more X. That’s how things are supposed to work.

The real problem, then, is that, unlike Super Mario and Banh Mi sandwiches, most of the new stuff that medicine offers to suffering patients isn’t that great. (Of course, some, like artificial joints and cataract surgery, are miraculous.) When someone has a chronic or life-threatening condition that resists standard treatment options, ordering every possible test and trying every possible medication, procedure, or surgery tends to produce roughly the same result as doing nothing. (Sometimes, in fact, far worse.) This is simply because there is a world of difference between a possible outcome and a probable one.

At bottom, every country that has dealt effectively with this problem has found a way to tell very sick or broken people that certain medical interventions aren’t worth the money. Americans are not comfortable with this. Faced with the specter of socialized medicine, conservatives convulse at the prospect of death panels. Faced with market-based approaches that would encourage individuals to shop for the best value, liberals bellow about the moral necessity of equal access to the highest quality of care.

The political histrionics belie a fundamental agreement, viz., that we all want a health care system in which everyone, no matter how ill, how old, or how effective the available options, gets the full monty. Of course, we do not have anything like such a system, but the fact that we aspire to it is one of the main reasons it is killing us. And I don’t mean this figuratively. As health care consumes an ever-increasing percentage of personal, corporate and public budgets, the money available to do other things that save lives and promote well-being (education, infrastructure, public safety) dwindles proportionally. And the more we insist on an absolute right to treatments of little or no value, the less we are able to promote preventive practices of proven value.

What makes this problem particularly acute is that we, like every other industrialized country, have an aging population. People in developed countries are living longer and reproducing at ever decreasing rates. Hence, every year that goes by, the percentage of old people rises. Old people break constantly, and thus require medical attention and hospitalization far more often. This means that escalating health care costs are in large part due to our commitment to (a) keeping a growing percentage of old people alive as long as possible, committing us to (b) the use of expensive and ineffective means for doing so and, consequently, (c) spending a huge portion of the health care budget (e.g., about 25% of Medicare) on costs incurred during the year that people die.

What’s weird about this (and here I speculate irresponsibly) is that it's not obviously what most old people even want. Of course, most of us don’t want to die, but, given that we have no choice in the matter, I think we would prefer an end in which we accept death gracefully, feel sincere gratitude for the time we were given, and go gentle into that good night. (Bite me, Dylan Thomas.)

My feeling is that it is mostly the young who make this so very difficult. It is so hard to lose the people we love, and it hurts us to see once vital parents and grandparents just giving up the ghost. So, we insist that they fight and that the rest of the world fight for them, grasping at any straw the medical establishment has to offer. In this sense we are dealing with a problem of cooperation. It is easy for me to see how we are wasting money on useless interventions for old people. Just not the ones I care about.

I wish we had a system that would allow those close to death to transfer the money that would otherwise be spent attempting to prolong their own lives to the welfare of others who could really benefit from it: childcare for a struggling single parent, or a home in a safer neighborhood, or an educational fund. That way, people who are ready to pass on could make their deaths more meaningful and their acceptance of it an occasion for sincere admiration rather than culpable capitulation. It would allow those of us who suspect we could have lived better to do something truly loving and helpful during our final days.

It wouldn’t fix anything, I know.

G. Randolph Mayes
Department of Philosophy
Sacramento State

My thanks to Steven D. Freer, M.D., for many illuminating conversations on this topic.