Monday, October 17, 2016

God is good?

I’m puzzled about the attribution of goodness to God. There are many detailed issues in the background, but this rough sketch works to illustrate the point. (I am deliberately conflating acting and failing to act, and leaving some issues concerning duties to rescue in the background for clarity.)

In introductory moral theory discussions, we make four standard distinctions:
1. How should we understand the category of morally wrong actions? These are acts (and sometimes omissions or failures to act) such that, if you commit them, you deserve moral blame and even punishment. Agents have a moral obligation to refrain from doing these. And people, the would-be victims, have a right not to have these acts committed deliberately against them. Murder, rape, and child abuse, for example, fall into the morally wrong category.

2. What acts are morally permissible? These are acts that a moral agent may do or may refrain from doing without violating any duties. Committing them, or not, does not warrant any moral praise or blame. Having toast for breakfast is morally neutral this way, unless perhaps you killed someone for the toast.

3. Which acts are morally obligatory? These are acts that an agent has a moral obligation or duty to perform. If he fails to do them, then he deserves moral blame. Failing to feed your kids, or ignoring a drowning person when there is a life preserver on the dock that you could toss to him, are examples. People have a right to receive these things from you.

4. Which acts are morally supererogatory? These are acts that you do not have a moral obligation to perform. But if you do them, you deserve moral praise. People don't have a right to expect these of you. You violate no moral duty by doing them or refraining. But we hold them in high moral esteem. When someone runs into a burning building to save a child, they are going above and beyond the call of duty. We praise them as heroes, but if they had not done the act, we would not find moral fault.

God, it is alleged, is good. He is morally just, infinitely good, or morally perfect. How can we understand this description in the light of the distinctions above? We typically have the highest moral praise for those individuals who make the greatest personal sacrifices in order to perform morally supererogatory acts. Mother Teresa, Martin Luther King, Gandhi, and many others are praised widely for their morally supererogatory acts.

God is alleged to be all powerful and all knowing too. So there will be no opportunities for supererogatory action that are unknown to him, or that are beyond his power to perform. Does God perform all of the supererogatory acts that we might expect from an infinitely good, all powerful, and all knowing being? The short answer appears to be no. There are countless supererogatory acts that God could have performed but did not, acts that, had a human performed them, we would hold in the highest moral esteem.

Does God perform all of those acts which we ordinarily hold to be morally obligatory for moral agents? Again, the simple answer appears to be no. There have been countless opportunities to perform actions that we would consider morally obligatory for any moral agent, yet God did not perform them. Again, God would not be limited by his power or knowledge in these cases.

Has God committed morally wrong actions? If God is the almighty creator of the universe, then there are countless instances where there was an event that God was either directly or indirectly causally responsible for that we would ordinarily identify as morally wrong. Consider the class of actions or omissions that we would identify as morally wrong if a moral agent had been present and had committed them or allowed them to happen. A person drowns by herself near a dock on a lake where a life vest sits on the dock. If a person had been standing next to the life vest and saw her drowning in the lake, but refrained from tossing the life vest to her, we would think of that failure to act as morally abhorrent. There are countless other events like these where it does not appear that God did what we would ordinarily have identified as the morally obligatory act. Therefore, it would appear that God has committed (or by omission allowed to happen) countless morally wrong acts.

So it appears that God has failed to perform countless supererogatory acts that we would otherwise identify as morally praiseworthy. And God has apparently failed to do many of the actions that we would ordinarily consider to be morally obligatory and good. And God has apparently committed (or by omission allowed to happen) countless morally wrong actions or events.

The implication may be that we cannot accept the claim that God is good unless some suitable and sensible way to cash out what that means is forthcoming. We might ask, given how things appear, what is the difference between a world that has an infinitely good God in it and one without? That is, what sense can we make of the claim that God is good? In what regard is he deserving of the attribution? And a related question is, what sorts of behaviors would God have to engage in for us to reasonably attribute moral evilness to him (if it is not the behaviors we have seen)?

In our ordinary, daily affairs, we invoke a set of straightforward and clear criteria for which sorts of acts are wrong, which are obligatory, which are heroic, and which are merely permissible. But God, it would appear, is either not good, or has a goodness that doesn’t manifest in any of the familiar ways.

Matt McCormick
Department of Philosophy
Sacramento State

Tuesday, October 11, 2016

Extinction or unfair survival of a few?

There seems to be something especially bad about humankind going extinct. Human extinction appears significantly different from the extinction of any other species, so its badness is not only about the loss of an entire species. And it is qualitatively different from just having most people on Earth die, so its badness goes beyond the loss of a large number of human lives.

The rapid development of technologies that are as powerful as they are fragile (e.g. nuclear weapons, genetically modified organisms, superintelligent machines, powerful particle accelerators) has made some people (e.g. Nick Bostrom) worry (a lot) about human extinction. According to them, the biggest threat to human existence might not be a giant extraterrestrial object impacting the Earth, but our human-made technology going wrong, either through intentional misuse or through our losing control over it (e.g., a too-intelligent but amoral machine taking control of humans; a self-replicating nanobot that eats the biosphere). Human extinction tops the list of so-called existential risks, which are receiving increasing attention: centers and institutes have recently been founded to study existential risk and the threat that new technologies pose to humans. According to some extinction-worried philosophers, existential risk, and in particular human extinction, is the worst sort of risk we are exposed to, because it destroys the future. And we should worry about it. More importantly, according to them, preventing this risk should be a global priority.

I would like to share with you some thoughts about human extinction – thoughts that, I confess, are not motivated by worry but by philosophical curiosity. Let’s consider a comment by Derek Parfit (when reading it, you can fix his sexist language by substituting “humankind” for “mankind”):

“I believe that if we destroy mankind, as we now can, this outcome will be much worse than most people think. Compare three outcomes:
1. Peace. 
2. A nuclear war that kills 99 per cent of the world’s existing population. 
3. A nuclear war that kills 100 per cent.
2 would be worse than 1, and 3 would be worse than 2. Which is the greater of these two differences?” (1984, 453).

Parfit states that while for many people the greater difference lies between scenarios 1 and 2, he believes the difference between 2 and 3 to be “very much greater”. He argues that scenario 3 is much worse than scenario 2, not only because more people would die, but because it destroys the potential of the millions of human lives that could be lived in the future. Assuming we give value to human life, that means losing a lot of value. And even more so if we attribute value to what humans do (the art they create, the technology they design, the ideas they generate, the relationships they build). Scenario 3 destroys and prevents a lot of value. Extinction-worried philosophers conclude that preventing scenario 3 should be humanity’s priority.

Let’s now add a twist to Parfit’s scenarios. I take Parfit’s scenario 2 to assume that the 1% who survive are a random selection of the population: during the nuclear explosions some people might have happened to be underground exploring caves, or underwater, and as a lucky consequence survived. Let’s modify this element of randomness:

1. Peace 
2. Something (a nuclear war or any other thing) kills 99% of people, and the 1% that survives is not a random selection of the Earth’s population. The line between the ones who die and those who survive tracks social power: the survivors, thanks to their already privileged position in society, had privileged access to information about when and how the nuclear catastrophe was going to happen, and had the means to secure a protected space (e.g. an underground bunker, a safe shelter in space). 
3. Something kills 100% of humans on Earth.

These scenarios raise at least two big questions: Is 3 still much worse than 2? And should we prioritize preventing it?

Let’s focus on the second question. I hypothesize that (i) the probability of a scenario like 2 (i.e. a few people survive some massive catastrophic event) is at least as high as that of 3, and (ii) the probability of a non-random 2 is higher than that of a random 2. We can tentatively accept (i) given the lack of evidence to the contrary. In support of (ii) we just need to acknowledge the existence of pervasive social inequality. The evidence of the unequal distribution of the negative effects of climate change can give us an idea of how this would work.

If this is right, then human extinction is as likely as the survival of a selected group of humans along the lines of social power.

Extinction is bad. Now, how bad is a non-random 2? And how much of a priority should its prevention be? Unless we agree with some problematic version of consequentialism, non-random 2 is pretty bad: it involves achieving good ends via morally wrong means. Even if it were the case that killing everyone over fifty years old would guarantee the well-being of everyone else, most would agree that killing these people is morally wrong. “Pumping value” in the outcome is not enough. Similarly, even if non-random 2 produces the happy outcome of the survival of the human species, the means to get there are not right. We could even say that survival at such price would cancel out the value of the outcome.

My suggestion is to add a side note to extinction-worried philosophers’ claims that avoiding human extinction should be a global priority: if the survival of a select group of humans along unfair lines is as likely to happen as extinction, then avoiding the former should be as high a priority, and we should invest at least as many resources in remedying dangerous social inequalities as we do in preventing the disappearance of the human species. I personally worry more about the non-random survival than about extinction.

Saray Ayala-López
Department of Philosophy
Sacramento State