The rapid development of technologies that are as powerful as they are fragile (e.g. nuclear weapons, genetically modified organisms, superintelligent machines, powerful particle accelerators) has made some people (e.g. Nick Bostrom) worry (a lot) about human extinction. According to them, the biggest threat to human existence may not be a giant extraterrestrial object impacting the Earth, but the possibility of our human-made technology going wrong, whether through intentional misuse or through our losing control over it (e.g., a too-intelligent but amoral machine taking control of humans; a self-replicating nanobot that eats the biosphere). Human extinction tops the list of so-called existential risks, which are receiving increasing attention, and centers and institutes have recently been founded to study existential risk and the threats that new technologies pose to humanity. According to some extinction-worried philosophers, existential risk, and in particular human extinction, is the worst sort of risk we are exposed to, because it destroys the future. We should worry about it, and, more importantly, preventing it should be a global priority.
I would like to share with you some thoughts about human extinction – thoughts that, I confess, are motivated not by worry but by philosophical curiosity. Let’s consider a comment by Derek Parfit (when reading it, you can fix his sexist language by substituting “humankind” for “mankind”):
“I believe that if we destroy mankind, as we now can, this outcome will be much worse than most people think. Compare three outcomes:
1. Peace.
2. A nuclear war that kills 99 per cent of the world’s existing population.
3. A nuclear war that kills 100 per cent.
2 would be worse than 1, and 3 would be worse than 2. Which is the greater of these two differences?” (1984, 453).
Parfit states that while for many people the greater difference lies between scenarios 1 and 2, he believes the difference between 2 and 3 to be “very much greater”. He argues that scenario 3 is much worse than scenario 2 not only because more people would die, but because it destroys the potential of the millions of human lives that could exist in the future. Assuming we give value to human life, that means losing a lot of value – and even more so if we attribute value to what humans do (the art they create, the technology they design, the ideas they generate, the relationships they build). Scenario 3 destroys and prevents a lot of value. Extinction-worried philosophers conclude that preventing scenario 3 should be humanity’s priority.
Let’s now add a twist to Parfit’s scenarios. I take Parfit’s scenario 2 to assume that the 1% who survive are a random selection of the population: during the nuclear explosions, some people might have happened to be underground exploring a cave, or underwater, and survived as a lucky consequence. Let’s modify this element of randomness:
2. Something (a nuclear war or any other thing) kills 99% of people, and the 1% who survive are not a random selection of the Earth’s population. The line between those who die and those who survive tracks social power: the survivors, thanks to their already privileged position in society, had privileged access to information about when and how the catastrophe was going to happen, and had the means to secure a protected space (e.g. an underground bunker, a safe shelter in space).
3. Something kills 100% of humans on Earth.
These scenarios raise at least two big questions: is 3 still much worse than 2, and should we prioritize preventing it?
Let’s focus on the second question. I hypothesize that (i) the probability of a scenario like 2 (i.e. a few people survive some massive catastrophic event) is at least as high as that of 3, and (ii) a non-random 2 is more probable than a random 2. We can tentatively accept (i) given the lack of evidence to the contrary. In support of (ii), we just need to acknowledge the existence of pervasive social inequality: the evidence that the negative effects of climate change are distributed unequally can give us an idea of how this would work.
If this is right, then the survival of a selected group of humans along the lines of social power is at least as likely as human extinction.
Extinction is bad. Now, how bad is a non-random 2? And how much of a priority should its prevention be? Unless we accept some problematic version of consequentialism, non-random 2 is pretty bad: it involves achieving good ends via morally wrong means. Even if it were the case that killing everyone over fifty years old would guarantee the well-being of everyone else, most would agree that killing these people is morally wrong. “Pumping value” into the outcome is not enough. Similarly, even if non-random 2 produces the happy outcome of the survival of the human species, the means of getting there are not right. We could even say that survival at such a price would cancel out the value of the outcome.
My suggestion is to add a side note to extinction-worried philosophers’ claim that avoiding human extinction should be a global priority: if the survival of a selected group of humans along unfair lines is as likely to happen as extinction, then avoiding the former should be as high a priority, and we should invest at least as many resources in remedying dangerous social inequalities as we do in preventing the disappearance of the human species. I personally worry more about non-random survival than about extinction.