Friday, February 5, 2016

Robot friends? Ethical issues in social robotics

This week's post is by guest blogger Alexis Elder.

Relationships between robots and humans have fascinated filmmakers and storytellers for decades.
In Blade Runner, several human characters find themselves in relationships with replicants, androids so sophisticated that even they don’t always realize they aren’t human. On Star Trek: The Next Generation, Data is recognized as artificial by his crewmates, but accepted as a friend, which Data reciprocates in his own robotic way.

Today’s robots are a long way off from such complicated constructs. However, relatively simple but appealingly cute robots are fulfilling companionate roles, from the robotic seal Paro, who keeps senior citizens company in nursing homes, to NAO, a little humanoid robot that holds users’ hands and retrieves small items.

Deciding whether Rachel from Blade Runner could be a friend might require us to decide whether she’s a person, introducing thorny questions about what that involves. Data might just be an example of what it would take for a robot to be both a person and a friend.

But there is another character in Blade Runner whose situation more closely parallels ours: the eccentric inventor J.F. Sebastian.

When Pris, one of the replicants, encounters J.F., who lives in an abandoned building, she comments, “Must get lonely here, J.F.”

“Not really,” he replies. “I MAKE friends. They're toys. My friends are toys. I make them. It's a hobby.” And he does. His living space is populated by an assortment of creatures much closer to Paro or NAO than Pris or Rachel.

Is J.F. right? Are his toys his friends? What should we think of his claim that he isn’t lonely because he’s got them?

These questions aren’t merely speculative. Robots are being used in nursing homes and extended care facilities to alleviate patients’ loneliness, and research suggests that they are effective: patients report less subjective loneliness after interacting with them, and show fewer physical markers of stress.

But although I am a fan of using technology to improve our lives, I have a worry about these technologies, one that dates back to well before we started telling stories about robots. Is what they provide an improvement, on balance?

Grant that these robots can make people feel less lonely. Might they, in doing so, introduce another problem?

To answer this, we need to think a bit about the value we place on social relationships versus the feelings they induce in us.

Aristotle claimed that “without friends no one would choose to live, though he had all other goods”. Even if he overstates the case a bit, what he seems to have meant is that, given the choice, we would opt for a life with friends over one that included all the other goods but no friends at all.

In that spirit, imagine being given a choice between two lives. You know at the time of the choice how the lives will differ. But once you begin your chosen life, you will forget – it will be as though things have always been this way.

In one option, the people you consider your friends are actors, although if this life were chosen you wouldn’t discover their illusory nature. These friend-facsimiles would not use the appearance of friendship to exploit you, or betray your confidence. But neither would they care for you or find pleasure in interacting with you. Call this the Truman Show option.

In the other life, your closest friends are exactly as they appear to you to be. Call this the Genuine option. It is my guess that most of us would prefer Genuine over Truman Show.

In Truman Show, “friends” provide the same external appearances as in Genuine. They do not cause harms associated with “false friends”. And yet Truman Show is less choice-worthy than Genuine. It seems the best lives involve reciprocal caring of genuine agents - something today’s robots can’t pull off. (This does not mean it’s always bad to be alone. Some peace and quiet might also be important for the good life.)

Feelings of loneliness can be relieved in many ways, from taking Tylenol to a hot shower, without addressing social isolation. But when lonely patients are at risk for cognitive disorders, social-robotic interventions may be ethically problematic. They work because they look and feel enough like companions that they hit the right emotional buttons, in populations that are already predisposed to confusion.

A good movie can hit one’s emotional buttons without being immoral. But social robots are different here, because lonely and compromised residents of long-term care facilities are not in a good position to distinguish the genuine article from a compelling facsimile – one that makes them feel like they’ve got a friend.

About deception and friendship, Aristotle said,
when a man has deceived himself and has thought he was being loved for his character, when the other person was doing nothing of the kind, he must blame himself; when he has been deceived by the pretences of the other person, it is just that he should complain against his deceiver; he will complain with more justice than one does against people who counterfeit the currency, inasmuch as the wrongdoing is concerned with something more valuable.
Causing lonely patients to think they have friends when they don’t makes us counterfeiters of something more important than money. This seems like something we ought to avoid.

To combat this, while taking advantage of the benefits such robots offer, several things will be important:
· Distinguishing treatment of patients’ subjective loneliness from their social isolation. (This is especially important when we must make good decisions on their behalf.) 
· Being aware of individual patients’ susceptibility to mistake robot “friends” for real ones. 
· Where possible, designing robots that are unlikely to fool people. (Paro is a good example of this – we rarely encounter seals in ordinary life. A realistic robot baby or child might more easily confuse geriatric patients.) 

Until robots are capable of real friendship, designing and using them wisely and well will require us to avoid manufacturing false friends.

Editor's note: If you love this topic, Eric Schwitzgebel writes today on The Splintered Mind on an overlapping theme: the problem of making fully conscious robots that will always cheerfully sacrifice themselves for humans.


Alexis Elder
Department of Philosophy & Women's Studies
Southern Connecticut State University

Tuesday, January 26, 2016

Do plants challenge our notion of cognition?

Welcome back everyone. This week's post is by guest blogger Saray Ayala.

A recent book by plant scientist Stefano Mancuso and science writer Alessandra Viola, Brilliant Green, tells us that plants are intelligent. To many people, “plants + intelligence” is not the most intuitive combination. They will be surprised to find out that the notions of plant intelligence and plant cognition are no news, and no joke either, for a number of plant scientists (a 1908 New York Times article, for instance, reported Francis Darwin’s defense of plant intelligence at the British Association for the Advancement of Science), and for a group of philosophers who work on minimal cognition (e.g. Paco Calvo, Fred Keijzer). However, these proposals are typically met with intense skepticism and thus remain largely underground. It’s worth unpacking this resistance to the notion of plant cognition. Let’s see how our notion of cognition fares under the green challenge. So, why can’t plants be intelligent/cognitive?

One tempting way of rejecting the possibility of plant cognition is to say, “they are too different from humans.” An anthropocentric notion of cognition makes it easy to declare plants incapable of what by definition is the sole territory of (a select group of) animals. That is an uninteresting, and also unfair, strategy. If we are to offer serious resistance to plant cognition, let’s rely on a more substantive notion of cognition than “whatever humans characteristically do”.

Another class of common responses to the plant cognition challenge is along the lines of “they are too simple”, “they don’t engage in higher-level cognitive activities such as problem-solving”. The idea of plants as simple organisms is common, but studies in plant science show that plants are very complex systems, in terms of both behavior and physiology (examples abound, see reviews here and here). Plants have also proven to be outstanding at solving adaptation problems faced by living organisms on this planet – at the very least, good enough to become the class of organisms constituting the vast majority of the biomass on Earth. Plants may be looked down upon in the hierarchy of living things, but let’s face it: plants are the true dominant species on this planet!

Mancuso & Viola would say we can stop here: their main argument is that complexity and adaptability are all we need to establish plant intelligence. Indeed, if any form of successful complex-problem-solving ability counts as cognition, then it is hard to deny plants are cognitive. But is that an interesting notion of cognition? According to it, any living species has cognition, for it has successfully adapted. Unless we’re content with cognition meaning simply “being alive”, there are more questions to answer: how do plants solve those problems? What are the mechanisms they use? Solving a survival problem by merely reacting to stimuli is quite different from solving it through a set of computations over representational states. Figuring out whether and, if so, how plants compute information is what some philosophers are trying to articulate (see here, here, and here). Now, how do we evaluate plants’ abilities? How do we decide whether plants’ adaptive strategies are complex in the right way? Is there a standard measure of cognition that we can use? In spite of huge progress in figuring out how particular cognitive skills work (e.g. memory, attention, decision-making), Cognitive Science is still lacking consensus about what exactly cognition is. But “we know it when we see it” is not a good working definition, especially when we start disagreeing about what we see. Plants’ reputation as simple reactive organisms may blur our ability to “know cognition” when we “see” it in plants. After all, intellectual abilities have been denied, in spite of blatant evidence to the contrary, to many groups in the past, from women to different racial groups. We should at least try to avoid falling into the same pit again.

A reasonable strategy is to compare plants’ strategies to paradigmatic cases of cognition. This raises new questions, e.g., at which level should the comparison take place? At the functional (computational) level? Algorithmic level? At the physiological implementation level instead? Do we need to find a central controller, à la mammalian brain? But even research in human cognition (e.g. distributed and extended cognition) is abandoning its obsession with the brain. And if cognitive processes (and mental events in general) are multiply realizable in different structures (as Putnam argued long ago), searching for animal-like structures in plants is misguided. To escape the ghost of anthropocentrism, we have to acknowledge that a good and unbiased test for plant cognition can’t set similarity to humans as one of its major criteria. The search for human-like cognition in a clump of stems and roots might be doomed from the start; we should perhaps search, instead, for plant-like cognition. But again, what would it be like?

Plant cognition presents a great and timely challenge for our concept of cognition (and related concepts, such as mind and intelligence). Will our concept of cognition be able to survive the challenge without changing? Conceptual change is a common move, and a necessary one, in scientific progress. Our notions of cognition and mind have changed dramatically over the last century, and the idea of cognition in non-human animals is no longer laughable. Could plants be next in line?

The consequences of acknowledging cognition in plants will go far beyond revolutionizing Cognitive Science and, possibly, restructuring the traditional hierarchy of living things. Just as cognition in non-human animals has been used in arguments for animal rights, the idea of plant rights would gain traction (Switzerland, by the way, is already there). It could reinforce and expand the arguments in favor of deep ecology; it would introduce new parameters into arguments for environmental justice (e.g., justifications of agricultural biotechnology for the benefit of humans would have to meet an extra challenge); finally, it would definitely complicate food ethics. Whatever happens, we can expect the questions about plant cognition to sprout into an exciting and fruitful debate.

Saray Ayala-López
Department of Philosophy
San Francisco State University

Wednesday, December 16, 2015

What's getting read over winter break

Have a great break everyone, and congratulations to all of our graduates!  Here's what some of your professors are planning to read over break.


Kevin Vandergriff
The Poverty of the Linnaean Hierarchy, by Marc Ereshefsky

Patrick Smith

Monday, November 30, 2015

Social transparency and the epistemology of tolerance

Last week I learned a new word - apotropaic - and darned if I haven't heard it three times since then!

Everyone is familiar with this sort of thing and has at least briefly experienced it as uncanny. It is called the Baader-Meinhof Phenomenon. Generalized, the BMP is our inclination to mistake an increased sensitivity to P for an increase in the number or frequency of P itself.

Lately I've been thinking about the BMP in relation to social transparency. The free flow of social information is a defining characteristic of the current era, and I tend to be far more sanguine about its effects than most. But I have started to think that the BMP presents a serious challenge to my optimism.

Most of my peers tend to be very possessive about their personal information. They feel like they own their beliefs, ideas, tastes, interests and habits. Consequently, they regard those who acquire knowledge of such without their permission as thieves. They are also haunted by Orwellian metaphors, and tend to react to increasing levels of social transparency in the public sphere with alarm as well. The idea of cameras at every street corner, shop window and traffic intersection feels dirty to them, despite its obvious value for public safety.

I dislike snoops as much as they do, but I distinguish between my preferences and my rights. I see unrestricted access to information as a cornerstone of liberal democracy. For me, the most fundamental human right is the right to learn. Whenever we choose to prevent or punish learning of any kind, there has to be an excellent reason for it. For some kinds of highly sensitive information these reasons exist, but they are consequentialist by nature and do not spring from any fundamental right to control information about ourselves.

I like glass houses. I think a world in which it is nearly impossible to hide the fact that you are an abusive husband or a pederast cleric is clearly preferable to one in which what goes on behind closed doors is nobody else’s business. In a liberal society, there is no greater disincentive to such transgressions than the certainty of others finding out. My friends are all yesbut. As in yes, but this is exactly what concerns them. They follow Orwell in thinking that a socially transparent society is fundamentally an informant society, conformist by nature.

But the evidence is that they are just wrong about this. We are living in a time of unprecedented tolerance for diversity and self-regarding eccentricities. This has not been achieved in spite of increasing social transparency. As long as homosexuals, transgenders, apostates, recreational drug users and the mentally disabled were confined to the darkness of the closet we could ridicule them with impunity. But it is difficult to continue in this vein when the clear light of day reveals that many of them are people we love.

Now here is my concern.

If increasing social transparency is not managed very carefully, it could backfire spectacularly, thanks to the BMP. When social transparency increases quickly, we suddenly become aware of the many intolerable things that have been happening right under our noses. Consequently, we get the impression that the world is going to hell in a handbasket and we become receptive to irrationally harsh responses.

What do I mean by careful management? Two things, at least.

First, it means creating future generations of adults who are more epistemologically sophisticated than mine. We grew up thinking that being responsible and informed citizens meant paying careful attention to reliable news sources, caring about the less fortunate and following our conscience. But that is a serious error.

The news is almost entirely about relating recent interesting events; it rarely provides a statistical context in virtue of which the general significance of these events may be responsibly evaluated. This is why it is possible to be an informed and conscientious citizen by the standards of my generation and still be completely unaware of essential global facts, such as that we are living in a period of unprecedented world peace or that the global poverty rate has been cut in half during the last 20 years.

If we aren’t aware of the role BMP plays in our reaction to constant reports of police brutality against minorities in the U.S, gang rapes of girls in India, the persecution of homosexuals in Russia, the public whipping of atheists in the Third World, and terrorism everywhere, then our reactions are likely to be intemperate and counterproductive.

Second, we are going to need to find the moral strength to punish wrongdoing less severely. What? Yes. To see why, consider that whenever someone decides whether to do wrong she makes an implicit expected value calculation in which the probability of being caught figures centrally. For this reason, the severity of the current punishment is itself a function of the probability of detection. In an increasingly transparent society, the probability of detection rises. Hence the previous levels of punishment are now intemperate and must be recalibrated.
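
In case a toy calculation helps, here is a minimal sketch of the recalibration idea (the numbers and the helper function are my own inventions, purely for illustration): if deterrence depends on the expected cost of wrongdoing - the probability of detection times the penalty - then holding that expected cost fixed as detection rates rise means penalties can shrink.

# Assumption: deterrence tracks expected cost = p_detection * penalty.
# All figures below are made up for illustration.

def penalty_for_constant_deterrence(target_expected_cost, p_detection):
    """Penalty needed so that p_detection * penalty equals the target."""
    return target_expected_cost / p_detection

# Old regime: 1 in 10 violators caught, $500 fine -> expected cost $50.
old_p, old_penalty = 0.10, 500
target = old_p * old_penalty

# New regime: cameras catch 9 in 10. The same deterrent effect now
# requires only a fraction of the old fine.
print(round(penalty_for_constant_deterrence(target, 0.90), 2))  # 55.56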

As an example, consider new surveillance capabilities which can detect every single traffic light violation. Many people oppose the proliferation of this kind of technology, despite its obvious ability to save lives. Why? I think it is partly because they foresee an intolerable rise in the cost of innocent mistakes. In this sense, Orwellian concerns are absolutely on point. If we are unwilling to attenuate the severity of our punishments, applying the technology of transparency to crime detection is the road to the police state.

Social transparency has so far been part of the recipe for a more tolerant society, but it is tolerance for things that we are learning to hate less. Adopting more temperate responses to crimes we perhaps hate even more than before is a whole nother thing.

I hope future generations will be enlightened enough to do it, but in the meantime some apotropaic magic would come in real handy.


G. Randolph Mayes
Department of Philosophy
Sacramento State

Monday, November 23, 2015

How to stop trying to be a zombie

Samkhya is one of six orthodox schools in the Vedic tradition of Indian philosophy. It is associated with the Yoga tradition. Yoga is a meditative discipline that is not primarily concerned with attempting to bend the human body into the shape of a pretzel.

Samkhya is usually counted as a dualistic philosophy. When we think of dualism in the West we think of René Descartes, who was a substance dualist. Descartes held that there are two kinds of things in the world: Mind and Matter. It's tempting to try to appropriate Asian philosophical notions to Western categories, but caution is warranted. For one thing, substance dualism seems to encounter a serious problem. For it seems as though our minds and bodies interact in various ways, e.g. with physical events (like hitting one's thumb with a hammer) causing mental events (like pain). But it's hard to see how a physical event can have any effect on the mind unless the mind is also a physical thing.

The dualism we find in Samkhya is a dualism between Purusha and Prakriti- between the subject of experience and all of the possible objects of experience. Purusha is the Self, which is identified with consciousness. This is not intentional consciousness- consciousness of this thing or that. It is pure consciousness. The assumption here is that, if we withdraw our attention from all objects of consciousness, a pure, or object-less, consciousness will remain. This is Self-Realization, and it is the goal of Yoga.

Prakriti, on the other hand, consists of all the possible objects of consciousness: Rocks, trees, penguins, #2 pencils, and so on. But according to Samkhya, the mind is also among the objects of consciousness. In addition to being conscious of the external world, I am also conscious of my own mind and its contents. Of course, this is not a novel claim. What is novel is that Samkhya ends up with a different division than the one we find in Descartes. It posits no distinction between mind and body; instead it distinguishes between consciousness and the body-mind. Thus Samkhya appears to be in rough agreement with the materialist tradition in Western philosophy by placing mind and body in the same category.

Samkhya takes the mind to have the ability to discriminate environmental phenomena (e.g. telling the difference between red and green light), focus attention, and control bodily movements- all of the functions normally associated with what has been called the “easy problem of consciousness.” However, according to Samkhya, the mind is not actually conscious. The body-mind, without Purusha, is what some Western philosophers have referred to as a philosophical zombie: It would be capable of performing all of the usual functions of a human being, without their being accompanied by any conscious experience. Conscious experience is made possible by Purusha.

(My reference to zombies may cause some of my readers to compare Samkhya's dualism to property dualism. Property dualism does not suppose that mind and body are separate substances; it insists instead on a distinction between mental and physical properties. There is much to be said about this comparison, but I cannot explore it here.)

Think of Prakriti, the world of experience and particularly the mind, as being like a machine that is functioning in a dark room. Now imagine a light drawing near to the machine. This light represents Purusha, the Self- it is consciousness, and it illuminates the machine of the mind. Shining in the light of consciousness, the mind appears to be conscious. It thinks, “I am the light.” But this is a mistake. At best, the mind only participates in consciousness, giving it concrete expression. Hence a Sanskrit term for mind, “citta,” which as I understand it - I am no Sanskrit scholar - refers to reified consciousness, or consciousness made concrete, as opposed to the “pure” or “root” consciousness (cit) of Purusha.

All of this is interesting theory, but problems lurk, particularly if we suppose that Samkhya's dualism is a form of substance dualism. There does not seem to be any problem here with mind-body interaction, since mind and body fall under the same category in Samkhya. But the interaction problem seems to emerge at a different level- as a problem with the interaction of consciousness and the body-mind. The analogy I have used of the light shining on the machine- which is rooted in an analogy made in the classic Yoga literature- suggests that we should understand the conscious light of Purusha as interacting causally with an otherwise-unconscious Prakriti. It seems to me that this is not possible if Purusha and Prakriti turn out to be different substances.

However, it seems to me that Samkhya need not embrace substance dualism. The distinction it makes between Purusha and Prakriti is a practical one, and the practice in which it is grounded is the practice of yoga. Samkhya, like much of Indian philosophy, is concerned to give an analysis of the human condition and in particular, of human suffering and the means to remedy it. (Its account competes with the one given by Buddhism, which insists on the nonexistence of any transcendental self.)

The cause of suffering, according to Samkhya, is the association of Purusha, the conscious Self, with the body-mind. Though we are the subjects of experience, we mistakenly identify with the objects of our experience- with our mental life, with our bodies, and to some extent with the people and things we take to be ours. We are conscious beings who are, in a sense, trying to be something that is unconscious. We are trying to be zombies, and this is painful. The dualism of Samkhya is committed to nothing more than the possibility of psychologically disassociating ourselves from mental and physical objects. This disassociation begins when we notice that there is, at least, a conceptual distinction that can be made between ourselves and the objects of our experience, and it finds its fruition in yoga practice.

David Corner
Department of Philosophy
Sacramento State


Further Reading:

The Yoga Sutras of Patanjali, tr. Vivekananda

Sunday, November 15, 2015

Why we should lie about Santa


“When I was a child, I spoke as a child, I understood as a child, I thought as a child: but when I became a man, I put away childish things.”       ~1 Corinthians 13:11

I once believed that lying to children about Santa was morally wrong, but I no longer do. Cynics find much good in Santa-culture: our mass media-corporate retail complex deploys the lie seasonally, and it fuels the perpetual acquisitiveness a flourishing economy requires. But I seek benefits beyond the materialistic. Propagating the Santa story is among the most instructional, least harmful deceptions we can share with our kids, one that teaches them not to believe what people tell them on trust alone. It is a culturally-transmitted misbelief with adaptive, epistemic, and ethical value.

I’m not talking about the Santa myth as allegory, where Santa represents loving kindness. Myths are just stories that may or may not be true and, hey, they are entertaining and connect people. But we can teach the values and limits of hope, love, and charity more clearly without help from Santa. The story to which I refer goes like this: Santa exists, not merely in concept or the imagination. He watches, judges, and visits our homes on Christmas Eve and rewards good children with gifts, etc. There isn’t a shred of evidence for Santa, in actuality, and no adult really believes in him. It is a mighty powerful myth, as besieged parents of 2 to 7 year-olds know. We present it as literally true to children so that we can manipulate their thoughts and actions.

Most Americans report believing in Santa when they were children. A 2013 Pew Research Center survey finds that one-fifth of Americans say they are the parent or guardian of a child in their household who believes in Santa, and 69% will pretend that Santa visits their home this Christmas Eve. Parents even pretend to believe this when kids are dubious: “One-in-five parents whose children do not believe in Santa (18%) say they will pretend to get a visit from Santa this year, as do 22% of those who are not the parents or guardians of minor children in their household.”

We do this because we don’t really think that telling this story is wrong, but it is a lie. Despite the cold logical consistency of deontologists who rebuke us for lying even to genocidal or otherwise depraved persons, we don’t accept that all lying is wrong. In fact, some lying to children is especially good for them. Much can be learned from this episode in their young lives at so little cost. The Santa story is not the worst lie we can teach children; it is also not the best. This conspiracy of elders, which kids must contend with, exercises their nascent rationality and autonomy. It primes them for questioning all of the stories people tell.

Children don’t have much choice about what to believe; they are poor discerners of fact from fiction. But children are future autonomous, moral agents, and this is precisely why we should lie to them before they are fully fledged, so that their filters and shields emerge early as they become rational. The world is filled with deceptions, and we do them a great service with this benign story. Children are well-adapted to believe that parents are looking out for their interests, but they need to learn that even these people are not reliable truth-tellers. People who love us and seek our best interests will deceive us, sincerely, even when they are well-intentioned but ignorant, short-sighted, or misguided. True love and truth telling are uncorrelated.

Could we get the benefits if we told them Santa was make-believe at the outset? Perhaps, but this lie is so systematic, accessible, and widespread that we are fools not to take advantage of it. By age 10 most people don’t believe it; they realize and accept that they have been deceived for egotistical reasons. As parents and teachers, when we discuss the implications of Santa with mature children we can show them, rather than merely tell them, that they cannot just accept what others assert. The Santa story is corrosive to the faith and confidence we extend too readily to loved ones and authority figures. It also exemplifies the imaginative power of the human intellect in preserving the appearance of truth in a problematic story, however much we wish it were true. The Santa story, taken as a plausible hypothesis, fails when we test it. Use it to show children how to check the math. If Santa spends only 5 seconds visiting each of, say, 20 million homes, he spends more than 3 years delivering presents. We derive a result inconsistent with his legendary 24-hour delivery time-frame. Reindeer cannot travel that fast, etc. The story falls apart.
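
For readers who want to run the numbers themselves, here is a minimal sketch of that check in Python, using the same stipulated figures as above (5 seconds per home, 20 million homes); any realistic count of households only makes things worse for Santa.

# Santa's delivery time under the stipulated figures above.
homes = 20_000_000
seconds_per_home = 5

total_seconds = homes * seconds_per_home        # 100,000,000 seconds
total_days = total_seconds / (60 * 60 * 24)     # about 1,157 days
total_years = total_days / 365                  # about 3.2 years

print(round(total_days), round(total_years, 1))  # 1157 3.2 -- far beyond a 24-hour window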

By the age of 10, with this one myth, children may learn much. People speak falsely, deliberately. The people whom you ought to trust most will deceive you if they believe that it benefits you or us to do so. If people who love you will lie to you, for whatever reasons, then you can’t accept that whatever they tell you is true or even that it is what they themselves believe. No people are reliably honest. All of us have had, and probably still have, widely-held beliefs regardless of whether they are true. Also from the Pew study: Roughly three-quarters of adults (73%) say they believe Jesus was born of a virgin. Among the religiously unaffiliated, 32% believe it.

Voltaire warns us:
'Those who can make you believe absurdities can make you commit atrocities.'
Teaching a misbelief that makes children sensitive to inconsistencies in character, testimony, evidence, math and logic is morally permissible. The Santa story does all of this. Deceiving kids about Santa is prosocial. Use it to probe the limits of honesty, integrity, compassion. Once exposed, the Santa myth is an antidote to the totalitarian trap of traditional, authoritarian, faith-based thinking.

Pass it on.

Scott Merlino
Department of Philosophy
Sacramento State

Sunday, November 8, 2015

Markets fail. So what?

In welfare economics, a market failure occurs when the competitive price system fails to allocate resources efficiently, which usually means a violation of Pareto optimality: there are unexploited ways to make some people better off without making anyone worse off. If, for example, the market systematically underprices a good because some of the costs associated with its provision are externalized on the public, that’s a market failure. If the market under-provides a good because there isn’t a good way to prevent free riders, that’s a market failure.
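
A toy numerical example may make the externality case concrete (all figures here are invented for illustration): when part of the cost of each unit falls on bystanders, the market trades units whose full social cost exceeds what anyone is willing to pay for them.

# Illustrative only: producing each unit costs the seller $4 and imposes
# $3 of pollution costs on bystanders; buyers value successive units at
# $10, $9, $8, ...
private_cost, external_cost = 4, 3
social_cost = private_cost + external_cost
valuations = [10, 9, 8, 7, 6, 5, 4]

market_output = sum(1 for v in valuations if v > private_cost)    # 6 units traded
efficient_output = sum(1 for v in valuations if v > social_cost)  # only 3 are worth their full cost

print(market_output, efficient_output)  # 6 3: the uncompensated spillover drives overproduction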

A very common strategy of argument is to identify a market failure and then suggest a government intervention designed to address it. For example, the standard microeconomic analysis of public goods provision suggests that things like lighthouses would be under-provided by the market because of people’s propensity to free ride. Someone might notice that ships need lighthouses and build one in the hope of signing up paying users, but this arrangement would certainly fail because lighthouses are non-excludable and non-rivalrous. None of the ship owners would pay for the service. If we’re going to have lighthouses, we need the government to provide them.

But market processes aren’t the only kinds that fail to secure an efficient outcome. Theories of government failure developed among public choice economists in response to the assumption that in a case where a market process has failed, a government decision-making process will correct it. This is closely aligned with the assumption that government actors are genuinely benevolent and reliably motivated to pursue the common good. Public choice theory shows that you can generate better predictions of government behavior by assuming that people holding government offices are people of normal good will and largely motivated by self-interest.

Just like there are several well-theorized sources and examples of market failure (e.g., externalities and public goods), there are likewise several well-theorized sources and examples of government failure. In cases of corruption, government officials use their control of public resources to advance their private ends. An official may be in charge of some project and solicit bribes in exchange for granting the government contract supporting it. The problem here isn’t that it’s immoral, though it is. The problem is that extending a contract on the basis of someone’s willingness to provide a bribe will almost certainly violate Pareto optimality. Public choice theorists argue that ineffective monitoring regularly permits politicians to benefit themselves at the expense of the public. Individual losses among the public may be quite small. In fact, that they are small explains the ineffective monitoring since their losses escape their notice. Therefore, their ignorance about who it is best for them to vote for, what policies are best for them to support, or who might be taking advantage of them is rational. But in the aggregate, their total losses will tend to be much greater than the benefit the politician consumes in the form of rents.
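
The arithmetic behind this rational ignorance is easy to sketch with made-up numbers: a rent that is enormously valuable to one official can be financed by losses too small for any individual citizen to bother noticing.

# Invented figures, but the asymmetry of incentives is the point.
citizens = 1_000_000
loss_per_citizen = 2        # too small to be worth any citizen's attention
rent_to_official = 500_000  # very much worth the official's attention

total_public_loss = citizens * loss_per_citizen
print(total_public_loss, rent_to_official)   # 2000000 500000
print(total_public_loss > rent_to_official)  # True: aggregate losses dwarf the rent consumed
# Yet no individual will spend more than $2 of effort to stop it,
# while the official will spend up to $500,000 to keep it.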

This dynamic of concentrated benefits and dispersed costs also figures into accounts of regulatory capture. When political actors have a great deal of discretionary power, this generates powerful incentives for an industry to use whatever means available to influence the decision-making process. They might convince regulatory agencies to permit certain profit-enhancing externalities or provide economic protection from foreign or domestic competitors. These high stakes provide incentives to win influence that are much stronger than anything that would induce an individual citizen to organize with others to help keep the regulatory agency’s activities in line with the public interest.

So governments fail, too. We should, therefore, be on guard against committing the Nirvana fallacy. The following syllogism makes the mistake in the Nirvana fallacy pretty obvious:
1. In a range of circumstances, markets constrained by interventionist policies administered by morally and informationally perfect people would have better outcomes than markets free of any interventions. 
2. In those circumstances, actually implementing those interventionist policies would have better outcomes than the market free of any interventions. 
3. Therefore, we should implement the interventionist policies.
Obviously, 2 does not follow from 1.

This lesson, and even many of the sources of government failure, were acknowledged by, of all people, Cambridge welfare economist A.C. Pigou, the patron saint of market failure theorists. As early as 1912, in Wealth and Welfare, he wrote:
“It is not sufficient to contrast the imperfect adjustments of unfettered private enterprise with the best adjustments that economists in their studies can imagine. For we cannot expect that any State authority will attain, or even whole-heartedly seek, that ideal. Such authorities are liable alike to ignorance, to sectional pressure, and to personal corruption by private interest.”
Again, markets fail. But even when they do – even when real-world markets do not meet the standard modeling assumptions that ensure perfect competition and Pareto optimality – government intervention may make things worse. The government is, at best, another tool societies can sometimes use to good effect. It is not a Deus ex machina that societies can rely upon to swoop in and bring about a happy ending.

The possibility of government failure should militate against the tendency to compare the reality of unregulated markets with an idealized implementation of government control in order to argue for interventionist public policy. That isn’t the choice that’s available to us. Instead, we have to choose between the messy real-world outcomes of unregulated markets and the messy real-world outcomes of regulated markets.

Messy real-world institutional arrangements might actually surprise some economists, since outcomes sometimes refuse to cooperate with standard microeconomic models. Return to the lighthouses. In 1820 about 75% of lighthouses on the English coast were built and operated by private parties, because those parties could effectively limit access to their service by tying its use to entry into harbors. There, berths were excludable and fees were easy to collect. This example may suggest a sort of market resiliency, where cooperative solutions to market failures emerge without government intervention because novel solutions are incentivized by mutual gains from trade.

Government failures generally don’t have this natural self-correcting feature, which may make them more serious. To correct a government failure there must be someone with the insight to devise a solution and the benevolence, courage and skill to see it through in the face of highly motivated political opposition. But politics eats up people like this for breakfast.

Kyle Swan
Department of Philosophy
Sacramento State