Relationships between robots and humans have fascinated filmmakers and storytellers for decades.
Today’s robots are a long way off from such complicated constructs. However, relatively simple but appealingly cute robots are fulfilling companionate roles, from the robotic seal Paro, who keeps senior citizens company in nursing homes, to NAO, a little humanoid robot that holds users’ hands and retrieves small items.
Deciding whether Rachel from Blade Runner could be a friend might require us to decide whether she’s a person, introducing thorny questions about what that involves. Data, the android from Star Trek: The Next Generation, might just be an example of what it would take for a robot to be both a person and a friend.
But there is another character in Blade Runner whose situation more closely parallels ours: the eccentric inventor J.F. Sebastian.
When Pris, one of the replicants, encounters J.F., who lives in an abandoned building, she comments, “Must get lonely here, J.F.”
“Not really,” he replies. “I MAKE friends. They're toys. My friends are toys. I make them. It's a hobby.” And he does. His living space is populated by an assortment of creatures much closer to Paro or NAO than Pris or Rachel.
Is J.F. right? Are his toys his friends? What should we think of his claim that he isn’t lonely because he’s got them?
These questions aren’t merely speculative. Robots are being used in nursing homes and extended care facilities to alleviate patients’ loneliness, and research suggests that they are effective: patients report less subjective loneliness after interacting with them, and show fewer physical markers of stress.
But although I am a fan of using technology to improve our lives, I have a worry about these technologies, one that dates back to well before we started telling stories about robots. Is what they provide an improvement, on balance?
Grant that these robots can make people feel less lonely. Might they, in doing so, introduce another problem?
To answer this, we need to think a bit about the value we place on social relationships versus the feelings they induce in us.
Aristotle claimed that “without friends no one would choose to live, though he had all other goods”. Even if he overstates the case a bit, what he seems to have meant is that, given the choice, we would opt for a life with friends over one that included all the other goods but no friends at all.
In that spirit, imagine being given a choice between two lives. You know at the time of the choice how the lives will differ. But once you begin your chosen life, you will forget – it will be as though things have always been this way.
In one option, the people you consider your friends are actors, although if this life were chosen you wouldn’t discover their illusory nature. These friend-facsimiles would not use the appearance of friendship to exploit you, or betray your confidence. But neither would they care for you or find pleasure in interacting with you. Call this the Truman Show option.
In the other life, your closest friends are exactly as they appear to you to be. Call this the Genuine option. It is my guess that most of us would prefer Genuine over Truman Show.
In Truman Show, “friends” provide the same external appearances as in Genuine. They do not cause the harms associated with “false friends”. And yet Truman Show is less choice-worthy than Genuine. It seems the best lives involve reciprocal caring between genuine agents, something today’s robots can’t pull off. (This does not mean it’s always bad to be alone. Some peace and quiet might also be important for the good life.)
A good movie can push our emotional buttons without being immoral. But social robots are different here, because lonely and compromised residents of long-term care facilities are not in a good position to distinguish the genuine article from a compelling facsimile – one that makes them feel like they’ve got a friend.
About deception and friendship, Aristotle said,
when a man has deceived himself and has thought he was being loved for his character, when the other person was doing nothing of the kind, he must blame himself; when he has been deceived by the pretences of the other person, it is just that he should complain against his deceiver; he will complain with more justice than one does against people who counterfeit the currency, inasmuch as the wrongdoing is concerned with something more valuable.

Causing lonely patients to think they have friends when they don’t makes us counterfeiters of something more important than money. This seems like something we ought to avoid.
To combat this, while taking advantage of the benefits such robots offer, several things will be important:
· Distinguishing treatment of patients’ subjective loneliness from their social isolation. (This is especially important when we must make good decisions on their behalf.)
· Being aware of individual patients’ susceptibility to mistake robot “friends” for real ones.
· Where possible, designing robots that are unlikely to fool people. (Paro is a good example of this – we rarely encounter seals in ordinary life. A realistic robot baby or child might more easily confuse geriatric patients.)
Until robots are capable of real friendship, designing and using them wisely and well will require us to avoid manufacturing false friends.
Editor's note: In case you love this topic, today on The Splintered Mind, Eric Schwitzgebel writes on an overlapping theme: the problem of making fully conscious robots that will always cheerfully sacrifice themselves for humans.