by Arthur Ward
Some people view technology as simply a means to an end. The user has a set of goals, and a technological artifact can help the user achieve those goals, for better or for worse, but it does not alter those goals or introduce new ones. Call this the instrumental view of technology. A difficulty with this view is that it contradicts our own experience: working in a beautiful library can help one focus, holding a weapon can make one bolder, and driving a powerful car can lead one to be reckless. Interfacing with technology, it seems, does sometimes have the power to change us.
An alternative view, call it the non-instrumental view, is that some technology is more than a means to an end, and can actually alter some of the ends that a user wants to pursue. Technology can change us. A difficulty with this latter view is that it appears to lessen the moral responsibility of a person who uses a piece of technology for an evil end. “It wasn’t entirely my fault,” they might exclaim, “I was influenced/lured/tempted/seduced by technology X!” Call this the Responsibility Problem.
Here I will argue that the Responsibility Problem can be dealt with, and the non-instrumental view adopted. I will look at two technologies, controversies over which might gain clarity by adopting this analytic scheme: guns and social networks. Both, I argue, are clear examples of non-instrumental technologies that can affect some users in a negative way, influencing them to act wrongly when they would not have erred in the absence of the technology. Recognizing this fact, I think, will not lead to letting evil-doers off the hook, but will facilitate very necessary precautions and regulations with the technologies.
The stakes of the debate over guns are nicely laid out by Evan Selinger in his Atlantic article on guns. As he describes, the instrumental view of gun technology is summed up by the familiar slogan “guns don’t kill people; people kill people.” This proclaims the ethical neutrality of guns, placing any blame for evil acts committed with a gun solely onto the person firing the gun and not onto the technology itself. Selinger argues that this position is untenable if we observe that people act more boldly, recklessly, or aggressively when holding a gun (at least some kinds of gun: if not a hunting rifle, perhaps a handgun or an assault rifle). I think Selinger is clearly right about this. This isn’t to say that merely holding a gun is enough to turn any decent person into a maniac; the claim is more modest than that: sometimes, some guns, in some cases, impact some people in such a way as to affect their personality and alter their goals. I daresay it’s unlikely that George Zimmerman would have stalked and picked a fight with Trayvon Martin had he not been armed.
Looking at social networks brings forth a more pervasive example that many undergraduates have experienced: cyberbullying. There will always be bullies, but the technological advances of social networks like Facebook and Twitter have allowed for the proliferation of anonymous, venomous bullying that can occur 24/7 online instead of being limited to the “schoolyard.” There are good reasons to think that cyberbullying is such a problem specifically because of features of the technology, namely the ability to communicate instantly and asynchronously from a distance without getting visual or other sensory feedback from one’s actions. In other words: it’s easy to trash-talk someone when you don’t have to look them in the face. There is a mountain of research demonstrating that our sympathetic response as humans is highly sensitive to immediate feedback such as a smile, frown, or grimace. When this interpersonal connection is severed, through distance or anonymity, we become less sympathetic toward one another. The non-instrumental view of technology helps us see the import of this finding: Facebook is making some of us meaner! Some people who are normally kind and thoughtful can become colder and ruder online. This is an empirical claim, and research on it is in its infancy. Still, if we’re honest, many of us have caught this tendency in our own online behavior, and we surely recognize it in others.
The Responsibility Problem
Does the non-instrumentality of technology lead down a slippery slope away from personal responsibility and toward something like the Twinkie defense? I don’t think so. For one thing, I’m convinced the non-instrumental view is correct, and if that is so, we will simply have to deal with wherever it leads. But that aside, I don’t think we should worry about people dodging responsibility and foisting it on technology instead. While the external effects of technology can be powerful, the internal effects on our own personality and goals are usually very subtle, so much so as to often go unnoticed. “I didn’t realize I was speeding; I just felt kind of excited!” he exclaimed. In the vast majority of cases, the threat of technology to our free will is negligible. And note that if a technology did have a noticeably powerful effect on our will (consider a strong psychotropic drug), people would be quite comfortable assigning lessened moral responsibility.
So, if we shouldn’t worry about the Responsibility Problem, what are the stakes of being an instrumentalist versus a non-instrumentalist in the first place? I think the answer is that non-instrumentalism should lead to cautious oversight, regulation, and education surrounding technologies such as guns and social networks. Their very minor effects on us can lead to enormous impacts further downstream, and for that reason it would be folly to take a laissez-faire attitude toward them. What exactly those regulations should look like is obviously where all the action is, and I don’t touch that here. But to protect each other, especially those most vulnerable to the lure of some technologies, taking a “guns don’t kill people; people kill people” attitude is unwise and unsound.
Lyman Briggs College
Michigan State University