Monday, February 24, 2014

Instrumental technology and the responsibility problem

by Arthur Ward

Some people view technology as simply a means to an end. The user has a set of goals, and a technological artifact can help the user achieve those goals, for better or for worse, but does not alter those goals or introduce new ones. Call this the instrumental view of technology. A difficulty with this view is that it contradicts our own experience: working in a beautiful library can help one focus, holding a weapon can make one bolder, and driving a powerful car can lead one to be reckless. Interfacing with technology, it seems, does sometimes have the power to change us in various ways.

An alternative view, call it the non-instrumental view, is that some technology is more than a means to an end, and can actually alter some of the ends that a user wants to pursue. Technology can change us. A difficulty with this latter view is that it appears to lessen the moral responsibility of a person who uses a piece of technology for an evil end. “It wasn’t entirely my fault,” they might exclaim, “I was influenced/lured/tempted/seduced by technology X!” Call this the Responsibility Problem.

Here I will argue that the Responsibility Problem can be dealt with, and the non-instrumental view adopted. I will look at two technologies whose controversies might gain clarity from this analytic scheme: guns and social networks. Both, I argue, are clear examples of non-instrumental technologies that can affect some users negatively, influencing them to act wrongly when they would not have erred in the absence of the technology. Recognizing this fact, I think, will not let evil-doers off the hook, but will facilitate much-needed precautions and regulations for these technologies.


Guns

The stakes of the debate over guns are nicely laid out by Evan Selinger in his Atlantic article. As he describes, the instrumentalist view of gun technology is summed up by the familiar slogan “guns don’t kill people; people kill people.” This proclaims the ethical neutrality of guns, placing any blame for evil acts committed with a gun solely on the person firing it and not on the technology itself. Selinger argues that this position is untenable if we observe that people act more boldly, recklessly, or aggressively when holding a gun (at least some kinds of gun: if not a hunting rifle, perhaps a handgun or an assault rifle). I think Selinger is clearly right about this. This isn’t to say that merely holding a gun is enough to turn any decent person into a maniac - the claim is more modest than that: sometimes some guns in some cases impact some people in such a way as to affect their personality and alter their goals. I daresay it’s unlikely that George Zimmerman would have stalked and picked a fight with Trayvon Martin had he not been armed.

Social Networks

Looking at social networks brings forth a more pervasive example that many undergraduates have experienced: cyberbullying. There will always be bullies, but the technological advances of social networks like Facebook and Twitter have allowed for the proliferation of anonymous, venomous bullying that can occur 24/7 online instead of being limited to the “schoolyard.” There are good reasons to think that cyberbullying is such a problem specifically because of features of the technology, namely the ability to instantly communicate in an asynchronous way from a distance without getting visual or other sensory feedback from one’s actions. In other words: it’s easy to trash-talk someone when you don’t have to look them in the face. There is a mountain of research demonstrating that our sympathetic response as humans is highly sensitive to immediate feedback such as a smile, frown, or grimace. When this interpersonal connection is severed, through distance or anonymity, we become less sympathetic towards one another. The non-instrumental view of technology helps us see the import of this finding: Facebook is making some of us meaner! Some people who are normally kind and thoughtful can become colder and ruder online. This is an empirical claim, and research on it is in its infancy. Still, if we’re honest, I think many of us have caught this tendency in our own online behavior, and we surely recognize it in others.

The Responsibility Problem

Does the non-instrumentality of technology lead down a slippery slope away from personal responsibility and towards something like the Twinkie defense? I don’t think so. For one thing, I’m convinced it’s the correct view, and wherever it leads, we’ll just have to deal with that reality. But that aside, I don’t think we should worry about people dodging responsibility and foisting it on technology instead. While the external effects of technology can be powerful, the internal effects on our own personality and goals are usually very subtle, so much so as to often go unnoticed. “I didn’t realize I was speeding, I just felt kind of excited!” he exclaimed. In the vast majority of cases, the threat of technology to our free will is negligible. And note that if a technology did have a noticeably powerful effect on our will (consider a strong psychotropic drug), people would be very comfortable with lessened moral responsibility.

So, if we shouldn’t worry about the Responsibility Problem, what are the stakes of being an instrumentalist versus a non-instrumentalist in the first place? I think the answer is that non-instrumentalism should lead to cautious oversight, regulation, and education surrounding technologies such as guns and social networks. Their very minor effects on us can lead to enormous impacts further downstream, and for that reason it would be folly to take a laissez-faire attitude towards them. What exactly those regulations look like is obviously where all the action is, and I don’t touch that here. But to protect each other, especially those more vulnerable to the lure of some technologies, taking a “guns don’t kill people; people kill people” attitude is unwise and unsound.

Arthur Ward
Lyman Briggs College
Michigan State University


  1. Arthur, thanks for this post. I agree with you that the non-instrumentalist position is correct. The instrumentalist position is medieval. But I'm not convinced there is a responsibility problem. I think that standards of personal responsibility generally rise rather than fall with the introduction of technologies that change our behavior. The more that people are entrusted with technologies that can do harm when mishandled, the higher our expectations become with respect to their intelligent use. Neither modern-day cars nor e-mail could be safely deployed by people with 18th century levels of impulse control.

    But I do think technology is profoundly changing our understanding of the nature of responsibility. I think we are beginning to understand that anytime we lower the activation energy associated with a certain type of action, its frequency is going to rise. So we don't just tell people to be more careful when they are driving or sending e-mails; we build safer cars and Undo buttons. The introduction of new technologies does raise standards of personal responsibility, but it also gives us more realistic expectations of the results that can be achieved by doing so.

  2. Randy, thanks so much for the comment. I think you're completely right when you say the standards of responsibility ought to rise when encountering powerful technologies ("with great power comes..." as the Spider-Man quote goes). But I wonder if this truth can co-exist with what I describe as the responsibility problem.

    The picture that emerges from the non-instrumental view is that technology is somewhat akin to an intoxicant, equipped with the power to alter our mood, shift our goals, etc. And when we know the power of an intoxicant, I think we must heed your cautions about taking extra precautions and elevating our sense of personal responsibility before we consume it. But with intoxicants, would you agree there's also a Responsibility Problem in the sense that I try to bring out as well? That is, those already "under the influence" are responsibility-impaired, at least to some degree. If the intoxicant - or by analogy, the technology - has a causal impact on our will (as one concedes in adopting the non-instrumental view), I'm tempted to say that accordingly our will, or overall moral responsibility, is degraded just a little.

    1. Yeah, that makes sense. How about changing your terminology a little? The word 'instrument' is appropriate since we're talking about the effect of tools, but it would be nice to avoid confusion with the instrumentalism/realism debate in philosophy of science. I think you've basically identified three different varieties of instrumentalism or instrumentism or whatever: Naive, Transformative and Zombie, with Transformative being the realistic position that recognizes the unintended consequences, the exaptations and the total restructuring of preferences and environment that occurs as a result of tool construction (one of which, I claim, is a preference for more responsible citizens) and which holds that both the Naive theory and the Zombie theory are just generalizing on primitive/extreme examples.

    2. The terminological similarity to instrumentalism in philosophy of science is unfortunate. We might blame Heidegger, who used the term in roughly this sense, and in philosophy of technology the term "instrumentalism" seems to have stuck. (As an aside, I think my coinage of "non-instrumentalism" and "responsibility problem" in this context is unique to me.)

      I'm a fan of your notion of transformative instrumentism, though, and I'm wholly on board with the thought that advances in technology come with an imperative for a more responsible citizenry. I wonder if you're claiming something even stronger than my normative gloss on what you said, though - do you think that more responsible citizens are something like an inevitable result of technological advances?

    3. Only in the evolutionary sense that societies in which this doesn't happen will perish. But I do mean it to be descriptive and predictive rather than just normative. It's what actually has happened so far. But there are serious counterexamples. Diagnostic medical technology, for example, has deeply corrupted the medical profession to the point that doctors seem to see patients mainly as an opportunity to use it. I think we could be killed off by the modern practice of medicine if we don't learn how to restructure the incentives, and fast. The rising cost of health care means that countless people fail to get help simply because resources are allocated so poorly. Medical technology is probably a good exception to your claim that most of the effects aren't that strong.

    4. Yes, I like that example of medical technology having a profound effect on doctors' eagerness to use it (sometimes to everyone's detriment). I wonder if military technology has a parallel effect on soldiers.

  3. Interesting post, Arthur! I too agree that technology has a causal influence on us, but it's hard to see how cautious regulation can be implemented in the case of guns and social networks. With the advent of the 3D printed gun, it looks like regulation there is impossible because anyone with an internet connection could print one or several. Would posts in social media have to be filtered by some "meanness algorithm" before being accepted? In the case of social media, people can flag posts if they're being harassed, so there is a secondary oversight that handles the aftermath of someone's lack of responsibility with the social technology.

    1. Derek, thanks, I'm sympathetic to those worries. Even if we accept that some technologies can be dangerous, maybe it's futile to do much about it. I think I have two sorts of responses to this worry.

      The first is that if you're on board with the claim that "if only we could regulate and limit the use of technology X, this would be very desirable," then my analysis will have already gained a little ground against strict instrumentalists (or as Randy suggests, maybe "instrumentists") who don't think there's a point to regulation, even in principle. Then, we can let the researchers and policymakers figure out if it really IS futile or not.

      But secondarily, let me push back against the futility argument a little bit and see what you think. A hallmark of seeing technology as merely an instrumental means to an end is believing that current users of the technology will nearly always find a way to continue using it, because they have a persistent desire for the goods the technology can produce. Your example is that if gun enthusiasts want a gun for hunting, or target practice, or violence, soon they'll be able to print such a gun, easily getting around any regulations. What the non-instrumental view suggests is that some people, when barriers are put in place, will lose some interest in using the guns altogether, because their desire to hunt, shoot targets, and so on was partly introduced by having such easy access. While I agree that serious gun enthusiasts might take to printing their own guns, I doubt this is true of many casual gun owners. And even those who print guns illegally may not take to carrying them in public, so overall I think the number of shootings would diminish radically. That said, I'm trying to avoid taking a political stance here on what regulations/limitations are appropriate.

      I'm with you that the regulation of social media seems even more futile and absurd than that of guns. There are those who want to ban guns, but thankfully no one is realistically suggesting we ban internet chatting. Instead, as you hinted, I suppose we could try to prevent abusive comments from being expressed online (there's an algorithm called "BullySpace" out of MIT that is attempting this), but like you, I'm not optimistic about this being an effective solution. I think instead the emphasis with social media should be on education: making people more aware of how behavior can shift when communicating online, and more aware of the damage possible. I'd start with educating the kids, but adults too could stand to learn more about this.

    2. I think I am on board with the idea that limiting/regulating technology X would be desirable, but only because of the extreme examples of technology abuse. I mean, it's clear to me that not everyone should have access to nuclear technology, at the very least, but it's harder to say how much further to take gun regulation.

      As to your second point about futility, I suppose if gun violence is mitigated by some regulations, that's worth it, but I'm leaning more toward the education side rather than banning, because taking rights away from people becomes a slippery slope. It's like never letting your kid go play outside because you're afraid they'll get hit by a car, instead of teaching them to look both ways before crossing. Not only does the child remain ignorant, but when they inevitably get outside on their own they're more likely to get hit, since they never learned. The same goes for social media.

      Since everyone is throwing around slogans, mine would be something like, "education over regulation" or "education is regulation."

  4. Nice post, Arthur. I wonder whether you might accept a friendly amendment to your initial statement of the two views. You state them in terms of "ends" and "goals", and you mark the distinction between the instrumentalist view--which says technologies never give their users new "ends" and "goals" besides those the user began with--and the non-instrumentalist view--which says technologies do, at least sometimes, give their users new "ends" and "goals." But I'm not so sure about this, and I think that your examples (libraries, guns, social networks) might be cashed out in terms of various genuine "effects" of using the technology that do not quite rise to the level of adding new "ends" or "goals."

    For example, a person might become more of a bully online imperceptibly, without the person ever actually adopting, or even endorsing, the "goal" or "end" of being more of a bully. I'm quite willing to 'hold responsible' the technology here for a bit, but once the person perceives what is going on--'hello, I seem to be becoming more of a bully; what is happening to me?'--then I think we are right to 'hold responsible' the person for not doing something to stop this slide. The person does not get the new end or goal from the technology alone.

    Since I want to make room for the idea that the person can accept or endorse a goal or end which the technology generates (and which may not have been generated by anything else), then maybe my proposal is not so different than yours. Perhaps my point is just that a technology does not ever make a person adopt or endorse a goal. To echo the naive gun motto in a less memorable way, "Tools don't make goals; people make goals..."?

    1. Russell, thanks – I think this is really helpful. I think there are two independent points that we might draw from your comment. One I’ll adopt thankfully and the other I might resist a little.

      The one I want to adopt is the recognition that sometimes using a technology can affect our behavior, altering moods and personality traits, without introducing a new end. Your bully example fits that well. Sometimes we’ll find ourselves becoming more aggressive and it’s not that we suddenly have a new goal of aggression. In fact, this phenomenon might account for quite a lot of non-instrumental technology interactions.

      But let me stick up for the idea that sometimes goals really do change, and that this change can be traced to the effect of technology in such a way as to reduce the user’s moral responsibility. You hold your self-reflective bully fully responsible for her actions, but what about a bully that never has such an epiphany? This new bully gets an unexpected thrill from humiliating her cousin, and finds herself doing it again and again, to an extent that we’d now count it among her goals. She doesn’t self-identify as a bully, but rather she comes to think of herself as someone who’s “saying what everyone else is thinking” in a way that amuses other family members. Let’s stipulate that in physical family gatherings, she’d never have the gall. I think this is a case where the technology has infected our new bully to a degree, and since I see this as a part cause of the bully’s new set of ends, I think her actions have been partly determined by the social network technology, and accordingly she is less responsible for these actions. Just a bit. Not enough to let her off the hook entirely. Some of the disagreement here will come down to one’s stance on free will and determinism, and I suspect we’re on different sides of that debate. But what do you think of my unreflective bully?

  5. Hi Arthur,

    Just a quick comment - really on Selinger's point (and on Don Ihde's and Bruno Latour's), not yours.

    Selinger is not "clearly right." In my limited experience, the most common response to a firearm is not to make the wielder more reckless or aggressive. Quite the contrary: they become more cautious, reflective, and methodical in their actions. This is reinforced quite sternly by instruction.

    You might respond that this is still a change in one's goals produced by the artifact; it's just that there's more than one possible change.

    I think there's a better description. An individual whose moral dispositions incline toward responsible actions will bring that disposition to bear on the firearm. An individual whose moral dispositions incline in the opposite direction might tend to be reckless in this case too.

    So it seems that Selinger's thesis must be revised: responsible people will be responsible with guns; irresponsible people will be irresponsible with them.

    Not a tautology, but not a deep philosophical thesis either.

    To the possessor of a gun the world does "take on a distinct shape," as Selinger says. Just not the shape he implies. For instance the holder of a "concealed carry" license is legally required to exercise greater restraint in a confrontation than an ordinary citizen - even if they are not carrying a firearm at the time.

    1. Hi Tom. You may be right about the empirical claim regarding the effects of guns on most people. And it's good to get the reminder that a lot of this IS empirical and there's a limit to what the armchair can yield. Maybe Michael Dunn (the guy who shot into a car of teenagers) was already a hot-head and having a gun just brought out a disposition that was already there. I would wager that there will still be a statistically relevant number of people who take the trajectory Selinger describes, and even if it's a small percentage, that's going to be a lot of people and likely still worth pursuing regulation of one kind or another. The case that comes to mind is the Florida (always Florida) movie theater shooting that happened after a brief fight over texting. The shooter was a former police officer, and his family insists he's never been an angry person. Presumably he's had a lot of firearms training and should have been more restrained. I realize this case may not be representative of most gun owners, and we heard about it precisely because it was an unusual event. The question, I suppose, is how often a gun DOES cause a Selinger-like change in someone. And I wonder at what percentage we would think measures of the type I speculated about earlier need to be applied.

  6. I agree with Selinger, and Arthur too: the effect of possessing a weapon is strongly correlated with aggressive behavior, and at this point there are several decades of research providing the empirical evidence.
    Reading the summary of the studies, it appears that they were structured in such a way as to demonstrate a cause-and-effect relationship, rather than a self-selection one in which more aggressive people were more likely to choose to be armed.

    Either way, I'd breathe a lot easier walking around where people don't tend to carry weapons.