When we ask “Can a machine be conscious?” we often miss several important distinctions. With regard to the AI project, we need to distinguish at least between qualitative/phenomenal states, exterior self-modeling, interior self-modeling, information processing, attention, sentience, executive top-down control, self-awareness, and so on. Once we make a number of these distinctions, it becomes clear that we have already created systems with some of these capacities. Others are not far off, and still others present the biggest challenges to the project. Here I will focus on just two, following Drew McDermott: exterior and interior self-modeling.
A cognitive system has a self-model if it has the capacity to represent, acknowledge, or take account of itself as an object in the world with other objects. Exterior self-modeling requires treating the self solely as a physical, spatial-temporal object among other objects. So you can easily locate yourself spatially in the room, and you have a representation of where you are in relation to your mother’s house, or perhaps to the Eiffel Tower. You can also easily locate yourself temporally. You represent Napoleon as a 19th-century French Emperor, and you are aware that the segment of time that you occupy is later than the segment of time that he occupied. Children swinging from one bar to another on the playground are employing an exterior self-model, as is a ground squirrel running back to its burrow.
Exterior self-modeling is relatively easy to build into an artificial system compared to many other tasks that face the AI project. Your phone is technologically advanced enough to locate itself in space in relation to other objects with its GPS system. I built a CNC (Computer Numerical Control) cutting machine in my garage that I “zero” out when I start it up. I designate a location in a three-dimensional coordinate system as (0, 0, 0) for the X, Y, and Z axes, then the machine keeps track of where it is in relation to that point as it cuts. When it’s finished, it returns to (0, 0, 0). The system knows where it is in space, at least in the very small segment of space that it is capable of representing (about 36″ × 24″ × 5″).
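The zeroing-and-tracking behavior described above can be sketched in a few lines of code. This is a toy illustration, not real CNC control software; all class and method names here are hypothetical.

```python
class ExteriorSelfModel:
    """A minimal exterior self-model: the system represents its own
    location in a 3-D coordinate frame relative to a designated origin."""

    def __init__(self):
        # Zeroing: the current location is designated (0, 0, 0).
        self.position = (0.0, 0.0, 0.0)

    def move(self, dx, dy, dz):
        # Update the self-representation after each motion command.
        x, y, z = self.position
        self.position = (x + dx, y + dy, z + dz)

    def return_to_zero(self):
        # When finished, go back to the designated origin.
        self.position = (0.0, 0.0, 0.0)


machine = ExteriorSelfModel()
machine.move(10.0, 5.0, -2.0)
print(machine.position)   # (10.0, 5.0, -2.0)
machine.return_to_zero()
print(machine.position)   # (0.0, 0.0, 0.0)
```

The point of the sketch is how little is needed: a stored coordinate, updated with each movement, already constitutes a (very thin) representation of the self as an object among other objects in space.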
Interior self-modeling is the capacity of a system to represent itself as an information-processing, epistemic, representational agent. That is, a system has an interior self-model if it represents the state of its own informational, cognitive capacities. Loosely, it is knowing what you know and knowing what you don’t know. It is a system that is able to locate the state of its own information about the world within a range of possible states. When you recognize that watching too much Fox News might be contributing to your being negative about President Obama, you are employing an interior self-model. When you resolve not to make a decision about which car to buy until you’ve done some more research, or when you wait until after the debates to decide which candidate to vote for, you are exercising your interior self-model. You have located yourself as a thinking, believing, judging agent within a range of possible information states. Making decisions requires information. Making good decisions requires being able to assess how much information you have, how good it is, and how much more (or less) you need, or how much better you need it to be, in order to decide within the tolerances of your margins of error.
So in order to endow an artificial cognitive system with an interior self-model, we must build it to model itself as an information system, much as we’d build it to model itself in space and time. Hypothetically, a system can have no information, or it can have all of the information. And the information it has can be poor quality, with a high likelihood of being false, or it can be high quality, with a high likelihood of being true. Those two dimensions are like a spatial-temporal framework, and the system must be able to locate its own information state within that range of possibilities. Then the system, if we want it to make good decisions, must be able to recognize the difference between the state it is in and the minimally acceptable information state it should be in. Then, ideally, we’d build it with the tools to close that gap.
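A toy sketch can make the two-dimensional framework concrete. Here the system locates its own information state on the two axes described above (how much information it has, and how reliable that information is), compares that state to a minimally acceptable one, and chooses an action to close the gap. All names and threshold values are hypothetical, chosen only for illustration.

```python
from dataclasses import dataclass


@dataclass
class InfoState:
    coverage: float      # fraction of the relevant information held, 0..1
    reliability: float   # estimated probability the information is true, 0..1


def ready_to_decide(state, min_coverage=0.7, min_reliability=0.8):
    """Interior self-assessment: is my information state good enough
    to decide within my margins of error?"""
    return state.coverage >= min_coverage and state.reliability >= min_reliability


def next_action(state):
    # Close the gap between the current state and the acceptable one.
    if ready_to_decide(state):
        return "decide"
    if state.coverage < 0.7:
        return "gather more information"
    return "verify existing information"


print(next_action(InfoState(coverage=0.4, reliability=0.9)))  # gather more information
print(next_action(InfoState(coverage=0.8, reliability=0.5)))  # verify existing information
print(next_action(InfoState(coverage=0.9, reliability=0.9)))  # decide
```

The interesting part is not the thresholds but the structure: the system holds a representation of its own epistemic position, not just of the world, and acts on the difference between that position and the one it needs to occupy.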
McDermott, Drew. “Artificial Intelligence and Consciousness.” In The Cambridge Handbook of Consciousness, edited by Zelazo, Moscovitch, and Thompson, 117–150. Cambridge University Press, 2007.