[2] Sentiments such as these run quite deeply throughout much of cognitive science, and those who advocate studying qualitative experience within an information processing paradigm are generally viewed with suspicion. In an effort to give these sentiments a principled foundation, Tim Maudlin (1989) argues that the impossibility of any computationalist theory of consciousness follows more or less directly from three very simple principles of computationalism. If correct, this would indeed be a significant result, for it would lay to rest the issue of how to understand consciousness within an information processing framework. However, I believe that Maudlin errs in his argument. I find the error instructive because understanding it helps to clarify an important distinction in computational theory.
[3] Maudlin argues that a computational theory of consciousness is in principle impossible. In response, I suggest that Maudlin misunderstands the difference between an algorithm for computing some function and the actual computations themselves. This distinction is important because it helps clarify the significance of supervenience in theories of the mind.
[5] Generally, computationalists adhere to a stricter form of supervenience, a version restricted by the additional dimension of time: mental events or episodes supervene on concurrent brain events or processes. Any computational theory of consciousness must then hold that two identical physical systems engaged in precisely the same activity over time would support the same phenomenal experiences through that time. A materialist would hold that the occurrence or emergence of a conscious state depends completely upon the physical changes in the system that instantiates that state at the time of the experience. Thus, only the motion of the system at that time can be relevant to the conscious state. This more restricted principle tells us that if we introduce a causally and physically inert object into the system, the same physical changes would still support the same conscious state.
[6] Maudlin's second fundamental principle of computationalism concerns matters we have already hinted at above: the important level of organization for understanding cognition is the level which does our information processing. We can understand this level in terms of a machine table for a Turing machine, which describes all possible connections among the internal states of a machine and its various input and output mechanisms. The advantage of this description is that it is very precise. We can determine by straightforward analysis exactly what a machine table looks like once we have been given the internal states and the proper operating parameters. The price of this predictive precision, of course, is the machine table's descriptive paucity. As already alluded to above, this sort of computational description specifies no underlying physical mechanisms. A computational description of consciousness would therefore assume that phenomenal experiences require only that the correct program be executed by an appropriate machine, for consciousness would occur whenever that program is being run.
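Nothing in Maudlin's argument turns on the details of any particular formalism, but a small sketch may make the notion of a machine table concrete. The fragment below is only my own illustration (in Python), not anything drawn from Maudlin: it records, for each pairing of internal state and scanned symbol, what the machine writes, how the head moves, and which state comes next. The particular machine, a bit-inverter, is arbitrary.

```python
# A machine table: (state, scanned symbol) -> (symbol to write, head move, next state).
# The bit-inverting machine below is an arbitrary illustration.
TABLE = {
    ("scan", "0"): ("1", +1, "scan"),
    ("scan", "1"): ("0", +1, "scan"),
    ("scan", "_"): ("_", 0, "halt"),   # blank square: halt
}

def run(tape, state="scan", head=0):
    """Step through the table until the machine reaches its halting state."""
    tape = list(tape)
    while state != "halt":
        symbol = tape[head] if head < len(tape) else "_"
        write, move, state = TABLE[(state, symbol)]
        if head < len(tape):
            tape[head] = write
        else:
            tape.append(write)
        head += move
    return "".join(tape)

print(run("0110_"))   # -> "1001_"
```

The table says nothing about pipes, neurons, or silicon; any physical system whose state transitions mirror it counts as running this machine, which is just the descriptive paucity noted above.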
[7] Maudlin holds that these two principles, plus the third condition that consciousness requires a nontrivial computation, are mutually inconsistent, and he concludes that a computational theory of consciousness would not be possible. He illustrates his point using a very simple device made out of pipes, tanks, water, and a hose. A computation is simulated in this machine when a hose squirting water is attached to an apparatus and run past a series of tanks. This hose apparatus can either (1) fill empty tanks with squirting water, (2) empty full ones by knocking out their stoppers, which protrude from the bottom, or (3) do nothing to tanks that lack a protruding stopper and are surrounded by a shield which blocks the incoming stream of water. In this sort of machine, a computation then amounts to any attempt by the armature to change the level of the water in some tank. The question before us is whether this simple device could instantiate consciousness, even though it could instantiate any computation we choose.
[8] Maudlin maintains that even though this machine could compute any input-output pair, the hose apparatus alone could not instantiate any actual program since what it does is completely unresponsive to any of the "data" stored at the tank addresses.{1} If we had to define some algorithm that it is executing, we would have to say that it is "computing" a constant function since it gives the same "output" (squirting water), regardless of "input" (a stoppered, unstoppered, or shielded tank). Now such a simple algorithm or program could not support a conscious state, for otherwise it would go against the third principle, that only a nontrivial program can simulate consciousness.
[9] What would we have to do to our device such that we would conclude that it no longer computes a trivial function? We would have to arrange matters so that the machine could give a different output were it to receive a different input. That is, we need to arrange matters so that the machine would not always give the same output for any input set; it would just give the same output for the particular input set that it happens to have received in the case just sketched. In other words, we need to redesign our machine so that it supports the relevant counterfactuals. So let us add to our original machine additional tanks with floats, chains connecting the stoppers, and other similar gizmos and gadgets, so that we could say the hose apparatus would counterfactually behave differently, given different initial conditions (even though it actually never does).
[10] Notice that now we should conclude that it is actually calculating a more complex function, even though we could not make this inference just using the input-output pairs it actually produces.
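To make the contrast vivid, here is a deliberately crude model of the two devices. The class and function names are mine, and the sketch is only an illustration of the point, not a reconstruction of Maudlin's machinery.

```python
# A toy stand-in for Maudlin's device; purely illustrative.
class Tank:
    def __init__(self, full, stoppered, shielded=False):
        self.full, self.stoppered, self.shielded = full, stoppered, shielded

def original_hose(tank):
    """The unmodified armature: it squirts no matter what the tank is like."""
    return "squirt"

def redesigned_hose(tank):
    """With the floats and chains attached, the response depends on the tank."""
    if tank.shielded:
        return "do nothing"
    return "knock out stopper" if tank.stoppered else "squirt"

# On the one tank the apparatus actually encounters, the two behave identically...
actual = Tank(full=False, stoppered=False)
assert original_hose(actual) == redesigned_hose(actual) == "squirt"

# ...and only an input that never in fact occurs tells them apart.
counterfactual = Tank(full=True, stoppered=False, shielded=True)
print(original_hose(counterfactual), "vs", redesigned_hose(counterfactual))
```

The behavior of the two functions on the input actually received is identical; what differs is what each would do, which is exactly the counterfactual structure the added gizmos are meant to secure.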
[11] Nevertheless, we still would have a problem. The supervenience thesis tells us that any phenomenal experience must supervene on the physical motion that instantiates the computation. Hence, in Maudlin's example of a consciousness machine, the phenomenal experiences must supervene just on the activity of the hose apparatus, and possibly the water, since that is all the activity that there is. The tanks and whatnot added to make the program nontrivial are irrelevant for supervenience, since nothing actually happens to or with them. However, without them we are prevented from saying that the hose is performing some nontrivial calculation and from saying that consciousness (the alleged product or by-product of a series of computations) supervenes on its activity alone. In sum, a computationalist is committed to saying that the system with the hose moving and squirting water, but without the extra stuff being hooked up, cannot be conscious, yet the system with the hose moving and with the extra tanks could be conscious. The actual physical movements in the two systems are the same, regardless of whether the extra tanks are there, so the two claims contradict the principle of supervenience. That is, the physical motion is the same in the two machines (even though the parts of the machines that do not move differ) and only that physical motion is important for supervenience. However, only one of the machines is conscious and ipso facto has phenomenal states, so whatever consciousness is cannot be captured just in the physical computations of the system. As Maudlin writes, "the supervenience space of [the system's] ... computational description, indeed whether [it] is computing at all, depends vitally on the counterfactuals that the idle machinery supports. Hence, [its] conscious phenomenal states cannot derive from [its] computational structure" (423). Therefore, he concludes, a computational theory of consciousness is not possible.
[13] In order to maintain that the system is actually running the multiplication program, and not some other algorithm which looks something like multiplication under certain circumstances, we have to ensure that the program supports counterfactuals such that, given any two positive numbers as input, the machine would in fact multiply them together. Now, we could arrange the input sets such that the only numbers this program actually got to multiply together were zero and some positive integer, so that it looked as though the program were calculating a constant function. It would appear that, regardless of the input, the machine always spits out zero as output, though counterfactually it would behave quite differently.
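A small sketch may fix the point; the function names and the restricted input set are of course my own illustration, not Maudlin's.

```python
# Two candidate programs: on the restricted input set they are
# observationally identical, though they compute different functions.
def multiply(x, y):
    return x * y

def constant_zero(x, y):
    return 0

# The inputs the machine actually happens to receive: zero paired with
# some positive integer every time.
actual_inputs = [(0, 3), (0, 7), (0, 42)]

for x, y in actual_inputs:
    assert multiply(x, y) == constant_zero(x, y) == 0

# Only an input the machine never receives distinguishes the two.
print(multiply(6, 7), constant_zero(6, 7))   # 42 versus 0
```

Nothing in the physical activity that produces the three observed outputs settles which of the two functions is being computed.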
[14] This multiplication program could be instantiated on any number of physical systems, just as a consciousness program could. We could instantiate it, as in Maudlin's example, in a system of water, tanks, stoppers, and a hose. If we assume the level of water in each tank represents a natural number (and an empty tank denotes zero), then given the input set mentioned above, all our hose apparatus would do is knock out stoppers. For in order for our machine to multiply these particular numbers together, all we would need to do is arrange matters so that the system gives the same output, regardless of the input chosen from our input set.
[15] Now, were we to stumble upon such a machine exhibiting such an input-output set, we might be tempted to say that this system is calculating a constant function, for all that is actually occurring in this instance is that a series of tanks -- each filled to a different level with water -- have their protruding stoppers knocked out by a hose apparatus. Moreover, we could claim that the apparent "constant function" supervenes on the physical activity of the machine alone.
[16] However, if we want to maintain that the system is actually running the multiplication program, and not just some constant function, then we would have to attach different sorts of tanks with floats and such to our machine in order to guarantee that, regardless of the inputs the system might receive, it could still multiply the water levels together. Now, by hypothesis, our input set is arranged such that the machine will in fact only multiply natural numbers by zero. Hence, these additional constructions are causally and physically inert. Nevertheless, they are required in order for us to say that the machine truly instantiates a program that could multiply any two natural numbers together. Notice that since the extra apparatus is inert in this instantiation -- just as in Maudlin's case -- we are prevented from saying that the actual multiplication processing supervenes over the additional stuff.
[17] We seem now to be in the same situation as in Maudlin's consciousness program. The additional tanks are irrelevant for supervenience, since nothing happens to or with them, but without them, we are prevented from saying that the hose is performing some nontrivial calculation and from saying that multiplicative states supervene on its activity alone. According to this Maudlin-style argument, then, a computationalist is committed to saying that the system with the hose moving, but without the extra tanks being hooked up, cannot be performing multiplication, yet the system with the hose moving and with the extra tanks attached could perform multiplication. As before, the physical activity is the same, regardless of whether the extra tanks are there, so the two claims must contradict the principle of supervenience.
[18] Do we now want to conclude that a computational program (or theory) of multiplication is not possible? I would scarcely think so.{2}
[19] What has gone wrong with Maudlin's example and discussion, then? I believe that what Maudlin glosses over is that the actual set of input-output pairs used, which is what supervenes over the physical activity, is not the same thing as the algorithm or program of which that input-output set is one instance. The actual computation space, which makes up one relatum of the supervenience relation, is not equivalent to the theoretical computational domain a program operates over.
[20] Supervenience marks a relationship between a current and particular calculation (or whatever) and concurrent and particular physical activity. However, running a program is an event which not only extends over current and particular calculations and concurrent and particular physical activity, but also has information-theoretic ties to possible calculations and unrealized physical activity. As suggested above, these ties are needed to eliminate alternative interpretations of the ongoing calculations.
[21] What Maudlin does notice is that even if the objects in the computational space are already specified, unless we have a complete machine table -- that is, unless we know all the counterfactual alternatives -- the machine's actual behavior is ambiguous. For the only way to eliminate the constant function interpretation in the examples above is by understanding how the machine would behave given a different input set. (Of course, we have to specify the program prior to pinpointing any particular computation just because the program is what properly individuates the computational space for us. As Maudlin remarks, "A particular physical state only becomes interpretable as a machine state of a system ... in virtue of standing in the right counterfactual or subjunctive relations ... to the whole constellation of other states in the machine table" (419).) What Maudlin's examples show is that there is a double indeterminacy in any computational theory. Once we somehow overcome the general problem of semantic interpretation of the objects in the computational domain, we are still left with the difficulty of specifying the actual function computed. To solve this second problem, we must know more than the actual behavior of the system -- we have to understand how the system would behave were it to receive different inputs.
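The double indeterminacy can be put in the same computational terms (again, only as an illustrative sketch of my own): a finite record of what a machine actually did is consistent with several different programs, and only asking how each would behave on inputs the machine never receives narrows the field.

```python
# A finite record of what the machine actually did...
observed = {(0, 3): 0, (0, 7): 0, (0, 42): 0}

# ...is consistent with more than one candidate program.
candidates = {
    "multiplication": lambda x, y: x * y,
    "constant zero":  lambda x, y: 0,
    "minimum":        lambda x, y: min(x, y),
}

survivors = [name for name, f in candidates.items()
             if all(f(x, y) == out for (x, y), out in observed.items())]
print(survivors)   # all three survive: the actual behavior is ambiguous

# Only a counterfactual probe -- an input the machine never received --
# individuates the function being computed.
probe = (6, 7)
print({name: f(*probe) for name, f in candidates.items()})
```

The observed pairs supervene on the actual physical activity either way; which program is running is fixed only by the counterfactual alternatives.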
[22] However, these facts do not mean that, in the instances we discussed, the sets of actual input-output pairs could not be a subset of the input-output pairs of more than one program, nor that these sets fail to supervene over the actual physical activity. These examples pull apart the domain entailed by the supervenience relation from the domain needed to specify a computational theory and show that the relata of supervenience form a subset of the computational universe. Multiplying a set of numbers together does supervene on the physical activity which underwrites those particular computations, but running a multiplication program is a different beast entirely, and it extends over more than what actually changes in the world. Thus, it is entirely consistent to say that while the act of multiplying supervenes on some physical activity, running the multiplication program does not supervene on the same activity.
[23] Any computational theory of consciousness would work in the same way. To have a particular conscious state at a particular time does supervene on concurrent brain activity, for exhibiting any particular phenomenal state just is exhibiting the appropriate input-output pair (just as multiplying two particular numbers together just is exhibiting the appropriate input-output pair) -- even though that input-output pair is perfectly consistent with any number of different algorithms. For a materialist, this just means that to have any particular conscious state in any particular brain just is to have the appropriate neurons firing in the correct order (or whatever). However, to say something is a conscious system (or is running a consciousness program) requires more than a set of input-output pairs. It requires more than the particular neurons firing (or whatever). It requires the possibility for other sorts of states of consciousness than those that actually obtain, given different inputs.
[24] A computational theory of consciousness remains possible. (At least, Maudlin's arguments against one do not work.)
[26] This conclusion should be intuitively plausible for materialists (at least as intuitively plausible as any claim about what consciousness amounts to). For example, assume that the wind blew the leaves in my backyard into the right configuration such that, for a moment, they mimicked my brain state upon waking this morning. Surely, we would have to maintain that the leaves, for that moment, instantiated a phenomenal state, just because my brain state this morning had a phenomenal quality. And these phenomenal states supervene on both my brain state and the leaves' state. But we would not want to maintain that the leaves blowing about in my backyard are conscious, just because under different conditions (e.g., a moment later) they do not instantiate anyone's brain state.
[27] To summarize: prior acceptance of materialism entails that mental states supervene on brain states such that no (relevant) change in our brain states could occur without also altering the corresponding mental state. This supervenience shows itself when we study particular mental or brain phenomena. However, in detailing a functional program designed to explain consciousness, we should not expect the entire set of possible input-output pairs to supervene over past and future physical states, since the function or functions our cognitive systems compute must include counterfactual events that the system may never instantiate.
[28] Thus, any computational theory will always extend beyond any evidence we could gather concerning particular phenomenal states, and the referents of the predicates in the theories will be determined by more than any particular set of observational data. But this underdetermination is no different from that facing any other computational/functional theory.{3}
{2} I am assuming here (fairly uncontroversially, I believe) that (i) computational theories are in principle possible, and that (ii) mathematics, if anything, is computational, and so should be implementable on an appropriate computational machine.
{3} I would like to thank Bruce Glymour and an anonymous referee for their comments on earlier drafts of this paper.