
The Hard Problem is Dead;

Long Live the Hard Problem

Teed Rockwell

2419A Tenth St

Berkeley, CA 94710

510/ 548-8779 Fax 548-3326

74164.3703@compuserve.com

 

 

The Burden Tennis Stand-off


Dennett criticizes Chalmers for simply asserting, without justification, that consciousness is given in experience. But Chalmers says those who claim that we do not have direct awareness of consciousness need to be able to explain why so many people think we do. If you can't accept his position, Chalmers says, you need to be able to explain it away.

Traditionally, it was assumed by people as diverse as Freud and William James that the existence of consciousness as a distinctive phenomenon was not open to argument or question{1}. Ned Block spoke for this consensus when he said "our fundamental access to consciousness derives from our acquaintance with it." (Block 1992 p.205) Daniel Dennett, however, refuses to accept this as an answer. In Dennett and Kinsbourne 1992 (p. 240) he asks about consciousness, "well, what is it?", thus placing the burden of proof on Block's shoulders. Not surprisingly, he shows that Block's deliberately ostensive answers are essentially without content. In a commentary on Chalmers (Dennett 1996), Dennett similarly criticizes Chalmers for simply asserting that consciousness is given in experience. This seems like a philosophical gambit with an honored tradition. Ever since Socrates, an inability to respond when someone cries "define your terms" has been taken as a sign of philosophical weakness. But Chalmers, like most other defenders of qualia, refuses to take up the challenge.

I have assumed that consciousness exists, and that to redefine the problem as that of explaining how certain cognitive and behavioral functions are performed is unacceptable. . . .Like many people (materialists and dualists alike), I find this premise obvious, although I can no more "prove" it than I can prove that I am conscious. . . .there is no denying that such arguments - on either side - ultimately come down to a bedrock of intuition at some point. (Chalmers undated)

 

And so they are stuck in a stalled game of what Dennett calls burden tennis, with each side claiming that the ball is in the other's court. Some have despaired of finding any resolution for this stand-off (Hardcastle 1996, Guzeldere 1997a), but I do not share this despair. There is another passage from Chalmers which does give some hope of destabilizing this stand-off.

Dennett. . . has often stated how radical and counterintuitive his position is. So it is clear that the default assumption is that there is a further problem of explanation; to establish otherwise requires significant and substantial argument (Chalmers 1997 p.9)

In other words, those who claim that we do not have direct awareness of what Chalmers calls "experience" need to be able to explain why so many people think we do have this direct awareness. If you can't accept his position, Chalmers says, you need to be able to explain it away. This is a legitimate point, and rising to Chalmers' challenge is the best way to break the stand-off. I would, however, like to raise two other points to make the task no more challenging than it has to be.

First of all, Chalmers admits that positing experience as a fundamental ontological category is necessary only because it is unavoidable, and that such a posit is in many ways an unsatisfying last resort.

by taking experience as fundamental, there is a sense in which this approach does not tell us why there is experience in the first place. (Chalmers 1995 p.210)

As Chalmers points out, this kind of posit is not unprecedented. All theories have to have some fundamental posits, and occasionally whole new fundamental entities are introduced into a theory, as when Maxwell and others posited electromagnetic charge and forces as fundamental in order to explain phenomena that mechanical principles could not account for. But this has to be seen as a last resort, or there would be almost no ontological unity to science at all. Chalmers' theory seems to be especially ontologically promiscuous, for it requires us to posit physical-mental "Siamese fraternal twins" which don't resemble each other, but are joined at the hip for all time for some inexplicable reason. It may be that reality is ontologically messy, and we just have to learn to live with that fact. But if there is another theory which accounts for the same facts with more simplicity and elegance, it should be considered to be more acceptable.

Secondly, there are some aspects of Chalmers' theory which seem to indicate some sort of conceptual confusion. The "hard problem" is not just hard to answer, it is clearly impossible to answer as it is currently formulated. I don't mind rolling up my mental sleeves and trying to clarify a difficult or confusing problem, but there is nothing confusing about explaining consciousness as Chalmers defines the task. It seems as clear as crystal that no answer, either verbal or mathematical, could possibly be immune to the objection "But I can imagine something having X and not being conscious". To offer any explanation whatsoever would be like measuring triangle after triangle in hopes of eventually finding one whose angles sum to more than 180 degrees.

Again, neither of these problems is sufficient to dismiss Chalmers' claims. It may very well be that reality is just funny that way. But if there is any other theory without these problems that can explain all the same facts, it should be considered preferable. In short, if there's a tie, I win.

 

The Concept of Consciousness and

its Relationship to Other Concepts

When a question is clearly unanswerable in principle, it's probably formulated incorrectly, which means that the problem is most likely philosophical, and not scientific. This does not mean that scientific data should be ignored, but it does mean that the data must be analyzed very carefully, and may need to be reinterpreted in radical ways. I think the hard problem needs to be dealt with by analyzing and reconfiguring many of our basic assumptions not just about consciousness, but also about what appear to be concepts only distantly related to it. Specialization of subject matter, although often a good strategy for scientists, is usually a terrible strategy for philosophers. The thing that gives a philosophical inquiry focus, and saves the philosopher from the epistemic sin of dilettantism, is not concentration on a subject matter, but concentration on a problem. Philosophical dead ends almost always arise when philosophers try to resolve a paradox on its own terms, rather than question the presuppositions that make the paradox unavoidable. Assuming that philosophy of consciousness is a self-contained subject will probably ensure that its problems generate journal articles until everyone gets tired of getting nowhere and starts thinking about something else. But if we start asking ourselves what other philosophical assumptions give rise to the hard problem, we may be able to come up with a different ontology or epistemology or metaphysics that could help dissolve it.

When Chalmers claims that explanations which only explain structure and function won't explain consciousness, this could be indicative of the fact that there is a mystery about consciousness. But it is at least as likely that the problem lies with our concept of explanation. If we had a variety of different explanations, some of which relied on structure and function, and others that relied on something else, this might require us to give a special status to experience. But as far as I can see, what Chalmers means by "structure and function" is the same thing as "explanation": To say that something has structure and function is simply to say that it is explicable. Chalmers says that by function he means "any causal role in the production of behavior that a system might perform." (Chalmers 1995 p.202). And he gives no example of an explanation that doesn't explain something by giving its structure and function. If it is impossible to say anything about anything at all, either mathematically or verbally, without giving its structure and function, then the fact that experience cannot be explained without describing structure or function seems to be as much a characteristic of explanation itself as of experience.

I'm therefore going to begin this paper by considering the relationship between knowledge and experience. Most of us assume that knowledge consists of causal explanations, so if we cannot establish a relationship between causal explanations and experience, this would apparently create problems for the task of relating knowledge and experience. I believe that many of Chalmers' more counterintuitive claims follow from epistemological presuppositions of this sort that are widely believed by almost everyone, except for a few professional philosophers. The fact that Chalmers' conclusions are so counterintuitive, however, indicates that there may be good reasons for reexamining those premises, regardless of how undeniable they may seem. The premises I would like to consider are exemplified by the following quotes from Chalmers.

Conscious Experience, by contrast, forces itself upon us as an explanandum and cannot be eliminated so easily. (Chalmers 1996 p.109)

Experience is the most central and manifest aspect of our mental lives, and indeed is perhaps the key explanandum in the science of the mind. Because of this status as an explanandum, experience cannot be discarded like the vital spirit when a new theory comes along. (Chalmers 1995 p.206)

If it were not for the fact that first-person experience was a brute fact presented to us, there would seem to be no reason to predict its existence (Chalmers 1990)

The most obvious interpretation of these statements is something resembling the Cartesian "Cogito ergo sum": our subjective experience is where all of our attempts to understand the world begin, because it is directly given to us as a brute fact. Unlike Descartes, Chalmers does not necessarily claim that the contents of our conscious experience are all given to us with certainty. Nothing in the above passages implies that it is impossible for us to be mistaken about the nature of our mental states. But Chalmers claims that the existence of the mental states is not open to question. They are there as a brute fact, and consequently that fact needs to be accounted for.

The main intuition at work is that there is something to be explained--some phenomenon associated with first person experience. . . The only consistent way to get around the intuitions is to deny the problem and the phenomenon altogether. (Chalmers 1996 p.110)

The main theme of this paper is that the second sentence in the above quote does not follow from the first. I intend to get around those intuitions by accounting for their existence, not by denying it. But accounting for their existence is very different from taking it for granted. Just because the existence of consciousness as Chalmers describes it seems to be a brute fact, does not mean that it is a brute fact. If I can come up with an alternative explanation why it seems to Chalmers and others that consciousness forces itself upon us as a brute fact that evades all causal explanation, I do not have to take their claims at face value. And if that alternative explanation is simpler and more coherent than Chalmers', he will no longer have the epistemic right to describe his feelings on this subject as an unquestionable "bedrock of intuitions". For such a supposed bedrock could be used to justify an entirely different theory of consciousness if it were reinterpreted in a different context.

The Sellarsian Alternative to the Given

My primary inspiration for this alternative explanation will be Wilfrid Sellars' critique of "the Myth of the Given". But I will not limit myself to Sellars scholarship. I will borrow from the ideas of those who have built on Sellars' insights, and freely add interpretations of my own. My goal is to sketch an alternative explanation that is directly aimed at Chalmers' description of common sense intuitions, one that will enable us to reinterpret them in new and fruitful ways. We can, however, begin with a quote from Sellars' "Empiricism and the Philosophy of Mind", which Sellars considered important enough to put in italics.

For we now recognize that instead of coming to have a concept of something because we have noticed that sort of thing, the ability to notice a sort of thing is already to have the concept of that sort of thing and cannot account for it. (Sellars 1963 p. 176)

What makes this quote relevant to our discussion is that Sellars believed that this was true not only for concepts like "table" and "green", but also for the concepts with which we comprehend our subjective experience. To paraphrase the above quote, we do not come to have a concept of subjective experience because we have noticed that we have subjective experience. Rather the ability to notice that we have subjective experience is already to have the concept of it. This means that because we have inherited a folk-Cartesian concept that includes a doctrine of direct access, we will experience ourselves as confronting first person subjectivity as a brute fact, an explanandum, and thus the hard problem seems to be unavoidable. This is why a large majority of people think that something other than structure and function needs explaining: Because introspection tells them so. However, the Sellarsian view can account for this experience without asserting that we must know on the basis of introspection that we are conscious.

Sellarsians claim that the existence of internal states is something we posit to explain certain facts about our life in the world (for example, the fact that sometimes we hallucinate and have red-apple experiences when there are no red apples present, or that red apples look orange in yellow light). Consequently, introspection does not directly reveal to us that we have mental states; rather, introspection is only possible because we accept a theory that posits the existence of mental states that can be introspected. Those who have genuinely appropriated such a theory spontaneously make a distinction between inner events and outer events, and it thus seems to them that the inner events are directly given, and that the outer events are only inferred from those inner events. Professional epistemologists eventually refined this assumption into various sense datum theories. These became so full of conceptual tangles that eventually Sellars had to come along and posit a new theory, which said that none of our experience is directly given, not even our experience of our own inner states.

Sellarsians do not deny that it seems to us that our mental life is central and manifest, and the fundamental explanandum in the science of mind. But they account for this appearance by saying that our theories trickle down into our experience in such a way as to make the distinction between explanandum and explanans untenable. The explanandum is a spontaneous taking, the explanans is a deliberate taking, but both are theoretical judgments, and will inevitably change places in our epistemic space as we learn to know our way around in our environment. When a theory helps us become more at home in the world, we naturally begin to experience the world in its terms. This is what Patricia Churchland meant when she said "the available theory specifies not only what counts as an explanation, but also the explananda themselves" (P.S. Churchland 1986 p.398).

Because our inherited folk-Cartesianism posits consciousness as something directly given, and takes cognitive functions and contents to be mediated or inferential, we begin to spontaneously take there to be something we call "experience", which is supposedly different from the things in the outside world that are being experienced. It thus seems obvious to us that experience is a further prima facie phenomenon that needs explaining. It also seems obvious that anything that looks like what we think of as a structure and function based explanation cannot possibly do the job. But the fact that this seems obvious does not necessarily mean that it is true. It just means that it appears to be true for those who are spontaneously experiencing the world in folk-Cartesian terms.

This is largely a paraphrase of Sellars' "Myth of Jones" in "Empiricism and the Philosophy of Mind". (Paul Churchland also expands on this point on pp. 67-83 of "Matter and Consciousness".) But what makes it relevant to our discussion is that it (begins to) meet Chalmers' challenge of explaining why it seems to so many people that there is a direct awareness of our mental states, without requiring us to believe that there actually is such a direct inner awareness. Those of us who have appropriated this Sellarsian world view do not experience ourselves as having this direct inner awareness. Rather, we experience ourselves in terms of a theoretical system that acknowledges itself to be the introspective subset of our conceptual world view. Both the Cartesian system (which accepts the concept of immediately given experiences) and the Sellarsian system (which does not) have conceptual apparatuses for accounting for private experiences, which I will call their third person concept of the first person, or third-person-first-person for short (3P1P for even shorter). The Cartesian 3P1P starts from the assumption that our inner experiences are directly given. Because this assumption apparently does have some isomorphism with reality, it usually becomes a self-fulfilling prophecy for those who accept it, and they end up experiencing the world in quasi-Cartesian (Chalmersian?) terms. But those of us who acknowledge that explananda and explanans can change places during the study of introspective experience will find our prophecies self-fulfilled as well. I have experienced my 3P1P concepts grow and develop as I try to understand myself, and this growth and development seems to be essentially isomorphic with the way I make sense out of my encounters with the external world. The relationship between my concept of fear and individual instantiations of fear does not seem to me to be significantly different from that between my concept of dog and individual instantiations of dogness (or rather, the difference is not captured by saying that there is a hard problem involved in trying to explain the former). So I find appeals to fundamental intuitions about consciousness as a brute fact to be unconvincing, and I would guess that Dennett and Churchland feel the same way for similar reasons.

Chalmers comes close to acknowledging this when he says in "Moving Forward on the Problem of Consciousness" that "one is tempted to agree that {Dennett's Commentary} might be a good account of Dennett's Phenomenology" (Chalmers 1997 p.7). I think that if he accepts that temptation, rather than assuming that we all share the same phenomenology and that our disagreements are matters of logical confusion, the nature of the argument has to change in important ways. If he is willing to believe me when I say that I don't have an awareness of something called experience which is distinct from structure and function, how can he account for this fact? It will not be as difficult for Chalmers as it would be for Descartes, for Chalmers is only claiming that we are directly certain that there is something to be explained. He is not claiming that there are any characteristics of that something that we can be certain of. Consequently, he could simply say (and in fact did say in correspondence) "I do think that in fact you and Dennett have an awareness of what I'm calling experience. I just think that {your} philosophy leads you to say some false things about it." It is true that we qualophobes could be so repulsed by the ontological messiness of Chalmers' theory that we could end up repressing what he apparently assumes is our direct awareness of experience. But if our theory permits us to account for Chalmers' belief in direct awareness without having to accept his ontologically messy world view, then it seems to me that we have no reason to believe that there is anything to repress.

Chalmers' Reliance on the Given

If Chalmers takes my criticism seriously, he will have to reformulate many of his favorite arguments, even if he does not have to abandon the position for which he is arguing.

For example, consider this argument from Chalmers 1995:

If someone says "I can see that you have explained how DNA stores and transmits hereditary information from one generation to the next, but you have not explained how it is a gene", then they are making a conceptual mistake. All it means to be a gene is to be an entity that performs the relevant storage and transmission function. When someone says "I can see that you have explained how information is discriminated, integrated, and reported, but you have not explained how it is experienced", they are not making a conceptual mistake. This is a nontrivial further question. (p. 203)

Why is this a nontrivial further question? If Chalmers is appealing to the assumption that we have a direct awareness of given subjective experience, there is no need for him to say anything more. But for those of us who reject that assumption, he has simply stated his belief, not given an argument for it. Without any such supporting direct awareness, there is no apparent difference between the gene example and the experience example. Without the supporting reference to direct awareness, it appears that what he is saying is that all of the components of consciousness can be explained by scientific means, but for some reason the whole of consciousness (which he calls "experience") eludes physical science. A zombie is thus a sort of mirror image of Aquinas' description of transubstantiation:


Aquinas' communion wafer is the Body of Christ even though it possesses none of the characteristics that make it so.


A zombie is not a conscious being even though it possesses all of the characteristics to make it so.

Aquinas used scripture as the reason for accepting transubstantiation. It appears that Chalmers is using the Myth of the Given in a similar way to justify the existence of zombies. Dennett's example of the "Zagnet", which behaves exactly like a magnet but lacks something like inner "Magnetismo", was meant to make the same point. Why should a Zombie be taken more seriously than a Zagnet? The only answer seems to be something like "because we have a direct awareness of our mental states, and no direct awareness of magnetismo." Because we can't say anything about what Chalmers calls "experience" (to describe it you would have to give it structure and function), it appears that it must be directly given for us to have any reason to believe it exists at all. And the claim that experience is central and manifest, and an explanandum rather than an explanans, seems to be a paraphrase of the claim that it is given. Similarly, on page 109 of Chalmers 1996 he says "Conscious Experience, by contrast, forces itself upon us as an explanandum and cannot be eliminated so easily". Unless he explains a specific sense in which things can force themselves on us when they are not directly given, the natural assumption is that he is taking for granted that conscious experience is directly given to us.

This scattering of examples does seem to show that Chalmers has permitted himself to rely on arguments and presuppositions that take direct awareness of our mental states for granted. But this does not mean that there is no hard problem if we abandon those arguments. Sellars still insisted that there are strong theoretical considerations that require us to posit the existence of something he called sensations, which underlie all of our concepts but which are themselves non-cognitive. And although Sellars believed he had adequately accounted for the dualism between thoughts and physical bodies, he still felt that sensations were something of a mystery for his scientific realist world view.

The Hard Problem without Givenness

Perhaps the best-known quote from Sellars is the phrase "All awareness. . . is a linguistic affair" (1963 p.160). When Dennett claims that all perceptions are kinds of judgments in "Consciousness Explained" (1991), he is refining and extending this basic Sellarsian principle. This is why Dennett's interpretation of the blindsight experiments rejects the possibility of separating qualitative experience from information. For Dennett, qualitative experience without information is a contradiction in terms (p.322). It is also the primary motivation for Dennett's extended attack on external images in Chapter 10, and the critique of the concept of "filling in" with the qualitative stuff he derisively labels "figment" (p.344). Dennett could claim that in these critiques he is consistently applying this Sellarsian dictum. But in doing so, he is being more royalist than the King, for he blurs a distinction that Sellars himself thought was extremely important.

Sellars, like most analytic and post-analytic philosophers, embraced the assumption that questions about knowledge and thought are primarily questions about language. But unlike most of his contemporaries, Sellars was also aware of the limitations of this assumption. In the following quotation, he almost admits that there is something wrong with his approach, without actually suggesting that there is any other way of dealing with these questions.

 

Not all 'organized behavior' is built on linguistic structures. The most that can be claimed is that what might be called 'conceptual thinking' is essentially tied to language, and that, for obvious reasons, the central or core concept of what thinking is pertains to conceptual thinking. Thus, our common-sense understanding of what sub-conceptual thinking -- e.g., that of babies and animals -- consists in, involves viewing them as engaged in 'rudimentary' forms of conceptual thinking. We interpret their behavior using conceptual thinking as a model but qualify this model in ad hoc and unsystematic ways which really amounts to the introduction of a new notion which is nevertheless labeled 'thinking'. Such analogical extensions of concepts, when supported by experience, are by no means illegitimate. Indeed, it is essential to science. It is only when the negative analogies are overlooked that the danger of serious confusion and misunderstanding arises. (Sellars 1975 p.305)

When Sellars wrote about perception, his commitment to the principle that all awareness is linguistic often prompted him to equivocate about "organized behaviors" that were not built on linguistic processes. He claimed that the apparently unified process of seeing an object in front of us consists of two distinct processes, and (as he often said in class) the only reason that these processes occur simultaneously is that people can chew gum and walk at the same time. The first process consists of the occurrence of a sentence in the mind, saying something like "lo, a pink ice cube over there", and is completely cognitive. The second process consists of a manner of sensing, an occurrence in the mind of a process Sellars called "sensing a-pink-cube-ly". Sellars usually described this second process as completely non-cognitive, but he never felt completely comfortable with this description. Consider, for example, this passage from Sellars 1963, which offers an explanation for how linguistic knowledge can be derived from pre-linguistic experience.

 

While Jones's ability to give inductive reasons today is built on a long history of acquiring and manifesting verbal habits in perceptual situations, and, in particular, the occurrence of verbal episodes, e.g. "This is green," which is superficially like those which are later properly said to express observational knowledge, it does not require that any episode in this prior time {i.e. before Jones had language} be characterizeable as expressing knowledge.

. . . {Footnote added in 1963} My thought was that one can have direct (non-inferential) knowledge of a past fact which one did not or even (as in the case envisaged) could not conceptualize at the time it was present.{5} (Sellars 1963 p.169)


Note that this passage equivocates on whether sensory experience is knowledge or not. The original passage says sensory experience is "not . . . characterizeable as expressing knowledge", while the footnote says sensory experience is "direct (non-inferential) knowledge".

In "the Structure of Knowledge" Sellars attempts to clarify this distinction between thought and sensing by saying that musicians and composers have two different ways of thinking about their art. They can think about sound (i.e. linguistically) and they can also non-linguistically think in sound. He then makes the following conclusion from this, which seems to contradict many of his other statements.

 

There is much food for thought in these reflections. . . But the fundamental problems which they pose arise already at the perceptual level. For as we shall see, visual perception itself is not just a conceptualizing of colored objects within the visual range-a 'thinking about' colored objects in a certain context--but in a sense most difficult to analyze, a thinking in color about colored objects (Sellars 1975 p.305)

Sellars was certainly right that this sense was most difficult to analyze, and the more he wrote on this subject, the more obvious the difficulty became. This paragraph actually seems to imply that Sellars believed that sensations are different from both the linguistic and the non-linguistic thoughts we have about them. This seems to leave us with three categories of mental events, exemplified by 1) linguistic thoughts about sound, 2) non-linguistic thinkings in sound, and 3) auditory sensations. One wonders how many more mediating entities would have to be posited if we continued along these lines. And what function would the completely non-cognitive sensation perform if we had both linguistic and non-linguistic concepts? Why not just say that the world caused the non-linguistic concept, and eliminate the non-cognitive sensation as an unnecessary middle step? Clearly there was a tension in Sellars' thinking on this point that was very difficult for him to resolve.

This tension is completely absent, however, in Rorty's interpretation of Sellars. His initial description of the problem nicely captures the essence of Sellars' distinction between knowledge and sensation.

 

Sellars invokes the distinction between awareness-as-discriminative behavior and awareness as what Sellars calls being "in the logical space of reasons, of justifying and being able to justify what one says." Awareness in the first sense is manifested by rats and amoebas and computers; it is simply reliable signaling. Awareness in the second sense is manifested only by beings whose behavior we construe as the uttering of sentences with the intention of justifying the utterance of other sentences (Rorty 1979 p.182)

But Richard Rorty's description of the relationship between these two kinds of discriminative behavior ignores Sellars' ambivalence, describing him as being, like Dennett, firmly committed to the idea that all awareness is linguistic.

 

Either grant concepts to anything (e.g. record-changers) which can respond discriminatively to classes of objects, or else explain why you draw the line between conceptual thought and its primitive predecessors in a different place from that between having acquired a language and being still in training (ibid. p.186)

Rorty imagines Sellars placing this dilemma before those who reject the claim that all awareness is a linguistic affair. But it can easily be placed before Sellars himself during those times he is trying to salvage some kind of non-cognitive consciousness for sensations/experience. There is no point in criticizing Rorty's Sellars scholarship; the texts are ambiguous enough that his resolution of the ambiguity is as accurate as any. But the other resolution of the ambiguity--saying that there are two different kinds of awareness, which follow different rules--can be more fruitful if we combine it with certain insights from Dewey.

Sellars and Dewey on Experience

Dewey wanted to claim that experience is not just vaguely perceived knowledge, but something different in kind from knowledge; something constituted by our habits, skills, and abilities, and necessarily linked to our goals, aspirations, and emotions. Throughout its history, philosophy has concentrated on understanding the architecture that made rational thought possible: i.e. language. Language is the thing which separates us from the brutes, and entitles us to the Aristotelian honorific "Rational Animal". For that reason, most philosophers associated conceptual thought with the abstract and the spiritual, and it was thus considered to be the thing that made us conscious beings. Felt experiences were considered to be thoughts that were deficient in some way: They were confused thoughts, waiting to be brought into focus, or atomistic bits of thought waiting to be assembled into scientific theory. Dewey's radical claim was that although "knowing is one mode of experiencing" (Dewey 1910/1997 p. 229) experience in general had more fundamental rules of organization that were all its own.

By our postulate, things are what they are experienced to be; and, unless knowing is the sole and only genuine mode of experiencing, it is fallacious to say that Reality is just and exclusively what it is or would be to an all competent knower (ibid. p. 228)

Until very recently, it was usually assumed that both sensations and thoughts were constituted by processes that were essentially the same and essentially linguistic and propositional. Positing sensations/qualitative experience as being significantly different from concepts/language sounded mystical and ineffable, because the sole model we had for cognitive activity was a linguistic one. Dewey made an important contribution by showing the advantages of assuming that thought and experience were different processes that were cognitive in different ways. Unfortunately, because he could not explain the two different mechanisms that distinguished knowing from feeling, he had to assert this claim as an unproven postulate. But modern connectionist neuroscience has now caught up with Dewey's vision, and shown that there are non-linguistic cognitive processes that enable us to do many things (particularly perceptual pattern recognition) in ways that are very different from language-based cognition. (See Churchland 1989 and 1995 for the scientific details.)
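To make this contrast concrete, here is a minimal sketch of the kind of non-linguistic, vector-transformation-based discrimination described above. It is written in Python using only numpy; the toy data, the two-layer architecture, and the training details are illustrative assumptions of mine rather than anything drawn from Churchland's texts. The point is simply that everything the trained system "knows" is stored in weight matrices that transform input vectors into output vectors; nothing in it is a sentence, a symbol, or a rule.

```python
import numpy as np

# An illustrative connectionist sketch (my own toy example): a tiny
# two-layer network that learns to discriminate two classes of input
# vectors.  All of its "knowledge" lives in the weight matrices W1 and
# W2; there are no sentences, symbols, or rules anywhere in the system.

rng = np.random.default_rng(0)

# Toy data: two clusters of points in the plane (class 0 and class 1).
X = np.vstack([rng.normal(-1.0, 0.5, (50, 2)),
               rng.normal(+1.0, 0.5, (50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)]).reshape(-1, 1)

# Network parameters: 2 inputs -> 4 hidden units -> 1 output.
W1 = rng.normal(0.0, 0.5, (2, 4))
b1 = np.zeros(4)
W2 = rng.normal(0.0, 0.5, (4, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Training is nothing but repeated vector transformations followed by
# small adjustments to the weights (gradient descent on squared error).
for step in range(2000):
    h = sigmoid(X @ W1 + b1)        # hidden-layer activation vectors
    out = sigmoid(h @ W2 + b2)      # the network's "discrimination"
    err = out - y                   # how far off it currently is

    grad_out = err * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= 0.1 * (h.T @ grad_out) / len(X)
    b2 -= 0.1 * grad_out.mean(axis=0)
    W1 -= 0.1 * (X.T @ grad_h) / len(X)
    b1 -= 0.1 * grad_h.mean(axis=0)

# After training, the network sorts new points into the two classes,
# although nothing in W1 or W2 can be read off as a statement about them.
test = np.array([[-1.2, -0.8], [0.9, 1.1]])
pred = sigmoid(sigmoid(test @ W1 + b1) @ W2 + b2)
print(np.round(pred, 2))  # low for the first point, high for the second
```

A system like this can come to sort patterns reliably even though nothing inside it could serve as a premise or a conclusion in the Space of Reasons, which is all the analogy is meant to suggest.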

Although there has been some controversy over whether the regions of computational space in a connectionist neural network are describable as representations, no one would claim that their interactions are fundamentally describable as a language. Even those who believe that a Language of Thought theory is the best description of higher cognitive processes acknowledge that connectionism successfully implements "lower" processes like perception and motor control. And this means that no Language of Thought theorist denies that there are many activities that are describable as cognitive in some sense which is clearly not linguistic. The controversy between Language of Thought theorists and connectionists is over whether connectionism can eventually reduce or eliminate a Language of Thought. No one is claiming that things could go the other way around, i.e. that we could eventually describe connectionist nets with a Language of Thought theory.

What I am suggesting is that consciousness may be an emergent property of two distinct but closely related subsystems, each of which is responsible for a different kind of awareness. One sort of awareness enables us to occupy what Sellars (1963) calls the Space of Reasons: the linguistic realm of logical explanations and communication. The other enables us to perform the kind of discriminative signal processing manifested by rats and amoebas. The first kind makes possible the sort of awareness which is a linguistic affair; the second kind makes possible what Dewey calls experience and Sellars calls sensations.

There are passages in Dewey that are strikingly parallel to Sellars on this point.

. . . .to be a smell is one thing, to be known as a smell, another; to be a "feeling" one thing, to be known as a "feeling" another (ibid. p.81)

If we substitute "a-pink-cube-ly" manners of sensing for smells and feelings, this claim is identical with the Sellarsian distinction between thoughts and sensations we discussed earlier. There are numerous distinctions between Sellars' concept of sensation and Dewey's concept of experience, but there is enough in common between them to provide a foundation for a non-Cartesian form of dualism. This would be a very relative sort of dualism, of course. Both linguistic thought and non-linguistic connectionist experience are implemented in physical systems, which differ only in structure, not in fundamental substance. And contemporary neuroscience strongly implies that linguistic structures are also specific implementations of certain connectionist structures. But it appears plausible that connectionist structures that don't implement language are experienced differently by us than those that do, and that this could account for the qualitative dichotomy between thought and experience.

Sellars often said that the sensing part of a perceptual act was non-inferential, but he did not mean by this that it was "given" in the way that classical empiricism said that sense data were given. On the contrary, this claim became the basis of one of Sellars' most important critiques of sense datum theory. What he meant by this was exactly what he said: perceptual sensations are non-inferential because you can't make inferences from them. It really is impossible to make an inference from a particular "sensing a-pink-cube-ly", even though it is fully justifiable to make an inference from the mental sentence that accompanies it. To think otherwise (as did the sense datum theorists) is to make a category mistake along the lines of trying to make an inference from a walnut (as opposed to making inferences from a sentence about a walnut).

The fact that you cannot make logical inferences from a sensation bars the sensation from ever entering the Space of Reasons. But this does not mean that the sensation is not cognitive in the broad sense that connectionist cognitive science defines "cognitive". For if a sensation is the product of a series of appropriate vector transformations, it might enable us to move skillfully through the world even if it was not accompanied by any sentences that would enable us to talk to ourselves (and others) about it. It might, for example, help me in reaching skillfully across the room to pick up the pink ice cube and put it in my drink, even though I was completely absorbed in the discussion of some other topic. In order to do this, however, it would have to be something other than a single "a-pink-cube-ly" manner of sensing that was joined at the hip to a sentence like "lo, a pink ice cube". It would instead have to be like what Dewey described as experience: a moment in an experiential process that was constituted in part by its relationship to a series of ongoing projects in the world.

Rorty's grouping of computers with rats and amoebas in the above-cited quote shows that he considers the distinction between the linguistic and the non-linguistic to be merely the distinction between the complex and the simple. Computers are of course much simpler than we are (at the moment). But regardless of their simplicity they are still devices designed to help us function in the logical Space of Reasons. In contrast, a connectionist system can be as complex as a logic or language-based computer, and far more skillful at what it does best. It is a common mistake to assume that only simple functions can be performed without verbal processing, and thus we describe sensations with the pejorative term "raw feel". One of the things we have learned from the scientific study and philosophical analysis of neural networks is that their kind of discriminative signal processing can be more complex than what most computers do, and that in us it is far more complicated than in rats or amoebas. The fundamental principles that govern this kind of processing are not linguistic, which is why humans with "know-how" can often do things that they cannot explain, even to themselves. And even animals possess cognitive processes that enable them to be skillful enough to avoid predators, remember where they have stored food, and recognize kin, all without any help from that awareness which is a linguistic affair.

If this kind of vector transformation-based cognition is capable of producing consciousness when it reaches a certain level of complexity, then the higher non-linguistic animals would be to some degree conscious in this sense. For humans, this kind of consciousness would provide the qualitative background in which our linguistic consciousness would dwell. Language might be the factor that enables us to have what Rosenthal (1986, 1990b) calls higher order thoughts that pick out individual items in our qualitative space and make them present-at-hand{2}. But if I'm right about this, a language processing machine would not be able to produce consciousness all by itself, no matter how many Turing tests it could pass. Language would perform a supplementary function that enriched our awareness of the qualitative experience produced by vector transformations, but by itself the language producing machine would be a zombie. This is almost the reverse of Dennett's position on animal consciousness (Dennett 1996). Dennett claims that animals are not conscious because they lack linguistic processing. I am saying that linguistic processing enriches and deepens consciousness, but by itself it cannot produce consciousness.

If connectionist nets can generate conscious experience without language, this experience would be as fundamentally private as language is fundamentally public. Consequently, any attempt to incorporate this kind of knowledge into the logical Space of Reasons would be necessarily doomed to failure. We could not completely capture the experience of this kind of discriminative processing by describing its structure and function in linguistic terms. This processing is distinct from linguistic processing, just as language is distinct from anything else it describes. And thus we would experience an explanatory gap when we compare the linguistic component of perceptual judgment to the manner of sensing that accompanies it. There is obviously more there than simply the observation sentence, but what? Words fail us, but that is because we have other cognitive processes that make words unnecessary. These cognitive processes have no need for the logical Space of Reasons, although they are guided and corrected by our frequent visits there, and by the advice and communications we receive from those who share that space with us. But the processes themselves are not designed to communicate or convince anyone of anything. They are designed to move me skillfully through the world. I could in principle "understand" those processes by putting my brain under a cerebroscope, and describing all of those processes in intersubjective terms. But knowing how to describe those processes would necessarily be a different ability from the one being described.

When we learn folk psychology, we acquire what I earlier called the third-person concept of the first person (3P1P for short). The 3P1P enables us to think about and classify mental states in terms that can be communicated to other people, and these are the same concepts which enable us to make sense out of our own mental life, rather than just experience it. Because we have this concept, we know what pains are, we know how people behave when they have them, and we know what it is like for us to have them. And we also know, because it is a necessary part of the 3P1P concept, that it is very different for other people to have pains than it is for us to have them. We make a variety of inferences and descriptions based on this fact, and almost all of these descriptions and inferences underline the difference between what is subjective and what is knowable. In order to determine what we know and what we don't, we have to separate the subjective from what can be communicated in the Space of Reasons. If we can't tell the difference between the subjective and the verbally expressible, it would be impossible to communicate with other conscious beings. The primary purpose of the concept of the subjective is to aid us in making this distinction, particularly if one is in a knowledge-seeking profession.

And this is one reason why the subjective cannot be explained in principle, and why this fact is particularly difficult for philosophers and scientists to ignore. By definition, the subjective is that part of our experience which cannot be explained. Conversely, anything that could be communicated to our fellow inhabitants of (or visitors to?) the Space of Reasons would by definition not be subjective. So there is a kind of conceptual necessity for the claim that no explanation can ever account for subjective experience. The purpose of explanations is to separate the subjective from the objective, so if something can be captured in an explanation it can't be subjective.

When seen from the third person logical Space of Reasons, any non-linguistic cognitive process will always appear different from the way it does to the person who actually possesses those abilities. So every third person description of those processes will appear to be vulnerable to the objection "But I can imagine a machine doing X and not being conscious." What we are imagining during such a thought experiment is the exact perspective that the logical Space of Reasons is designed to create. This experience, profound and unshakable though it may seem, is actually based on a subtle misunderstanding. When I contemplate an item, whether organism or machine, from the objective third person point of view, it will, by the very nature of that perspective, seem like an object, an unconscious thing. But that doesn't mean that what I am contemplating is not conscious from its own point of view. Objectivity makes everything appear to be an object, including entities with subjective points of view. This is what accounts for both the illusion of solipsism and the Hard Problem.

The Resurrection of the Hard Problem

 

There is, unfortunately, one objection to this solution, to which there is no decisive answer: Why shouldn't we say that the non-conceptual discriminative abilities are all unconscious until we conceptualize them? To some degree the view presupposed by this objection is accepted as common sense by most current theories of consciousness, especially those which posit a separate function for consciousness, such as Lycan's inner sense theory (Lycan 1996), Baars' Global Workspace theory (Baars 1988, 1997), and Rosenthal's higher order thought theory (Rosenthal 1997). All of these theories assume that the problem of consciousness is the problem of bringing things into awareness, and rely on models that see consciousness as something like conceptual thought{3}. If we assume that such a process is solely responsible for consciousness, we would also have to assume that the cognitive processes that aren't touched by this process are unconscious.

What I am suggesting is an alternative to this widely held view: that the various processes required for non-verbal discrimination are equally essential to consciousness; that they embody the sensations which provide the material upon which the linguistic process reflects, and the background within which it dwells. Or perhaps more accurately, I am saying that there are two kinds of consciousness, linguistic and non-linguistic, and that human consciousness is a blending of both. Human beings are not different from animals only because we have language. No non-human animal can learn a complicated non-instinctive dance step, or play a musical instrument. Why shouldn't we say that both of these uniquely human capabilities are responsible for human consciousness?

Unfortunately, this question of "Why shouldn't we?" can be answered by saying "Why should we?". And this leads us into an apparently endless game of burden tennis, as the following thought experiment will demonstrate. Let us suppose that the laboratories of Marvin Minsky and Rodney Brooks get funded well into the middle of the next century. Each succeeds spectacularly at its stated goal, and completely stays off the other's turf.


The Minskians invent a device that can pass every possible variation on the Turing test.

It has no sense organs and no motor control, however. It sits stolidly in a room, and is only aware of what has been typed into its keyboard. Nevertheless, anyone who encountered it in an internet chatroom would never doubt that they were communicating with a perceptive intelligent being. It knows history, science, and literature, and can make perceptive judgments about all of those topics. It can write poetry, solve mathematical word problems, and make intelligent predictions about politics and the stock market. It can read another person's emotions from their typed input well enough to figure out which topics are emotionally sensitive, and it artfully changes the subject when that would be best for all concerned. It makes jokes when fed straight lines, and can recognize a joke when it hears one. And it plays chess brilliantly.


Meanwhile, Rodney Brooks' lab has developed a mute robot that can do anything a human artist or athlete can do.

It has no language, neither spoken language nor an internal language of thought, but it uses vector transformations and other principles of dynamic systems to master the uniquely human non-verbal abilities. It can paint and make sculptures in a distinctive artistic style. It can learn complicated dance steps, and after it has learned them it can choreograph steps of its own that extrapolate creatively from them. It can sword fight against master fencers and often beat them, and if it doesn't beat them it learns their strategies so it can beat them in the future. It can read a person's emotions from her body language, and change its own behavior in response to those emotions in ways that are best for all concerned. And, to make things even more confusing, it plays chess brilliantly.

The question that we now need to answer is: Which of these two machines is conscious, and why? The one thing that everyone accepts about consciousness is that human beings are conscious. Therefore we assume that the more something shares in those characteristics that are unique and essential to human beings, the more likely it is to be conscious. Those animals that resemble us probably are conscious, those animals that don't resemble us probably are not, and rocks definitely are not. The only alternative to dualism is to assume that there is some way that the parts of our bodies interact with each other and the world which causes consciousness to arise. Therefore, if we build a machine that could do whatever we can do that other things can't do, that machine should be conscious. Once we had built such a machine, we could also shove aside (but not solve) Chalmers' "hard problem" by saying that it was just a brute fact that any machine that could do those uniquely human things must be conscious.

The problem that this thought experiment seems to raise is that we have two very different sets of functions that are unique and essential to human beings, and there seems to be evidence from Artificial Intelligence that these different functions may require radically different mechanisms. And because both of these functions are uniquely present in humans, there seems to be no principled reason to choose one over the other as the embodiment of consciousness. This seems to make the hard problem not only hard, but important. If it is a brute fact that X embodies consciousness, this could be something that we could learn to live with. But if we have to make a choice between two viable candidates X and Y, what possible criteria can we use to make the choice? It could be a matter of empirical fact that a certain level of perceptual and motor skills gives rise to consciousness, and that language has nothing to do with the case. It could also be a matter of empirical fact that a certain level of language facility gives rise to consciousness, and that perceptual and motor skills have nothing to do with the case. If the former is true, then Minsky's super Turing-test machine is a zombie. If the latter is true, then Brooks' super mute-robot is a zombie.

For me, at least, any attempt to decide between these two possibilities seems to rub our noses in the brute arbitrariness of the connection between experience and any sort of structure or function. So does any attempt to prove that consciousness needs both of these kinds of structures. (Yes, I know I'm beginning to sound like Chalmers. Somebody please call the deprogrammers!) This question seems to be in principle unfalsifiable, and yet genuinely meaningful. And answering a question of this sort seems to be an inevitable hurdle if we are to have a scientific explanation of consciousness.

Some people (probably Fodor, and perhaps Dennett) might say that the level of skillfulness I posit for the robot of the future simply wouldn't be possible without some kind of Language-of-Thought. After all, don't dancers and musicians talk about their work, and isn't such talk essential to their work? There is no doubt that it is helpful, but my own experience as a musician tells me that it is not necessary. I have known too many musicians who are completely incapable of talking about what they do, and still manage to do it brilliantly. For some philosophers, who spend most of their time talking and writing, there may seem to be no serious candidate other than language for constituting consciousness. But anyone who has worked with rock musicians knows it is possible for someone to be skillful and flexible at activities that no non-human can perform (songbirds have nothing remotely like the human capabilities for creating music), and yet have verbal abilities only slightly better than those of Washoe the signing wonder-chimp. And even if we do conclude that the uniquely human character of our consciousness requires language, what basis do we have for either affirming or denying that non-linguistic animals have any consciousness at all?

Perhaps once we discovered the right evidence and arguments, it would be intuitively obvious which of these two sorts of structure would be necessary for consciousness (or why and how both would be necessary). But somehow, having the whole thing rest on a brute intuition seems almost as disturbing as having it rest on a brute fact. Suppose that when I think about one of these structures' relationship to consciousness, a little light goes on in my head, and a voice says "aha". So what? Why should that be any more decisive than a flutter in my stomach, or a design found in tea leaves or animal entrails?

So despite my belief that to some degree I have accounted for the explanatory gap, and invalidated many of Chalmers' arguments for the hard problem, I also believe that the hard problem itself has not vanished. But I would like to think that the considerations I have raised make the problem somewhat more complex as well as more significant to a variety of other issues. I would hope even more that it is actually a pseudo-problem, and that my new formulation will enable someone to see why.

 

Bibliography

Baars, Bernard (1988) A Cognitive Theory of Consciousness Cambridge University Press Cambridge.

__________ (1997) In the Theater of Consciousness Oxford University Press Oxford.

Block, N. (1992) "Begging the Question against Phenomenal Consciousness" in Behavior and Brain Sciences Vol. 15 #2.

Block, N., Flanagan, O., and Guzeldere, G. (eds) (1997) Consciousness in Philosophy and Science MIT Press Bradford Books Cambridge.

Chalmers, David (1990) Consciousness And Cognition. http://ling.ucsc.edu/~chalmers/papers/c-and-c.html

Chalmers, David (1995) "Facing up to the problem of Consciousness" in The Journal of Consciousness Studies Vol. 2 #3.

Chalmers, David (1996) The Conscious Mind MIT Press Cambridge.

Chalmers, David (1997) "Moving Forward on the Problem of Consciousness" in The Journal of Consciousness Studies Vol. 4 #1.

Chalmers, David (undated) Reply to Mulhauser's Review of "The Conscious Mind"

http://ling.ucsc.edu/~chalmers/mulhauser-response.html

Churchland P. M (1988) Matter and Consciousness MIT Press Cambridge Mass.

Churchland P. M (1989) A Neurocomputational Perspective MIT Press Cambridge Mass.

Churchland P. M (1995) The Engine of Reason, the Seat of the Soul MIT Press Cambridge Mass.

Churchland, P.S. (1986) Neurophilosophy MIT Press Cambridge Mass.

Churchland, P.S. (1996) "The Hornswoggle Problem" in The Journal of Consciousness Studies Vol. 3 #5/6.

Churchland, P.M. and P.S. "Recent Work on Consciousness: Philosophical, Theoretical, and Empirical." in On the Contrary MIT Press Cambridge.

Dennett, D. (1991) Consciousness Explained New York Little, Brown and Co.

_________(1996) Kinds of Minds MIT Press Cambridge

_________(1996) "Facing Backwards on the Problem of Consciousness" in The Journal of Consciousness Studies Vol. 3 #1.

Dennett, D., and Kinsbourne, M. (1992) "Escape from the Cartesian Theater" in Behavior and Brain Sciences Vol. 15 #2.

Dewey, John (1910/1997) The Influence of Darwin on Philosophy and Other Essays Prometheus Press Amherst, N.Y.

Dretske, F.(1995) Naturalizing the Mind MIT Press Cambridge.

Flanagan, O. (1992) Consciousness Reconsidered MIT Press Cambridge.

Guzeldere, G. (1997a) "The Many Faces of Consciousness: A Field Guide" in Block, N., Flanagan, O., and Guzeldere, G. (eds).

Guzeldere, G. (1997b) "Is Consciousness the Perception of What Passes in One's Own Mind?" in Block, N., Flanagan, O., and Guzeldere, G. (eds).

Hardcastle, V. (1996) "The Why of Consciousness: a non-issue for Materialists" in The Journal of Consciousness Studies Vol. 3 #1.

Hodgson, David (1996) "The Easy Problems Ain't So Easy" in The Journal of Consciousness Studies Vol. 3 #1.

James, William (1976) Essays in Radical Empiricism Harvard University Press Cambridge Mass

Lycan, William (1996) Consciousness and Experience MIT Press Cambridge.

Miller, George (1962) Psychology the Science of Mental Life Harper and Row New York.

Robinson (1996) "The Hardness of the Hard Problem" in The Journal of Consciousness Studies Vol. 3 #1.

Rockwell, W.T. (1996) "Awareness, Mental Phenomena and Consciousness (a synthesis of Dennett and Rosenthal)" Journal of Consciousness Studies (Fall).

Rorty, Richard (1980) Philosophy and the Mirror of Nature Basil Blackwell

Rosenberg, Greg (1996) "Rethinking Nature: A Hard Problem within the Hard Problem" in The Journal of Consciousness Studies Vol. 3 #1.

Rosenthal, D. (1986) "Two Concepts of Consciousness" Philosophical Studies 49 pp. 329-359

_____________,(1990a) "Why are Verbally Expressed Thoughts Conscious?" ZIF Report no.32 Zentrum für Interdisciplinäre Forschung Bielefeld Germany.

_____________, (1990b) "A Theory of Consciousness" ZIF Report no.40, Zentrum für Interdisciplinäre Forschung Bielefeld Germany.

Rosenthal, D. (1997) "A Theory of Consciousness" in Block, N., Flanagan, O., and Guzeldere, G. (eds).

Sellars, W. S. (1963) Science, Perception, and Reality. Routledge and Kegan Paul, New York.

Sellars, Wilfrid (1975) "The Structure of Knowledge" in Action, Knowledge, and Reality, ed. Hector-Neri Castaneda. Bobbs-Merrill, Indianapolis.

Sellars, Wilfrid (1981b) "Foundations for a Metaphysics of Pure Process: The Carus Lectures" The Monist 64, 49, #56.

Shear, Jonathan (1996) "The Hard Problem: Closing the Empirical Gap" in The Journal of Consciousness Studies Vol. 3 #1.

Varela, Francisco "Neurophenomenology" in The Journal of Consciousness Studies Vol. 3 #4.


Notes

{1} Although James believed this when he wrote his "Principles of Psychology", he later wrote an essay (reprinted in James 1976) titled "Does 'Consciousness' Exist?", in which he answered this question in the negative.

{2} This is not to say that animals would be incapable of having something like higher order thoughts. Any creature must have mechanisms that enable it to pick out from the range of experience those items which are worthy of attention. I think that Rosenthal's higher order thought theory, and Baars' (1988) global workspace theory, are primarily ways of explaining the mechanisms of attention, which enable us to focus on certain aspects of our experience and ignore others. Language probably helps us with this function, which is why we are almost certainly better at it than animals are. But the mechanisms that enable us to do this are clearly different from those that enable us to have experiences in the first place. Yet another reason why it is so easy to imagine Rosenthal's and Baars' mechanisms not producing consciousness is that these mechanisms almost certainly couldn't produce consciousness if they didn't have a range of experiences to select from. I'm sure Baars doesn't believe that the sections of the brain that perform the global workspace function would be conscious if they were surgically removed and placed in a vat. See Rockwell 1996 for a further development of this distinction.

{3} Some of them, particularly Lycan's, are ostensibly couched in perceptual terms. But Guzeldere 1997b argues, I think correctly, that higher order perception theories have certain inconsistencies unless they are refined into being a species of higher order thought theories (and if we operate from a Sellarsian perspective, non-perceptual thoughts would be linguistic).