In the seventh paragraph of the post, you say "This question [which machine, if
any or both, is conscious] seems to be in principle unfalsifiable, and yet
genuinely meaningful." (I'm assuming that you mean that any answer to it is
unfalsifiable.) My neo-Carnapian intuitions diagnose the problem right at this
point. Forget about attributions of meaninglessness and all that stuff. Replace them in
your statement with more pragmatically-oriented evaluative notions: theoretically
fruitless, arbitrary without even being helpful for any theoretical, experimental,
or practical purpose, and so on. Any answer to the question will be all of those. Thus the
question is not worth pursuing, especially since the thought experiment is science
fiction right now. A much more useful way to spend one's time is addressing
fruitful questions, like the ones involved in constructing your postulated robots,
or investigating neural mechanisms, and so on. So acknowledge the connection
between unfalsifiability/unverifiability/unconfirmability and theoretical and practical
worthlessness (rather than "meaninglessness"). Then get on with the theoretically and
empirically worthwhile questions. Many of the latter are quite abstract and
"philosophical," anyway (about the scope and limits of various methodologies,
existing theories, and so on). Aren't those enough to occupy even the most abstract
theorist's attention? Why puzzle about questions whose answers can't be rationally
defended? Think also about the first
sentence of your next paragraph: "perhaps once we discovered the right evidence and
arguments . . ." How exactly are we supposed to make such "discoveries"?
"Discovering" something like that is incomprehensible. There is no evidence or
argument one could possibly appeal to in defense of the "rightness" of any answer to these
"zombie" thought experiments. So why bother? Sever your attachment to the idea
that these are the questions we need to answer in order to progress.
Gary Schouborg writes:
Using Baars' global workspace theory, why couldn't either linguistic or dynamic
sub-processes generate consciousness (cs) in competing for access to a global workspace?
And where does emotion, which could exist in either system, come in?
Although I can't give an argument to say that emotion can't
exist without cs, what makes me *care* about another being's cs is its having
emotion. An emotionless cs is a rather chilling prospect. More importantly, I'm not
sure what the difference would be between conscious and unconscious superperforming
devices. But if one could respond emotionally to me, then knowing I could sadden or
gladden it would make all the difference. If either linguistic or dynamic processes
could successfully imitate human emotional responses, I would feel no compunction
about evoking suffering behavior in either unless I really believed it could *feel*
any suffering I caused it.
Perhaps if either class of behavior was sufficiently similar to human behavior,
I would be compelled to believe the device was conscious, because we are hardwired
to believe that when confronted with sufficiently similar behavior. Who says all
our beliefs are generated by epistemic criteria? Credo quia absurdum est ("I
believe because it is absurd"). Evolution
may have selected various beliefs for us, and our self-reflective process is able
to manage them only within limits. Perhaps our intelligence is a prosthetic that
extends, but never fully replaces, our original abilities or functions.
David Chalmers writes:
hi teed, not a solution to your problem, but a couple of relevant data points.
(1) milner and goodale's work on two perceptual systems. i imagine you know
this, as you were at the claremont conference. they postulate two visual systems,
one for online control of direct motor action, the other for cognitive analysis,
planning, etc. the latter system is supposed to be for "semantic" perception,
connected to language, etc. and only the latter system is supposed to be
associated with conscious processes -- the online system is unconscious. if
something like this hypothesis is correct, this suggests that consciousness would
be more likely to be associated with a pure-language system than a pure-motor
system. of course one can argue that m&g's allocation of consciousness between
these systems is essentially grounded on a prior assumption that consciousness goes
with the cognitive/ semantic system, so this doesn't prove anything, but it's
interesting nevertheless. w.r.t. your cases, of course your language-free system
was far more than a pure online motor-reaction system, so that would complicate
things. my own money is on both of your systems being conscious, in very different ways.
(2) bob french's paper "subcognition and the limits of the turing test". mind,
1990. he argues that no system could pass a full turing test without actually
having been embodied and living a life just as we had. otherwise there would
inevitably be subtle questions that would show it up. you should check it out. i
think it's available on the web, via my page of online papers on consciousness.
Ronald Lemmen writes:
What on earth makes you think and say that the choice is between two sorts of
structures? Why would the ability to talk about dance have more in common with the
ability to talk about wine than with the ability to dance? Why do *you* (rather
than those philosophers who'd make language the prime necessary condition for
consciousness) suppose that language is so special that it gets a niche all of its
own, whereas all other abilities are lumped together as 'the other' kind of structure?
Anyway, thought experiments like these are highly unlikely to get you any
further, for in order to draw any conclusions from them you already have to have
accepted some very basic assumptions. Like the one I just talked about. Or that
in order to intelligently speak about football you never need to have played or
even watched a game. Or more generally (and self-defeatingly), indeed that in
order to speak intelligently at all, you need no body, which is disturbingly close
to Descartes' thought experiment which made him decide that the mental forms a
realm all of its own, wholly separate from that of the body. (By the way, why do
you need to set up your thought experiment in such a way that the Minskian machine
has expert knowledge on so many different areas and wonderful social skills as
well? I could pass a Turing test without being able to write poems or play chess
very well). The flip-side is of course that you are supposing that we can all
imagine that people can learn all kinds of complex things without the help of
language (and I am not talking about a Language of Thought here, which does not
exist, but simply about language, which allows us to communicate and teach to and
learn from others *and* which is a vehicle that allows us to reflect upon
ourselves, our accomplishments and possible ways to improve. Musicians may not be
able to explain what they do--but then again, no expert can--but they can and
do--and *have to*--reflect on their performances.)
Imagine that you're a brain in a vat. You can? So that must mean that you do
not need a body for intelligence.
Imagine that you're not even a brain, but energy fluctuations floating freely
through the universe. Easy? So, we should conclude that minds don't even need brains.
Imagine that you can fly. So what does that tell us about human physiology and aerodynamics?
Imagine that a computer can pass the stiffest Turing test and that a
mute robot can do anything that you can do, and do it better, except speak.
In a word, we can imagine too easily ...
Jim Garson writes:
Teed: My reaction is to refuse to accept the dilemma. Which of these is
conscious, you ask? They are both conscious to a certain degree, and we would more
seriously entertain thinking of one as conscious (and be right about it too) if it
had some of the properties of the other. The legitimate question here is how do
various abilities correspond to the degree of consciousness we should attribute. Of
course since consciousness has so many dimensions the calculus of consciousness
will be messy, and involve different ratings for different aspects of consciousness
even for the same cognitive ability.
Markate Daly writes:
I realize that you want to disconnect consciousness from any kind of cognitive
function, either right brain or left brain. But there does seem to be an organismic
function that consciousness serves - it provides the means or techniques to satisfy
wants. When "I want", my attention focuses on the object of my desires and I
mobilize my resources to get it. For example, my friend's 4 month old baby was
being pushed in a swing. In front of her, I held out her cap just within reach of
her forward swing. Each time she approached her cap she concentrated all of her
attention on making her fingers close on the cap. She actually trembled with
ferocious concentration, even though the fingers couldn't quite do the job yet. She
wants, then she tries. She uses her consciousness to teach her body how to get
what she wants. It would be amazing if this were accidental or unique.
Peter Webster writes
Probably neither. What if consciousness is inherited, i.e., what if a physical entity,
in order to have it, must be the product of a continuing manifestation of
consciousness within its species? Without getting too Sheldrakian about it, the
unity that one experiences in some altered states might actually indicate the
continuum of both life and consciousness, and nothing not born of that continuum
can be either conscious or "alive".
The only alternative to dualism is to assume that there is some way that the
parts of our bodies interact with each other and the world which causes
consciousness to arise.
But what if it does not "arise" each time as a discrete phenomenon in each
apparently discrete organism, but is merely "revealed in active condition" by any
organism with enough complexity. Consciousness unrevealed may not be a "thing", but
more like the "void" of the Tao, and thus not a thing, no-thing, thus no dualism.
Richard Double writes:
I grant that it is logically possible that any computer that could pass the
super-duper Turing test would be conscious. Ditto with the robot-equivalent (which
passes the 'nonverbal' Turing test). Consciousness logically could emerge whenever
we produce an artifact as wonderful as those in your thought experiment. After all,
consciousness must enter the human picture SOMEWHERE.
But from this concession I am not much inclined to grant you that I must decide
whether the computer or the robot (or both) would really be conscious. Despite all
the criticism of Searle's Chinese room, I accept his fundamental (often overlooked)
point: It is not necessary that anything with behavior that is equivalent to a
human's is therefore conscious. If I had to give an answer, I would say that,
probably, neither the computer nor the robot is conscious, because they were not
built the old fashioned way (over billions of years of evolution). I would guess
that consciousness in fact underlies both the nonverbal and verbal behavioral
manifestations in humans and other animals, with the nonverbal coming first
temporally. I agree with you that these are empirical matters, and do not see much
light being shed on them by thought-experiments.
Bob Kane Writes:
It seems clear to me (maybe clearer than it should), that language is not
necessary for consciousness. For self-consciousness, yes, but consciousness, no.
Otherwise what happens to the consciousness of lower animals that lack language?
Also, motor skills seem not to be necessary, being possessed by living things way
down. And perceptual skills, if that means mere info processing. What does seem
necessary are sensory experience and feeling. Do the Minsky/Brooks characters have these?
Sarah Fisk Writes:
You mention that you'd need to ask the equivalent of "are you conscious" to
each of the robots. It would likely be fairly easy to come up with some slightly
'tricky' way to ask the lingual robot via language (aside from "are you conscious")
that would reveal self-awareness, etc. I think that there are also means to
extrapolate this from the mute robot's behavior. This is rather too simplistic, but
as an example, place a mirror before the dancing robot. Does it behave
differently? (perhaps showing off to itself?) If it realizes that a person has
seen this transformation into vain-bot, does it stop what it's doing and perhaps
act embarrassed at all?
This would be a typically human response, because of the level of
self-awareness as seen in the reaction to the mirror, and the awareness that others
can 'see' to some degree into our motivations (here, vanity), which implies
awareness of others' mental states as well as of one's own.
Stephen Jones writes:
If this system has no sense organs (apart from the issue that a keyboard is a
sense organ, because sense organs are essentially input devices), how can it have any
experience by which to make sense of anything which is typed into its keyboard?
Allowing that the keyboard is of such generality that it can be used to convey such
things as whatever might be able to stimulate some kind of qualia in the system, you
then fall into a reductio. In any functioning (currently known) conscious system
the means by which any knowledge of the world is gained must then constitute
something of an experience of the world (not necessarily first order or direct
experience). Some of the capabilities you have accorded it can be done now by
computing machines (viz. solve mathematical logic and make predictions about stock
markets and politics) but to make judgements about emotions requires some sort of
experience of emotions and to write poetry probably requires a similar relation to
the world. It is about relations to the world and to have relations to the world
requires machinery in some general sense with which to have those relations. A
keyboard and a net connection are input organs. The capacity to display and to
produce net output are motor organs. I'm afraid you have a failure of generality here.
All of the things that your Brooksian machine can do are generalisations of the
business of language. Such internal representations of pictorial elements or
choreographic movements, whether in dance or fencing, are still internal
representations/transforms of processes that are, externally, art and dance/fencing.
Besides, since when did we deny the consciousness of your average deaf-mute? And
enough researchers will allow animals to have consciousness to make that idea untenable.
I think you would be reasonably safe in suggesting that the Brooksian machine
would have consciousness, at least in the sense of passing a non-linguistic
Turing Test. I don't believe that the logical inconsistency of the Minskian machine
could even get near a Turing Test.
This "causes" business is still a dualisation (now of the Chalmers kind). The
real-time (dynamic) process of being living systems-in-the-world in processing
within a monist view must in itself be conscious. I think the real solutions to the
dualist view revolve around the concept of experience. The way I put it is like
this: I am not some surf-rider on the wave of experience. I am the wave.
Therefore, if we build a machine that could do whatever we can do that
other things can't do, that machine should be conscious. Once we had built such a
machine, we could also shove aside Chalmers' "hard problem" by saying that it was
just a brute fact that any machine that could do those uniquely human things must
be conscious.
Basically this is correct, except that emulating human capabilities is not
necessarily the basis of consciousness. After all: What is it like to be a...? What
we have to do is show that the experience of something is a qualial process (if I
may coin a new word). Chalmers' point is more about the transform of those
functions of conscious activity into the subjective phenomenal first-person
experiences of our internal private worlds, i.e. the production of qualia.
The problem that this thought experiment seems to raise is that we have two
very different sets of functions that are unique and essential to human beings, and
there seems to be evidence from Artificial Intelligence that these different
functions may require radically different mechanisms.
What? Language and emotions? I'm not sure I know what you're referring to.
And because both of these functions are uniquely present in humans, there
seems to be no principled reason to choose one over the other as the embodiment
of consciousness.
Doesn't your cat ever show affection for you or fear of you when it has done
something you'd rather it didn't?
..... This seems to make the hard problem not only hard, but important. If
it is a brute fact that X embodies consciousness, this could be something that we
could learn to live with. But if we have to make a choice between two viable
candidates X and Y, what possible criteria can we use to make the choice? It
could be a matter of empirical fact that a certain level of perceptual and motor
skills gives rise to consciousness, and that language has nothing to do with the
case. It could also be a matter of empirical fact that a certain level of language
facility gives rise to consciousness, and that perceptual and motor skills have
nothing to do with the case. If the former is true, then Minsky's super Turing-test
machine is a zombie. If the latter is true, then Brooks' super mute-robot is a zombie.
Look, both of these things as operational processes are the same business, viz.
means of gaining experience of the world and having causal interaction with the
world. I think it is perfectly possible to have non-speech based interaction and
experience but it ain't possible to have non-interaction based experience.
Experience and the means for having and communicating it are the real game here.
For me, at least, any attempt to decide between these two possibilities
seems to rub our nose in the brute arbitrariness of the connection between
experience and any sort of structure or function.
The arbitrariness of connections has to be considered carefully. The
experience by the retina of photons of a wavelength which we call red (560 nm or
whatever it is) is a brute fact and an arbitrariness of a sort, but the fact that the
culture which uses the English language has taught me that that experience is an
experience of redness is an arbitrariness of quite a different kind. This is the
crucial distinction.
... So does any attempt to prove that consciousness needs both of these
kinds of structures. (Yes, I know I'm beginning to sound like Chalmers. Somebody
please call the Deprogrammers!) Do we ask each machine if it is conscious? A Radio
Shack PC with a four-line program in BASIC could be trained to print "yes" on the
screen when someone typed in "are you conscious?". This question seems to be in
principle unfalsifiable, and yet genuinely meaningful. And answering a question
of this sort seems to be an inevitable hurdle if we are to have a scientific
explanation of consciousness.
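The four-line program imagined in this passage is easy enough to make concrete. A minimal sketch, in Python rather than BASIC (the function name is mine, purely for illustration), shows why a bare "yes" settles nothing:

```python
# A trivial "conscious" machine in the spirit of the four-line BASIC
# program: it answers "yes" to the consciousness question while having
# no inner life whatsoever.
def oracle(prompt: str) -> str:
    # Match the magic question, ignoring case and punctuation variants.
    if "are you conscious" in prompt.lower():
        return "yes"
    return "I don't understand."

print(oracle("Are you conscious?"))  # prints "yes"
```

The answer is behaviorally indistinguishable from a sincere one, which is exactly why simply asking the machine cannot decide the question.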
Asking a machine if it is conscious is pointless unless it is capable of lying
to you. Ask the machine what it would like to do for itself if you bought it a GPS
system (go exploring!), for example. I think more useful results are gained if you
use the kind of criteria that Robert Kirk (from Nottingham University in the UK) proposes.
Perhaps once we discovered the right evidence and arguments, it would be
intuitively obvious which of these two sorts of structure would be necessary for
consciousness (or why and how both would be necessary). But somehow, having the
whole thing rest on a brute intuition seems almost as disturbing as having it rest
on a brute fact. Suppose when I think about one of these structures' relationship
to consciousness, a little light goes on in my head, and a voice says "aha", so
what? Why should that be any more decisive than a flutter in my stomach, or a
design found in tea leaves or animal entrails?
I've lost track of which structures you're really talking about here: machines?
Or processes (language, sensory-motor)? The crucial point is experience and the
ability to have it and to communicate that one has it (i.e. to reflect on it).
Some people (probably Fodor, and perhaps Dennett) might say that the level
of skillfulness I posit for the robot of the future simply wouldn't be possible
without some kind of Language-of-Thought. After all, don't dancers and musicians
talk about their work, and isn't such talk essential to their work? There is no
doubt that it is helpful, but my own experience as a musician tells me that it is
not necessary. I have known too many musicians who are completely incapable of
talking about what they do, and still manage to do it brilliantly. For many
philosophers, who spend most of their time talking and writing, there seems to be
no serious candidate other than language for constituting consciousness. But anyone
who has worked with rock musicians knows it is possible for someone to be skillful and
flexible at activities that no non-human can perform (songbirds have nothing
remotely like the human capabilities for creating music), and to have verbal abilities
only slightly better than those of Washoe the signing wonder-chimp.
I would repeat that language is not necessary for consciousness. I have a friend
who is epileptic. She has written some very interesting stuff on the re-assembly of
herself after a fit, and the reacquisition of language comes very late in the
process, so that she is quite conscious of its happening. She needs language to
report the experience, but not to have the experience of not having language.
Ullin T. Place writes:
If, as seems reasonable, your criterion for the presence and absence of
consciousness is the presence or absence of conscious/phenomenal experience, we now
have conclusive empirical evidence showing that the function of
conscious/phenomenal experience is to provide what Broadbent (1971) calls the
"evidence" on which the categorization of problematic inputs (inputs which are
either unexpected or significant relative to the organism's current or perennial
motivational concerns) is based. This evidence comes from the work on the effect
of lesions of the striate cortex in man (Weiskrantz 1986) and in monkeys (Humphrey
1974; Cowey & Stoerig 1995). We know from the "blindsight" evidence assembled by
Weiskrantz that the effect of lesions of the striate cortex in man is to abolish
visual conscious experience in the affected part of the visual field. Some visual
discriminations are still possible to objects in the affected part of the field,
but are described by the subject as "pure guesswork". The Cowey & Stoerig
experiment shows that the principal effect of a near total ablation of the striate
cortex in a monkey is to deprive the animal of the ability to categorize its visual
inputs. The work of Broadbent (1958; 1971) on the so-called "cocktail party
effect" in the auditory modality shows that the function of selective attention,
both involuntary and voluntary, in relation to the initial processing of sensory
input is to protect the perceptual categorization mechanism from overload by
focusing on the problematic at the expense of the non-problematic. Subsequent
work by Pashler (1991; 1997) and Posner (Posner & Petersen, 1990; Posner & Dehaene
1994) shows that the selective attention which controls the processing of sensory
input (the posterior attentional system - superior colliculus; pulvinar and
posterior parietal cortex) is to be distinguished from another such system (the
anterior attentional system-anterior cingulate and basal ganglia) which controls
access from the output of the perceptual categorization system into another limited
capacity channel whose function is to select a response appropriate to a situation
of the type that has been identified by the categorizer as being currently present.
In the light of this evidence I have no hesitation in concluding that the
Rodney Brooks machine is conscious and that the Minsky machine is not. Sadly, I
have to say that the argument on which this conclusion is reached owes virtually
everything to empirical neuropsychology and almost nothing to philosophy. This, so
it seems to me, is the end of the line as far as the philosopher's involvement in
the mind-body problem is concerned. Just as the problem of the origin of the
universe has ceased during our lifetime to be a problem in theology and become an
empirically decidable issue within astronomy; so, as I foresaw in 1956, the
mind-body problem is ceasing to be a philosophical problem and becoming an
empirically decidable issue in neuroscience.
Broadbent, D.E. (1958) *Perception and Communication*. London: Pergamon Press.
Broadbent, D.E. (1971) *Decision and Stress*. London: Academic Press.
Cowey, A. and Stoerig, P. (1995) Blindsight in monkeys. *Nature* 373, 6511:
Humphrey, N.K. (1974) Vision in a monkey without striate cortex: a case study. *Perception* 3:241-255.
Pashler, H.E. (1991) Shifting visual attention and selecting motor responses: distinct attentional mechanisms. *Journal of Experimental Psychology: Human Perception and Performance* 17:
Pashler, H.E. (1997) *The Psychology of Attention*. Cambridge, MA: MIT Press.
Place, U.T. (1956) Is consciousness a brain process? *British Journal of Psychology* 47:44-50.
Posner, M.I. and Dehaene, S. (1994) Attentional networks. *Trends in Neurosciences* 17:75-79.
Posner, M.I. and Petersen, S.E. (1990) The attention system of the human brain. *Annual Review of Neuroscience*
Weiskrantz, L. (1986) *Blindsight*. Oxford: Clarendon Press.
Dick Byrne writes:
My comments may just reflect the naive positivism of an ape-watcher, but here they are.
My immediate thought is: why should we want to imagine that either machine is
conscious? Which you've anticipated, by writing:
The one thing that everyone accepts about consciousness is that human beings
are conscious. Therefore we assume that the more something shares in those
characteristics that are unique and essential to human beings, the more likely it
is to be conscious. Those animals that resemble us probably are conscious, those
animals that don't resemble us probably are not, and rocks definitely are not.
But that is a terribly weak form of logic! Analogously, everyone accepts that
rhinoceroses are rare. So, do we think that the more similar to a rhinoceros an
animal is, the rarer it is? So, probably hippopotamuses are rare, but ivory-billed
woodpeckers are not. There's an underlying assumption here, that consciousness is
simply a measure of human-ness, a score on the Human Uniqueness Scale. I can't buy that.
Therefore, if we build a machine that could do whatever we can do that other
things can't do, that machine should be conscious.
See? It's just a score on the HU scale!
Perhaps once we discovered the right evidence and arguments, it would be
intuitively obvious which of these two sorts of structure would be necessary for
consciousness (or why and how both would be necessary).
Perhaps. What I like about this statement is that it accepts that we haven't. I
agree with that, and therefore think it's not the time to ask the question. (But I
appreciate that a philosopher can't be stopped that easily!) To put it another way, we
don't yet use the word 'consciousness' consistently enough to consider asking any
questions about it. Everyone means something different, and there's no way each can
explain to the other what s/he means. I sat through a conference on 'consciousness'
here, organized by my neuro-colleagues, with a growing disbelief--that what they
meant was what blindsight patients didn't have. Seemed only one stage up from what
my doctor means: not being unconscious. But who am I to claim a monopoly on
semantics? However, discussions about what it 'is', and which robots would 'have'
it, are going to be tricky, with a wealth of different senses being conflated.
Oh well, I'm probably being boring; it's a bit of a hobby-horse of mine.
Personally I always firmly distance myself from the Griffin camp, who want to
attribute consciousness to animals, and try instead to attribute some level of
planning and representation, which seems to me more tractable.
Edward Hubbard writes:
I am inclined towards an "architecturalist" position. I am also convinced that
the sorts of verbal cognition that we make use of are derivative on our overall
general cognitive structure. The important thing to me is that you *have* created
an interesting conflict between the intuitions of GOFAI and the more modern
connectionist approaches. This is also a conflict that is paralleled by the
division between traditional generative accounts of language and more cognitively
oriented approaches such as those of Lakoff, Sweetser, Langacker and Fauconnier.
In my opinion, it really *is not* possible to have meaningful linguistic
representation without drawing on other non-linguistic forms of representation
first. This is the distinction between semantic and semiotic that Continental
philosophers were much absorbed with earlier this century. The way I have
understood this, it seems that "semantic" refers to linguistic forms of
representational meaning only, while "semiotic" covers all forms of
representational meaning, linguistic or otherwise.
In this sense, then, data from the developmental literature is relevant. Piaget
long ago noted that children begin to represent the world around them in terms of
their sensory-motor interactions with it. For example, when his daughter was
interested in riding the rocking horse, she would begin to rock back and forth in a
motion like that made by the rocking horse. Later, when she encountered real
horses, she made the same rocking motion. This was prior to her ability to speak
using language, but the fact that diverse stimuli such as the rocking horse
engendered the same pattern of motor behavior indicates that she had, in some way,
picked up on relevant similarities in the appearance of the two objects and was in
some way grouping them together.
Further data that are relevant here concern the role of the "basic level" as explored
by Eleanor Rosch. She noted that objects at the basic level were more quickly,
accurately and consistently categorized by subjects. The important thing here is
that basic level stimuli were those for which there was a common shape, common
motor interactions and other such factors. That is, the factors that are
constrained by how *we* interact with those objects.
Given considerations such as those, I am concerned that, although *logically*
possible, there is no way *in practice* to ever achieve the Minskian ideal of a
linguistic system without having a system that is also "in the world". Thus,
although the Brooksian approach puts off the problem of semantics, I believe it is
dealing with the problem of semiotics.
As a final note, I think that Place's comments about the role of empirical
science and the lack of a place for philosophers in the debate over the mind/body
problem derives from a relatively narrow conception of what a philosopher's job is.
(*Don't tell HIM I said that.*) As Kuhn points out, there is no science that doesn't
proceed with its own set of methodological and super-empirical assumptions. Even
within "normal science", that is, science away from a pre-revolutionary period,
there is a role for philosophers in exposing and examining the very foundations
that make normal science proceed. I think that we may be a bit on the back side of
a paradigm shift, perhaps the third of the century in our thinking about the mind,
and that the role of empirical science in addressing these questions is becoming
stronger, but that is *only* because we have a new set of super-empirical
assumptions with which to make sense of the empirical data. With the relative
newness of the paradigm (i.e. cognitive/computational neuroscience) there is bound
to be debate over the validity and interpretation of the findings. In that
capacity philosophers will always have an important role to play.