Many thanks for your thoughtful responses. Because I am including everyone's
responses in the commentaries, I will be brief (and
possibly misleading) in the summaries I give below. I urge everyone to read the
originals if they have the time.
JOHN BICKLE says:
"Forget about attributions of meaningless and all that stuff. Replace it in
your statement with more pragmatically-oriented evaluative notions: theoretically
fruitless, arbitrary without even being helpful for any theoretical, experimental,
or practical purpose, and so on. Any answer to the question will be those".
He describes his position as "neo-Carnapian", i.e. he is claiming that even if
the question is meaningful, that doesn't mean it's worth looking into. He's
probably right, in the sense that anyone can be right about a personal evaluative
choice. And until I started questioning the belief that there is only one kind of
physical process that could embody consciousness, I felt the same way myself. But
the point about this thought experiment is that the current state of cognitive
science offers us two possible candidates for the embodiment of mind. And as Bickle
points out, it seems like nothing we can imagine discovering in the future could
settle this problem one way or the other. If this is true, then, strictly
speaking, all this talk about being on the verge of a scientific
understanding of consciousness is hype: no matter how close we get to solving the
Chalmersian easy problems, we are getting no nearer to solving the hard
problem. If so, cognitive scientists ought to change their description of
what they are doing, even if that cuts back on publicity and grant money. But I don't
want to believe this, and I think the only way to avoid believing this is to
discover the presuppositions that compel this belief, and see if we can change
them. It's a dirty little job, but somebody has to do it, and philosophers seem
less unqualified to attempt it than anyone else. Note, however, that I am not
claiming we can use a thought experiment all by itself to find the answer, the way
Searle claimed that the Chinese Room experiment proved that a computer
couldn't be conscious. As RONALD LEMMEN points out, the fact that we can imagine
something doesn't tell us anything about the world, only about our concepts of the
world. Remember that the conclusion of my thought experiment was a question, not an
answer. My only goal is to help clarify the question.
WHY ASSUME THAT THESE PROCESSES ARE SEPARABLE?
RONALD LEMMEN asks: "Why. . . suppose that language is so special that it gets
a niche all of its own, whereas all other abilities are lumped together as 'the
other' kind of structure/function?" He also questions the assumptions that a) it
would be possible to speak intelligently at all if one had no body, and b) it would
be possible to learn what the Brooksian robot learns without language. STEPHEN
JONES points out that we could collapse these two kinds of machines into the same
category by either a) classifying the keyboard as a kind of perceptual device or b)
questioning the possibility that a Turing machine could talk intelligently about
poetry or social interactions unless it had a life in the world.
I think these objections are on the right track, but they need some more flesh
on their bones. If we accept the idea that certain of our characteristics are not
essential to our being conscious (such as being made of meat instead of silicon)
and certain others are essential (such as our cognitive architectures, in some
sense of those words), the obvious next question is which characteristics are
essential and which aren't. If we go far enough down the abstractive ladder to
posit what embodies consciousness, language could be seen as not "biological" enough to
be responsible for consciousness. But the more abstract we think consciousness is,
the more likely it is that language is essential to consciousness. (Millikan's
claim that language is a biological category justifies itself by showing that
biological categories are more abstract than is commonly assumed).
Part of what language-based GOFAI assumes is that we can abstract language from
all other biological categories and use it to create a creature that thinks. As
Lemmen points out (and Dreyfus documents in great detail), there are a lot of
problems with this assumption that will probably prevent the GOFAI enterprise from ever
achieving the kind of success described in my thought experiment. But we still have
a problem if we say that linguistic abilities are necessary but not sufficient for
consciousness. Suppose we accept JIM GARSON's claim that "{The two machines} are
both conscious to a certain degree, and we would more seriously entertain thinking
of one as conscious (and be right about it too) if it had some of the properties of
the other." Garson admits that "since consciousness has so many dimensions the
calculus of consciousness will be messy," but my concern is that I'm not at all
sure how to even begin cleaning up the mess. In other words, *How can we possibly
tell which of these functions are responsible for consciousness and which are not,
if they are both always present in conscious beings?* We could, of course, simply
do an analysis of the ordinary-language concept of "consciousness", but what
criteria could we use for correcting the ordinary-language concept so as to refine
it into a scientific concept? Perhaps what's needed to answer that question is a
more thorough study of the concept of scientific reduction as applied to philosophy
of mind. That, at any rate, will be the subject of future CQs.
Lemmen's critique of the super Brooksian robot is, I think, partly right and
partly wrong. He is probably right that a Brooksian robot would have to be able to
self-reflect in order to be that good at learning, but I don't think we need to
assume that reflection is only possible if we have language. SARAH FISKE makes the
following suggestion:
"This is rather too simplistic, but as an example, place a mirror before the
dancing robot. Does it behave differently? (perhaps showing off to itself?) If it
realizes that a person has seen this transformation into vain-bot, does it stop
what it's doing and perhaps act embarrassed at all? This. . . implies awareness of
others' mental states as well as of one's own."
She's probably extrapolating from the fact that chimpanzees (unlike parakeets)
do have an awareness that their reflection is not another chimpanzee, and that
chimps don't really have language. Chimps also have the ability to learn skills
from each other by the "Monkey-see Monkey-do" method, and so do many musically
illiterate jazz and folk musicians. It could be, however, that the reason we are
so much better at learning than chimps is that we have language and they don't, and
if so, my mute Brooksian machine would be impossible.
IT'S SOMETHING ELSE ALTOGETHER
ROBERT KANE and RICHARD DOUBLE both claim that consciousness is based on
something other than what any machine could do. Kane opts for what he calls
"sensory experience and feeling" and Double agrees with Searle that "It is not
necessary that anything with behavior that is equivalent to a human's is therefore
conscious". That is the starting point of my thought experiment, but hopefully not
the end point. To say that sensory experience is what is necessary for
consciousness is rather like saying that opium makes us sleep because it has
dormitive powers. And as I said earlier, I don't agree with Searle that you can use
thought experiments to prove something about the world. But if sensory
experience/consciousness cannot be identified with something that can be
comprehended from a third-person perspective, then dualism is alive and well. Maybe
it can't, and maybe it is.
Double and PETER WEBSTER also offer a possible way of denying consciousness to
both robots without embracing dualism. In Double's words: "{Perhaps} neither the
computer nor the robot is conscious, because they were not built the old-fashioned
way (over billions of years of evolution)". Webster says the same thing in a
somewhat more mystical fashion: "What if consciousness is inherited, i.e., a
physical entity, in order to have it, must be the product of a continuing
manifestation of consciousness within its species?" Regardless of how mystical
this claim may appear, however, it is not dualistic. It does not require us to
posit a different kind of mental substance; it just shifts the burden of
embodiment to a broader range of physical territory. (I think that Millikan's
"externalism" might imply something like this. However, I've had many occasions
where interpretations of her ideas that seemed obvious to me were not obvious to
her, so I hesitate to say this with any confidence.) Given that the connection
between consciousness and any particular physical stuff seems so brutally a
posteriori, there's no knock-down, drag-out argument either for or against making
this shift. I think that such arguments would have the same problems as arguments
about either of the two robots being conscious. This may be a problem with our
concept of causality: if an evolutionary history, linguistic abilities, and
high-level motor skills are all constantly conjoined with consciousness, how can we
determine which one is the "real" embodiment of consciousness? Perhaps the whole
concept of embodiment is based on a spurious concept of causality, which assumes
that one cause (such as the brain state) is somehow more responsible for a mental
event's taking place than any of the others.
MARKATE DALY and GARY SCHOUBERG also discuss another characteristic of
consciousness that both robots lack: emotions. Daly points out that even a
four-month-old baby shares with us the following organizing principle: "When "I want",
my attention focuses on the object of my desires and I mobilize my resources to get
it". And Schouberg points out that most of the ethical reasons we have for giving
special rights to conscious beings spring from the assumption that conscious beings
feel emotions, and will suffer if they are denied things they need. It certainly
seems plausible that either of the two robots could do everything posited, and
simply not care one way or the other about this fact. In such a case, they would
sit on a shelf until asked a question, or told to dance, and do nothing otherwise.
It thus seems that the whole question of modeling emotions is a different project
from modeling cognitive abilities, and that both Minskians and Brooksians will have
left something out if they don't tackle this task. In fact, there are now
researchers who are tackling this task, and claiming that our cognitive abilities
are closely related to the fact that we care about things. (See Damasio's
"Descartes' Error" and Rosalind Picard's work on affective computing.) However, we
do get out of the economic niche occupied by AI once we start building machines
that do what they want to do, rather than what we tell them to. Radical journalist
Scoop Nisker once said that a true "Smart Bomb" would be one that refused to go
off.
LET'S LOOK AT THE DATA
U.T. PLACE and DAVID CHALMERS both offer data to help clarify the problem.
Place is confident that scientific research has already answered my question in
favor of the Brooksian machine, and Chalmers remains more agnostic. Both of their
posts have great references, and I would recommend reading them in full, and saving
them for your bibliography. Place was the first person in the analytic tradition to
claim that the mind/body problem could not be solved without looking at scientific
data. The fact that this claim has gone from being controversial to self-evident is
a tribute to his prophetic abilities. No survey of the mind/body problem would be
complete without his classic 1956 paper, and I am honored and grateful for his
continued and careful participation in CQ. But I must disagree with his claim that
"the mind-body problem is ceasing to be a philosophical problem and becoming an
empirically decidable issue in neuroscience." We must make a distinction between
questions for which empirical research is necessary, and questions for which it is
sufficient. My experience has been that whenever anyone claims that a philosophical
question has been answered by empirical data, the answer always depends on how the
data is interpreted. The facts rarely speak for themselves, and their significance
depends on who is speaking for them. I've seen interpretations of the blindsight
data claiming that it proves the existence of qualia, others claiming that it
disproves the existence of qualia, and Dennett plausibly arguing that the data
don't even imply the existence of blindsight at all. I think
sorting through these kinds of ambiguities is the sort of thing for which
philosophical skills are useful. It may be that neuroscientists will simply acquire
those skills as neuroscience becomes more and more theoretically sophisticated, or
it may be that philosophers will start calling themselves "theoretical
neuroscientists" as an analog to the distinction between theoretical and laboratory
physicists. But regardless of how the duties are divided academically, I think
there will always be a place for philosophical skills in understanding the nature
of the mind.