by Teed Rockwell
Clark ends his appendix with a description of what he calls "dynamic computationalism", which he
describes as an interesting hybrid between DST and GOFAI. My "horseLISP" example could be described as
an example of dynamic computationalism. It is clearly not as eliminativist as Van Gelder's
computational governor example, for I am trying to come up with something like identities between
computational entities and dynamic ones. Thus unlike other dynamicists, I am not doing what Clark
calls "embracing a different vocabulary for the understanding and analysis of brain events". I think
we probably can keep much of the computational vocabulary, although the meanings of many of its terms
will probably shift as much as the meaning of 'atom' has shifted since Dalton's time. The label of
"dynamic computationalism" is perhaps as good a description of my position as any, but I think I would
mean something slightly different by it than Clark would. (For the following, please insert the mantra
"of course, this is an empirical question" (OCTEQ) every paragraph or so.)
For one thing, I don't think that an information processing model requires us to assume that all
information processing is going on inside the head. A state space analysis of a living organism in an
environment produces vector transformations of the same sort that we get when measuring the relative
voltages of neurons. And although the latter is essential information if we want to build a duplicate
of the system piece-by-piece, it clearly is a less complete story of what is going on cognitively than
the former. Much of what made connectionism so exciting was that it revealed how state space
transformations could do many things that logic based systems could not. But connectionist models fail
to use all of the potential resources of state space transformations precisely because they are
exclusively brain-centered. An array of neurons is an organ located in the skull, and if the only
computational spaces we are willing to consider are those that can be found by reading off the
voltages of such an array, we are still basically dealing with information transfer between hardwired
modules, rather than bifurcations between state spaces. A higher level of flexibility could in
principle be achieved in a system that shifts between basins of attraction by varying parameters in a
dynamic system. And that higher level of flexibility could be the thing that makes the next level of
cognitive sophistication possible. (OCTEQ)
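The idea of shifting between basins of attraction by varying a parameter can be illustrated with a toy dynamical system (my example, not anything from the paper): in the one-dimensional system dx/dt = rx - x^3, the parameter r reshapes the attractor landscape itself. For r below zero there is a single attractor at the origin; as r crosses zero that attractor bifurcates into two, with no rewiring of any "module".

```python
# Toy illustration (not from the paper): a pitchfork bifurcation,
# dx/dt = r*x - x**3. Varying the parameter r reshapes the basins
# of attraction without any rewiring of hardwired modules.

def settle(x0, r, dt=0.01, steps=20000):
    """Integrate dx/dt = r*x - x**3 from x0 and return the attractor reached."""
    x = x0
    for _ in range(steps):
        x += dt * (r * x - x ** 3)
    return round(x, 3)

# r = -1: every starting point is drawn to the single attractor at 0.
print(settle(0.9, r=-1.0), settle(-0.9, r=-1.0))   # both settle near 0

# r = +1: the very same system now has two attractors, at +1 and -1,
# and the initial condition decides which basin captures the trajectory.
print(settle(0.9, r=1.0), settle(-0.9, r=1.0))     # near 1.0 and -1.0
```

The point of the sketch is only that a smooth change in one parameter can change which attractors exist at all, which is a different kind of flexibility from routing information between fixed modules.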
If it is possible to describe the state space transformations of a whole environmentally embedded
organism using the vocabulary of dynamic computationalism, this would be a radical change from the
brain-centered view of computation that dominates both GOFAI and connectionism. It would enable us
(and perhaps require us) to treat a variety of other organic parameters as dimensions in the basins of
attraction that would be the "modules" in a cognitive system. These could include variations in the
chemical and hormonal activity in the brain and in other parts of the body, and even changes in the
environment. An example: When J.J. Gibson said that visual information was contained in the light
itself, and was therefore directly perceived, I think he meant that there was no need to copy the
information into the brain for us to be aware of it. This implies that visual experience emerges from
the brain and the light interacting with each other, and that the neural activity itself would not
have produced visual experience in a brain in a vat that was not interacting with real light. If this
interpretation of Gibson is correct about visual experience, it would mean that only a dynamic model
that included parameter variations in both the light and the brain could accurately model visual
processing. (OCTEQ)
Many of Clark's arguments for the primacy of the brain rest on widely held intuitions: that the
brain is the "key to understanding" intelligent activity, or "the principal seat of
information-processing activity" or (as Paul Churchland would put it, "the seat of the soul"). But I
believe that the acceptance of those widely held intuitions is one of the things that is holding
cognitive science back.
My critique of the standard idea of causality questions the assumption that there must be an
answer to the question "what is the cause of mental activity?" If that were a
legitimate question, "the brain" would probably be the only possible answer. But I think that question
misrepresents the causal nexus by assuming that causal responsibility is determined by what wins some
sort of contest, in which the winner is declared to be the only contestant. Everyone admits when
pressed that there is never one factor that is fully causally responsible for anything, yet for
pragmatic reasons we have to assume there is one cause for something so we can either understand or
control it. Admittedly this misrepresentation seems to be more useful than misleading most of the
time. But now that the bloom of connectionism's original promise is starting to fade, there is good
reason to consider that there are problems with assuming that the brain is the sole embodiment of
thought merely because it is the most noticeable causal factor in thought. (OCTEQ)
I think that my distinction between the abstract and the material is important for reasons that
may be peripheral to the central points of this paper, which is perhaps why Clark doesn't feel it's
all that essential. Part of what I was trying to do with the distinction between abstract and material
was to show that what is ordinarily called the functional-physical distinction should not be dumbed
down to a distinction between what physicists study and what everybody else studies. This is an almost
universally accepted error, and I would have been glad to have an opportunity to take a few potshots
at it even if it had not been essential to my argument. I don't agree with Van Gelder that dynamic
systems theory could replace cognitive science with physics. I think that the most interesting thing
about DST is that it provides potential cognitive capabilities that are lacking in GOFAI and
connectionism, and we can only be aware of those when we think about a dynamic system in cognitive
terms. That involves placing the physical facts in a cognitive context, something that physicists, by
the nature of their profession, just don't do.
In fact, I used the distinction primarily for a very different reason: to show that so-called
distributed systems are not distributed abstractly, even if they are distributed materially, and that
the distinct between the two is not that important. I believe it is only perceptual chauvinism that
makes us believe that material things are realer than abstract things. Both are physical, as far as
physics is concerned, and the fact that this distinction can be found in physics as well as in
computational discourse should inhibit the common assumption that functional characteristics are
somehow less real than physical characteristics. Consequently I also think that there is actually less
difference between distributed and modular systems than is widely believed, which might have implied
(outside of the context of the rest of the paper) that I was defending Fodor against Van Gelder. As
Clark points out, however, it is not the fact that dynamic systems are distributed that gives them
their power, but their flexibility. The old blues song "It ain't the meat, it's the motion" only tells
part of the story about the differences between modular and dynamic systems. The important difference
is that certain kinds of information may be stored in motion that cannot be stored in meat. (OCTEQ)
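The "stored in motion" point has a concrete ancestor in computing history: the mercury delay-line memories of early machines such as the EDSAC held bits not in any static component but as acoustic pulses circulating through a tube, kept alive by re-amplification. A toy sketch of the idea (my analogy, not the paper's):

```python
# Toy analogy (mine, not the paper's): a delay-line memory stores bits
# as pulses in transit. No single static cell holds the word; the
# information exists only in the circulating motion, sustained by
# feeding whatever emerges from the line back into its input.
from collections import deque

class DelayLine:
    def __init__(self, bits):
        self.line = deque(bits)          # the pulses currently "in flight"

    def tick(self):
        """One time step: a pulse exits the line and is recirculated."""
        pulse = self.line.popleft()
        self.line.append(pulse)
        return pulse

    def read_word(self):
        """Reading means listening to the line for one full circulation."""
        return [self.tick() for _ in range(len(self.line))]

memory = DelayLine([1, 0, 1, 1, 0, 0, 1, 0])
print(memory.read_word())   # [1, 0, 1, 1, 0, 0, 1, 0]
```

Here the stored word is a property of the ongoing process, not of any component frozen at an instant, which is one literal sense in which information can live in motion rather than in meat.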
Van Gelder says "it is conventional wisdom that symbol processing 'bottoms out' in the dynamical
properties of the computer hardware (although nobody I know can actually describe the details)." What
I was trying to do in this paper was to describe some of the details, or at least show by example what
such a description might look like. I still think that I have taken one small step in the right
direction, and pointed the way towards others. A LISP primitive may look like a minnow when seen from
a higher language perspective, but from the point of view of its machine language implementation it is
a fairly sophisticated module. Van Gelder is right that emergent properties are extremely important in
DST, and consequently our job would not be even close to finished even if we did manage to find
attractor sets that corresponded to all LISP primitives. But it would be a worthwhile beginning.
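The "minnow versus module" point can be made concrete. A primitive like CAR is a single opaque step at the LISP level, but its implementation decomposes into several distinct lower-level operations: tag checking, pointer following, error handling. A rough sketch, with cons cells modeled as tagged pairs (my illustration, not the paper's horseLISP example):

```python
# Illustration (mine): at the LISP level, CAR is a single primitive --
# a "minnow". Spelled out at the implementation level it is a small
# module of its own: type-tag dispatch, pointer dereference, and an
# error path, each of which a machine-language view must account for.

class Cons:
    """A cons cell: one tagged pair in the heap."""
    def __init__(self, head, tail):
        self.head, self.tail = head, tail

def car(obj):
    """The primitive as seen from above: one atomic step."""
    # ...but from below, several distinct operations:
    if not isinstance(obj, Cons):        # 1. check the type tag
        raise TypeError("car: not a cons cell")
    return obj.head                      # 2. follow the pointer

def cdr(obj):
    if not isinstance(obj, Cons):
        raise TypeError("cdr: not a cons cell")
    return obj.tail

lst = Cons(1, Cons(2, Cons(3, None)))    # the list (1 2 3)
print(car(lst), car(cdr(lst)))           # 1 2
```

Finding an attractor set that behaved like this whole bundle, error path included, would be the sort of small step the paper has in mind, and the emergent properties Van Gelder stresses would still lie beyond it.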
I'm also not sure there is that much difference between a dynamical system and an attractor. Van
Gelder's glossary does say that an invariant set can be studied as a separate dynamic system, and that
an attractor is one kind of invariant set. I did not mean to imply that a state space could be a
module; if anything it could be a possible value or computational state of a module. But given that I
learned most of this terminology from Van Gelder's book, I don't feel confident about correcting him.
Reading these two commentaries gave me a better sense of how my own beliefs are positioned between
Clark and Van Gelder. Clark implies that Van Gelder's position is more eliminative than his, and Van
Gelder describes my position as "semi-eliminative". With Van Gelder on my left, Clark on my right,
and Fodor way to the right of all three of us, I feel in very distinguished company. I'm pretty sure
Fodor is mostly wrong on the points where we disagree, but I have much less confidence about my disagreements with Clark and Van Gelder. OCTEQ, so
the more possibilities there are available for empirical testing, the better chance we have of finding
the right one.