Commentary on "The Modularity of Dynamic Systems"
by Andy Clark
I thought this was well-executed and interesting. I have just a few thoughts in response.
1. Throughout the paper, and especially in the section called "LISP vs. DST", I worried that
there was not enough focus on EXPLANATION. For the real question, it seems to me, is not whether some
dynamical system can implement human cognition, but whether the dynamical description of the system is
more explanatorily potent than a computational/representational one. Thus we know, for example, that a
purely physical specification can fix a system capable of computing any LISP function. But from this
it doesn't follow that the physical description is the one we need to understand the power of the
system considered as an information processing device. In the same way, I don't think your
demonstration that bifurcating attractor sets can yield the same behavior as a LISP program goes any
way towards showing that we should not PREFER the LISP description (a toy sketch of this kind of behavioral equivalence appears at the end of this point). To reduce symbolic stories to a
subset of DST (as hinted in that section) requires MORE than showing this kind of equivalence: it
requires showing that there is explanatory gain, or at the very least, no explanatory loss, at that
level. I append an extract from a recent paper of mine that touches on these issues, in case it helps
clarify what I am after here.
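Here is the promised sketch, purely illustrative (my own toy, with invented dynamics and parameters; nothing in it is drawn from the target paper): a bistable system whose two attractors serve as a one-bit memory. The same behavior supports both a dynamical description (a flow with fixed points and basins of attraction) and a symbolic one (flipping a stored bit).

    # A toy double-well system dx/dt = x - x**3 + pulse. Its two attractors,
    # x = -1 and x = +1, can be read as a stored bit; a brief strong pulse
    # flips the bit. The behavior admits both a dynamical description
    # (flow, fixed points, basins) and a symbolic one (a one-bit flip).

    def step(x, pulse, dt=0.01):
        # One Euler step of the flow.
        return x + dt * (x - x**3 + pulse)

    def settle(x, pulse=0.0, steps=2000):
        # Run the flow until the state relaxes toward an attractor.
        for _ in range(steps):
            x = step(x, pulse)
        return x

    def read_bit(x):
        # Symbolic-level readout: which basin of attraction are we in?
        return 1 if x > 0 else 0

    x = settle(0.1)                       # relaxes to the x = +1 attractor
    print(read_bit(x))                    # -> 1
    x = settle(x, pulse=-3.0, steps=200)  # a brief pulse pushes the state
    x = settle(x)                         # across the barrier to x = -1
    print(read_bit(x))                    # -> 0

The sketch is only meant to show that this kind of behavioral equivalence comes cheap; which description earns its explanatory keep is a further question.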
2. Re 'the essence of distribution': I agree that superposition is the key. But it is important to see WHY it is so crucial. It is not, I think, just that it buys an increase in coding power, but also that it forces a kind of coding in which (something like) similarity of inner vehicle implies similarity of content, i.e. it creates what I call a 'semantic metric' (a toy sketch of the idea follows).
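A minimal sketch of the point (the feature set and items are invented for the example, and the encoding is my own toy, not anything from the paper): in a superpositional code, each item's vehicle is the sum of the feature vectors it instantiates, so overlap of content shows up directly as overlap of vehicle.

    import math

    FEATURES = ["furry", "barks", "domestic", "small"]

    def encode(features):
        # Superpose one-hot feature vectors into one distributed vehicle.
        vec = [0.0] * len(FEATURES)
        for f in features:
            vec[FEATURES.index(f)] += 1.0
        return vec

    def cosine(u, v):
        # Similarity of vehicles: the angle between the two vectors.
        dot = sum(a * b for a, b in zip(u, v))
        norm = (math.sqrt(sum(a * a for a in u))
                * math.sqrt(sum(b * b for b in v)))
        return dot / norm

    dog = encode(["furry", "barks", "domestic"])
    wolf = encode(["furry", "barks"])   # similar content to 'dog'
    pebble = encode(["small"])          # shares no features with 'dog'

    print(cosine(dog, wolf))    # high: similar contents, similar vehicles
    print(cosine(dog, pebble))  # zero: dissimilar contents and vehicles

In a purely local (one-item-per-unit) code, by contrast, 'dog' and 'wolf' would get orthogonal vehicles however similar their contents, and no semantic metric would emerge.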
3. The section "The Abstract and the Material"
I found something here unsettling, though I keep failing to put my finger on exactly what it is. But try this: you say DST is a 'branch of physics' and suggest that this makes the DST stories not functional but 'material', i.e. something like 'abstract physical'. But the same could be said for computation, as you note. Indeed, the real question again is: what kinds of description serve us best? It wasn't at all clear to me why the DST descriptions, once linked to CONTENTS, are not just functional stories. Why introduce the new middle term 'material'? If it is the function that COUNTS, then functionalism wins. And 'abstract modularity' seems fine to me, even on a full Fodorian account. It isn't where things are located in the traditional computer that matters.
4. The section "The dynamics of state spaces"
I liked this a lot. One thought: is it worth distinguishing something like run-time modularity from impermeability to learning? I think I can imagine an encapsulated subsystem that is a clean module when invoked as part of on-line skilled behavior but that can NONETHELESS be altered and transformed by outside influences when in a kind of learning mode, and that can go into that mode again and again (think of what happens when a good golfer tinkers with her swing).
-------- Appendix: from Clark, "Time and Mind," JOURNAL OF PHILOSOPHY 1998: 354-376 --------

3. DYNAMICS & THE FLOW OF INFORMATION
The deepest problem with the dynamical alternative lies precisely in its treatment of the brain as
just one more factor in the complex overall web of causal influences. In one sense this is obviously
true. Inner and outer factors do conspire to support many kinds of adaptive success. But in another
sense it is either false, or our world-view will have to change in some very dramatic ways indeed.
For we do suppose that it is the staggering structural complexity and variability of the brain that is
the key to understanding the specifically intelligence-based route to evolutionary success. And we do
suppose that that route involves the ability, courtesy of complex neural events, to become apprised
of information concerning our surroundings, and to use that information as a guide to present and
future action. If these are not truisms, they are very close to being so. But as soon as we embrace
the notion of the brain as the principal seat of information-processing activity, we are already
seeing it as fundamentally different from, say, the flow of a river or the activity of a volcano. And
this is a difference which needs to be reflected in our scientific analysis: a difference which
typically is reflected when we pursue the kind of information-processing model associated with
computational approaches, but which looks to be lost if we treat the brain in exactly the same terms
as, say, the Watt Governor, the beating of a heart, or the unfolding of a basic chemical reaction.
The question, in short, is how to do justice to the idea that there is a principled distinction
between knowledge-based and merely physical-causal systems. It does not seem likely that the
dynamicist will deny that there is a difference (though hints of such a denial are sometimes to be
found). But rather than responding by embracing a different vocabulary for the understanding and
analysis of brain events (at least as they pertain to cognition), the dynamicist re-casts the issue as
the explanation of distinctive kinds of behavioral flexibility and hopes to explain that flexibility
using the very same apparatus that works for other physical systems, such as the Watt Governor.
Such apparatus, however, may not be intrinsically well-suited to explaining the particular way
neural processes contribute to behavioral flexibility. This is because 1) it is unclear how it can do
justice to the fundamental ideas of agency and of information-guided choice, and 2) the emphasis on
total state may obscure the kinds of inner structural variation especially characteristic of information-guided control systems.
The first point is fairly obvious and has already been alluded to above. There seems to be a
(morally and scientifically) crucial distinction between systems that select actions for reasons and
on the basis of acquired knowledge, and other (often highly complex) systems that do not display such
goal-oriented behaviors. The image of brain, body and world as a single, densely coupled system
threatens to eliminate the idea of purposive agency unless it is combined with some recognition of the
special way goals and knowledge figure in the origination of some of our bodily motions. The
computational/information-processing approach provides such recognition by embracing a kind of
dual-aspect account in which certain inner states and processes act as the vehicles of specific kinds
of knowledge and information. The purely dynamical approach, by contrast, seems committed (at best) to
a kind of behavior-based story in which the purposive/non-purposive distinction is unpacked in terms
of such factors as resistance to environmental perturbation.
The second point builds on the first by noting that total state explanations do not seem to fare
well as a means of understanding systems in which complex information flow plays a key role. This is
because such systems, as Aaron Sloman has usefully pointed out, typically depend upon multiple,
'independently variable, causally interacting sub-states' (op. cit., p. 80). That is to say, the
systems support great behavioral flexibility by being able cheaply to alter the inner flow of
information in a wide variety of ways. In a standard computer, for example, we find multiple
databases, procedures and operations. The real power of the device consists in its ability to rapidly
and cheaply reconfigure the way these components interact. For systems such as these, the total state model seems curiously unexplanatory. Sloman (op. cit., p. 81) notes that:
a typical modern computer can be thought of as having a [total] state represented by a vector
giving the bit-values of all the locations in its memory and in its registers, and all processes in
the computer can be thought of in terms of the machine's state space. However, in practice, this
[Total State Explanation] has not proved a useful way for software engineers to think ... Rather, it
is generally more useful to think of various persisting sub-components (strings, arrays, trees,
networks, databases, stored programs) as having their own changing states which interact with one
another.
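To make Sloman's contrast concrete, here is a minimal sketch (my own illustrative toy, not Sloman's example; the component names are invented): the very same machine state can be described as one anonymous total-state vector, or as a set of independently variable, causally interacting sub-states, and only the second description marks which component changed and why.

    # The same machine state, described two ways.
    machine = {
        "registers": {"acc": 0, "pc": 0},
        "stack": [],
        "database": {"goal": "fetch"},
    }

    def total_state(m):
        # Total-state description: flatten everything into one anonymous
        # vector, a single point in the machine's state space.
        return (list(m["registers"].values())
                + list(m["stack"])
                + list(m["database"].values()))

    # Sub-state description: one component changes, the others are
    # untouched, and the change is intelligible via that component's role.
    machine["stack"].append(42)
    machine["registers"]["pc"] += 1

    # Total-state description: the vector has simply 'moved' to a new
    # point; which sub-state varied, and why, is no longer marked.
    print(total_state(machine))  # -> [0, 1, 42, 'fetch']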
The dynamicist may suggest that this is an unfair example, since of course a standard computer
will reward a standard computational analysis. This, however, is to miss the real point, which is that
information-based control systems tend to exhibit a kind of complex articulation in which what matters
most is the extent to which component processes may be rapidly de-coupled and re-organized. This kind
of articulation has recently been suggested as a pervasive and powerful feature of real neural
processing. The fundamental idea is that large amounts of neural machinery are devoted not to the
direct control of action but to the trafficking and routing of information within the brain. The
point, for present purposes, is that to the extent that neural control systems exhibit such complex
and information-based articulation (into multiple independently variable sub-systems) the use of total
state explanations will tend to obscure the important details, such as the various ways in which
sub-state x may vary independently of sub-state y, and so on.
The dynamicist may then reply that the dynamical framework really leaves plenty of room for the
understanding of such variability. After all, the location in state space can be specified as a vector
comprising multiple elements and we may then observe how some elements change while others remain
fixed and so on. This is true. But notice the difference between this kind of dynamical approach and
the radical, total state vision pursued in section 2. If, as I suspect, the dynamicist is forced (a) to give an information-based reading of various systemic substates and processes, and (b) to attend as much to the details of the inner flow of information as to the evolution of total state over time, then it
is unclear that we still confront a real alternative to the computational story. Instead, what we seem
to end up with is a (very interesting) hybrid: a kind of dynamical computationalism in which the
details of the flow of information are every bit as important as the larger scale dynamics, and in
which some local dynamical features lead a double life as elements in an information-processing
economy.
This kind of dynamical computationalism is surely attractive. Indeed, it is the norm in many
recent treatments that combine the use of dynamical tools with a straightforward acceptance of the
notions of internal representation and of neural computation. Nonetheless, such an accommodation is clearly rejected by those who, like van Gelder, depict the dynamical approach as in some deep sense non-computational.