A Defense of Emergent Downward Causation
Teed Rockwell
Philosophy Department
Sonoma State University
At
least one of my professors told me that in order to write a good philosophy
paper, one should always try to defend as little territory as possible. The
danger of this advice is that although it may make one's points defensible, it
may also make them not worth defending. In order to avoid both of these
extremes, I am going to defend a relatively modest claim, which appears to be
necessary but not sufficient for another more ambitious claim, which itself is
also necessary for another more ambitious claim, and so on for several layers.
I will start with the most ambitious claim, and then work my way down until I
come to the claim I believe I have some chance of defending. I will, however,
continue to make references to the other layers, to help us remember why the
more modest claims are worth thinking about.
1) Free Will--This is the
big one, which has devoured many a philosophical career in its defense. I won't
even attempt to define free will here. I will only mention that in order for
Free Will to exist, there must at the very least also exist:
2)
Mental Causation. Hume claimed that even if events were
caused by mental entities such as beliefs and desires, they would still be
determined, and I won't dispute that claim here. Mental causation seems to be
necessary for free will, even if it is not sufficient. Mental causation is,
however, also in need of defense, for many of its
contemporary defenders have weakened it with faint praise. Dan Dennett's
concept of the Intentional Stance, for example, makes mental causation
compatible with determinism by seeing it as a useful fiction, while implying that
because it is useful it is somehow more than a fiction. After the dark days of
the Skinnerian behaviorists, many people are grateful to have intentional
explanations granted any ontological status at all. So it is not surprising
that few people ask what seems to me an important question. Why should the
"Physical Stance" taken when we try to understand the world in
physical terms be seen as a more authentic stance than the Intentional Stance?
One reason is that everything in the world can be talked about in physical
terms. Many entities studied in biology may have no characteristics that are
relevant to the study of linguistics, and vice versa, but there is always something
to be said in physical terms about every entity described in other terms. If we
look at any language game, all of the tokens used in that game are made of
things like paper or wood or sound-events, and each of these has physical
characteristics of some sort. But just because the physical stance tells us something
about everything doesn't necessarily imply that it tells us everything
about everything. The physical stance does not tell us the whole story if there
is such a thing as:
3) Downward Causation--I
think the primary, and perhaps the only, reason that the physical stance has
something to say about everything that exists is that it breaks the world down
to its smallest possible parts. This is why we say that everything, from brains
to bananas, is "made of" atoms. Orthodox determinism says that once
you have said exhaustively what something is made of, you have told its entire
story, because the behavior of any whole is controlled by its
parts. In order for there to be downward
causation, it must be possible for a whole to determine the behavior of its
parts, rather than the other way around. It must, for example, be possible
for a brain to control its atoms and molecules, rather than always have the
atoms and molecules controlling the brain. Downward causation is thus also
called "macrocausation", because if it
exists, then macroscopic entities have causal powers over the atoms they are
made of. This can only happen if those macroscopic entities
possess what are called:
4)
Emergent Causal Properties--These days almost everyone
admits that some sort of emergence exists, i.e., that in some sense every whole
is distinct from the sum of its parts. Consequently, the fact that all of the
parts of a given system are physical need not imply that a physical description
of a given system tells its entire story. Various forms of functionalism claim
to show that a whole can change any, and maybe even all, of its parts and still
remain the same whole. (or at least the same kind of
whole). Nevertheless, we feel that this unquestionable sort of emergence is an innocent emergence, and not the spooky sort of dualist
emergence believed in by idealistic philosophers. There are certainly many
kinds of spookiness that are not implied by modern emergentism,
but if emergent properties possess causal powers, they are not purely innocent
from an orthodox materialist point of view. Indeed, much of
what tempts us towards dualism, without forcing us to accept some of its
problems, can be had by acknowledging the possibility of emergent causal powers.
This paper will examine the concepts of both emergence and causality, and try
to show that the suspicions against emergent causality are based on
presuppositions, and on a view of scientific knowledge, that are now at best
questionable.
The Problems with Non-Reductive Materialism
One
of the ways that modern philosophers have tried to save the innocence of
emergent properties is with an ontological doctrine called non-reductive
materialism. This is a tempting hybrid that accepts the reality of emergence
yet also claims that all causality is physical. Jaegwon
Kim, however, has compelling arguments demonstrating that this hybrid is
self-contradictory. He claims that one can be either materialist, or
non-reductive, but not both. Consider the example below, which is used in Kim
1993 (pp. 351-2) to illustrate what is often called Kim's dilemma.
M causes M*
P causes P*
In
this diagram, a single mental event M is seen as causing another mental event
M*. This mental event is physically realized (for example in a brain state) by
a physical event P, which causes P*, i.e. the physical realization of M*. Kim's argument (greatly simplified) against the
existence of mental causation is that the top layer does no real work. P can
cause P* all by itself, with no help from M, and there is no coherent way in
which M can cause M* without P's help, or without causing P*. Thus it seems
that physical causality is all we've got, and mental descriptions are somewhere
between being shallow and being outright falsehoods. Kim claims that the only coherent{1}
alternatives are:
1) Dualism, which says that M and M* are independent of P and P*. This position is non-reductive without being materialist.
2) Reductionism, which says that physical events are identical with mental events, and
2a) Eliminativism, which says that mental events do not exist at all.
2) and 2a) are materialist without being non-reductive.
There
is, however, another alternative, which Kim does not consider, which I will
call Pluralism.
Pluralism as an Alternative to Dualism and Materialism
Kim
says that if we accept the possibility of emergent causality "the world
remains bifurcated: the physical domain and a distinct irreducible psychical
domain" (Kim 1993 p.96). This clearly implies that dualism is the only
alternative to physicalism, an understandable
assumption given that these have been the two most defended alternatives since
Descartes.{2}
There is, however, a corollary of this assumption: in order for the world to
be bifurcated by emergent mental causality, there must be no emergent causality
within physics itself. It is widely assumed that all of physics is unified
because its quantifiably precise descriptions refer to a very small variety of
entities (atoms, or quarks or whatever), and everything physical is explainable
by referring to the properties possessed by those entities. If physics were not
unified ontologically, emergent mental causality would not bifurcate the world,
it would just be one more unassembled piece in an already fragmented puzzle.
Kim takes this corollary quite seriously, and is one of the few people who is not afraid of spelling out its implications. He claims
that a genuinely physical explanation would have to deny intrinsic causal
powers not only to beliefs and desires, but to
everything except the elementary particles of physical theory.
"
all causal relations involving observable
phenomena-all causal relations from daily experience--are cases of
epiphenomenal causation" (ibid. p.96)
Kim
defines epiphenomenal causation as a relationship between two events which appears to be a cause and effect relationship,
but in fact is merely a reflection of some other underlying causal process. If
we are to be consistent in our denial of emergent processes, we must claim that
strictly speaking the rock thrown at the chair did not cause the chair to fall
over. Rather the relationship between the thrown rock and the chair is an
epiphenomenon that supervenes on the genuine causal processes of subatomic
particles, in essentially the same way that mental states supervene upon physical
states. For if we granted the existence of emergent macroscopic causal
properties within physics, there would be no reason to deny their existence in
the mental realm.
Suppose,
however, that there were certain patterns that emerged in what we call physical
processes which had genuine causal powers? This kind
of emergence would not necessarily imply a dualistic universe, but rather a
pluralistic one. There could be a variety of macroscopic patterns having an
impact on such a world, some of which would be able to control the
particles they were made of, rather than the other way around. In such a
pluralistic universe, there would be no principled
reason for denying the possibility of mental causality. Mental processes could
be one kind of emergent phenomenon, but not the only one. One could flippantly
say that when one asks a pluralist "are you a dualist" the correct
answer is "yes, at the very least". Such a view would save mental
causation from having to rely on finding something ontologically unique about
the mental, and from being tarred with the brush of Cartesian dualism. In a
post-Darwinian world, any attempt to grant special abilities to consciousness
(especially to human consciousness) is bound to look like special pleading
motivated by wishful thinking. If we can get the same result by seeing our
mental processes as one of many different kinds of emergent properties, then
mental properties would be a much more plausible result of evolutionary
processes.
Kim
admits that the claim that physical causality has no emergent properties is an
empirical one. He claims that "modern theoretical science treats macrocausation as reducible epiphenomenal causation and . .
. this has proven to be an extremely successful explanatory and predictive
research strategy." and describes this claim as an "observation"
(Kim 1993 p. 96). There is no denying that the early days of Newtonian science
had a lot of success with this strategy. Every molecule of a magnet is itself
magnetic, possessing a north and south pole, and one can predict how the
magnetic field will behave by quantitatively extrapolating from the behavior of
each molecule. This is also true of gravitational effects: if
two objects have sufficient mass to make them weigh five pounds each, then once
you put them together they will constitute a single object weighing ten pounds.
But a lot of changes have happened since Newton in both physics and the other
sciences, and it is no longer obvious that all macroscopic properties are
predictable from their parts even in principle. I don't
think that the validity of this metaphysical assumption can be proven or
disproven with a single argument or experiment. But there are a variety of
developments in science, and in the history and philosophy of science, that
have made the acceptance of genuine macroscopic causality nowhere near as
unthinkable as it once was.
Where does it all begin?
Kim
admits that the causality of tables and chairs is every bit as epiphenomenal as
that of mental states, because both are dependent on more fundamental laws of
physics. But strictly speaking, Kim's position would also require us to claim
that all of the laws of chemistry are epiphenomenal, because the behavior of
the elements is really causally dependent on the behavior of protons, neutrons
and electrons. The behavior of these subatomic particles would also be
epiphenomenal, because they are causally dependent on the behavior of quarks.
And now that we recognize that scientific revolutions are a natural part of the
growth of sciences, we cannot discount the possibility that further research
could reveal (if it hasn't already) that quarks have parts. If this happened,
then the behavior of quarks would be epiphenomenal. Paul Churchland
also suggests that metaphysically there is no reason to be certain that this
reductive process ever stops.
. . .consider the possibility that for any level of order
discovered in the universe, there always exists a deeper taxonomy of kinds and
a deeper level of order in terms of which the lawful order can be explained. It
is, as far as I can see, a wholly empirical question whether or not the
universe is like this, like an "explanatory onion" with an infinite
number of explanatory skins. If it is like this, then there are no basic or
ultimate laws to which all investigators must inevitably be led. (P.M. Churchland 1989 pp. 293-4)
If
this suggestion turns out to be correct, then the causal powers of quarks would
be different only in degree, and not in kind, from the causal powers of beliefs
and desires. If every causal effect is dependent on the behavior of its parts,
and the division of parts into parts goes on forever, there would be no
principled reason to stop the regress at one place rather than another. Nor can
we claim that all causality is epiphenomenal, for that would be as meaningless
as claiming that all the money that ever existed was counterfeit.
Of
course, if quarks, or some sort of sub-quark, is where
the regress ends, then none of this is a problem. But this argument does show
that the concept of downward causation is not incoherent. It could be a matter
of empirical fact that the world is an explanatory onion with infinite layers,
and if it were, macroscopic causation would exist. Furthermore, such a
possibility is not contradicted by any empirical facts we possess, nor is it
ever likely to be. It only contradicts certain articles of faith that drive the
scientific enterprise, not scientific fact itself. And in addition, I think
there is also scientific fact that supports the existence of macroscopic
causation, as I hope the following examples will show.
Emergent Causation in Dynamic Systems
There
are many who see dynamic systems theory as the strongest contender in the race
for a single theory that will unify physics and biology. This may be true in
some sense of "unified", but many researchers in dynamic systems
theory seem to think that whatever unity dynamic system theory reveals will not
eliminate macroscopic causality. Consider these quotes from Kelso 1995.
Understanding
will be sought in terms of essential variables that characterize patterns of
behavior regardless of what elements are involved in producing the patterns . .
. I will try to avoid dualities such as top down versus bottom up. . . the reductionism will be to principles (or
what I call generic mechanisms) that apply across different levels of
investigation. (p.2)
The
nature of the interactions {in dynamic systems} must be non-linear. This
constitutes a major break with Sir Isaac Newton, who said in Definition II of
the Principia "The motion of the whole is the sum of the motion of all the
parts". For us, motion of the whole is not only greater than, but different than the sum of the motion of the parts, due
to non-linear interactions among the parts or between the parts and the
environment (p.16)
An
order parameter is created by the coordination between the parts, but in turn
influences the behavior of the parts. This is what we mean by circular
causality . . . (p.16). (all italics in original)
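Kelso's "circular causality" can be made concrete with a toy model. The sketch below is my own illustration, not Kelso's actual equations: a Kuramoto-style system of coupled oscillators, in which an order parameter is created by the coordination of the parts and then feeds back into each part's equation of motion.

```python
import math
import random

def simulate(n=100, coupling=2.0, steps=2000, dt=0.05, seed=1):
    """Kuramoto-style sketch of circular causality.

    Each oscillator (a 'part') has its own natural frequency, but its
    motion is steered by an order parameter (r, psi) that the parts
    collectively create.
    """
    rng = random.Random(seed)
    omega = [rng.gauss(0.0, 0.2) for _ in range(n)]            # natural frequencies
    theta = [rng.uniform(0.0, 2 * math.pi) for _ in range(n)]  # initial phases
    r = 0.0
    for _ in range(steps):
        # The order parameter is created by the coordination of the parts...
        rx = sum(math.cos(t) for t in theta) / n
        ry = sum(math.sin(t) for t in theta) / n
        r, psi = math.hypot(rx, ry), math.atan2(ry, rx)
        # ...but in turn influences the behavior of each part.
        theta = [t + dt * (w + coupling * r * math.sin(psi - t))
                 for t, w in zip(theta, omega)]
    return r  # near 1.0 means the parts have locked into a collective pattern

print(simulate(coupling=2.0))  # strong coupling: collective order emerges
print(simulate(coupling=0.0))  # no coupling: no macroscopic pattern
```

No single oscillator contains the pattern; switch the coupling off and the same parts, with the same frequencies, produce no order at all.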
I
think these quotes demonstrate that at least one contemporary researcher {3} does not see his research strategy as reducing macrocausation to an epiphenomenon of microcausation.
But we cannot simply bow to expertise on this question, because it is not the
sort of question that can be answered decisively by purely experimental means.
Conceptual analysis of the data of dynamic systems research is the only way to
clarify the problem, and like most philosophical problems, it will probably
never be clarified enough to yield a completely decisive answer. It may very
well be that Kelso has invalidly extrapolated from his data to make unjustified
metaphysical claims. But the fact that his assumptions sprang naturally out of
his research, and seem to help it, is a compelling reason to take them
seriously.
The
patterns studied in dynamic systems theory have to be treated as emergent
properties because they are non-linear. This means, in this case at least, that
their micro-details are usually unpredictable in practice, and, for all we
know, may be so in principle. Consider the simple dynamic system that Kelso
uses as his introductory paradigm: a pattern called the Rayleigh-Benard instability, which occurs in liquid boiling in a
shallow pan. When the temperature is below a certain level, the liquid's
molecules are basically chaotic, and no noticeable macroscopic patterns emerge.
But once the temperature passes a certain threshold, the liquid coheres into
rolling cylindrical patterns, which turn either clockwise or counterclockwise.
Because there are two possible repeating patterns that the liquid spontaneously
falls into, the boiling liquid is referred to as a bistable
pattern. This pattern appears to be genuinely emergent,
however, because there seems to be no way to predict which of the two kinds of
patterns a given molecule will fall into. As Kelso puts it, "How does the
fluid decide which way to go? The answer is Lady Luck herself" (p.10). The pattern itself is predictable, and once a
molecule becomes part of a pattern it will not change. But it appears that
there are no forces present within the individual molecule that are responsible
for the pattern's coming into being, and no reason why any particular molecule
in the liquid should become part of either type of cylindrical pattern.
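This combination of a predictable pattern and an unpredictable "choice" can be mimicked with a minimal bistable model. The sketch below is my own illustration, not Kelso's equations for the Rayleigh-Benard instability: the pitchfork normal form dx/dt = x - x^3 has two stable states at +1 and -1 (standing in for the two rolling patterns), and which one the system settles into depends entirely on a microscopically tiny initial fluctuation, while the final magnitude of the pattern is the same every time.

```python
import random

def settle(seed, steps=4000, dt=0.01):
    """Bistable toy model: dx/dt = x - x**3 (pitchfork normal form).

    The two stable states (+1 and -1) play the role of the clockwise and
    counterclockwise rolls; the seed plays the role of micro-level chance.
    """
    rng = random.Random(seed)
    x = rng.gauss(0.0, 1e-6)  # microscopically tiny random fluctuation
    for _ in range(steps):
        x += dt * (x - x ** 3)  # forward Euler integration
    return x

finals = [settle(seed) for seed in range(20)]
# The macroscopic pattern is predictable: every run settles at |x| = 1...
print(all(abs(abs(x) - 1.0) < 1e-3 for x in finals))
# ...but which of the two patterns gets chosen varies from run to run.
print(sorted(set(x > 0 for x in finals)))
```

Nothing in the deterministic part of the dynamics selects a sign; the selection is inherited entirely from a fluctuation far below the scale of the pattern itself.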
I
could say the fact that it is so hard to find explanations for the behavior of
each molecule in a non-linear interaction is at least partial evidence
indicating that there might not be any. But all I really need to say is that we
no longer have to fear that accepting the existence of emergent properties
would require us to throw out the physical sciences. They have apparently
learned to live with emergent macroscopic properties, just as they learned to
live with action at a distance when Newton introduced gravity. If it turned out
that the behavior of the individual molecules was genuinely chaotic, and not
just very hard to figure out, it would mean that whatever order appeared in the Rayleigh-Benard
instability was genuinely emergent. But this would not require us to declare a
scientific revolution. On the contrary it would show that we already knew what
there is to know about the subject, and our theories are just fine the way they
are. And given that quantum mechanics already acknowledges the existence of
real indeterminacy, accepting the indeterminacy of non-linear interactions
would not go beyond the bounds of acceptable physical theory{4}. Macroscopic causality does
not conflict with science any more, only with the metaphysics of scientism.
Because
the Rayleigh-Benard instability is relatively simple,
it helps shed some light on a problem for free will raised in Kane 1985. Kane
claims that the existence of genuine physical indeterminacy creates more
problems for free will than it solves.
If
some physical indeterminacy should ever have a non-negligible effect on human
behavior, it would interfere with, rather than enhance,
freedom by diminishing the control free agents are supposed to have over their
choices and actions (Kane 1985 p. 10)
One
might object that the indeterminist condition is a two edged sword. If it
thwarts control over the agent's will by other agents, it may also thwart
control over the agent's will by the agent himself. (Kane 1985 pp. 35 -36)
Kane
devotes quite a bit of effort to dealing with these problems, and eventually
manages to get quantum indeterminacy to work for free will in a complex and
subtle way. I think, however, that we can make his job a lot easier by
considering the relationship between the molecules and the two different
rolling patterns in the Rayleigh-Benard instability.
Suppose that any given molecule's "choice" between the two rolling
patterns is genuinely indeterminate and consequently there really is no answer
to the question "why did this particular molecule join a clockwise roll,
rather than a counterclockwise roll?" This
would mean that the molecules were not responsible for the behavior of the
rolling pattern, and therefore the causal buck would stop at the pattern, not
at the molecules. It would not mean that the pattern was being controlled in
some way by chaotic forces. If we are dynamic patterns
that emerge out of genuinely indeterminate chaos, we would also have at least
some genuine ontological independence from our parts. It does make a kind of
sense to talk of molecules being "controlled by chaos" when no other
patterns are around to superimpose order on them. But there's really only a
semantic difference between being controlled by chaos and not being controlled
by anything. To say that particles are chaotic is really to say that, because
their behavior is not completely determined, they are to some degree vulnerable
to having higher level patterns superimposed on them.
Consequently, genuine indeterminacy does provide the possibility of an emergent
autonomy that could be possessed by both us and the patterns of the Rayleigh-Benard instability.
Of
course, a human organism is much more complicated than a layer of boiling
liquid. No one would want to describe boiling liquid as having free will, but I
think the arguments above show that it has something that could be labeled
autonomy. Nothing in this paper is meant to provide arguments why an autonomy of this sort would be sufficient to produce free
will, but I do believe that this autonomy is one necessary condition for free
will. The relationship between pattern and molecules in the Rayleigh-Benard instability is also somewhat similar to the
relationship that Dennett describes between the Intentional Stance and the
Physical Stance: In each case, the pattern describes the broad outlines of the
system's behavior, but cannot predict the exact details.
The
intentional strategy. . . is notoriously unable to
predict the exact purchase and sell decisions of stock traders, for instance,
or the exact sequence of words a politician will utter when making a scheduled
speech. But one's confidence can be very high indeed about less specific
predictions: that the particular trader will not buy utilities today or
that the politician will side with the unions against his party. (reprinted in Haugeland 1997 p.67
italics in original)
There
are patterns that impose themselves, not quite inexorably but with great vigor,
absorbing physical perturbations and variations that might as well be
considered random; these are the patterns that we characterize in terms of
the beliefs, desires, and intentions of rational agents (ibid.
p.70 italics added)
Most
of the time, Dennett seems to believe that this ambiguity is epistemological
rather than ontological. Note that in the second quote he says that the
physical perturbations "might as well be considered random", not that
they actually are random. But I think this kind of caution is only justified if
we assume that no scientifically minded person would dispute the unity of
physics. Dennett, like Kim, appears to assume that physics is causally unified
in this tough sense, and therefore any defense of the intentional stance had
better make the same assumption. Kelso, however, does not need that assumption
to do his research, and even seems to believe that it would get in the way of
understanding his data. If there is no reason to deny the causal independence
of the Rayleigh-Benard instability from the molecules
that serve as its medium, there is no reason to deny causal independence to the
patterns that are revealed to us when we take the Intentional Stance.
Troubles with Functionalism
Even
if we don't consider this new data from dynamic systems, there are still
problems when we try to reduce (mental or computational) wholes to (physical)
parts. The theory of mind called functionalism based its argument for the
irreducibility of psychological and sociological categories on the fact that
there are too many different physical ways that functional predicates could be
instantiated for them to be reduced to single physical predicates. This makes
it impossible to formulate what are called bridge laws
i.e. logical identities between entities in the reduced and reducing domains.
As Fodor points out, a monetary exchange could be instantiated physically by
handing over a dollar bill, or by writing a check, or by using a string of
wampum, and it would obviously be only an improbable coincidence if any of
these actions had anything in common physically (Fodor 1975,
Chapter 1). This seems to leave open the possibility of emergent
properties, but it is a possibility that the functionalists have been hesitant
to fully embrace.
Because
functionalism was put forward as a theory of non-reductive materialism, the
functionalists set a somewhat contradictory agenda for themselves. Although
they wanted to claim that there was a difference between 1) the relationship of
the physical to the functional and 2) the relationship of one physical entity
to another, they were ambivalent about whether they were willing to grant full
causal powers to emergent functional properties. Kim's arguments against
non-reductive materialism's attempt to have it both ways are, I think,
decisive. But I believe he has been less successful with his attempts at a
positive account of the relationship between functional and the physical. And
in so far as his accounts have been successful, I do not think they continue to
support his rejection of emergent causation.
Kim
bases most of his arguments against emergent functional causation on his belief
that functional concepts can be reduced to identities with physical concepts by
disjunctive bridge laws. These disjunctive bridge laws
produce genuine identities between ordered pairs of physical and functional
concepts, but the complete functional concept is the entire set of ordered
pairs tied together only by disjunction. For example, there is nothing physical
in common between a biological retina and its silicon analog made by an AI
laboratory. There are, however, certain physical characteristics that enable
the biological retina to perform its function (call those characteristics B),
and other very different physical characteristics that enable a silicon retina
to perform its function (call those S). According to Kim, the concept of retina
reduces to being either B or S or any other (known?) set of physical
characteristics that can perform the function of being a retina. Kim admits that
one characteristic ordinarily associated with concepts is missing from this
definition. Most of us think that "the sharing of a property must ensure
resemblance in some respect" and there is no denying
that "the disjunctive operation does not preserve this crucial
feature of propertyhood." (Kim
p.153). But Kim adds "I do not find these
arguments compelling. It isn't at all obvious that we must be bound by such a
narrow and restrictive conception of. . . properties.
. . in the present context." (ibid.). Hopefully
the following arguments will be compelling enough to make the need for
resemblance more obvious.
If
ever there was an essential characteristic of a concept, it would be its ability to be used in making a judgment about a newly
encountered case. If I am shown fifteen objects and told that they are all
apples, and then am not able to recognize a sixteenth apple as such when I see
it, I do not have the concept of apple. And it is clearly impossible to make a
fresh judgment with a disjunctive concept of the sort that Kim wants to claim
is identical to the functional concept. In contrast, real functional concepts
that are supposedly identical to this kind of disjunction can
be used for making novel judgments. A biologist who understands the concept of
"retina" can study a newly discovered animal, and by a combination of
observation and experiment, make a reasonably informed judgment about what and
where its retina is, even if it were made out of a physical substance that was
completely different from any other retina he had ever seen. He could never
have acquired this expertise if he learned this concept as a disjunction of
physical attributes.
Imagine
a biology class in which the teacher announced "Class, today we are going
to study the retina" and then each student was given a silicon photo
voltaic cell, a human retina, the compound eye of a fly or a hornet, and the
light-sensitive spot of a planarian. Then imagine
that the student spent the entire class (or her entire life) analyzing the
chemical composition, weight, temperature, etc. of these four bits of physical
flotsam. She would never come any closer to understanding what a retina is, or
be able to speculate what some other previously unencountered
retina would consist of. This is because what determines the defining
characteristics of a retina is not its physical characteristics, but the
function that it performs within the context of a visual system. To understand
such a system, it is not enough to know what its parts are made of. One must
also be able to understand how a similar system could be made out of completely
different physical parts.
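The contrast between a functional concept and a disjunction of its past realizers can be sketched in code. This is entirely my own illustration (the class names and numbers are hypothetical): a functional test for "retina" checks what a candidate does, and so supports a fresh judgment about a novel realizer; a Kim-style disjunction of known physical substrates can only recite its own history.

```python
class BiologicalRetina:
    substrate = "protein"              # physical characteristics "B"
    def transduce(self, light):
        return 0.9 * light             # converts light into a usable signal

class SiliconRetina:
    substrate = "silicon"              # physical characteristics "S"
    def transduce(self, light):
        return 0.7 * light

class NovelRetina:
    substrate = "gallium arsenide"     # a realizer never encountered before
    def transduce(self, light):
        return 0.8 * light

def is_retina_functional(thing, probe=10.0):
    # Functional criterion: does it transduce light into a usable signal?
    return hasattr(thing, "transduce") and thing.transduce(probe) > 0.0

KNOWN_REALIZERS = {"protein", "silicon"}   # Kim's disjunction: "either B or S"
def is_retina_disjunctive(thing):
    # Disjunctive criterion: is it on the laundry list of past realizers?
    return getattr(thing, "substrate", None) in KNOWN_REALIZERS

# The functional concept generalizes to the novel case; the disjunction does not.
print(is_retina_functional(NovelRetina()))   # True
print(is_retina_disjunctive(NovelRetina()))  # False
```

The disjunctive test can always be patched after the fact by adding the new substrate to the list, but that is precisely the point: it has a history and no future.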
Almost
any physical substance or object could perform almost any function if the right
context were built around it. Recall all the thought experiments about minds
made out of mouse traps, or a billion Chinese
manipulating flash cards. Couldn't we also build a heart out of a billion
Chinese who circulated some fluid that was functionally equivalent to blood by
means of a bucket brigade system? If so, the disjunctive concepts of both
"mind' and 'heart" would both have to include the entire nation of China. In fact, in order for any disjunction of this sort to
be completely coextensive with a functional concept, it would have to include
almost everything in the universe, including all of the contents of almost
every other functional concept. The reason we don't encounter this ridiculous
combinatorial explosion when we imagine ourselves analyzing a functional
concept is that we see ourselves as starting with a functional concept at use
in the world. We then see ourselves as documenting the history of that
functional concept by analyzing it into a finite disjunction of physical
properties. But a real concept has a future as well as a history, and one could
never predict a concept's future by analyzing its past into a laundry list of
physical terms.
Kim
repeatedly asserts that the special sciences must be connected to physics by
bridge laws{5},
and he considers this to be the main reason for dismissing the possibility of
emergent causality (especially in chapter 6 of Kim
1993). His arguments are more conceptual than empirical, and they are
consistent given their assumptions. But they can also serve as a two-edged
sword if we question some of those assumptions. If the sciences are not
connected by bridge laws, then Kim's arguments imply that there is no longer
any reason to deny the existence of emergent causality. And as it turns out,
bridge laws are simply not to be found when we look at the most successful
reductions in the history of science. In fact, whenever we attempt to connect
one domain of discourse to another, even within physics itself,
ambiguity and multiple realizability
seem to be unavoidable.
Consider
Berent Enc's reply to the
functionalist critique of reduction, which is quoted favorably in Paul Churchland 1989 and in Patricia Churchland
1986. Enc discovered evidence that
the multiple instantiability which supposedly renders reduction impossible in psychology
is present in the physical sciences as well. As Paul Churchland
puts it:
temperature in a gas is mean molecular KE of the constituent molecules, but in a classical solid, temperature is mean maximum molecular KE. In a plasma,
it is a complex mix of differently embodied energies (ions, electrons, photons)
depending on just how high the temperature is. And in a vacuum, it is a
specific wavelength distribution among the electromagnetic waves coursing
through that vacuum. (1989, p. 285)
This
fascinating observation does show that physical and psychological categories
are not as different as they seem at first, and this is obviously significant.
If there is no essential difference between the kind of reduction
which is possible within physics, and the way mental states can be
reduced to brain states, there is no reason to assume that psychology is an
autonomous specialty. This is an important point with regard to the sociology
of knowledge. But if reductionism includes the denial of emergent causality,
none of this justifies Patricia Churchland's
conclusion that "if psychology is no worse off than thermodynamics, then
reductionists can be cheerful indeed." (1986, p. 357) This cheerfulness is
only justified if we accept as an unquestioned given that whatever relationship exists between branches of physics is automatically describable as reduction. For the reasons stated above, multiply realized properties cannot be genuinely reduced by means of disjunctive bridge laws. Because macroscopic physical entities like temperature are multiply realized at the microscopic level, we must tolerate the
existence of emergent properties (i.e. incomplete reductions) even within
physics itself. Therefore, because reduction to some fundamental ontology is
not a necessary condition in physics, descriptions of mental causation (such as
those provided by psychology{6}) do not have to be on an entirely different footing from
physics to escape reduction. On the contrary, it would be unfair and arbitrary
to demand reduction for mental causation when it is not demanded of the various
branches of physics.
John
Bickle's "Psycho-Neural Reduction: the New
Wave" eloquently argues that it is still possible to salvage the word
"reduction" in a meaningful way, and in some sense he is surely
correct. But we still lose what was once the original root meaning of the word,
and this will unquestionably have an impact on reduction's ability to epiphenomenalize mental causation. In what sense can we
call a relationship between two terms in a theory a reduction if it doesn't
actually reduce? There is no question that reductions achieved with bridge laws deserve the name, for bridge laws actually reduced because they created logical identities.
At the end of a bridge law reduction, you are left with one entity instead of
two. Because new wave reductionism has no bridge laws, there are no really
strict identities, and if there are no identities the relationship between one
domain and another becomes very confusing. And it appears that the ability of the parts to causally control the whole gets lost in the confusion. This
apparently gives genuine causal autonomy to macroscopic events, freeing them
from the control of the behavior of their submicroscopic parts.
One
way of cutting through this confusion is with the concept of elimination.
If we say that the reduced theory is simply falsified by the reducing theory,
then we have genuinely reduced the number of entities because one of the two
entities is thrown away. But if we say that all new wave reductions are
eliminations, we are faced with universal skepticism unless we eventually find
the one true theory. Since the history of science gives us little reason to think we will find the one true theory, we are forced to accept the following argument:
1) Each theory is falsified by the theory that succeeds it.
2) This process never ends, because every theory is followed by a better theory.
Therefore:
3) All theories we ever had or will have are false.
The
only way that New Wave reductionism can escape this kind of skepticism is to
admit that the reduced theory is rarely cock-eyed enough to justify the belief that it is completely false, and therefore we must embrace a compromise
position. Bickle (and his predecessors Hooker and the
Churchlands) consequently arrange reductive
identification and elimination on a continuum, admitting that there is no sharp
line that separates them.
This
is an accurate description of how practicing scientists actually deal with the
problem. Sometimes entities from common-sense and/or
old scientific theories are unambiguously identified with the entities
described in the new theory. For example, no one has any trouble with the claim
that light is electromagnetic energy. Other times the difference between the
old theory and the new is so great that the old theory
is seen to be simply falsified. For example, everyone agrees that there is no
such thing as phlogiston. But between these two extremes there is no unambiguous criterion to differentiate between identity and elimination, and those who defend new wave reductionism are often forced to rely on somewhat poetic language to explain how the distinction is made.
Here's what Paul Churchland says on the topic, for
example:
full fledged identity statements are licensed by the comparative
smoothness of the relevant reduction (i.e. the limiting assumptions are
not wildly counterfactual, all or most of [the old theory's] principles find
close analogs in [the new theory] etc.) . . . and thus allows the old
theory to retain all or most of its ontological integrity. (ibid, p. 50)
Note
the modest criteria for determining smoothness. Limiting assumptions can be
counterfactual as long as they are not wildly so, the principles of the two
theories can be different as long as they are close analogs, the old theory
need not retain all of its ontological integrity as long as it retains most of
it. And yet all of this similarity gets accepted as identity, for all practical
purposes. We cannot say, for example, that the "smoothness quotient"
for the light-electromagnetic energy relationship is .87, and therefore light is electromagnetic energy, whereas the
relationship between caloric and molecular motion has an "S.Q." of
only .34 and therefore there is no such thing as caloric.
Bickle 1998 does attempt to quantify
something like this smoothness relationship in chapter three. But although he
does manage to successfully quantify many aspects of reduction, these all rest
on the concept of an "ontological reductive link" (ORL) that connects the reduced theory to the reducing theory. Bickle defines
the concept of ORL with a great deal of quantitative precision, but also admits
that the ORLs in really interesting reductions correct and change the reduced
theory, resulting in what he calls "blurs". He also defines the concept of blurs with a great deal of quantitative precision. The crucial
ontological question, however, is how much blur is permitted before we have to
call a reduction an elimination. Bickle
admits that "context dependent and mostly pragmatic considerations figure
into determining admissible blurs", and that the best he can do is give
some "weak necessary conditions" (p. 86).
This
description is appropriate for philosophy of science, for it is every bit as
ambiguous as the concepts it is attempting to explain. But when one denies the
reality of emergent causation, this goes beyond the boundaries of philosophy of
science. Scientists don't have to deal (even subliminally) with such questions
while doing science, and consequently philosophy of science doesn't have to ask
them either. But when we either affirm or deny the existence of emergent
causality, we must extrapolate scientific truth into metaphysics, which goes
beyond the question of how clear an understanding we need to run a good
experiment. We would, however, be justified in denying the existence of
emergent causation, if we were reasonably certain that scientific truth is
actually incompatible with emergent causation, and that to accept such a
possibility would require us to reject the scientific method. This, as I said
earlier, is Kim's main justification for the principle of mereological
supervenience that allegedly invalidates all claims
of emergent causation.
However,
the view of scientific progress revealed by the New Wave reductionism of Bickle, Hooker, and Churchland
seems to indicate exactly the opposite, i.e. that belief in emergent causation
is not only compatible with modern science, but may be the only thing that can
save it from skepticism. If we believe that the entities described by the
reduced discourse are genuinely real ontologically, and yet not reducible in
the old tough sense produced by elimination or identity, an inevitable
consequence of this is that the reduced and the reducer are to some degree
ontologically independent of each other. The only way we could eliminate this
ontological independence would be to say that these blurs are apparent, rather
than real, and that everything that occurs at the macroscopic level is actually
identical to something at the microscopic level, even if we don't know what it
is. Given that the newest work in history and philosophy of science reveals
that we have never found such identities, and probably never will, we therefore
have a choice between accepting the partial ontological independence of each
layer, or claiming that reality has an essential
nature that science will probably never reveal. If we accept the latter
alternative, however, we can no longer use scientific realism as a way of
dismissing emergent causality.
If
the connections between entities described by different realms of discourse are
not identities but only isomorphisms, we will not get
necessary causal connections between the two realms, only probable ones. If A =
B, then one can infer from this that if A is F, then
B is F. But if A is only similar to B, the only thing we can conclude from
"A is F" is that "B is probably F". (This is not a strictly legitimate logical inference, but it is an inference we make in both science and daily life.) The connection will only incline but not necessitate, to quote Kane quoting Leibniz. If there is any blurring at
all between the entities referred to in macrodiscourse
and those referred to in microdiscourse, the laws
that govern the entities in one domain cannot necessitate the behavior of
entities in the other. And the degree of blurriness will correlate inversely
with the force of the inclination that connects the two domains{7}.
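The contrast between the two inference patterns can be put schematically. (The formalization is mine, not Kim's or Bickle's; the second schema is the merely probable inference described above, not a valid rule of standard logic.)

```latex
% Strict identity licenses substitution of identicals (Leibniz's law):
%   from A = B and F(A), infer F(B).
\[
A = B,\ F(A) \ \vdash\ F(B)
\]
% Mere isomorphism licenses only a defeasible analogue:
%   from A \cong B and F(A), infer only that F(B) is probable.
\[
A \cong B,\ F(A) \ \rightsquigarrow\ \text{probably } F(B)
\]
```

On this reading, the blurrier the isomorphism between the two domains, the weaker the resulting inclination.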
This
argument is in some ways similar to Davidson's argument for anomalous monism,
but unlike Davidson I am a pluralist, not a monist, which saves me from many of
the objections raised against Davidson. Like Kim, I agree that if psychological
events are genuinely anomalous, then they must have their own causal powers,
and therefore we lose causal monism. But unlike Kim, I do not see this as a
problem, because there is no longer compelling evidence that we always have
causal monism even within physics itself. Kim believed that the lack of
emergent causality (implied by what he called mereological
supervenience, or the causal dependence of the whole
on its parts) was an essential characteristic of all physical explanations. His
theory of epiphenomenal causation discussed earlier cannot work without this
assumption. But if we are to take philosophical naturalism really seriously, we
cannot treat mereological supervenience
(or anything else) as an a priori truth. The evidence I have presented
indicates that macroscopic physical events do have emergent causal powers. And
as far as I can see, Kim is correct in claiming that we cannot grant causal
powers to other macroscopic physical events and still deny
those powers to psychological events. If rocks can have emergent causal powers,
there is no reason to assume that minds cannot. And if the causal powers of
minds are emergent, they can be as distinctively mental as the causal powers of
rocks are distinctively geological.
Because
of the context in which this discussion takes place, it's also important to
repeat that this ontological independence does not imply that it is possible to
have an autonomous psychology. Psychology still consists at least partly of the
physical laws that impinge upon the psychological. Understanding how those laws
affect those dynamic systems we call human organisms is thus an essential part
of any scientific psychology. But if the laws that relate atoms to organic and
other dynamic systems are probabilistic, not deterministic, those laws do not
eliminate emergent causal systems, they only enable us
to make probabilistic predictions about their behavior. This, of course, is the
way it has always been, for physical science has never been able to predict the
behavior of biological systems with certainty. Accepting that biological
systems have emergent causality requires us to reject the Laplacean
faith that all future explanations are atomistic, but it does not require us to
reject any scientific facts.
Nor
does this grant any special status to folk psychology
as we currently understand it. What folk psychology embodies is our folk
understanding of this emergent causality, and disciplined inquiry will
certainly have as revolutionary an effect on our understanding in this area as
in any other. The effect might even be so revolutionary as to cause physicists
and psychologists to start describing their subject matters with very similar
vocabularies. But regardless of the sociological significance of this change,
it would not eliminate emergent causality in psychological systems.
Causality, Sense, and Reference
The
philosophy of science community rejected bridge laws for empirical reasons, but
there are also other more abstract philosophical considerations that are
relevant here. Even if we put aside questions of upward versus downward
causality, there are problems involved with establishing identities between
entities named in any two independently conceived domains
of discourse. And it is especially difficult to establish identities
which preserve causal powers. The philosophical distinction between
sense and reference, which seems straightforward with a small handful of
examples like "Cicero" and "Tully" or "the Morning Star" and "the Evening Star", is what underlies the idea that two
domains of discourse can describe exactly the same entity. But too much focus
on those examples has made it misleadingly easy to assume that the same
individual described under two different descriptions would always have
identical causal powers. From this it has been easy to conclude that describing
a brain as a group of molecules would take nothing away from its causal powers,
and that describing a mind as a brain would take nothing away from its causal
powers. When we move away from the paradigm cases of Cicero and Venus, however,
this is not obvious, even if we do not move very far. Consider the following
dialogue:
A:
Is it true that Socrates' death was caused by drinking a cup
of hemlock?
B: Certainly.
A:
Is it true that Xantippe's becoming a widow was also caused
by his drinking the hemlock?
B: That is also true.
A:
Why were both of these events caused by drinking the hemlock?
B:
Because they were the same event, so talking about "both events" is
not really correct.
A:
Consider a different example: the cup's being empty and Socrates' death were
both caused by his drinking the hemlock, yet those are two separate events.
That is a very different case from Socrates' death and Xantippe's becoming a
widow, is it not?
B: Without a doubt. Socrates' death and Xantippe's
becoming a widow are really two different descriptions of the same event, not
two different events.
A:
Excellent. We must therefore conclude that we could have saved Socrates' life
by having him divorce Xantippe.
Kim
1993, chapter 2, and Goldman 1970, chapter 1, deal with this problem by renouncing its original premise: Kim literally, and Goldman effectively, deny that Socrates' death and Xantippe's widowhood are the same
event. Both Kim and Goldman criticize Davidson's attempt to defend the idea
that one event can have several different descriptions (Davidson 1963, 1967),
using arguments similar to the above as reductiones ad absurdum.
Goldman
says that the only way to escape these absurdities is to say that only two synonymous descriptions can refer to
the same event, thus in effect throwing out the sense-reference distinction for
event descriptions. In Chapter 1, he specifically acknowledges that this will
create problems for the kind of identities that exist in scientific reductions,
but says he will side-step those problems until chapter 5 section 5. At that
point, he admits that the synonymy criterion might be too strong, but also
seems almost willing to defend it for physical identities like temperature
because he claims "in the case of temperature and mean kinetic energy,
there is a universal correlation between specific temperatures and specific
values of mean kinetic energy" (Goldman 1970, p. 164). (The facts about the multiple realizability of temperature at the micro-level had not yet been brought to the attention of the philosophy community by Enc and Churchland.)
On the next page, he cites the multiple realizability
of mental properties at the neurological level to be reason for accepting that
mental and neurological properties are not identical,
and uses this fact as the basis for attributing autonomous causal powers to
mental properties.
Kim
basically agrees with Goldman about the individuation of events, even saying
that Brutus' stabbing Caesar and Brutus' killing Caesar are two different
events (Kim 1993, p. 44). Nevertheless, as anyone who
has been reading up to this point should realize, Kim also wants to deny causal
powers to mental events. Can he consistently maintain both positions? Only if
mental events are identical to physical events, in spite of the radically
different ways the two are described. Unlike Goldman, Kim does say that
descriptions don't have to be synonymous in order to
refer to the same event. For Kim, certain minor changes can be made in a
description without shifting its reference, which I will describe in more
detail later. For now, I will just say that Kim clearly has an uphill battle
proving that stabbing and killing are two different events, but that beliefs
and brain states are not.
Some
might want to claim that this difficulty is good reason for rejecting Kim and
Goldman's criteria for event differentiation, and embracing Davidson's identity
theory. In other words, claiming that the killing and the stabbing are the same
event, and therefore there is no problem with saying that brain events are
identical to mental events. But I think that Kim and Goldman are right that
there are too many absurdities that follow from this. If the killing and the
stabbing are identical, then if Caesar had been poisoned, he couldn't have
died. It will probably come as no surprise that I want to show that Kim should
keep his theory of events, and reject the mereological
supervenience that leads to the identity theory of
mind. In order to do this, however, we need to clarify a theory of causality,
which is always closely linked to any theory of events.
We
sometimes speak as if objects are causes. We say, for example, that the match
caused the explosion. But strictly speaking, of course, objects don't cause
anything, except in so far as they participate in events. Both causes and
effects are events, and the only thing that can cause an event is another
event. It was the lighting of the match that caused the explosion, and the
drinking of the hemlock that caused Socrates' death, not the match and the
hemlock all by themselves. Consequently our concepts of "event" and "cause" are very closely linked. Not only are all causes and effects events; we also differentiate events almost exclusively by
considering the causal impacts they give and receive. Subsuming an occurrence
to a single causal law, however, never tells the whole story about an event,
although we usually act and think as if it did.
We
almost always talk as if each event has a single cause: the match caused the
explosion; the hemlock caused Socrates' death. We even argue over which of two
things actually caused something, and many discussions of what is called "overdetermination" seem to imply that one cause should
always be sufficient to bring about any effect. But of course we admit when
pressed that the striking of the match by itself didn't cause the explosion.
The presence of the oxygen also caused the explosion, as did the presence of
the gunpowder, the desire for a united Ireland etc. Yet in most contexts, only
one cause is considered to be crucial. I think that two closely related
criteria are used (sometimes separately, sometimes together) for deciding which
cause is granted the honor of being called the cause.
1)
WHICH FACTOR CHANGED MOST RECENTLY. Presumably, the
oxygen, the gunpowder, and the desire for a united
Ireland, were in existence for some time prior to the explosion. The match was
the most recently introduced factor, and therefore it is referred to as the cause. Emphasizing this aspect of causality makes it appear
that we can distinguish clearly between states and events, and say that only events can be causes. This would make
the presence of things like the gunpowder and the oxygen not really causes, and
enable us to speak of the match as the cause. But
although this distinction is an important one pragmatically, I think it can be
misleading if we use it as a way of attributing genuine metaphysical
responsibility. Without the oxygen, the explosion would not have happened, so
we cannot place the entire causal burden on the match. The only reason that the
state-event distinction is important is that it helps us to zero in on:
2)
WHICH FACTOR WE HAVE THE MOST CHANCE OF GETTING
CONTROL OF. In many cases, this will be one of the factors most vulnerable to
change, which is why we are capable of making it change{8}. When it was discovered that
mosquitoes (biting people) caused malaria, the fact that malaria could only
exist if people had a certain kind of circulatory system was irrelevant,
because there was no known way of changing people's circulatory system. There
were, however, ways of getting rid of mosquitoes, so once their part in the
coming to existence of malaria was discovered, the cry went out that we now
knew what caused malaria.
I
am going to refer to this common sense concept of causality as occurrent causality, and I want to
distinguish it from what I will call metaphysical
causality. When I refer to the metaphysical cause of an event, I mean
everything in the universe that was responsible for that event
taking place, whether anyone knew about it, or was able to have any
control of it. A metaphysical cause, unlike an occurrent cause, cannot be described with a single
sentence. But it is ontologically more fundamental, because it is less
dependent on particular perspectives and projects than is occurrent
causality. When we say that the explosion was caused by the striking of the match, that may be very useful in certain contexts, such as
a physics class. But if we are trying to understand
why an explosion occurred in Belfast, no one would be satisfied if a political
commentator appeared on the BBC and told everyone that it occurred because
someone lit a match. On the other hand, if the IRA had been without explosives
for several weeks, and then recently acquired some gunpowder, it would make
sense to say that the explosion was caused by (the
acquisition of) the gunpowder. If the explosion had been detonated on
the moon, it might have been that the astronaut's carefully timed release of
oxygen near a heating element could have been the occurrent
cause of the explosion, and so on.
These points have been made frequently by J.S. Mill and
others
(Mill 1851, vol. 1). But it does not appear to me that their
full significance has been grasped by those who acknowledge them. This
is because within the context of a logical atomist philosophy, such as Hume's
or that of the Wittgenstein of the "Tractatus",
a single factor, describable by a single sentence, could be entirely
responsible for an event's taking place. In such an atomistic universe, it
would be possible for there to be no difference
between occurrent and metaphysical causality. Science
would consist of a set of independent sentences of this sort, each of which
described a distinct causal relation, and each of which could be true or false
independently of the others.
However,
in the holistic post-atomistic world we have inherited from Quine
and Sellars, metaphysical causality must be granted a
much wider extension than occurrent causality. It
could be that the net must be thrown so wide that it would include the entire
universe. But I think it is more likely that every event has a nexus of
responsibility that caused it to occur, and that an event outside of that nexus
did not cause the event to occur. If we were going to list all the events and
circumstances that metaphysically caused a particular explosion in Ireland, the
list would probably be so long that we could never finish writing it, and would
include elements that no one ever would or could know about. But the fact that
someone ate three mouthfuls of rice in Singapore two weeks earlier would
probably not be on that list, and neither would millions of other facts too
numerous to mention. Even though certain interpretations of chaos theory might
even include things as surprising as those three mouthfuls of rice in the
causal nexus of an Irish explosion, there would still probably be some border
where the nexus of responsibility would stop. In a completely deterministic
universe, the border would be sharp and precise. In a probabilistic universe,
the border would be as blurry as that universe was probabilistic. How wide
those borders extend, and how blurry they are, is an
empirical question in each case, although fully answering such questions is
almost certainly beyond our capabilities.
So
how does all of this relate to our opening paradox about Socrates drinking the
hemlock? I think that part of the job of an event description is to etch out
the rough outlines of the nexus of responsibility which
is the complete metaphysical cause of an event. When we describe the same event
in two different ways, we often end up referring to two different nexa of responsibility{9}, and thus end up giving the event different causal powers.
Goldman, as I said earlier, deals with this problem by saying that only synonymous descriptions can refer to the same event. This
means that each event can have only one property, and this is the property that
gives it its causal character. Kim's position is less
extreme, saying that although each event has a unique constitutive
property, we must make a distinction between properties constitutive
of events and properties exemplified by them.
An
example should make this clear: the property of dying is a constitutive
property of the event . . . Socrates' dying at t . . . the property of occurring in a prison is a property this event exemplifies, but is not constitutive of it . . . If Socrates' drinking hemlock (at t) was the cause of his dying (at t'), the two generic events, drinking hemlock and dying, must fulfill the requirement of lawlike constant conjunction (Kim 1993, p. 12).
I
would like to give this distinction another set of labels, because this would
clarify its relationship to the problems we have been discussing. I would like
to call the properties that constitute events causal
properties, and those that are only exemplified by them epiphenomenal
properties. Earlier, we used Kim's term "epiphenomenal causation" to
describe a relationship between two events which appears to be a cause and
effect relationship, but in fact is merely a reflection of some other
underlying causal process. If we bracket the assumption that causal processes
are always "underlying" in some sense (an assumption that really
springs from assuming that all causation must be that of parts controlling their
wholes), we can see that the concept of "epiphenomenal" can be valid
even when we are talking about entities on the same mereological
level. Dan Dennett, for example, points out that the fact that you cast a shadow is epiphenomenal when you make yourself a cup of tea, because the shadow has no causal impact on the tea-making process (even though it does have a cooling effect on the surface it spreads itself over) (Dennett 1991, p. 402). This seems to be essentially the same
distinction as the one between the causal property of drinking hemlock and the
epiphenomenal property of being in a prison. When a causal relationship exists
between two events, some of the characteristics of the event
which is the cause are actually responsible for the effect occurring,
and some of them are just along for the ride. The latter attributes are
epiphenomenal in the sense I am using the term here.
It
may be possible to describe the same event in two different ways without
referring to two different nexa of responsibility.
But in many cases, the referred causal nexus will shift when we move from one
description to another. When this happens, certain attributes will be genuinely
causal under one description of an event, and only epiphenomenal under another
description. Under the description "Socrates' death" the fact that
Socrates is married to Xantippe is epiphenomenal. Under the description,
"Xantippe's becoming a widow", the fact that Socrates is married to
Xantippe is causal. Thus under the first description, we can have a causal
impact on the event only by saving Socrates' life. Under the second description
we can have a causal impact on the event by either saving his life or having
him divorce Xantippe. Even though we are inclined to think of them as the same
event, they are metaphysically caused by two different nexa
of responsibility. Similarly in the case of Brutus killing Caesar the fact that
he used a knife is epiphenomenal, but the fact that Caesar's heart stopped
beating is causally essential. Brutus could have killed Caesar with a club or
with poison, and he still would have killed him as long as Caesar's heart
stopped at the end of the process. Conversely, in the case of Brutus stabbing
Caesar, the fact that Caesar's heart stopped is causally epiphenomenal. Brutus
would still have stabbed Caesar even if he had only wounded him.
Physical
descriptions and mental descriptions outline different nexa
of responsibility, and therefore we can never substitute one for the other,
even when they both refer to the same events. Physical causes are not the only
"real" causes, and mental causes are not dismissable
as mere epiphenomena. Under physical descriptions, physical attributes are
genuinely causal, and mental attributes are epiphenomenal. But under mental
descriptions, physical attributes are epiphenomenal and mental attributes are
genuinely causal. This fact was discovered when the first arguments for
functionalism appeared, and this is why, despite what Kim and many others think, the arguments for multiple realizability
prove the existence of genuine mental causality. The reason this is not obvious
is that Kim's dilemma, (quoted earlier in this article) presupposes an
atomistic causality that conflates occurrent and
metaphysical causality. This dilemma only occurs if we assume that a single
physical cause P can produce an effect P* all by itself. Consider again the two
posited layers of explanation that give rise to the dilemma.
M causes M*
P causes P*
In this diagram, a single mental event M is seen as causing another mental event
M*. This mental event is physically realized (for example in a brain state) by
a physical event P, which causes P*, i.e. the physical realization of M*.
When
the situation is described this way it seems undeniable that the top layer does
no real work. But when we remember that all causal explanations presuppose a
nexus of responsibility, we can see that physically describing a mental event
takes it out of the nexus of responsibility which is
the metaphysical cause of the event. We cannot describe the nexus of
responsibility of a mental event in purely physical terms without losing the
ability to distinguish the genuinely causal from the epiphenomenal. This can be
illustrated by a well-worn example. Let us assume that
P is a neurological event taking place in the brain of
Sellars' friend Jones. Let us replace P with another
physical event Q that takes place in a silicon module newly installed in Jones'
brain, and which now performs the exact same functional role
as did the neurological event P. Because the silicon event Q is functionally
identical to the neurological event P, we still get M* resulting from Q just as
we got it from P. This means that with respect to M*'s coming into being, the
difference between P and Q is epiphenomenal, because it is only physical. That
difference has no causal effect on whether M* occurs or not, just as whether
you are casting a shadow or not makes no difference to the process of making
tea. If the sun went down and your shadow disappeared, the tea making process
would continue triumphantly on, and therefore the shadow is epiphenomenal with
respect to that process. Similarly, if Socrates had taken the hemlock in the
public square, rather than in the prison, he would still have died. In the same
way, when we change neurological state P to silicon state Q we still get M*,
and therefore the physical characteristics that differentiate P from Q are
epiphenomenal with respect to the mental processes{10}.
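The substitution of Q for P can be loosely illustrated with a programming analogy (mine, not part of the original argument): two implementations that realize the same functional role are interchangeable with respect to the outcome they produce, so the differences between them are, in that context, epiphenomenal. A minimal sketch, with hypothetical names standing in for P, Q, and M*:

```python
# A loose programming analogy for multiple realizability (an illustration,
# not the author's example). The "mental" outcome M* is fixed by the
# functional role alone, so two physically different realizers that play
# the same role are interchangeable with respect to M*.

class NeuralRealizer:
    """Plays the role of the neurological event P: sums 0..n by iteration."""
    def respond(self, n: int) -> int:
        total = 0
        for i in range(n + 1):
            total += i
        return total

class SiliconRealizer:
    """Plays the role of the silicon event Q: the same function,
    realized by a closed-form formula instead of iteration."""
    def respond(self, n: int) -> int:
        return n * (n + 1) // 2

def produce_M_star(realizer, stimulus: int) -> int:
    # The outcome depends only on the functional role the realizer plays,
    # not on how that role is physically (here: algorithmically) realized.
    return realizer.respond(stimulus)

# Swapping realizers leaves M* unchanged; the implementation differences
# between P and Q are epiphenomenal with respect to M*.
assert produce_M_star(NeuralRealizer(), 10) == produce_M_star(SiliconRealizer(), 10) == 55
```

On this analogy, what the text calls the "functional role" corresponds to the shared interface, and the physical differences between P and Q correspond to implementation details hidden behind it.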
A
great deal of significance is usually attached to the assumption that even if
the neurons are replaced by silicon, the whole brain can still be described in
physical terms. But strictly speaking, this is not true. What is true is that each part of the system can be described physically, but that
does not mean that all of it can be described physically.
To describe something physically means to take it out of its original context
of discourse and relate it to the discourse of physics. But once it is in its
new context, that part would lose whatever emergent
causal powers it may have had when it was within the original system. It still
possesses some of its causal powers, and a knowledge of
those remaining causal powers is often useful in understanding that system. It
is essential in enabling us to fix the system when it is broken, for example.
From this, many people have concluded that when a system breaks, it is
revealing its true (physical) nature, and therefore the physical stance has a
higher ontological status than whatever stance is necessary to understand the
system when it is working. But this does not follow. If a system does possess emergent properties, it would lose those
properties when the system doesn't work anymore. And this would be true even
though every single part of the system can be studied from the physical stance.
Another
reason that the physical stance is assumed to describe the only true causality
is that physical laws are presumably the only laws that are exceptionless.
But this ignores the fact that a physical law is only universally true when one
adds the phrase "all other things being equal". Because things
frequently are close enough to being equal, physical laws are the most useful
and effective predictors we have. But this happy circumstance, although good
enough for occurrent causality, does not grant
physical laws exceptionless metaphysical causality.
No single physical law is ever entirely responsible for an event's taking
place, even though our knowledge of that law may enable us to predict that it
will take place. I think, however, that old atomistic habits die hard, which is
why it is usually assumed that physical laws are metaphysically, and not just occurrently, exceptionless. Why
else, for example, would Kim say that "what we
ordinarily take as a cause is seldom by itself a necessary or a
sufficient condition for the event it is said to have caused" (p.21
emphasis added). If Kim were really free of the old atomistic presuppositions,
he would have acknowledged that a single cause is not just seldom sufficient, it is never
sufficient. There are always changes that can be made in the background
conditions that could stop any physical cause from producing its effect.
As
Kim points out, we have very deeply held convictions that there can be only one
complete and independent explanation for any single event or phenomenon (Kim 1993 p.250). If physical laws really were exceptionless, we would have good (perhaps even
unassailable) reasons for believing that physics provided such explanations.
And from this, a determinism that ruled out emergent causality (i.e. made it
epiphenomenal) seems an inevitable corollary. But the predictive power of our
laws of physics is not entirely intrinsic to them, it is also a function of our
good luck in living in a universe where the phrase "all other things being
equal" is extremely helpful. Because of this, it is easy to forget that
our physical laws do not provide exhaustive accounts of any event's complete
metaphysical cause. Consequently, there is nothing in our science that
contradicts the possibility that all causality, including physical causality,
only inclines, and does not necessitate. If this were true, it would explain
why dynamic systems cannot be explained by the laws that govern their parts,
why bridge laws cannot be found linking macroscopic and microscopic phenomena,
and why what is causal and what is epiphenomenal varies depending on how one
describes an event. We cannot, of course, prove that there is not a single
determinate system that our science cannot discover, and which governs
everything by laws that permit no emergence and render all causes we know
about epiphenomenal. But we can no longer claim that such a
metaphysics is the only one compatible with science. On the contrary,
given the state of science today, the existence of emergent causality seems
more likely than not.
Notes
{1}There is another alternative which is
frequently discussed, which both Kim and I feel is fundamentally
self-contradictory: Mental Epiphenomenalism, which says that M exists but has no
causal powers. As the purpose of this paper is to demonstrate why mental events
and other emergent properties have causal powers, I will not discuss this kind
of epiphenomenalism, although later on I will distinguish it from another kind
of epiphenomenalism that I do think is meaningful. [Back]
{2} Kim seems to be acknowledging the possibility of something like pluralism
when he says "The ontological picture that has
dominated contemporary thinking on the mind body problem is strikingly
different from the Cartesian picture. The Cartesian model of a bifurcated world
has been replaced by a layered world, a hierarchically stratified structure
of levels or orders of entities and their characteristic properties." (Kim
1993 p.337) Both Kim and I agree, however, that this striking difference is
more apparent than real, because non-reductive physicalism
cannot consistently grant causal powers to any layer except for the bottom
(physical) layer, which is assumed to have no emergent properties. The main
theme of chapter 17 in Kim 1993 is why the assumptions that Kim shares with the
non-reductive physicalists render this layered
ontology incoherent. Consequently, Kim usually speaks as if he believes the
only alternative to materialism is dualism, and this is the position I will
attribute to him for simplicity's sake.[Back]
{3}Silberstein 1998 also argues that there are quantum effects in EPR-Bohm systems that can only be explained by accepting the
existence of emergent properties. He also claims that once we have accepted the
existence of emergence in quantum physics, there is
no reason to reject it anywhere else (e.g. in theories of consciousness or
mental causation). His arguments are similar to mine, although they are based
on different data, and provide many more strands in the tissue of arguments
needed to justify claims for emergent properties in physics. Arguments either
for or against emergence are, as Kim admits, empirically based, and therefore
gradually acquire credibility as more and more supporting data is discovered. [Back]
{4}Some might say that quantum mechanics only
justifies the existence of microindeterminacy, not macroindeterminacy. Garson 1993, however, argues that
because chaotic nonlinear effects are so sensitive to initial boundary
conditions, it is plausible that even quantum indeterminacies could be
amplified by the chaotic patterns into macroscopic indeterminacies. Kane 1996
uses this argument in his justification for the possibility of free will. [Back]
{5}For example: "each supervenient
property necessarily has a coextensive property in the base family" (Kim
1993 p.72) or: "The reduction of one theory to another is thought to be
accomplished when the laws of the reduced theory are shown to be derived from
the laws of the reducer theory, with the help of 'bridge principles'"
(ibid. p. 150). Kim admits in an accompanying footnote
that "whether this is the most appropriate model . . . could be debated". But apparently he feels
that the bridge law model is good enough for him, for he continues to work with
it for the rest of the chapter. See also p. 248 and p. 260. [Back]
{6}The fact that many determinists, such as B.F.
Skinner, also want to have an autonomous kind of mental causation shows that
mental causation is distinct from free will, and that the intentional stance is
not the only possible theory of mental causation. As I said earlier, it would
require another set of arguments to get from mental causation to free will, and
that is a subject for another time. [Back]
{7}Chapter 5, section 5 of Kim 1993 is titled
"Global Supervenience Strengthened: Similarity vs. Indiscernibility", and here Kim considers the
possibility that the macroscopic supervenes only loosely on microscopic events.
Surprisingly, he seems quite comfortable with this idea, perhaps because it was suggested by a commentator, and he couldn't see
anything immediately wrong with it. For the reasons I gave above, however, any
such looseness would imply some level of emergent macroscopic causality. [Back]
{8}But not always, of course. We say that the
hurricane caused the destruction of the house, even though we had no hope of
saving the house from the hurricane. But identifying a single factor as the
cause is always the first step in controlling that cause, even if human frailty makes further steps impossible. [Back]
{9}I realize that Kim goes to great lengths to prove not
only that Socrates' death and Xantippe's widowing are
not the same event, but also that the connection between them is not causal. I
don't think his arguments still hold up when one is talking about what I call
metaphysical causality, though I will make no attempt to demonstrate that here.
Kim's stabbing/killing example is included for those who have trouble with the
Socrates/Xantippe example. [Back]
{10}Amie Thomasson objected
to this example (in correspondence) by saying that substitutability
does not imply epiphenomenality. We could, she
suggested, replace a wooden table leg with a metal one (or with a cardboard
box) and the table would still stay up. But this does not prove the wooden
table leg had no causal role in keeping the table up. I found this objection
convincing until I realized that it was confusing substituting
an object with substituting an attribute. It was not the woodenness
of the table leg that was holding up the table, it was its rigidity, which is
why it was possible to substitute a variety of other non-wooden objects that
all possessed the functional attribute of rigidity.
Consequently the woodenness of the table leg was
epiphenomenal in this context, and its rigidity was
causal. (Rigidity, by the way, is a functional
attribute, not a physical one, because it can be multiply realized in a
variety of physical substrates depending on the context in which it occurs. A
piece of spaghetti could be rigid enough to support a
cardboard dollhouse table, and a wooden table leg may
not be rigid enough to support a heavy marble table.) [Back]
Bibliography
Beckermann, A., Flohr, H., and Kim, J., eds. (1992) Emergence or Reduction? De Gruyter, Berlin
Bickle, John (1992a) "Multiple Realizability and Psychophysical Reduction" Behavior and Philosophy vol. 20, #1
Bickle, John (1992b) "Mental Anomaly and the New Mind-Brain Reductionism" Philosophy of Science 59, pp. 217-230
Bickle, John (1992c) "Revisionary Physicalism" Biology and Philosophy 7: 411-430
Bickle, John (1993) "Connectionism, Eliminativism, and the Semantic View of Theories" Erkenntnis 39: 359-382
Bickle, John (1998) Psychoneural Reduction: The New Wave. MIT Press, Cambridge, Mass.
Churchland, Patricia (1983) Neurophilosophy. MIT Press, Cambridge, Mass.
Churchland, Paul (1989) A Neurocomputational Perspective. MIT Press, Cambridge, Mass.
Davidson, D. (1963) "Actions, Reasons, and Causes" The Journal of Philosophy LX, p. 686
Davidson, D. (1967) "The Logical Form of Action Sentences" p. 84 in Nicholas Rescher, ed., The Logic of Decision and Action. University of Pittsburgh Press, Pittsburgh
Dennett, D. (1979) "True Believers: The Intentional Strategy and Why It Works" reprinted in Haugeland, J., ed. (1997) Mind Design II. MIT Press, Cambridge, Mass.
Dennett, D. (1991) Consciousness Explained. Little, Brown, New York
Garson, J. (1993) "Chaos and Free Will." Paper delivered to the American Philosophical Association, Pacific Division Meeting
Goldman, A. (1970) A Theory of Human Action. Prentice Hall, Englewood Cliffs, New Jersey
Hooker, C.A. (1981) "Towards a General Theory of Reduction" Dialogue XX, #1-#3
Kane, R. (1985) Free Will and Values. SUNY Press, Albany
Kane, R. (1996) The Significance of Free Will. Oxford University Press, New York
Kelso, J.A. Scott (1995) Dynamic Patterns. MIT Press, Cambridge, Mass.
Kim, Jaegwon (1993) Supervenience and Mind. Cambridge University Press, Cambridge, England
Mill, J.S. (1851) A System of Logic, Ratiocinative and Inductive, vol. 1. John W. Parker, London
Silberstein, M. (1998) "Emergence and the Mind-Body Problem" Journal of Consciousness Studies vol. 5, no. 4
Sosa, E., and Michael Tooley, eds. (1993) Causation. Oxford University Press
Thomasson, A.L. (1997) "A Non-Reductivist Solution to Mental Causation" (presented at Pacific APA, Berkeley, CA)