Some Different Ways to Think
Ruth Garrett Millikan
University of Connecticut
Daniel Dennett has offered a helpful framework in which to consider the evolution of mind, calling it "the tower of generate and test" (1995, 1996). On the bottom of the tower there are "Darwinian creatures," whose patterns of behavior result from the effects of natural selection alone. Next come "Skinnerian creatures," whose behaviors continue to be modified during their individual lifetimes by trial, reward and punishment. Third are "Popperian creatures," capable of learning, as well, by trying things out in their heads. Last are "Gregorian creatures," who learn through interaction with culture. I have spent some time trying to construct a similarly broad and rough framework in which to consider the evolution of mind, but focused more narrowly on the development of increasingly sophisticated inner representational systems. The idea was to explore possible forms of representation, first within perception, and then within thought as it frees itself from perception. But as I progressed, this simply conceived project soon got out of hand. At times I thought I must be trying to reconstruct the whole of Kant's Critique of Pure Reason in transcendental realist idiom. But in the end I think the project should not be all that difficult, and I have tried to present here, as originally intended, just a skeleton.
The project is to survey something of the variety of ways in which it seems, a priori, possible for a creature to employ inner representations to help govern its behavior. A poor imagination for these possibilities seems likely to hamper empirical studies of how particular animal species in fact do perceive or think. On the other hand, empirical studies, as they proceed, cannot help but alter our ideas of what is possible. Unavoidably, the task is a bootstrapping one.
Dennett called his scheme a tower because each level rested on the one below, not merely in evolutionary progression, but in individual animals. Thus humans, who have built the tallest tower, are at bottom Darwinian, then Skinnerian, then Popperian and finally also Gregorian. Recall that Aristotle's vegetable, animal, and rational souls formed the same kind of tower. Similarly, the simpler kinds of representational systems that I describe may all still be used within humans, more sophisticated ways of perceiving and thinking being reserved for quite sophisticated projects of the whole person. Most of our purposes and most of the facts we take into account may be represented far below the level of belief and desire and intention. Sweet tastes may represent nutritional value, for example, but they are not beliefs about nutritional value, nor is a sweet tooth a desire for nutrition. Similarly, it is not plausible, in general, that the purposes with which we use conventional language forms are expressed in intentions (Millikan 1984, chapter 3).
1. Intentionality: Introducing Intentional Icons and Signals
The most primitive intentional structures that I will describe hardly deserve the title "representations," and certainly they are not "thoughts." I call them "intentional icons," from Brentano's term "intentionality" and C. S. Peirce's term "icon." "Intentionality" is that peculiar property of representations that makes them appear sometimes to bear relations to nonexistent things, for example, to be about states of affairs that aren't actual. Peirce's "icons" are signs that work by bearing a similarity or abstract isomorphism to what they are about. Roughly, intentional icons (inner ones) are plastic states or structures, physical modifications of an organism, caused by its experience, the forms of which vary in a systematic way so as to parallel certain variations in the organism's environment. There are two basic possibilities here. (1) The icon helps to guide the responses of the organism, enabling it to perform certain context-dependent activities in the environment it is actually in, and does this by bearing an isomorphism to the relevant contextual features or structures. Call these "fact icons." (2) The icon helps to guide the responses or activities of the organism so as to produce a structure or state of affairs isomorphic to the icon. Call these "goal icons." By "isomorphism" I mean merely this. There are certain kinds of possible variations (mathematically speaking, "transformations") of the icon that correspond systematically to certain possible variations of the environmental context such that in the perfectly well-functioning animal, every significant type of icon corresponds in a systematic manner to a specific type of environmental structure or state of affairs. We can call the rules of correspondence "mapping rules," and speak of the icon as "mapping" (in the mathematical sense) the corresponding structures or states of affairs in the environment, calling them, in turn, "the mapped affairs."
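The notion of a mapping rule can be sketched in code. This is only a toy model: the dance-like icon, its two parameters, and the duration-to-distance rate are all invented for illustration, not drawn from any real signaling system.

```python
# Toy model of "mapping rules": systematic variations
# ("transformations") of an icon correspond to systematic variations
# of the environmental affair it maps. All parameters are invented.

def mapped_affair(icon):
    """Apply the (invented) mapping rule: icon -> state of affairs.

    The icon is a pair (angle in degrees, duration in seconds); the
    rule maps the angle onto the bearing of a food source and each
    second of duration onto 100 meters of distance.
    """
    angle_deg, duration_s = icon
    return {"bearing_deg": angle_deg, "distance_m": duration_s * 100}

# A transformation of the icon (doubling its duration)...
icon_a = (40, 3.0)
icon_b = (40, 6.0)

# ...corresponds to a definite transformation of the mapped affair:
assert mapped_affair(icon_a)["distance_m"] == 300
assert mapped_affair(icon_b)["distance_m"] == 600

# A misaligned icon still determines a mapped affair under the rule;
# if that affair happens not to be actual, guidance by the icon
# misfires, which is what makes the icon "intentional."
```

The point of the sketch is only that the correspondence holds between *kinds* of variation on each side, not between any particular icon token and the world.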
Given this simple description of "intentional icons," even the Gibsonians, indeed, perhaps especially the Gibsonians, must conclude that animals typically employ intentional icons in perception. If there are systematic mappings of distal features of an organism's environment onto patterns of ambient energy impinging on the organism's perceptual organs, and if the animal is tuned to be directly guided by these patterns in appropriate context-dependent behaviors or "perception-action cycles," this must be because a relevant isomorphism to the distal environment shows up within the perceptual processes of the animal, first in patterns of sensory organ response or transduction, later wherever translation from perception to springs of action occurs. I call these icons "intentional" because if anything should disturb the normal isomorphism between icon and environment, and should the organism proceed to be guided in the usual way by the misaligned icon, a nonadaptive response is the likely result. A misaligned icon is like a false representation of the environment. It apparently bears a mapping relation to a nonexistent state of affairs, that is, it could function properly only if this state of affairs were actual.
There are limiting cases, "zero cases," of intentional icons that I call "intentional signals." Intentional signals are icons that vary significantly only with respect to their time and/or place of occurrence. Consider the neural signal that triggers a protective eye blink reflex. Its job is to help prevent foreign objects from entering the eye. The time at which it occurs corresponds, when things go right, to the time of approach of a foreign object that might otherwise damage the eye. Thus it is a fact signal. This time also corresponds to the time at which the eye-blink response is to be produced. So it is also a goal signal. Vary the time of the signal, and both the time of the potential damage and the time of the needed response vary accordingly. Similarly, adrenalin running in the bloodstream is an intentional signal indicating some simultaneous circumstance requiring a sudden burst of strenuous activity.
Notice that this treatment of intentionality differs fundamentally from treatments, such as Dennett's, that link intentionality to rationality. Rationality is a property of whole animals. Here intentionality is understood to characterize icons used by systems that employ no inference but govern behaviors of subrational animals, and by subpersonal subrational systems within higher animals, as well as by the rational faculties of higher animals. Intentionality occurs on many levels, and information represented on one level within an organism is not routinely available on other levels.
Besides inner intentional icons and signals, there also exist outer ones, such as beaver tail splashes (signals of danger) and bee dances (icons of the location of nectar). I will not discuss outer intentional icons as such, but I will use them occasionally to illustrate general points that apply to both inner and outer icons.
The description of representations and more primitive intentional icons offered above can be interpreted as a variety of what Jerry Fodor calls "informational semantics," about which he remarks, "if meaning is information, then coreferential representations must be synonymous" (Fodor 1997?, p. 12). Given the kind of informational semantics described here, however, intensionality, which allows representations to differ along other dimensions of meaning without differing in extension, can show up in at least two different ways. To see this, consider first the following hypothetical example.
Suppose that Chinese characters worked more simply than they actually do and were mapped one to one directly onto meanings. And imagine that some of these characters corresponded to meaning elements such as tense, number, and so forth, and that these had irregular phonetic transcriptions in Chinese, as they do, say, in English where, for example, past tense may be indicated either by a change in the verb stem or with the suffix "-ed." Now consider using such a system of characters to refer indirectly to the sounds of the Chinese words and sentences whose meanings were directly represented. Contrast this way of forming representations of sounds with the way a perfectly regular alphabetic system might represent the sounds of the same Chinese words and sentences. Many of the same phoneme strings might be mapped using either system, but the mapping rules would articulate these strings differently. One could perform transformations on alphabetic strings representing the sounds of meaningful sentences that would produce new representations of sounds but that had no meanings, hence could not be represented using the character system. As a system for representing sounds, the alphabetic system would be more finely articulated than the system of characters, even though many of the same phoneme strings could be represented. Also, there would be transformations performable on character strings that would yield new (indirect) representations of new phoneme strings but that would not correspond systematically to alphabetic transformations because of the irregular correspondence of meanings to sounds. Although the two systems would be capable of representing many of the same extensions, they would differ both in how they articulated these extensions and also in the form and regularity of the rules of correspondence employed, the latter depending on different kinds of contingent relations mediating the mapping from representation to represented.
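The contrast between the two hypothetical systems can be put in a toy sketch. Every "word" and "sound" below is invented; real Chinese plays no role, and English-like forms stand in only to model the irregular correspondence of meanings to sounds.

```python
# Toy contrast between two systems that represent many of the same
# sound strings but articulate them differently. All vocabulary is
# invented for illustration.

# "Character" system: whole meaning units map, irregularly, to
# sounds (compare stem change vs. suffix for past tense).
char_to_sound = {
    "GO": "go",
    "GO+PAST": "went",    # irregular: stem change
    "JUMP": "jump",
    "JUMP+PAST": "jumpt", # regular-ish: suffix
}

# "Alphabetic" system: sounds are built letter by letter, so any
# letter string is representable, meaningful or not.
def alphabetic_sound(letters):
    return "".join(letters)

# Both systems can represent the sound "went":
assert char_to_sound["GO+PAST"] == "went"
assert alphabetic_sound(["w", "e", "n", "t"]) == "went"

# An alphabetic transformation (transposing letters) yields a new
# sound representation with no counterpart in the character system:
assert alphabetic_sound(["w", "n", "e", "t"]) == "wnet"
assert "wnet" not in char_to_sound.values()

# And the character transformation "add +PAST" corresponds to no one
# uniform alphabetic transformation: go -> went, but jump -> jumpt.
```

The finer articulation of the alphabetic system shows up as the larger space of significant transformations it supports, just as in the passage above.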
In another context I would argue that the second of these ways of differing is primarily what accounts for the intensionality of contexts such as "says that..." and "thinks that..." used to describe human sayings and thinkings, but the first way of differing is what is important in the current context. Natural language representations, and perhaps also human beliefs and intentions, differ in articulation from more primitive representations having the same extensions. Consider, for example, the dance of the honey bee. It is an icon of the location of nectar relative to the bees' hive and the sun, but there are no possible significant transformations of it that would tell about nectar location relative to any other objects than the hive and sun nor about the relation of anything other than nectar to hive and sun. Similarly, there are no significant transformations that can be performed on the beaver tail splash to indicate danger five minutes ago at the beaver dam rather than danger here now, nor any to indicate food rather than danger here now, or to indicate that it is the rabbits that are threatened rather than the beavers. The simplest way for us to describe the environmental affair to which an intentional signal or simple intentional icon corresponds is to use an English sentence that maps onto the same affair, even though such a sentence articulates the affair quite differently. For example, we say, "that beaver tail splash means that some danger is now threatening the beavers" using the sentence in the "that" clause to map the affair it is the job of the beaver splash to map. But by using this technique we manage to represent only the extension and not the articulation of the beaver's intentional signal, for the articulation of the English sentence used does allow transformations into the past tense, transformations that change the location referred to and also what is represented as being present, and to which animals it is present.
The intensionality of contexts that talk about simple intentional signals and icons usually consists then in this peculiarity. While a full description of any item exhibiting intentionality would require telling what the relevant articulations and kind of mapping rules for it were as well as telling what affair it purported to map, usually no English sentence can be found that will do this job directly for simple intentional signals and icons. English indicative sentences have, at a minimum, a subject and a predicate, either of which might be replaced without losing significance. English sentences are also, all of them, subject to a negation transformation. On the other hand, no English sentence has significant transforms that result merely from displacing its form along some physical continuum -- by making it louder, say, or longer, or higher in pitch, and so forth. These properties set English sentences far apart from a host of simpler intentional icons, including many kinds of inner intentional icons that may help govern behavior on levels lower than that of rational thought. To describe these simpler icons is harder than to describe the more complex thoughts of humans, for there are no simple English sentences that can express the same contents with a corresponding lack of contrast or articulation.
Thus to state the truth conditions of a bee dance in English we must mention the nectar, the hive and the sun, and we must use a clause that contrasts with another clause that would be its negation, but the bee dance itself does not map (does not "mention") the hive or the sun or the nectar as such, nor does it have a negation. Bees have no way of saying where there isn't any nectar, so don't bother looking. The bee dance is an undifferentiated icon relative to its English translation. An important part of the story of the evolution of cognition must concern the emergence of various new forms of articulation for intentional icons and also the emergence of negation.
3. Pushmi-pullyu Icons and Signals
The most primitive intentional icons and signals are at once fact icons and goal icons, telling both what is the case and what to do about it. Thus each of the various chemical "messengers" that run in the blood stream "tells" about some particular state of the organism's physiology and, in stimulating a physiological response appropriate to that state, also "tells" what to do about it. The famous "fly detector" in the optic nerve of the frog produces an icon that is at once a fact icon, telling when and at what angle there is a fly in front of the eye, and a goal icon telling when and at what angle to snap with the tongue. The dances of honey bees are icons at once of where the nectar is and of where the watching worker bees are to go. Elsewhere I have called representations having this sort of lumped double structure "pushmi-pullyu representations" after Hugh Lofting's mythical creature of that name (Millikan 1996). The primitive pushmi-pullyu representations I will discuss here can be called pushmi-pullyu intentional signals and icons, "P-P signals" and "P-P icons" for short.
At the opposite end of the spectrum from pure P-P signals and icons are human desires and beliefs. Here fact-iconing and goal-iconing functions have completely separated. Our beliefs often concern facts that we have no notion how to use in action, and our desires include many we have no idea how to use actual conditions to satisfy. Having separated fact icons from goal icons, it is necessary somehow to reassemble them for use. Practical inference is needed in order to do this. But the result, of course, can be a huge gain in flexibility of action. What kinds of steps might there be in the evolution from systems employing only simple P-P signals and icons to a system employing beliefs, desires and inference?
The neural impulse, the P-P icon, produced in the frog's optic nerve by a passing fly, though it has minimal articulation, reports when and at what angle the fly passes and demands a correspondingly definite response from the frog's tongue. The impulse forms part of a simple reflex arc which, in this case, cannot be inhibited. It is not depotentiated even if the frog is completely sated. It reports a fact and issues an unconditional command.
Similarly, during the first few days of its life, a rat pup that feels a nipple touching its face responds by turning, grasping and sucking whether or not it is hungry. The neural "nipple detector" is a P-P icon that makes an unconditional demand. A few days later, however, this reflex response is potentiated only when the pup is hungry, depotentiated when it is sated. Spelling this out in intentional terms, the pup's system is now sensitive to a new intentional signal, a hunger signal that indicates a state of nutritional depletion and demands its rectification. The hunger signal will perform its function normally only if it is aligned with a state of nutritional depletion and only if it effects rectification of this situation by means of causing the pup to suck on a nipple. So it is a P-P signal indicating nutritional depletion and demanding sucking-hence-hunger-satisfaction. But unlike the case of the frog's fly detector, the fact that it is aligned properly with what it is designed to fact-signal and that it occurs in a well-functioning organism does not guarantee that it will produce the result it demands. It cannot do so unless coupled with a properly aligned firing of the pup's nipple detector, which cannot occur unless there happens to be a nipple there to detect. The hunger P-P signal indicates a fact and sets a goal, but without mediation by a second properly functioning P-P icon it cannot cause that goal to be reached. Having set your goal in acknowledgment of certain of the facts does not guarantee that you know how to reach that goal from where you happen to be. Similarly, many small animals take cover if they see a small shadow gliding over the ground, such as would be cast by a flying predator. My guess is that in at least some species, this response cannot be inhibited. The shadow produces a very simple P-P signal that means, though it says this with minimal articulation, predator overhead so take cover.
But though this is the demand, it does not always help to effect its satisfaction, even in the normal animal. First, the animal has to perceive some place to take cover.
Returning to the nipple detector, suppose that it always responds when the pup's mouth touches a nipple but that unless the reflex is potentiated by hunger, the response is not passed on to the efferent nerves that control sucking. Intuitively, the nipple is perceived but the perception is not acted on. It is a P-P signal because it will actually serve a function in a normal way only if properly aligned and only if and when acted on. Using Gibsonian idiom, this sort of P-P signal or icon is the perception of an "affordance." Gibson spoke of affordances as being possibilities for action. The suggestion that possibilities are things that can literally be perceived can seem puzzling. Surely possibilities should not be reified and introduced whole into the causal order. But we can put Gibson's point our own way. The perception of the nipple when the sucking response is not potentiated is, in classical terms, a "first act" as opposed to a "second act" P-P icon. Similarly, although human desires actually serve their proper functions only when they are fulfilled -- that much is necessary to their being goal icons -- perhaps most human desires are held in abeyance most of the time because other internal conditions necessary for their proper release (for example, freedom from conflict with stronger desires) are not present. Most are merely first act desires. The presence of a first act P-P icon, then, is what Gibson had in mind when he spoke of the perception of a possibility for action. Presence of a second act P-P icon, depending as it does on a first act P-P icon, also involves perception of a possibility for action. Whether or not it is hungry, the rat pup that encounters a nipple perceives the nipple as affording sucking and affording nourishment. Similarly, the animal that is alert to the places around it where it might quickly take cover perceives these places as affording cover.
Notice why this kind of perception is not perception of mere facts, just as Gibson said it is not. There are mapping rules in accordance with which a P-P icon can be said to be properly aligned with the environment rather than misaligned, but this is so only because it has a function that cannot be performed normally unless it is so aligned. It is what's necessary for its function, not, for example, statistics on what it is most frequently aligned with, that determines what it is an icon of in the pushmi or fact-iconing direction. The relevant alignment is with some mapped affair bearing a description under which it is causally possible for it to help account, in normal cases, for the performance, specifically, of that function. This description, just as Gibson said, is determined relative to the abilities of the animal. What the icon shows must be some relation that the animal bears to its environment that will afford something to the animal granted its response is guided in the right way by that relation. It may well be then that from the point of view of physical science, what the animal perceives is a strangely disjunctive or gerrymandered affair. More important, the animal does not perceive this property or affair as a fact in a world with other facts but, intrinsically, merely as for being used in a specific way (Heidegger would have said "zuhanden").
This urgently raises the question what a pure fact-icon could possibly be. How can there be icons having functions requiring that they map by specific mapping rules without these functions themselves being specific? But before addressing this question, we can notice in passing the possibility of pure goal signals -- the possibility of lopping off the pushmi part of the P-P icon.
What I have called a hunger P-P signal corresponds roughly to what ethologists traditionally have called a "hunger drive." What a drive does is to potentiate, often, a whole collection of lower first act P-P icons so that if triggered by perception, they will respond by directly activating behavior and/or by potentiating still other P-P icons. For example, hunger potentiates a disposition to activate perceptions of food affordance. But a standard view assumes that there are some drives that are not activated by perception, either of inner or outer conditions, but in other ways. They come into play either as the animal matures, or in a cycle, or merely as a result of periods of disuse of the relevant behaviors. Drives of this sort might be said to consist in pure "goal signals" (see p.000 above). They signal "if an opportunity arises, head for goal G," but without making any claims about the animal's current situation.
5. Perception and Cognition as Search Techniques
The most basic sort of P-P icon or perception of an affordance (1) shows a certain relation of the animal to some object or situation that is potentially a goal of action, (2) is produced by transduction of some pattern of energy structured by that object or situation and then impinging on or flowing over the animal and (3) given normal conditions, produces an invariant response of the animal which is describable as some definite function of that pattern so that it always reaches the same goal with respect to the object or situation. To be in a position such that a primary goal, such as having a fly in the stomach, is achievable by utilizing just one such perceived affordance is a blissful condition. Call such a condition a "B-condition".
It is typical of animals that do not move about that they merely wait for B-conditions to pass by them and then seize the moment. More sophisticated animals make an effort to maneuver themselves into B-conditions, for example, the frog has sense enough to sit in a place that attracts flies. The simplest way to attempt to maneuver oneself into some B-condition or another is, of course, just to wander about aimlessly hoping to bump into one. Better, one can use some sort of systematic search technique. One way to view the story of the evolution of perception and cognition is as a story about the acquisition of more and more sophisticated search techniques for maneuvering oneself into B-conditions. These are techniques for raising the probability of getting into places or positions from which one can act immediately and productively.
For many animals the cardinal principle involved in raising the probability of B-conditions is very elementary. Be constructed such that you can perceive affordances that will afford your probable placement in new positions from which you are likely to perceive new affordances that will afford your probable placement in newer positions from which ...and so forth...finally probably placing you in B-conditions. The trick is that this series of probabilities should have a product greater than the probability of B-conditions just happening along without your action, the higher the probability the better. Thus the search domain is narrowed and then narrowed again. The newborn baby's response to a touch on the cheek is to turn toward it, thus raising the probability of feeling a nipple on the mouth which will afford nourishment. Very simple animals show various kinds of taxis likely to take them into conditions where food affordances are prevalent or certain danger-avoidance affordances less likely to need to be utilized. And so forth.
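The arithmetic behind this principle can be made concrete with invented numbers; all that matters is that the product of the chained probabilities exceed the base rate of a B-condition arising with no search at all.

```python
# Invented numbers, for illustration only: a chain of affordances,
# each of which probably places the animal where the next can be
# perceived, versus the base rate of a B-condition "just happening
# along" without any action on the animal's part.

chain = [0.8, 0.7, 0.75]  # P(each step succeeds, given the last did)
base_rate = 0.05          # P(B-condition arises with no search)

p_via_chain = 1.0
for p in chain:
    p_via_chain *= p

# 0.8 * 0.7 * 0.75 = 0.42, well above the 0.05 base rate:
assert abs(p_via_chain - 0.42) < 1e-9
assert p_via_chain > base_rate
```

Note that each added step shrinks the product, so a longer chain pays for itself only if its individual steps remain probable enough; this is the sense in which the search domain must be "narrowed and then narrowed again."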
Extremely complicated long and branching chains of affordances leading to the probability of finding one or another affordance, leading to the probability of finding one or another...and so forth, may be grasped by some animals, resulting in highly flexible behaviors. And it may be that correctly quantified increases in potentiations of response dispositions resulting from other relevant stimuli encountered along the way help account for the tendency of the animal to choose, from among equally available and relevant affordances, those objectively associated, in the animal's particular circumstances, with higher probabilities of eventual success. The result would be an animal whose behavior is very flexibly governed by what Gallistel (1980) calls a "lattice hierarchy." Such an animal would be capable of navigating in the space-time-causal order from a great variety of different starting positions relative to its goals so as to reach them with reasonable probability.
On the other hand, such an animal might also be subject to failures that strike us as rather ridiculous. Dennett has popularized the example of the digger wasp that can be sent into a behavioral loop from which it never emerges by moving the prey it has paralyzed a few inches away from the door of its nest every time it goes inside to inspect preparatory to dragging the prey in (Dennett 1984). I once watched a pair of hamsters continually stumbling over one another as each returned a large cracker to its own corner over and over from the other one's corner just opposite. According to Gallistel, even in quite flexible animals, available behaviors are by no means always applied to relevant situations, even when the increment is very small. He tells, for example, of a hybrid species of lovebirds that repeatedly tried to carry strips of bark to be used for nest building by tucking them into their tail feathers, only invariably to lose them on the flight back to the nest. These birds were perfectly capable of carrying the strips safely in their beaks, but did so only 6% of the time (Gallistel 1980, 306-8). Even though their behaviors may be governed by many different intentional signals and icons hierarchically arranged in ingeniously functional ways, I think it would be natural to say of such animals that they do not think.
Nor does introduction of the capacity to learn in the manner of Dennett's Skinnerian animals, at least as conceived by the associationist tradition of psychology, add more than details to this general picture. In order to guide Skinnerian learning processes, an animal must have inborn capacities to represent when certain goals have been reached or when certain dangers threaten. For example, a sweet taste when there is no real nutrition in the food eaten, or a pain when there is no threat of tissue damage, is a false intentional signal. A positively or negatively "reinforcing" experience is, in general, an inner P-P signal or icon, claiming a certain kind of fact or situation and demanding continuation or repetition on the one hand, or ceasing on the other, of the behavior that has produced it. There are many interesting questions to be asked at this level about individual species, for example, which reinforcements are effective for which kinds of learning tasks, whether the animal can learn associations with unused affordances, such as learning where to go for water from the experience of having found water when not thirsty, and so forth. But in the end these are merely quicker ways than genic selection to forge what are basically the same kinds of behavior chains as before. Associative learning concerns the genesis of the lattice hierarchy only, and does not affect its basic structure. Programming during ontogeny rather than phylogeny may permit more flexible adaptations in the behavior of individuals, but the resulting control structures operate in accordance with the same principles. Skinnerian animals, just as such, do not think either.
6. The First Pure Fact Icons
If all an animal ever perceives are affordances, no matter how good it is at remembering the locations of previously unused affordances, it couldn't, in principle, construct pure fact icons. For example, a snake wired up this way that perceives (as it has been claimed some snakes do) a mouse for purposes of striking by sight, then traces it to where it has died by smell, and finally finds its head in order to swallow it by touch, never using any of this information in any other way, would merely perceive first a "strike me", then a "chase me" and finally a "swallow me." How is it possible to liberate the mouse from total submersion in the series of transitory interests someone takes in it, if not in the snake's mind, then at least in yours and mine?
There is a way that facts might enter before inference and without disturbing the lattice hierarchy in any way. Two simple principles would be involved. First is the use of multipurpose icons that represent always the same kind of world affair but afford the animal different possibilities for action given different motivations. Second is the production of icons containing a surplus of natural information over designed information, which natural information might become available for new uses not anticipated in the original design of the perceptual mechanisms. I'll discuss these two possibilities in turn.
Multiple uses for the same icons might arise merely as a side effect of economic construction of the perceptual apparatuses. If you eat both mice and frogs, it is not economical to have completely different perceptual processing mechanisms, for example, separate eyes, for perceiving these. Similarly, if you eat mice and flee from snakes. If you have a complex structure such as an eye, clearly you should use that same eye for as many of your various purposes as it can be made relevant to, avoiding specialized adjustments that will make it unsuitable for multipurpose use. This has important consequences when we consider the obstacles confronting the design of any apparatus with a sophisticated capacity reliably to make icons showing affordances of distal objects. To be as useful as possible, such an apparatus must enable recognition of the affording distal object or property and its relevant relation to the animal over as wide a range of object-animal relations as possible (not just dead center under the animal's nose) under a variety of mediating conditions (under various lighting conditions, sound echo conditions, etc.), despite distractive intrusions affecting proximal stimulation ("static" such as wind noise or shadows or extraneous smells). First, notice that it will be easiest to make it do this if, in the first instance at least, it registers simple objective physical properties and relations that fall under uniform physical laws rather than disjunctive gerrymandered properties and relations, even though the latter may be more immediately useful for certain entirely specific tasks. Second, the registration of simple objective physical properties and relations is more likely to be useful in the guidance of a variety of different activities. Consider, for example, visual perception of the arrangement, sizes, shapes, textures, orientations and relative distances of the objects in one's vicinity.
This kind of information can be put to innumerable uses in the guidance of action. It is best, then, if at some stage of processing at least, the eye produces icons that are not merely of gerrymandered, single-use distal arrangements. But the more purposes it serves, the more disjunctive, hence indeterminate, is the "pullyu" aspect of the P-P icons it produces. The intentional icon that has a dozen or a hundred uses depending on the particular state of potentiation of the nervous system, all of which uses require it to be aligned with the world in exactly the same way, becomes at the limit an any-purpose, hence purely fact-representing, icon.
This leads immediately to the second principle, the production of surplus information. The more versatile such a perceptual apparatus becomes, the more likely it is to be relying on quite general principles in producing its intentional icons, which is likely to result in more natural information being captured than is consumed by the uses for which it was designed. If you build a visual system so that it can see mice, frogs, snakes and also conspecifics, then it undoubtedly brings in enough information to see many other medium-sized objects as well. One only needs then to design into the animal some principles or mechanisms by which experiments can be made in the use of this extra information, for example, principles by which it searches for patterns of association involving this information, and you have an animal that employs completely general purpose icons and employs them by design. It is designed to perceive any of certain general kinds of facts for as yet unspecified uses. You have an animal that harbors pure fact icons.
7. The Construction of Objective Space
Notice, however, what these fact icons represent. They represent relations that things in its environment bear to the animal. They do not show how things are independently of the animal, in relation to one another. For direct guidance of behavior by the environment, the organism only needs an awareness of environmental relations to itself. Moreover, the intentional icons that such an animal uses do not represent its relation to the environment explicitly. The sentence "there is a mouse a yard in front of Tabby" is articulated to show mouseness, the relation a yard in front of and also Tabby, each explicitly. But Tabby's perceptual view of the mouse as needed for stalking it does not show Tabby explicitly. No transforms of it show the relation of anything other than Tabby to things, nor do any transforms of it omit Tabby. Tabby's perceptual views of the world always concern Tabby since they always concern her position in the world, yet they never, as it were, mention Tabby.
Now for the Kantian part of my story. There are better strategies than constructing a lattice hierarchy network to search for convenient and safe paths from wherever one happens to be into B-conditions. These strategies involve the construction of inner icons of various aspects of the world as they exist apart from the animal's special position in it, icons of the world, as it were, "in itself" rather than relative to the animal. (This is the "transcendental realism.") The strategy requires a rudimentary form of inference. Starting with a simple illustration, namely, the advantages of employing cognitive spatial maps (a "transcendental aesthetic"), I will move on to suggest some richer applications of the principles involved (a "transcendental analytic" and "deduction of the categories").
Suppose that you wanted to find your way home, but that all you had to go by was a collection of memories showing, from the point of view only of your own past perceptions, paths you had actually taken at one time or another from one place to another. Perhaps these memories form an associative network, the want-to-go-home signal potentiating nodes representing the various places you have been, with potentials lessening as the number of links from home in the chain increases and also as the lengths of the individual links increase. You take the path that sends the strongest signal to the node representing the place that you now are in. The trouble with this arrangement, as with any arrangement resting on a record merely of previous orderings of one's own experience, is that this way of representing the paths tells nothing about the general geometry of the underlying space in which they lie. True, you would have enough information to get home from where you are, but you would be very lucky if this information happened to put you on a direct route. What you need to have mapped to tell how to get home fastest is how the various paths lie relative not to your own past history but to one another in Euclidean space. You need to know how the paths twist and turn, at what angles they intersect with one another, and so forth, within that space.
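The associative arrangement just described can be rendered as a toy model. Everything here is invented for illustration (the place names, the link lengths, the particular decay rule): a want-to-go-home signal spreads from the home node through remembered links, weakening with each added link and with link length, and the animal follows the strongest signal from wherever it is.

```python
# A toy model (all place names, lengths, and the decay rule invented) of
# the associative network: a home signal spreads through remembered links,
# and the animal moves toward the neighbor carrying the strongest signal.

HOME = "home"

# Remembered links: (place, place, path length) -- only paths actually
# traveled, stored with no underlying geometry.
links = [
    ("home", "pond", 2.0),
    ("pond", "meadow", 1.0),
    ("meadow", "thicket", 3.0),
]

def build_graph(links):
    graph = {}
    for a, b, length in links:
        graph.setdefault(a, []).append((b, length))
        graph.setdefault(b, []).append((a, length))
    return graph

def potentials(graph, decay=0.5):
    """Spread the home signal outward; each link multiplies the signal
    by a factor that shrinks as the link gets longer."""
    pot = {HOME: 1.0}
    frontier = [HOME]
    while frontier:
        nxt = []
        for place in frontier:
            for neighbor, length in graph[place]:
                signal = pot[place] * decay / (1.0 + length)
                if signal > pot.get(neighbor, 0.0):
                    pot[neighbor] = signal
                    nxt.append(neighbor)
        frontier = nxt
    return pot

def head_home(graph, start):
    pot = potentials(graph)
    route = [start]
    while route[-1] != HOME:
        here = route[-1]
        # Follow the strongest signal among remembered links.
        route.append(max(graph[here], key=lambda nl: pot[nl[0]])[0])
    return route

graph = build_graph(links)
print(head_home(graph, "thicket"))  # ['thicket', 'meadow', 'pond', 'home']
```

Notice that the route simply retraces remembered links; nothing in the structure tells the animal whether a geometric short cut exists, which is exactly the limitation at issue.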
It is for this reason, presumably, that even some quite lowly creatures make maps, make inner icons, of the locales where they live. There is excellent evidence, for example, that bees do this. They apparently record the positions of various landmarks in their locale relative to other landmarks, rather than relative to themselves, in a medium having a mathematical topology and metric isomorphic to Euclidean space. Using a map one can be guided directly from one place to another regardless of whether one has traveled any part of the route before. Thus a bee, when transported by any route to any location in its territory, knows how to fly directly home to the hive as soon as it has taken its bearings. The bee knows how to take short cuts.
To make a map of an area requires not merely representing connections between places one has happened to go, but taking account of the general geometry of space so as to leave empty areas of the right kind on the map for the places one has not happened to go. A "tabula rasa" is the traditional term for a mind that comes into the world with no preconceptions. Any actual blank tablet, however, has a definite geometry. On the customary kind of tablet, only two dimensional Euclidean figures can be drawn. The bees' tabula rasa is apparently a blank isomorph of Euclidean space in at least two dimensions, waiting to be filled in with landmarks. It is the bee's version of Kant's pure intuition of space.
Imagine the bee's cognitive map as like a road map with gas stations, motels, good restaurants and roadside tables marked on it: the hive and good nectar-gathering sites and good places to colonize if necessary. The map might also show the current position of the bee itself, as the animated maps displayed in the front of some transcontinental airline coaches show the position of the airplane one is in. But whether or not the bee's map shows where the bee is as well as where it wants to go, notice that it could not stand alone as a guide to action. The bee will need to perceive its position relative to its environment directly as well, even if only to keep its image on the map in the right place. In order to use a map, you must first know where you are. The bee's map must be joined to its perception by identifying a place on the map with a place as directly perceived, so that the two together can guide action. And only if this overlapping of content, this shared middle term, this same place being represented in each, is recognized could these two inner representations be joined together to yield the relation of the bee to its destination. But the joining of two inner representations, pivoting on a middle term to yield new information or direction, is nothing more nor less than mediate inference, in this case, practical mediate inference. Bees must make inferences!
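The joining of map and percept through a shared middle term can be sketched minimally. The landmarks, coordinates, and percepts below are all invented: the point is only that the map premise and the perceptual premise yield a course of action solely by identifying the same landmark in each.

```python
# A minimal sketch (landmarks, coordinates, and percept invented) of
# joining a map to a percept through a shared middle term: the same
# landmark represented both on the map and in current perception.
import math

# The map: landmark positions relative to one another, not to the bee.
bee_map = {"hive": (0.0, 0.0), "oak": (30.0, 40.0), "pond": (-20.0, 10.0)}

def locate_self(landmark, offset):
    """Percept: the landmark lies at vector `offset` from me.  Identifying
    that perceived landmark with its map entry fixes my own map position."""
    lx, ly = bee_map[landmark]
    return (lx - offset[0], ly - offset[1])

def course_to(goal, position):
    """Once my position is fixed on the map, the direct heading to any
    goal can be read off the map, whether or not the route was ever flown."""
    gx, gy = bee_map[goal]
    dx, dy = gx - position[0], gy - position[1]
    return math.hypot(dx, dy), math.degrees(math.atan2(dy, dx))

# "That oak over there" (percept) and "the oak" (map) share a referent:
# the middle term on which the practical inference pivots.
me = locate_self("oak", (10.0, 10.0))      # so I am at (20.0, 30.0)
distance, bearing = course_to("hive", me)
print(round(distance, 1))                  # 36.1
```

Neither representation alone yields the course home; the conclusion is available only because the perceived oak and the mapped oak are recognized as one and the same.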
8. The Construction of an Objective World
The entire lesson can be generalized. It seems likely that the best way to find direct routes through the spatial-temporal-causal order from wherever you happen to be toward B-conditions, toward conditions in which you can act to satisfy needs immediately, is to begin to construct "maps," icons, of the relations of the various aspects of the objective order to one another rather than merely to yourself. Remembering the scenery that has happened to pass by on your own historical, private, wiggly, space-time line is not sufficient. Associative conditioning is not enough. In order to grasp possibilities for different kinds of safe and efficient action, anticipating new paths through the objective causal order to B-conditions, it is necessary to reconstruct something of the abstract structure of that order in itself. Rather than relying on mere associative conditioning, inner icons of the objective world need to be constructed. Then, just as the bee finds the direct route home by searching its inner map rather than its environment, direct routes toward B-conditions can be located by searching among one's icons of the world. Searching in one's head for paths to B-conditions is much safer as well as quicker than searching outdoors. Suppose, for example, that it were true that some snakes not only detect mice for purposes of striking, following and swallowing using three different sensory modalities but are incapable of recognizing a mouse for any of these three purposes through any modality but the assigned one (see footnote 00). Thus the mouse shows itself to the snake as three separate affordances that are not integrated into one object. It would be as though these three aspects of the mouse were entirely separate pieces of the world that just happened to lie juxtaposed on the time line of the snake's experience. 
Nor, of course, would merely multiplying the number of modalities through which a particular mousie affordance is recognized by the snake produce recognition of the mouse as an independent object rather than a mere string of associated affordances. Unlike such a snake, the animal that constructs icons of its objective world must gather together the fragments of the objective world it encounters and glue them together. Compare an archeologist who reconstructs ancient objects from a few broken fragments. Gluing pieces of the world together requires some sort of schematic plan of the general architecture it should have, its geometry and ontology.
The way this reconstruction is done is probably much as Kant suggested. The animal must first grasp the most abstract principles of the world's ontology, just as the bees must first grasp the basic structure of Euclidean space, and then attempt to flesh out this skeleton with details concerning areas proximate to one's possibilities for action. What kinds of general schemas for world structure might be available to an animal? What aspects of the world might it find useful to reconstruct? Some of these basic aspects of the world's ontology were probably forms recognized by Kant. Besides space and time, of particular importance, I believe, are the categories of substance and accident, cause and effect, and the ontology that makes negative judgments possible. Further, just as Kant supposed, it is likely that much or all of our most basic knowledge of world ontology is endogenous. Just as the individual bee does not have to experiment with a variety of different kinds of geometries to discover that the Euclidean kind works best, it is likely that much of the basic architecture of the objective world we represent to ourselves is not discovered by individual experiment either.
No icon of the objective world taken by itself can guide action, however. Icons of the objective world, even if they have one's destination clearly marked on them, that is, even if they are articulate goal icons, are powerless to guide action. To be useful, they must be joined to icons showing part of that same world structure but from the present point of view of the animal rather than objectively. This joining of two intentional icons to yield new information or instruction is a kind of practical inference. Thus, although intentionality does not require inference, objectivity does. That an animal makes inferences, however, does not yet imply that it is "rational" as that term is commonly understood. For example, its information may be tightly encapsulated and unavailable for general use. It is crucial that we not assume that if something is iconned in one part of an animal's system it is known in others, or that access by one part of a system to information gathered in another is a transitive relation. Also the animal may be unable to represent negation, hence unable to recognize contradictions in order to avoid them. More on these themes later.
Finally, an animal that uses maps of its world has to make maps of its world. So it will probably devote some energies specifically to this purpose, exploring and prospecting. This will be "theoretical activity", in Kant's sense. Its function will be mainly the acquisition of fact icons, gathered for very practical reasons to be sure, but without a preselected practical goal in view. On the other hand, much of this theoretical activity can be expected to resemble the way industry supports "theoretical research." There must be possible applications in view. What goes on the bee's map, for example, is presumably just "places of interest" and landmarks potentially useful for navigation to these places. More generally, understanding the schemas that a particular animal uses in constructing icons of its objective world and the kinds of details it is likely to represent will depend closely on understanding the affordances it is capable of perceiving. These, in turn, will fit with the basic units out of which its behavioral repertoire is composed.
9. Representing Invariance in Substances
One kind of useful objective construction is reconstruction of an object or space in three dimensions from those fragments of the energy it structures that are either accidentally encountered or searched out by the animal. This is the sort of construction that David Marr tried to explain in his theory of vision, and it is well known that Marr had to postulate that the animal makes certain implicit or unrepresented assumptions about certain general properties of the layout of its environment to accomplish this task. An animal that can reconstruct objects and spaces in this way will be able to utilize affordances that show themselves directly only from perspectives other than its own. For example, it may grasp that an object affords climbing up on or that a space affords passing through if approached from a different angle. It may also discover affordances that depend on properties of the whole, such as its overall shape or its volume. Such an animal will also be in a far better position to reidentify places, objects and kinds of objects as it encounters them in different orientations.
The ability to reidentify various entities from a variety of perspectives is central to nearly every other reconstruction task. First, as already noted, no icon of the objective world can be used to guide action without joining it to icons from perception, and this joining is done by finding a middle term, by identifying part of what shows in one icon with part of what shows in another. Thus, for example, the bee needs to be able to recognize the same place from a variety of perspectives if it is to use its map effectively. Second, reidentification is required in order to construct maps and other icons. The bee will know where to place a new landmark on its map by noting its relation to old landmarks already on the map, so it must be able to reidentify these old landmarks. In order to glue fragments of a broken object together you must be able to recognize when two fragments fit together, which requires identifying the same surface shape in the convex and in the concave. Similarly, all the places where the glue goes in reassembling the world are properties or entities that need to be identified as the same ones again seen from different perspectives. Turning the coin over, it can never be taken for granted that any animal recognizes when different ones of its intentional icons contain elements that map over the very same portion of the world. Thus the (probably apocryphal) snake's peculiarity was that it did not grasp that what it chases, strikes and swallows is the same thing. Similarly, as Fodor has often been pleased to remind us, Oedipus had no idea that his thought "Mother" and his thought "Jocasta" had the same referent. Knowing how to reidentify a thing through all of its possible manifestations is clearly impossible. No animal could be perfect in this regard. It can always be that an animal harbors inner representations that overlap in content without grasping this.
It can be helpful for an animal to have an objective representation of various items in its immediate vicinity and various of their relations to one another. But objects change and they come and go. For this reason, no permanent detailed mapping of them is possible, or not without adding the dimension of time, and what is the practical use of a map of the past? We may be the only animals who have not found that question rhetorical. Other animals store away knowledge only of the most stable structures in their environments. More exactly, I believe, they store away knowledge of what I call "substances," using something akin to the Aristotelian sense of that term. These "substances" include (but are not exhausted by) ordinary individuals (compare Aristotle's primary substances), various stuffs such as water, wind, rain and rock, and natural and historical kinds such as animal and plant species (compare Aristotle's secondary substances). Substances are distinguished by the fact that one can learn things about them on one encounter that will remain true with some reliability on other encounters. Thus you expect the sourness of one lemon from having tasted another lemon and you are ready for John's sourness on one day from having experienced it on days before. This is analogous to learning to recognize the existence of affordances available from other points of view on an object from the perspective one currently happens to have. The trick is, first, to grasp how to locate and quickly reidentify objective substances about which relatively stable knowledge can be had, despite great variety in a substance's possible manifestations to one's various senses. Second, one needs to grasp quickly what kinds of stable knowledge can be gathered about each. If a ripe blueberry is edible, another will likely be edible too, but if one fox is rambling in the open, that does not preclude that others will be waiting quietly in the brush.
There is considerable evidence from the child development literature for a boost from endogenous factors toward recognizing categories of substances relevant to these tasks. I have given a great many details of this story elsewhere, however, and will not repeat them here (see Millikan 2000).
It should be clear that no animal is going to map more than a very small portion of its world and a small proportion of that portion's objective aspects. In general, an animal will be expected to reconstruct only certain aspects that are closely relevant to its needs. Nor need we think of the project as the progressive construction of a giant multi-dimensional model of the world in the animal's head. Perhaps the animal puts some fragments together but merely stores others, carefully preserving those aspects that mark known identities for it so it can join them up later if needed. That is, it prepares materials for later use in inference. Compare having a map of the whole of a city but in book form so that one must find various overlapping pieces and join them together to find relations among parts of the city that are distant from one another. Compare also having the pieces of a picture puzzle but not having put it together yet. Similarly, I can know that John is older than Jocasta and that Jocasta is older than Susan without having represented to myself that John is older than Susan. There are two requirements, however, for this information to be useful in the event that I need to know John's age relative to Susan's. First, I need to represent Jocasta to myself in the two premises (she is the referent of the middle term) such that it is clear to me that the same person is being represented. Second, I must store these premises in a way that facilitates their co-retrieval should this become relevant. Part of the challenge of comparative psychology must be to discover what gerrymandered fragments of the objective world different animals are capable of reconstructing or learning to reconstruct, and what methods they use for storage and retrieval of the necessary information.
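The John-Jocasta-Susan example can be put in schematic form. In this hypothetical sketch the consequence is never stored in advance; it is derived at retrieval time by chaining premises through the shared middle term, and the chaining works only because "Jocasta" is marked as the same individual in both premises.

```python
# A hypothetical sketch of stored premises joined at retrieval time.
# "John is older than Susan" is never represented in advance; it is
# derived by chaining premises through a shared middle term, which
# succeeds only if "Jocasta" is marked as the same individual in both.

premises = [("John", "Jocasta"), ("Jocasta", "Susan")]  # (older, younger)

def older_than(a, b, facts):
    """Mediate inference: chain stored premises through middle terms."""
    if (a, b) in facts:
        return True
    return any(x == a and older_than(y, b, facts) for x, y in facts)

print(older_than("John", "Susan", premises))   # True, joined via Jocasta
print(older_than("Susan", "John", premises))   # False
```

If the two premises instead represented Jocasta under tokens not marked as co-referring (Fodor's Oedipus case), no chaining could occur, however much the contents overlapped.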
10. Representing Invariance in Processes
Locating B-conditions by representing where various substance constancies lie such that one can better recognize and/or head more directly toward known relevant affordances is a very good technique. But some animals may also represent constancies in world processes. Of course, every moving animal actively uses constancies in causal processes. Being guided by a perceived affordance in conditions normal for realization of that affordance is participating in a causal process the outcome of which is stable. But knowing how to, being wired to, engage in fruitful processes on propitious occasions is not knowing about the constancies that make this possible. It does not involve representing these constancies. We humans, at least, do represent constancies in processes. We are not just stimulated to pursue affordances when we happen to encounter them. We remember what turns into what, and what happens if you do what.
In the simplest cases, we merely think ahead to what will happen if we utilize a beckoning affordance, and react to the anticipated outcome with an advance or a withdrawal. This process does not look ahead much further than when one constructs the back side of a three dimensional object in looking for affordances. But it contains the principle by which mental explorations of potentially exponentially increasing complexity are constructed.
We can think of the matter this way. When an animal acts on the world, transforming it in some way, what the animal does has a causal outcome, one that it may be able to anticipate in thought. Similarly, when an animal roams about, the direction it takes from a given place has a "spatial outcome", one that it can anticipate if it has a mental map in its head. Suppose then that the animal's goal is to arrive at a certain place. Its goal is marked on its mental map. It perceives the place where it now is, identifies this place on the map, and joining percept with map, heads straight to its goal. Now suppose instead that it has as its goal to be in a certain situation in its world to which it must travel not just spatially but causally. It wants, say, to be sheltered in a certain sort of house. It has a goal representation that icons this particular objective situation and it is to aim for this situation in the causal, not just the spatial, order. How will it use its knowledge of what leads to what in the causal order to direct its aim so that it starts off in the right causal direction? The difficulty here is that unlike ordinary space, the logical space of possible causal outcomes in time is not a connected space with a definite geometry. It is a space in which possibilities diverge and then diverge again in infinite variety. There can be no analogue here of dead reckoning.
For an animal to use knowledge of constancies in causal outcomes in a sophisticated way to govern behavior would require it to become, in Dennett's sense, "Popperian." It would need to make trials and register successes and failures in its head, imagining one by one various alternative chains leading from its current situation to others until it hit on some or another causal route to its goal. Though I have argued that bees too make inferences, an animal capable of this sort of inference would be far removed from the bees.
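Such Popperian trial in the head can be sketched as search through a branching space. The "world model" below is entirely invented: states and transitions branch, so the searcher must try imagined action chains one by one rather than dead-reckon toward the goal.

```python
# An illustrative sketch (states, actions, and the "world model" are all
# invented) of Popperian trial in the head: imagined action chains are
# tried through a branching causal space until one reaches the goal.
from collections import deque

# What leads to what: state -> {action: resulting state}.
world_model = {
    "in_open":     {"gather_sticks": "has_sticks", "dig": "has_burrow"},
    "has_sticks":  {"build": "has_shelter"},
    "has_burrow":  {"enter": "sheltered"},
    "has_shelter": {"enter": "sheltered"},
}

def plan(start, goal):
    """Breadth-first trial of imagined chains; returns the first (hence
    shortest) chain of actions that reaches the goal, or None."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, chain = frontier.popleft()
        if state == goal:
            return chain
        for action, result in world_model.get(state, {}).items():
            if result not in seen:
                seen.add(result)
                frontier.append((result, chain + [action]))
    return None

print(plan("in_open", "sheltered"))   # ['dig', 'enter']
```

The successes and failures here are registered in the head: dead-end chains cost the searcher nothing in the world, which is the Popperian advantage.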
11. A Common Code
What eventually emerges in Homo sapiens is the ability to recognize and to map causal processes initiated either by the thinker or by extrinsic events, and the ability to represent a layout of ongoing events many of which occur in places at a great remove from the thinker. The bee constructs a three-dimensional space containing enduring objects. This map may have to be revised or updated quite frequently, but it need not represent a temporal dimension, or at least not one with absolute dates (Gallistel 1990). A human, on the other hand, constructs a four-dimensional map of a dated world in progress, mapping both things that endure (substances, places) and also what happens, both in its own locale and in other places. Many of these facts are represented apart from any known relevance to the thinker's practical interests, and inferences are made from these facts to further facts of the same disinterested sort. Ultimately, of course, the point, or at least an important point, of all this cognitive activity is to join up at crucial points with perception so as to guide action. But a more immediate aim often is merely the efficient production of representations of more and more of the world.
Now the perceptual representations that guide immediate action need to be rich in specific kinds of information, showing the organism's exact relations to a number of aspects of its current environment directly as they unfold during action. These icons may need to have variable structure of a kind that conforms closely to the variable structure of the organism-environment relations that need to be taken into account instantly. And because they need to be constructed quickly, they may be constructed by modular systems that are relatively cognitively impenetrable (Fodor 1989). The job of the disinterested fact icons of cognition is not this, but rather easy participation in mediate inference processes. This job makes its own special demands, there being no way to specify in advance in what specific kinds of inferences such a representation may need to be used. Facts are collected for whatever, if anything, they may prove to be useful for. While the representations of perception may need to be cast in highly structured multi-dimensional media suitable to the immediate purposes to which they are dedicated, cognitive representations may need to be cast in a simpler uniform medium that makes them easy to compare and combine. The ideal fact icon would be one that could be combined with any other fact icon having an overlapping content, a potential middle term in common.
Whether or not information can interact in inference depends not on its content but on its vehicle. Putting it graphically, if the first premise of an inference is represented with a mental Venn diagram and the second with a mental sentence, it is hard to see what inference rules could apply to yield a conclusion. Similarly, one might suppose, if the information coming in through the various senses were not translated into something like a common medium for the purposes of theoretical and practical inference, it could not interact in a flexible way. Possibly this is the fundamental difference between inner intentional icons that are more "perceptual" and those that are more "cognitive."
In any event, an important question when studying the mental life of any fact-collecting species must concern the degree and the kind of interaction in inference that can occur among the varieties of intentional icons it collects. Whether or not intentional contents can interact in inference does not depend on satisfaction conditions, but on how the content is articulated and represented, stored and retrieved, and especially, to what degree content identity is clearly marked across icons with overlapping content (Millikan 2000, Chapter 10).
12. Negation and Accidents
An animal constructing maps of parts of the world that lie at a distance from it clearly is at great risk of error. Compare generalization that connects experience to behaviors with generalization that connects experience to idle beliefs about facts. Practical generalization is naturally bridled. Resulting unsuccessful behaviors do not always produce punishment but they do waste time and energy, naturally diverting the animal's responses into other channels. What kind of bridle is there on false generalization in the case of theoretical inference?
At first the answer seems obvious. A primitive scientific method must be employed. The animal generalizes so as to expect certain things to be true, and then either makes observations systematically or just happens to observe things that either verify or falsify some of its conclusions. This happens often enough to keep its dispositions to generalize enough in check. But there is an important link missing in this explanation. The link is the capacity to represent something negative, which is needed to represent contradiction, which is needed to recognize falsification.
None of the intentional icons that we have discussed have been icons subject to negation. Consider, for example, bee dances. A bee dance represents where nectar is. There are no variations on bee dances that represent where nectar is not. No bee dance contradicts another bee dance, indeed, bee dances cannot even be contraries. If two dances show nectar in two different locations, if the bees are lucky there is indeed nectar in two different locations. In particular, it is obvious, yet important to note, that the failure of a bee to dance a dance showing there to be nectar at place p is not an icon showing there to be no nectar at place p. The absence of an icon showing a certain fact is not equivalent to the presence of an icon showing the negative of that fact.
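The distinction can be made vivid with a toy illustration (the data are invented): a store of "dances" in which the absence of an entry for a place is simply silence, not a representation that the place lacks nectar.

```python
# A toy illustration (invented data): the absence of an icon showing a
# fact is not an icon showing that fact's negation.
dances = {"meadow": "nectar"}        # one dance: nectar at the meadow

def nectar_at(place):
    if place in dances:
        return True                  # an icon shows nectar there
    return None                      # no icon either way; NOT False

print(nectar_at("meadow"))   # True
print(nectar_at("pond"))     # None: no dance, but no icon of absence either
```

To return `False` for the pond, the system would need a further kind of icon, one representing the negative fact, and that is exactly the kind no bee dance provides.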
This is straightforward enough, but now apply it to perception. Suppose that by inference the thinker arrives at a fact icon showing that, since birchbalm was applied yesterday, today the wound will be closed. And suppose that the wound is not in fact closed, and that the thinker is observing the wound. He does not perceive that the wound is closed. But from this it does not follow that he perceives that the wound is not closed. Assume now that he not only fails to perceive that the wound is closed but positively perceives that the wound is open. That is, he harbors an intentional icon of an open wound. Now a wound that is open cannot at the same time be closed. That is a fact, in some sense a necessary fact, about the world. But can there be an intentional icon that represents a wound as open without representing it as being not closed or, more generally, without representing it as being contrary to closed? That is the question we must ask here.
First, notice that just as one need not represent space with space or time with time, one need not represent contrariety with contrariety. The words "red" and "blue" are no more nor less contraries of one another than are the words "red" and "square." The physical forms that constitute two different bee dances are, of course, contraries of one another in that no bee can dance two different dances at once, but this contrariety does not correspond to a contrariety in what is represented. If contrariety in content between two representations is represented, presumably it will be represented by some relation or other between these representations, but not necessarily by the relation of contrariety. What we need to ask, then, is what it would be like for contrariety to be represented by some relation between intentional icons.
As with the content of any other intentional fact icon, the relation between two icons that represents contrariety must be a relation that guides the thinker appropriately with regard to its content. How then would one be appropriately guided by representations that represented contrary facts? If contrary facts are represented this must be because one's method of intentional icon production has failed. Somewhere in one's method there was an error. To be guided appropriately by the appearance of contrariety would be to backtrack, attempting a correction in one's premises or ways of generalizing wherever the most likely point or points of weakness are, after taking into account one's prior experience in trying to formulate consistent judgments. That is, to be guided appropriately by contrariety would be to think rationally.
In sum, having beliefs that are contrary in content is one thing. Recognizing that they are contrary and reacting appropriately is another thing entirely. For an animal to achieve the latter must require an important transformation of its inner representational system, namely, introduction of icons that explicitly represent contrariety. Introduction of explicit negation is a step even beyond this. Explicit negation is indefinite contrariety. The negative says that some contrary or other of this icon is true (Millikan 1984, chapter 14). It is a very sophisticated animal indeed that understands explicit negation: a fully rational animal.
An icon that shows its contrariness to certain other icons explicitly, or that is subject to an (internal) negation transformation, explicitly represents, in part, a property or, in Kant's terms, an "accident." An accident is, intrinsically, whatever lies within a range of accidents no two of which can be exemplified by any member of a certain category or categories of substance. Explicit representation of accidents makes explicit representation of contrariety possible. Explicit grasp of contrariety is a further step toward grasp of a fully objective world. This is because the objective identity of an accident is relative to the contrary accidents it opposes on the ground of its correlative substances, while the identity of an objective (a "theoretical," as opposed to a "practical") substance is, reciprocally, its refusal to admit contrary accidents from certain contrary ranges. Neither kind of identity is determined in any part by sameness of use or sameness of effect on the perceiving animal (Millikan 1984, chapter 16).
Akins, Kathleen 1996 "Of Sensory Systems and the 'Aboutness' of Mental States," The Journal of Philosophy 93.7.
Burghardt, Gordon 1993 "Perceptual Mechanisms and the Behavioral Ecology of Snakes," in Richard Seigel and Joseph Collins, eds., Snakes: Ecology and Behavior (McGraw-Hill), pp. 117-164.
Dennett, Daniel C. 1984 Elbow Room (Cambridge MA: MIT Press).
Dennett, Daniel C. 1995 Darwin's Dangerous Idea (New York: Simon & Schuster).
Dennett, Daniel C. 1996 Kinds of Minds (New York: HarperCollins).
Gallistel, C. R. 1980 The Organization of Action (Hillsdale NJ: Erlbaum).
Gallistel, C. R. 1990 The Organization of Learning (Cambridge MA: MIT Press).
Gould, James & Carol Gould 1988 The Honey Bee (New York: Scientific American Library, a division of HPHLP).
Lorenz, Konrad 1977 Behind the Mirror (New York: Harcourt Brace Jovanovich).
Millikan, Ruth G. 1984 Language, Thought, and Other Biological Categories (Cambridge MA: MIT Press).
Millikan, Ruth G. 1996 "Pushmi-pullyu Representations," in James Tomberlin, ed., Philosophical Perspectives vol. IX (Ridgeview Publishing), pp. 185-200. Reprinted in L. May and M. Friedman, eds., Mind and Morals (Cambridge MA: MIT Press 1996), pp. 145-161.
Millikan, Ruth G. 1997 "A Common Structure for Concepts of Individuals, Stuffs, and Basic Kinds: More Mama, More Milk and More Mouse," Behavioral and Brain Sciences. Reprinted in E. Margolis and S. Laurence eds., Concepts: Core Readings (Cambridge MA: MIT Press 1998).
Millikan, Ruth G. "Some Mythical Indexicals."
Millikan, Ruth G. "In Defense of Public Language."
The Cummins-Millikan Symposium