
The Varieties of Cognitive Experience



      I am grateful to Markate Daly for forcing me to clarify my concept of the relationship between experience and know-how. She may be correct in saying that "None of the passive endurings and sufferings, loves, enjoyments and imaginings of Dewey's conception can be characterized as a part of 'knowing how' as it is currently understood." But I think that there is a similarity between passive experience and active coping that distinguishes them both from the allegedly "objective" sense data that Dewey was rejecting. Experiences of love and suffering don't simply present themselves to us as independent entities. They are richly interwoven with each other and with the world, in much the same way as the connections between muscles and perceptual affordances. I tried to explain why I see both emotions and knowing-how affordances as governed by the same principles in the section of the paper dealing with Gibson, but of course there is still a great deal more to be said.

      I like Daly's suggestion that "'Knowing how' to do something is related to this process of experiencing by being the reservoir of previous experiencings that 'funds' the current engagement between the self and the world." This would imply that not all of our experience is directly involved in enabling us to do something all the time, but that our ability to cope is always based on our experiences. However, I would still see even passive experiences as being constituted by their potential to motivate action. Maybe this marks me as a type "A" personality, but for me even passive endurings presuppose some sort of goal that is either repressed or frustrated, and passive contentment presupposes a goal that has been satisfied (or at least an optimal state of affairs that is somehow being maintained).

      When I was working on "The Modularity of Dynamic Systems", I became more aware of the validity of Daly's objection that "connectionist net theory seems to be tied to the laboratory model of training for specific skills to be deployed in stereotypical situations." The passage connected to the previous link is strongly influenced by her comments. The important difference between connectionist nets and more general applications of Dynamic Systems Theory (DST) is that DST uses vector transformations and other related mathematical principles to model the behavior of whole organisms interacting with a world, rather than arrays of neurons in skulls. Modeling a system in an environment has a much better chance of capturing the numerous social and emotional factors that Daly rightly accuses connectionist AI of ignoring. It's still a long shot (for reasons I outlined in "The Effects of Atomism on the History of Psychology"), but at the moment I feel it's our best bet.
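The contrast between modeling neurons in a skull and modeling an organism coupled to a world can be made concrete with a toy dynamical system. The following is a minimal illustrative sketch of my own, not anything drawn from the DST literature: the particular equations and coefficients are arbitrary assumptions, chosen only to show an agent and an environment evolving as a single coupled system.

```python
# A toy agent-environment coupling in the spirit of DST. The state
# variables and coefficients below are illustrative assumptions only.
import numpy as np

def step(x, e, dt=0.01):
    """One Euler step of the coupled system: neither variable
    evolves on its own -- each rate of change depends on the other."""
    dx = -x + np.tanh(e)      # agent state driven by the environment
    de = -0.5 * e + 0.3 * x   # environment perturbed by the agent
    return x + dt * dx, e + dt * de

x, e = 1.0, -1.0              # arbitrary starting states
for _ in range(5000):         # integrate to t = 50
    x, e = step(x, e)
# The coupled pair settles into a joint equilibrium near (0, 0).
```

The point of the sketch is only that the unit of analysis is the agent-plus-world trajectory, not the agent's internal state taken alone.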

      Garson gives a good summary of the connectionist/language of thought controversy that I was trying to sidestep in this paper. The last paragraph of my paper was meant to acknowledge Garson's point that if we ever did fully implement a connectionist language system, the distinction that I am trying to make in this paper would be difficult to keep in focus. But even if this distinction is difficult, I believe that it is still coherent and significant. Perhaps the simplest ontological relationship of all--the "made of" relationship--might help to clarify this. Much contemporary connectionist research, both philosophical and scientific, is concerned with the question of whether a language of thought (LOT) could be "made of" connectionist nets. Some claim that once such a connectionist language device was made, it would be so different from a LOT that we would have to say that the entities of LOT had been eliminated. Others say it would be so similar that making a connectionist implementation is a mere hardware problem.

      In this paper, I am claiming that there are structures and functions other than language that can be performed by connectionist nets, and that these structures have a different sort of cognitive power that is an essential part of our conscious life. The question of whether connectionist nets could be used to make language machines is simply not being discussed here. Whether or not linguistic processing is also implemented by a specialized form of vector transformation mechanism, vector transformations still have cognitive powers even when they are not performing linguistic functions. Even if all chairs were made of wood, wood and chairs would still be very different things, because many other things can also be made of wood, and wood has many intrinsic properties of its own. Similarly, even if language can be implemented by connectionist nets, this does not alter the fact that connectionist nets can also perform non-linguistic cognition.

      In this paper, I am discussing the powers that we now know connectionist nets to possess, not how to put them together to give them new (i.e. linguistic) powers. And my claim is that the vector transformations that we have already modeled in connectionist AI are a plausible mechanism for explaining what Dewey calls experience. Dewey did not have the advantage of knowing about connectionism when he wrote, which made much of what he said about experience seem mushy and mystical in his time. But now that we know that connectionist nets can do many skillful things even without being made to produce language, it appears that Dewey's distinction between knowledge and experience could be explained if it turned out that what he called knowledge is implemented by some kind of linguistic processing, and what he called experience is directly implemented by vector transformations. I also believe that the fact that linguistic cognition is inherently social, while most connectionist cognition is inherently private, partly accounts for the distinction between knowledge and experience that Chalmers calls "the hard problem."
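As a concrete illustration of what a bare vector transformation can do without any linguistic machinery, here is a minimal sketch of my own (not from the paper): a tiny connectionist net that learns a non-linguistic mapping -- the classic XOR problem -- purely by adjusting the matrices of two successive vector transformations. The task, the network sizes, and the training settings are all assumptions chosen for illustration.

```python
# A minimal connectionist sketch (my own illustration, not from the
# paper): two successive vector transformations trained to perform a
# non-linguistic mapping. The task (XOR) and all sizes are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# "Sensory" input vectors and target responses: XOR, a mapping that
# no single linear transformation can capture.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weight matrices: input -> hidden -> output.
W1 = rng.normal(0.0, 1.0, (2, 8))
b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(20000):
    # Forward pass: the whole computation is vector transformation.
    h = np.tanh(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradient of squared error, propagated back
    # through the same transformations.
    d_out = (out - y) * out * (1.0 - out)
    d_h = (d_out @ W2.T) * (1.0 - h ** 2)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

pred = sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2)
print(np.round(pred.ravel(), 2))
```

Nothing in this computation involves symbols or syntax: the net's "know-how" is distributed across its weight matrices rather than stated in any language-like representation.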

Teed Rockwell