The Commentators Respond
Jim Garson
John Bickle
Mark Churchland
Sue Pockett
Peter Lloyd
Andy Clark
U.T. Place
Gary Schouborg
John Bickle (post 2)
Sue Pockett (post 2)
U.T. Place (post 2)
Thomas Alexander
Bo Dahlin
Jim Garson
writes:
I have a problem with your view about computers presupposing sense datum theory.
True, the classical doctrine has information coming in. But isn't that a triviality? I
mean the brain just is affected by the input vector (i.e. activity values of all
sensory neurons through time). It is a big leap from that uncontroversial fact to
the idea that there are sense data coming in. First of all, not all of the
classicist's input vector qualifies as sensed, for a lot of the information provided
fails to be part of our awareness. Even parts that COULD BE in awareness are often
not as attention changes. So the classicist's inputs are not identifiable with
sensations. Second it is not clear to me that even the inputs of which we become aware
count AS sense data, for it is far from clear that the inputs could even possibly be
encountered AS sensations. For that, one would need to encounter inputs related to
others in the right kinds of ways and this would involve finding the data within a
cognitive level theoretical structure of some kind. Note that the standard
classicist hopes to provide a functionalist account of mental states. So one expects
the classicist to be either mum on sensation or to provide a functionalist account of
sensations. But if the latter choice is taken, then raw inputs by this very theory
cannot count as the sensations, since their individuation AS sensations requires a
framework of interrelations provided by the functionalist theory that undergirds the
identification of mental states. I am reminded of a parallel moral to be drawn about
computers. Imagine we have a simple calculator set up to do addition and subtraction.
The raw input might be arrays of 0s and 1s (or more carefully, switch settings). These
have no status as NUMERICAL inputs until we encounter their computational roles
defined by the machine. Computer states do not take on their intentional properties
in a vacuum. This is the sense in which inputs mathematically construed (i.e.
intentionally construed) are never "raw". This is an important Kantian theme in
classical computationalism that seems to me to challenge features of "whole hog" sense
data theory. Further evidence that the horse is long
dead.
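Garson's calculator point can be made concrete with a small sketch (an editorial illustration, not from the original post): the same raw switch settings count as different numbers depending entirely on the computational role the machine assigns them.

```python
# A toy illustration of Garson's point: raw switch settings (0s and 1s)
# have no status as NUMERICAL inputs until a computational role is
# defined. The same four switches yield different numbers under
# different roles. Both "roles" below are invented for illustration.

bits = [1, 0, 1, 1]  # raw input: just switch settings

def as_unsigned(bits):
    """Role 1: read the switches as an unsigned binary numeral."""
    value = 0
    for b in bits:
        value = value * 2 + b
    return value

def as_twos_complement(bits):
    """Role 2: read the same switches as a two's-complement numeral."""
    value = as_unsigned(bits)
    if bits[0] == 1:              # the leading switch now acts as a sign
        value -= 2 ** len(bits)
    return value

print(as_unsigned(bits))         # 11
print(as_twos_complement(bits))  # -5
```

The point of the sketch: nothing in the switch settings themselves decides between 11 and -5; only the machine's defined roles do.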
John Bickle
writes:
I wanted to correct a factual error in the comments you quote from Sergio
Chaigneau. While his conceptual point about inputs to the LGN is correct--80-90% of
synapses on LGN relay cells come from axons other than retinal ganglion cells--the
majority come back from visual areas, primarily V1 (primary visual cortex), but also
some from sites higher up in the visual streams, and from brain stem nuclei that
compose the (somewhat misnamed, especially concerning functional significance)
"reticular formation" (formerly called the "reticular activating system"). These nuclei
are important for general arousal states. LGN relay cells also receive extensive
projections from cells in the reticular nucleus of the ventral thalamus, a thin
network of exclusively GABAergic (hence inhibitory) neurons that project extensively
back into dorsal thalamic nuclei. (The region of the reticular nucleus that receives
collaterals from LGN relay axons bound for V1 and projects densely back into LGN (onto
both relay neurons and intranucleus inhibitory interneurons) is called the
perigeniculate nucleus.) No doubt there are some synapses from "motor areas" (but
there are a lot of motor areas, and without a more precise description it is
impossible to evaluate even the approximate truth of Sergio's claim), but the bulk of
motor projections back into the thalamus don't go to LGN. It is a mistake to cite
Varela as an authority on the anatomy of the thalamus. The definitive work is Ed Jones's
1985 book, THE THALAMUS. It has a chapter on the LGN and one on the reticular nucleus.
You might also take a look at the chapter by Mason and Kandel in Kandel, Jessell, and
Schwartz's definitive PRINCIPLES OF NEURAL SCIENCE (the last edition was published in
1991; a new one is either now out or shortly coming out). If you wish more detailed accounts of
the neuroanatomy of thalamic nuclei, I can send you those as well (they are by real
neuroanatomists, published in real neuroscience journals, not the kinds of things that
cognitive scientists and philosophers read and discuss--to the latter's detriment). My
computational neuroscience group has published a paper in abstract form describing
results from a computer model of LGN-V1-reticular nucleus circuitry. The results are
strongly suggestive that this circuitry provides a mechanism for stimulus-driven
selective visual attention. (The abstract is in Society for Neuroscience Abstracts 23
(1997), pg. 1589, entry 620.7.) We're currently preparing a paper for submission to
the Journal of Computational Neuroscience describing the model, the results, and their
implications for a mechanism of selective visual attention. If you wish, we can send
you a copy when it is ready to submit.
Mark Churchland
writes:
As a side point, '# of synapses' is not necessarily a good measure of the ability
of one area to drive another. It is often the case that feedback projections outnumber
feedforward projections. However, such feedback connections may be much weaker. This
may be visible anatomically (smaller synapses/ farther out on the dendrites) or it may
not. I don't wish to imply that feedback connections are never as powerful as
feedforward ones. I only wish to point out that '# of synapses' can be a misleading
metric of connective strength.
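Churchland's caution can be put numerically with a toy calculation (all numbers below are invented for illustration, not anatomical data): a pathway with four times as many synapses can still deliver less net drive if each synapse is weaker.

```python
# A made-up numerical sketch of why '# of synapses' can mislead as a
# metric of connective strength: net drive depends on efficacy per
# synapse as well as on synapse count.

def net_drive(n_synapses, mean_efficacy):
    """Crudely, total drive ~ number of synapses x average efficacy."""
    return n_synapses * mean_efficacy

# Hypothetical feedforward pathway: fewer synapses, each strong.
feedforward = net_drive(n_synapses=100, mean_efficacy=1.0)

# Hypothetical feedback pathway: 4x the synapses, each much weaker.
feedback = net_drive(n_synapses=400, mean_efficacy=0.1)

print(feedforward)  # 100.0
print(feedback)     # 40.0 -- outnumbers feedforward, yet drives less
```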
Sue Pockett
writes:
With regard to the word qualia, I'd suggest you examine the traditional
psychologists' distinction between sensation and perception. My understanding of the
word qualia is that it could usefully be replaced by the word sensation, used in this
restricted trade sense.
Peter Lloyd
writes:
1. I can't see how a non-sensationalistic epistemology can ever get off the
ground. The mind is (amongst other things) an information-processing system that uses
information successfully in interacting with its environment. Therefore the mind
possesses information. But it is a fundamental characteristic of information that it is
*not* endlessly reducible. The analysis of information comes to an end with raw data.
Therefore the mind possesses basic units of information, which we might call 'raw
data' or 'sensations'.
2. Talking to pragmatists in the past, I've got the impression that they think of
sensations only as 'naive' sensations. For instance, if you think of your visual
field as consisting of pixels of visual sensation, like a TV screen, then that's
what I'm calling a naive view of sensations. In fact, the sensations that make up the
visual field must be more complex and subtle than that. Nevertheless, the visual
field still consists of sensations. (I am thinking here of Hubel & Wiesel's
experiments, and my own experiences of migraine 'castellations', which suggest larger
visual elements as the raw units.)
3. As far as I have gleaned, the main attraction of pragmatist epistemologies is
that they explicitly accommodate the crucial role of mental *activity* in the
acquisition of raw data about the world. I do not, however, see any conflict between
perception's being active and its yielding sensations. Just think about a robot
exploring its environment: it carries out probing actions, such as switching on its
lights and cameras, and gets raw data back. Likewise, the mind performs actions, such
as listening to sounds, and gets raw sound
sensations.
4. Of course, sensations do not come *into* the mind from outside it (as John
Locke seemed to think). They are constructed internally in response to incoming
stimuli. I acknowledge that some philosophers would dispute this, but to me it is
incontrovertible. For instance, the quale of red does not exist in the rose I look at:
what the rose provides is electromagnetic radiation in a particular range of
wavelengths, which yields a sensation of red in my conscious mind. That red sensation
is the raw datum.
5. Another thing with which anti-sensationalists seem to have difficulty is the
presence of pre-conscious processing of sensory data. This pre-processing does *not*
count against the sensationalist hypothesis. Take, for example, the filling-in of
shapes, such as the lines of a broken triangle. Clearly the brain is carrying out
pre-conscious processing of the visual data, to fill in the gaps. Nevertheless, what
the brain then delivers to the conscious mind are indivisible, unanalysable units of
raw sensation. Another example would be hearing phonemes belonging to a language with
which we are familiar: those phonemes *are* the raw conscious sensations. (Aside:
Some people I've conversed with just seem unable to comprehend this, no matter how
many different ways it's expressed. I guess it's because they're locked into a
mind-brain identity metaphysic, in which - e.g. - the phoneme is identical with the
signals in the auditory cortex and therefore necessarily a compound entity. You do
not, however, have to buy into the mind-matter distinction to accept my point here. You
can take an analogy from social constructs: legally, I am an indivisible person, yet that
legal entity of the person corresponds to a compound entity of the physical
body.)
Andy Clark
writes:
At times, I felt you just might be failing to distinguish the idea of sense data
from the idea of MERE INPUT. The point about sense data was that they were meant to be
both RAW and yet somehow REALLY CONTENTFUL: a combination that looks ultimately
incoherent. All that information theory really needs is the idea of an input to the
system, and a set of actions to select from on the basis of that
input.
Re the knowing-how/knowing-that business, I agree with Garson that we need to
beware fruitless distinctions. I think the point about data and process can be put
somewhat differently (and this bears on what I was doing in BEING THERE) viz, as a
difference between inner encodings that are distant from action control (in that the
route from the encoding to the action is computationally expensive) and ones where the
encoding is formatted in a way that is directly tailored to the control of appropriate
actions. An example of the latter would be an encoding of proximal food location that
JUST IS a motor command to move an arm to a spatial location, and to grasp and ingest
what it finds there. Ruth Millikan's pushmi-pullyu representations seem to me to be in this
ballpark. Notice that this distinction applies as easily to connectionist systems as
to classical ones.
Ullin T. Place
writes:
A few points in response to your `Beating of an undead horse'. The first relates
to the difference between James' "big blooming buzzing confusion" and my Wundt case and
my "physical" v. "mental" pleasure and pain cases. What is true is that James identifies
what he takes to be an actual case where experience (as a whole - yes) remains
uninterpreted, because the child hasn't yet developed the required concepts. Wundt's
two forms of experience do not involve uninterpreted experience in this sense. It's
simply that according to him there are two different ways in which the SAME experience
can be interpreted which implies that the experience and its interpretation are two
different things. There is no reason, on this view, to hold that any uninterpreted
experience exists, except perhaps momentarily before an interpretation is arrived at
or when switching from one interpretation to
another.
The pleasure/pain case is slightly different. Here the suggestion is that in the
case of "physical" pleasure and pain the emotional response DOES NOT DEPEND ON the way
the experience is interpreted. Again there is nothing that requires the actual
existence of uninterpreted experiences.
My second point relates to Broadbent's use of the term "evidence". When I use
this term in my own work, I always put it in quotation marks. This is because,
according to me, it is not evidence in the ordinary sense of that word. It is
precisely because the use of the term `data', together with the phenomenalist
theoretical framework in which it is embedded, treats sense-data as evidence in the
ordinary sense that I am led to say that sense-data do not exist. The same
incidentally goes for qualia, if it is taken to be part of the definition of a quale
that it is a functionless epiphenomenon. But if you say that sense-data are data only
in a metaphorical sense, or if you allow that qualia have a vital function in the process
that leads to sense perception, I am happy to use both expressions and say that a
sense datum is a private sensory experience and that a quale is a property of such an
experience by which we recognise the stimulus situation confronting us as one of this
or that kind.
What is wrong with treating sensory experience as evidence for the belief that one
is confronted by a situation of this or that kind in one's external environment is
that we ordinarily use the term `evidence'
(a) when talking about the relation between two statements or sets of statements,
the evidence on the one hand and the hypothesis it is evidence for on the
other,
(b) where the evidence consists in one or more observation statements and where
the hypothesis for which the observation statements provide evidence is something that
cannot itself be directly observed.
In the case of the relation between sensory experience and the categorization of
it as an encounter with a situation of this or that kind, neither of these conditions
applies.
(a) In the categorization of sensory input there are no statements involved.
Sensory experience and the categorization for which it provides the evidence are
neural processes which occur in the brains of animals just as much as in the brains of
humans. Even in the human brain identifying the kind of object or situation with
which one is confronted is a distinct process both from that of naming the object or
situation and putting what is observed into words in the form of a
statement.
(b) Contrary to the opinion of the phenomenalists, in the ordinary sense of that
word we DO directly perceive the objects and situations in our stimulus environment
for whose presence sensory experience and its qualia provide
"evidence".
Contrary to the view expressed by Ryle in THE CONCEPT OF MIND, there are cases
where we can quite properly be said to observe our sensations and other private
experiences. After filling a particularly deep cavity in one of my teeth recently, my
dentist asked me to check any pain I might subsequently have to see whether it was
caused equally by hot and cold stimuli (good) or only by hot (bad, particularly if
throbbing). This, however, is a rather sophisticated form of observation which we
learn only AFTER we have already learned to observe what is going on in the world
around us. When I say I rejected the doctrine of sense-data more than fifty years
ago, what I rejected was the idea that in observing what is going on around us, we
begin by observing our sensory experience, formulate those observations in the form of
a sentence in a private sense datum language and then use those private observation
sentences as evidence for the existence and nature of what we NEVER observe, namely
the objects and situations in the world around
us.
That, of course, means that I rejected - here following Wittgenstein - the notion
that the observation sentences which provide the foundation of empirical knowledge are
sentences in a sense-datum language describing the private sensations of a single
individual. What it did not mean is that I denied either the possibility of describing
private experience or the idea that empirical knowledge has to be anchored to
observation statements. With regard to the former, I have been insisting for more
than forty years that our ability to describe our private experience is parasitical on
a prior ability to describe what is going on in the public world. With regard to the
latter, I have long assumed, but rather more recently begun to insist, that the
observation statements which anchor our language to the reality it enables us to
depict are statements describing a publicly observable state of affairs (events
disappear too quickly) on whose correct description any competent speaker of the
natural language or technical code in current use will agree. It is because I take
this principle as axiomatic that I describe myself as a behaviorist. See `A
radical behaviorist methodology for the empirical investigation of private events',
BEHAVIOR AND PHILOSOPHY, 1993, 20, 25-35.
One final point in this connection. The relation between a sensory experience and
the categorization of the current state of the stimulus environment for which it
provides the "evidence" is a straightforward causal relation; whereas the relation
between evidence in the ordinary sense and the hypothesis for which it provides
evidence is a logical relation. Logical relations such as this can, of course, act as
causes in persuading an individual to accept (or sometimes reject) the hypothesis for
which it is evidence. But that does not alter the fact that logical relations, as
such, are not causal relations. The analogy between the two cases is that in both, it
is important for the individual to GET IT RIGHT. The difference is that in the
experience-categorization case what the individual has to get right is what it is he
or she is currently observing; whereas in the evidence-hypothesis case what the
individual has to get right is a verbal description of something that is NOT currently
available for direct inspection.
Another difference is that all the might of natural selection is mobilised to
ensure the conformity of our perceptual categorization to the way things are; whereas,
except in a handful of cases where getting it right is a matter of life or death,
there are only a few relatively weak social sanctions to ensure that our hypotheses
are and remain consistent with the available
evidence.
Sergio Chaigneau's mention of J.J.Gibson reminds me of my own excitement when, as
a very inexperienced psychology teacher at the University of Adelaide, I read Gibson's
first book THE PERCEPTION OF THE VISUAL WORLD when it appeared in (?) 1951. Here for
the first time was a psychologist doing experimental work within a conceptual
framework entirely consistent with what I had learned from Austin's `Sense and
Sensibilia' lectures - so different from the ghastly conceptual confusion of the
Gestalt Psychologists, whose work had been endlessly thrust down my throat during my
psychology course at Oxford in 1947-9 and which was the principal target of my
critique of the phenomenological fallacy in `Is consciousness a brain
process?'.
During the winter of 1955 after I had returned to Oxford from my four years at
Adelaide and while I was waiting for `Is consciousness a brain process?' to appear in
print, I had the privilege of getting to know Gibson personally. He had a visiting
appointment at the Institute of Experimental Psychology where I was registered as a
candidate for the D.Phil., a degree which I never managed to obtain. I tried to
persuade him, unsuccessfully as it turned out, that his position would be more
consistent if he dropped the phenomenological veneer and stated it in a
straightforward behaviorist way. Interestingly, I was supported in this by his wife,
Eleanor Gibson, who not only worked on perception in animals, but had been a student
of Clark Hull at Yale. I have a copy of my correspondence with J.J.G. during this
period on file on my computer and could e-mail it to you, if you're interested. {I was,
and he did. See the Gibson-Place Correspondence on this
Website--WTR}
You might also be interested, in connection with Ruth Millikan's deployment of
Ryle's `knowing how' and `knowing that' distinction, in a section of my chapter on
`Ryle's behaviourism [sic]' in W.O'Donohue and R.Kitchener (eds.) HANDBOOK OF
BEHAVIORISM which is forthcoming from Academic Press. In it I discuss the distinction
and suggest that it marks a failure on Ryle's part to study the grammatical objects of
psychological verbs with the same thoroughness with which he explored their aspectual
characteristics. This left room for Roderick Chisholm to introduce his linguistified
version of Brentano's intentionality, thereby generating a new piece of conceptual
confusion for philosophers to pick over.
This, of course, needn't undermine Ruth's thesis which I would express in my
behavioristic way by saying that getting one's propositions right depends on a great
deal of contingency-shaped learning of semantic conventions, which in turn depends on
the part contingency-shaped, part innate, pre-linguistic categorization ability found
in animals.
Gary Schouborg writes:
As I recall your discussion of the relationship between sense data and qualia, let
me suggest that each is a theoretical concept employed to answer a different question
from the other. Sense data are employed to explain how we can adjudicate conflicting
beliefs. Qualia are employed to explain or characterize 1st-person experience. The
Given is relative to beliefs / hypotheses / interpretations about it. The Given is not
irreducible, but that which disputants will accept as settling their differences.
Similarly, one does not need Absolute Leverage to stand up on her own two feet, only
something sufficiently stable to do the job. Even in this relative sense, I don't
believe The Given is an immediate given, but a theoretical concept. Seems to me, we
don't begin with sense data, but with real chairs, streams, sticks, etc. Since we
found our judgments changing, such as with the classic bent stick half immersed in
water, we have developed a theoretical concept, sense datum, to explain how such
judgments can differ and how they might be adjudicated. Your discussion about sense
datum should therefore be placed in that context, to wit: if sense data cannot explain
how judgments can differ and how they might be adjudicated, what can? Seems to me,
this leads ineluctably to some interactionist view. Note that to say we begin with
real chairs, etc. is not to espouse naive realism, which overlooks the mind's
contribution to perception. It is only to identify, phenomenologically, where we in
fact begin.
Related to this is a principle which I am currently touting as crucial:
Innocent Until Proven Guilty. I believe Plato got us in a mess by saying we begin with
opinion and try to move to knowledge by justifying our opinion. This has led us in a
fruitless search toward the chimerical grail of foundationalism -- thinking we know
nothing until we've provided incorrigible evidence for it -- as if each of us were guilty
before the law until we proved we were innocent. Rather, I think the way we actually
do things is according to the principle of Innocent Until Proven Guilty -- we say we
know something unless we have some good reason for doubting it. I think this
perspective sheds an entirely different light on knowledge and The Given. To the
question, then, of -- Do we know anything? -- the answer is, Of course we do. Asked
for an example, we can come up with most anything -- e.g., that this Mac before me is
there even when I'm out of this room. It is not for me to prove this, but for you to
provide a reason why I should doubt it.
As to qualia, I am working on an article, Being and Function, that argues that
consciousness explains nothing, only function does. For reasons we do not, and probably
never will, understand, consciousness accompanies some
functions. For those who find this an unacceptable counsel of despair, let me point
out that we accept essentially this position with regard to the question, Why does
anything exist rather than nothing at all? This most medieval of questions is now all
but universally considered to have no answer. Not that it is unanswerable in principle
-- who can say? -- but it is beyond the scope of explanations that we seem to be able
to provide. Unlike the medievalists, we now limit ourselves to explaining only the
shifting forms and relationships of things, not why anything is there in the first
place. I increasingly see consciousness as parallel. Functionalism is the (?) science of
consciousness, but the existence of consciousness itself is beyond explanation, just as is
the existence of contingent being.
John Bickle writes:
With regard to your last CQ, I have some things to say about Hubel and Wiesel's
work, although I'm not sure how much it really pertains to the sensations versus
perception distinction in psychology. (With regard to that, I'd suggest looking in any
popular text for Sensation and Perception classes in Psychology departments.) Hubel
and Wiesel won the 1981 Nobel Prize (they shared it with Roger Sperry) for their
work from the late 1950s and early 1960s on information processing in visual cortex.
Their initial work followed up on the work of Stephen Kuffler, who used electrodes to
measure activity in retinal projections to thalamus (lateral geniculate nucleus) upon
presentation of a visual stimulus. Basically, they used his procedure to measure the
receptive field properties of neurons in primary visual cortex (dubbed V1 by some
authors, and Brodmann's area 17 by others). They worked on anesthetized cats. At first,
they were using slides with black-white contrast as visual stimuli. After almost a
year of just trying to get their experimental set-up working, they initially got
really disappointing results--activity in V1 neurons seemed almost random. They did
get some activity to one slide, however, and discovered that a shadow was falling on
it when it was presented, creating a bar of black-white contrast. They then
started showing bars of light and measuring V1 activity, and discovered that V1 cells
do indeed have bars of light at particular orientations and locations as their receptive
fields. For example, if a neuron was most active on presentation of a vertical bar of
light in the upper left quadrant, it would discharge more and more vigorously as the
visual stimuli approached that location and that orientation. In later studies (early
1960s), they worked up a theory of how V1 neurons get their receptive fields, based
upon the receptive fields of LGN relay neurons, their projection to stellate cells in
V1, and the latter's projection to simple cortical cells (in V1). With their work was
born the idea of a hierarchy of visual processing areas, where lower cortical regions
extract simple information from the visual stimulus (e.g., bars of light at particular
orientations and locations) and project this information to higher areas, which
extract increasingly abstract information (curves and edges in V2, etc.) Visual
processing splits into two streams past extrastriate cortex, a ventral stream
projecting into inferotemporal cortex extracting information about a stimulus's
identity, and a dorsal stream projecting into posterior parietal cortex extracting
information about motion and location. I have a very brief description of these
streams in a review essay I published in Philosophical Psychology in December 1997,
and the paper I'm going to send you talks in some detail about LGN-V1 projections.
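The orientation tuning Bickle describes (a V1 cell discharging more and more vigorously as a bar's orientation approaches the cell's preferred orientation) is commonly summarized with a Gaussian tuning curve. Here is a minimal sketch, assuming a Gaussian form and invented parameter values rather than Hubel and Wiesel's actual data:

```python
import math

def firing_rate(theta, preferred=90.0, r_max=50.0, sigma=20.0):
    """Toy Gaussian orientation tuning (all parameter values invented).

    Orientation is periodic over 180 degrees, so we take the shortest
    angular distance between stimulus and preferred orientation.
    """
    d = abs(theta - preferred) % 180.0
    d = min(d, 180.0 - d)  # shortest angular distance
    return r_max * math.exp(-d ** 2 / (2.0 * sigma ** 2))

# Response climbs as the bar's orientation approaches the preferred one.
for theta in (0, 45, 70, 90):
    print(theta, round(firing_rate(theta), 1))
```

The modulo/min step encodes the fact that a bar at 0 degrees and a bar at 180 degrees are the same stimulus; the Gaussian shape is only one conventional choice of tuning function.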
Sue Pockett writes:
Hubel and Wiesel became probably the most famous neurophysiologists in the world
(ah, how restricted fame is in these fields - if ya wants fame, become a rock star or
a heavyweight boxer) by doing the following sorts of experiments. They took cats,
anaesthetised them and opened their skulls. Then they recorded from single cells in
the visual cortex while displaying various visual stimuli to the cat, through lenses
calibrated so that they knew the stimuli were focussed on the cat's retina. What they
found was that certain cells in the brain would fire action potentials in response only
to certain very restricted kinds of visual stimulus - a light/dark edge in a certain
orientation (say 45 degrees from vertical top to bottom) moving in a certain direction
across the visual field, for example.
They found that cells lower down in the brain (closer to the input end) registered
simple features of stimuli (just edges, just movement etc) and cells progressively
closer to the outside of the cortex registered progressively more complex
features.
This is of course a very simplistic account of a lifetime of experimentation - you
can find more detail in any recent neurophysiology text, if you want
it.
Quite what this has to do with the present discussion is doubtful. I can see what
your correspondent means, but he is perhaps overlooking the fact that the cats were
anaesthetised and thus by definition not experiencing visual (or any other)
sensations. So what Hubel and Wiesel were studying was preconscious processing, not
consciousness.
It would be technically possible to repeat these experiments using unanaesthetised
but paralysed cats, but that would be considered unethical by the majority of
neurophysiologists (and this judgement is institutionalised by ethics
committees the world over and by the publication code of journals in the field, which
would simply refuse to publish such experiments).
For definitions of the difference between sensation and perception,
try
Lezak M.D. (1995) Neuropsychological Assessment (3rd Ed) Oxford University Press,
New York, Oxford. pp 25-26
(this is the bible in its field)
or
Sims A. (1988) Symptoms in the mind. An introduction to descriptive
psychopathology. Baillière Tindall, London Philadelphia Toronto Sydney
Tokyo
(shrinks spend quite a bit of time trying to figure out whether or not their
patients are experiencing hallucinations).
U.T. Place writes:
I would like to comment on the sensation/perception issue.
The traditional view of this matter to which I subscribe
holds that sensation + concept = perception. This formula implies
that there can be such a thing as a `raw', i.e., uninterpreted,
sensory experience. As evidence that such a notion is needed, I
would cite the distinction we draw between `physical' pleasure or
pain, where the pleasure or pain reaction is a response simply to
the quality of the sensory experience, and `mental' pleasure or
pain, where it is a response, sometimes to the very same
experience, once it has been conceptualised or interpreted, e.g.,
as a symptom of some fatal illness.
This notion of `raw' unconceptualised experience is
anathema to the Kantians and the phenomenologists; and there are at
least three sets of considerations which lend support to their
view. One is the relatively trivial point that you can't say
anything about an experience until it has been conceptualised in
SOME way. Another is the point that the qualia merchants are in
danger of overlooking, namely, that an unconceptualised experience
is like an unfertilised egg, an entity that has failed to fulfill
its biological function. But it is the third consideration which,
to my mind, is the most interesting. It is a point which is
suggested by a lot of recent neurological and neuropsychological
work, particularly the work that has been done on the functions of
the extra-striate visual areas, V2-V5. Contrary to what is suggested
by the adjective `raw', it is now becoming clear that a great deal of
complex processing has to go on in assembling the experience, BEFORE
it becomes what Broadbent (1971) calls "a state of evidence"
capable of suggesting an interpretation/conceptualisation/
categorization. What seems to happen in visual areas V1-V5 is that
there are specific neurons in these areas which are "tuned" to
respond to features of the input which become more and more
abstract and are triggered by retinal stimulation over wider and
wider areas the further removed they are from V1. These features
are things like an edge, a gradient of texture (interpreted as a
surface at a certain angle of slope relative to the horizontal -
Gibson 1950) or a stationary object with a background moving to the
right (interpreted as watching an object moving to the left -
Gibson op.cit.) which are seldom, if ever, conceptualised as such,
but which, when "bound" together with other such features result in
a recognisable "image" of an object of some identifiable kind.
When one way of "binding" a set of features together fails to yield
an identifiable object, another way of "binding" the features may
be tried and, failing that, the standard reaction is to look again,
this time more closely.
Moreover, the phenomenon of simultanagnosia which results
from lesions of this so-called "ventral stream" and which consists
in an inability to perceive the relations between different
objects in a visual array, even though the objects themselves are
recognised normally, suggests that the interpretation of a complex
visual array proceeds in two stages. In the first stage the
individual objects are identified. In the second the
experience/"evidence" is revisited in order to conceptualise the
relations between them.
The complexity of this process and that of the processes
of response-selection and response execution which ensue, not to
mention the linguistic processes of assigning a name to a concept or
a concept to a name and of organizing and deciphering complex
sentence structures, explains why it is that only PROBLEMATIC
INPUTS (i.e., those that are either unexpected or significant
relative to the organism's motivational concerns) are processed in
this way. The task of separating the problematic from the
unproblematic, alerting consciousness to the former, while either
ignoring the latter or routing them automatically and unconsciously
along well-worn channels to output, falls to the automatic-pilot or
"zombie-within" as I call it
Thomas Alexander posted this to the Dewey List, partly in response to my
posting of the first CQ post on that list and to Frank Ryan's
reply:
Just a note to Frank Ryan's fine posting. I agree with him, especially regarding
the silly criticism that pragmatism "lacks foundations." You don't need "absolute
foundations" to build a house--the degree of solidity of the foundation is related to
the function, size and context of the building.
But all this talk about "foundations" is recent (I never heard it used in graduate
school). It is a metaphor, perhaps ultimately coming from Descartes' Discourse but
popularized by Rorty in 1979 in Philosophy and the Mirror of Nature. It is a bad
metaphor.
I propose that we talk about "roots" instead of "foundations". So many
philosophies today are "rootless" rather than "rooted"; Rorty comes to mind. If
philosophical systems are living and growing things (and they are), it makes more
sense to speak of their "roots" than their "foundations"
anyway.
The person who first used this metaphor, by the way, was Empedocles of Akragas, who
called the ultimate principles of nature (Earth, Water, Air, and Fire)
"rhizomata"--the roots of phusis, of nature, of "that which is born." A far better
example to follow than a mechanist like
Descartes!
Bo Dahlin posted this to the Dewey
list:
Dewey had a streak of radical experientialism: all philosophical questions should
be answered from reflection on concrete, lived experience, not from abstract
conceptual analysis. We do not experience "sense data", unless we take an artificial
attitude to our experience; therefore sensationalism is false. Merleau-Ponty argues
exactly the same point.
From this radical experientialist point of view, I perceive a fallacy in your
argument, viz. the exclusion of thinking from experience in general. Dewey also had a
tendency to look only at the outer "doing" and "suffering" aspects of experience, but
in some passages he reasons in a way that implicates THINKING in itself as a form of
EXPERIENCE.
If thinking is admitted as a form of experience, we do not end up in absolute
scepticism, because that which is lacking in sense-experience, viz. SELF-REVELATORY
MEANING, is present in active, conceptual thinking. Naturally we may still be
*mistaken* in our comprehension of the world, so there are still no grounds to be 100%
certain of anything we know. This uncertainty is, however, contingent, not one of principle,
as in scepticism. In active, conceptual thinking we always have a basis, a
"foundation", on which we can proceed towards deeper and deeper *experiences* (sic!)
of truth and reality.