FRDB Archives

Freethought & Rationalism Archive

The archives are read only.


Old 02-22-2003, 12:52 PM   #1
Regular Member
 
Join Date: Nov 2002
Location: myrtle beach
Posts: 105
Default Is materialism true?

Historically, the main alternative to dualism about the mind and body is materialism. On a materialist view, there is no mysterious “ghost in the machine” that is the mind; rather, the mind and mental states are to be accounted for in physical terms. This view has three advantages:

(1) It avoids the interaction problem. If the mind is a physical thing, and mental states are really physical states, then there is no problem of saying how it is that an immaterial
thing interacts with a material thing.
(2) It’s simpler than dualism. Instead of postulating two distinct kinds of things, on a materialist view there is only one kind of thing. As simplicity is itself a virtue, materialism then is more attractive than dualism.
(3) We can then approach some of the questions about the mind from a scientific standpoint. If the mind and mental states are physical, then we can use the methods of science to study them. As science has already provided us with a great deal of knowledge about the world (presumably), we can expect those methods to deliver some answers about the mind as well.

Allow me to briefly consider the different materialist theories on the relationship of the mind and the body.

1. Logical behaviorism. We typically make inferences about the mental states of others from their behavior. For example, when someone yells “Ouch!” we infer that they are in pain. And since we don’t have direct access to the physical states of their brains, this is an efficient way to go in forming beliefs about the mental states of others.

So what would a behaviorist analysis of the state of being in pain look like?

x is in pain iff x yelps, screams, says ‘ouch!’, etc.

Is this first analysis adequate? It seems not, for

(1) One might yelp, scream and say ‘ouch!’ even though one isn’t in pain. It’s possible to act like one is in pain without actually being in pain.
(2) One might be in pain, yet not show any of the appropriate behavior. One might just “tough it out” and not act like one is in pain.

So a revision is needed to this first analysis of pain. One way to do it is to construe pain not as a range of actual exhibited behavior, but as a disposition to exhibit a range of behavior. So then we get:

x is in pain iff x has a disposition to yelp, scream, say ‘ouch!’, etc.

There are still some problems with this revised behaviorist analysis of pain.

(1) How does one identify dispositions to behave?
(2) The ‘etc.’ is a problem--in fact, it seems there is virtually an infinite range of behavioral ways to exhibit pain, and all of them would have to be included in the analysis in order for that analysis to be correct.
(3) The analysis ignores the “inner aspects” of pain, namely what it feels like to be in pain. Just saying how that pain might be exhibited in behavior doesn’t account for what it is like to be in pain. Such “inner aspects” or “raw feels” or qualia aren’t accounted for by behaviorism. Remember the mutant, who would have the same dispositions to behave as you and me but would feel something different given the same stimuli and dispositions to behave.

What all of this seems to suggest is that there’s more to being in pain than something like behavior. Rather, there’s also something inside that should be part of the analysis of a mental state. This leads us to the next theory.

The identity theory. This view holds that the mind is identical to the brain, and that mental states are identical to physical states.

So, on this view a mental state like being in pain is identified with a physical state of the brain.

Supposition: For sake of argument, let’s assume that neuroscientists have found out exactly what happens in the brain when someone feels pain. Suppose that for any kind of pain whatsoever, be it the pain from stepping on a nail, the dull pain of a headache, emotional pains, etc., there is a certain kind of neuron that “fires”. Call those neurons “C-fibers.” So, for sake of argument assume that whenever someone is in pain he/she
has C-fibers firing, and whenever someone has C-fibers firing, he/she is in pain. Then, according to the identity theory, the analysis of pain goes like this:

x is in pain iff x has firing C-fibers

While this account of pain looks clear and simple, there are some serious problems to overcome:

(1) Multiple-realizability. It seems that even if materialism is correct, there are many different ways a mental state like pain might get realized as far as physical states are
concerned.
For instance, it seems possible that an alien could experience the very same pain as you or me, yet the alien might have a radically different physical constitution. It might not even have C-fibers at all (but instead have D-fibers, E-fibers, or something else going on in its brain). It seems possible that the same mental state could be physically realized in many different ways, but this is ruled out by the identity theory. The reason is that the identity theory identifies a mental state with a specific kind of physical state.

(2) The knowledge argument. This argument purports to show that the identity theory doesn’t account for qualia, or what it is like to be in a given mental state. The argument goes like this.

Suppose Mary is a brilliant neuroscientist who unfortunately has been imprisoned in a cave her entire life. While imprisoned her environment has been carefully controlled such that she has never had any visual sensations of color. Her clothes, computer screens, TV, the paint in her cave, etc. are all in black and white.
Nevertheless, Mary has learned everything there is to know about how the brain functions, what stimuli give rise to color experiences, how changes in wavelengths of light changes the appearance of objects, etc. Now comes the argument:

(P1) Mary knows all of the physical facts about the experience of red.
(P2) If Mary knows all of the physical facts about the experience of red, and if what it is like to experience red is a physical fact, then Mary would know what it is like to experience red.
(P3) Mary doesn’t know what it is like to experience red (since she would learn something new upon her release from the cave and seeing a red apple for the first time).
(C1) So, there is something about the experience of red that is non-physical.
(P4) But if the identity theory is true, then having the experience of red would be a physical state.
(C2) So, the identity theory is false.
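Both of the argument's inferential steps are instances of modus tollens; its skeleton can be made explicit as follows (a sketch of my own — the letter abbreviations are not in the original argument):

```latex
% Abbreviations (mine, for exposition):
%   K = Mary knows all the physical facts about the experience of red
%   Q = what it is like to experience red is a physical fact
%   W = Mary knows what it is like to experience red
%   I = the identity theory is true
\begin{align*}
&\text{(P1)}\quad K\\
&\text{(P2)}\quad (K \land Q) \rightarrow W\\
&\text{(P3)}\quad \lnot W\\
&\text{From (P2), (P3) by modus tollens: } \lnot(K \land Q),
  \text{ i.e. } \lnot K \lor \lnot Q;\ \text{with (P1): } \lnot Q \quad \text{(C1)}\\
&\text{(P4)}\quad I \rightarrow Q\\
&\text{From (P4), (C1) by modus tollens: } \lnot I \quad \text{(C2)}
\end{align*}
```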

A weakness of the identity theory seems to be that it doesn’t account for what it is like to be in a mental state like pain, or experiencing red. Mary just wouldn’t know what it is like to experience red, even though she knows all the physical facts about how the brain works. So, the identity theory is inadequate since it leaves out the “raw feel” of what it is like to be in a mental state.

What about functionalism? This view of mental states is similar in some ways to behaviorism, but it gives an account of what is “in the head” and also allows for multiple-realizability of mental states. A functionalist analysis of the state of being in pain is in terms of (1) inputs, or what stimuli would give rise to pain, (2) outputs, in the form of behavior, and (3) relations to other mental states, in the form of what mental states would bring about
pain and of what other mental states pain would cause. So, a functionalist analysis of pain specifies what role that state plays with respect to inputs, outputs, and other mental states.

Take an object that serves a given function, like a carburetor. A
carburetor is a device that takes air and gasoline as inputs and mixes them in an appropriate ratio for output to the piston chambers of an engine. A carburetor also is related to other engine parts for the greater function of producing an engine’s power. Now, we wouldn’t want to identify a carburetor with a specific combination of physical parts, for there are many ways to make a carburetor. A functional definition of a carburetor avoids this. Anything that performs that function is a carburetor, whether it is made of steel, aluminum, ceramic, etc. The idea with respect to mental states is to analyze them in the same sort of way: in terms of their functional role. Another example would be an analysis of being president of the United States in terms of the president’s relations to other parts of the government (such as relations to the legislative and judicial branches), along with inputs (such as suggestions of legislation to propose), and outputs (such as executive orders).

Here’s a more detailed analysis of functional states, and again for a machine. Take a coke machine. We can define two states for the coke machine in terms of inputs, outputs, and relations to other states of the coke machine. Call those two states S1 and S2.

The machine is in S1 most of the time, just waiting for someone to come along and buy a coke. Here are a few conditionals that the machine obeys with respect to S1:

If the machine is in S1, and a dime is inserted into the machine, then the machine gives a coke as output and remains in state S1.
If the machine is in S1, and a nickel is inserted into the machine, then the machine gives nothing as output but goes into state S2.

Now here’s state S2:

If the machine is in S2, and a nickel is inserted into the machine, then the machine gives a coke as output and goes back to state S1. If the machine is in S2, and a dime is inserted into the machine, then the machine gives a coke and a nickel as output and goes back to state S1.
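The conditionals for S1 and S2 above define a small finite-state machine, which can be sketched directly in code (an illustrative sketch; the class and method names are mine, while the states, coins, and outputs are from the example):

```python
# A minimal sketch of the coke machine described above as a finite-state
# machine. S1 = waiting with no credit; S2 = a nickel has been inserted.

class CokeMachine:
    def __init__(self):
        self.state = "S1"  # machine starts in S1, waiting for a buyer

    def insert(self, coin):
        """Accept 'nickel' or 'dime'; return the list of outputs."""
        if self.state == "S1":
            if coin == "dime":
                self.state = "S1"          # stays in S1
                return ["coke"]
            if coin == "nickel":
                self.state = "S2"          # no output; remembers the nickel
                return []
        elif self.state == "S2":
            if coin == "nickel":
                self.state = "S1"
                return ["coke"]
            if coin == "dime":
                self.state = "S1"
                return ["coke", "nickel"]  # coke plus a nickel in change
        raise ValueError(f"unexpected coin {coin!r}")
```

The functionalist point is that S1 and S2 are wholly defined by this table of inputs, outputs, and transitions; anything realizing the same table is in “the same” states, whatever it is physically made of.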

The idea with respect to a mental state like the state of being in pain is to account for such states in the same kind of way. So, inputs would be such things as pins pricking the body, stepping on nails, touching hot stoves, etc. Outputs would be various sorts of behavior, such as saying ‘ouch!’, ‘that hurts’, pulling one’s hand back from the stove, etc. Relations to other mental states would be relations to the state of being angry, the state of being embarrassed, memories of the pain, etc.

Problems with functionalism:
(1) How should we distinguish mental states from other functional states? In the absence of any distinction, ordinary machines like thermostats and carburetors could have mental states.
(2) The qualia problem remains. Functionalist accounts of mental states don’t seem to include any account of what it is like to be in that mental state. Furthermore, one might still have the intuition that mutants could exist and satisfy a functionalist account of
pain, for instance. Yet those mutants wouldn’t feel pain--they would feel something else, such as hearing a middle-C, for instance.
The same sort of worry is raised by the “inverted spectrum” example given in
Blackburn's book THINK pp. 72ff. It seems possible that someone could have their experiences of color “inverted” in the sense that for things that we experience as red, they would experience
them in the same way that we experience green things, and vice versa. They would still call ripe tomatoes and fire engines ‘red’, and healthy grass ‘green’, but their color experiences would be different. However, according to functionalism, we would all be in
the same mental state upon viewing a ripe tomato. But this doesn’t seem right since those with inverted spectra would be experiencing something different. So it seems that functionalism has left something out in its analysis of color perception--namely what it is like to experience colors.

The Chinese Room. In his paper “Minds, Brains, and Programs”,
John Searle makes a similar objection. We are to imagine being locked in a room with two slots in the door--one for inputs and one for outputs. Outside the room, a Chinese speaker writes questions in Chinese and passes them through the input slot. Inside the room, I don’t have any understanding of the Chinese language, but I do have with me a series of manuals with instructions for doing the following. I take the “question”, and
from the shapes of the characters alone I follow the instructions in the manuals for writing down new shapes. In following those instructions I produce a series of shapes on a piece of paper and pass it out through the output slot. As it turns out, the manuals
are instructions for converting questions in Chinese symbols into answers in Chinese symbols. We can imagine those instructions to be of sufficient complexity that the person outside the room couldn’t distinguish between the answers passed out through the
output slot and answers given by a real Chinese speaker.
The problem is that I don’t understand any Chinese at all. I’m just following the instructions in the manuals. However, according to functionalism, it seems that either I or the whole system of me following the instructions would understand Chinese. But
since I don’t understand Chinese, and nothing else in the room does, it seems that functionalism has left something out in what its analysis of understanding Chinese would be.

matt
mattbballman is offline  
Old 02-22-2003, 02:46 PM   #2
Senior Member
 
Join Date: Aug 2000
Location: Chicago
Posts: 774
Default

I personally agree with your assessment of the materialistic theories that you have presented.
However, in most cases, the materialist can simply reply that a particular materialistic account of mental phenomena has so far failed to be complete because all of the data concerning the function and nature of the brain and nervous system isn't in yet. And since no one can be omniscient, omniscience shouldn't be a requirement for the confirmation of a theory.
Irrespective of the cogency of that kind of reply, the list of advantages of materialism that you provided at the beginning of your post suggests (to me) that, irrespective of its truth, materialism can be (and indeed has been in the history of philosophy) useful as a perspective from which critical questions about observed phenomena can be posed (if not truthfully answered).
jpbrooks is offline  
Old 02-22-2003, 09:19 PM   #3
Regular Member
 
Join Date: Nov 2002
Location: myrtle beach
Posts: 105
Default

Ah, but how sure are we that this supposed 'data' actually exists? It looks like a little bit of faith might have to be smuggled into this issue.

I also do not see why you bring omniscience into this. Could you explain that?

I also would agree with you on its usefulness. It's good to have the varying metaphysical explanations as a philosophical wall to lean up against when crucial ontological questions are being investigated.

matt
mattbballman is offline  
Old 02-22-2003, 11:24 PM   #4
Contributor
 
Join Date: Jul 2001
Location: Florida
Posts: 15,796
Default

jpbrooks writes:

Quote:
However, in most cases, the materialist can simply reply that a particular materialistic account of mental phenomena has failed to be more complete because all of the data concerning the function and nature of the brain and nervous system isn't in yet. And since no one can be omniscient, omniscience shouldn't be a requirement for the confirmation of a theory.
Omniscience shouldn't be required for the confirmation of a theory but surely confirmation should be required. One cannot use the omniscience argument as a substitute for confirmation and the materialist theory has not been confirmed.

Likewise, the materialist cannot argue that we simply don't have enough knowledge of the brain yet. That is an argument from ignorance - a logical fallacy. Moreover, one presupposes the "yet" in that argument. We have no reason to believe that further scientific investigation will lead us closer to a materialist explanation. It could very well lead us further away. Imagine if an idealist argued that idealism is true and the only reason he can't prove it is because science hasn't learned enough about the brain yet to prove it. Materialists would assert that his claim is based on an absurdity. Yet materialists use this same argument all the time to support their position.
boneyard bill is offline  
Old 02-22-2003, 11:39 PM   #5
Contributor
 
Join Date: Jul 2001
Location: Florida
Posts: 15,796
Default

Posted by mattballman:

Quote:
(2) It’s simpler than dualism. Instead of postulating two distinct kinds of things, on a materialist view there is only one kind of thing. As simplicity is itself a virtue, materialism then is more attractive than dualism.
Quote:
(3) We can then approach some of the questions about the mind from a scientific standpoint. If the mind and mental states are physical, then we can use the method of science to study
Thank you for a very succinct and yet fairly complete statement of the current state of the debate on this issue. However, I have to take issue with the two points you make above. Your second point, that materialism is simpler than dualism only applies to Cartesian dualism. The position often referred to as "property dualism" is not really a dualism. That position claims there is really only one substance but mind and matter are both fundamental characteristics of that substance. So it is really a mind-matter monism. David Chalmers calls it "naturalistic panpsychism." This avoids the dualist terminology. Unfortunately it sounds like some kind of New Age religion.

Your third point is that, in presuming materialism, we are therefore able to subject our questions about the mind to scientific study. What does materialism have to do with that? We can surely study the mind scientifically without presupposing materialism. In fact, if materialism is false, such a presupposition will simply lead us to proposing one false solution after another. So a materialist presupposition is a detriment to studying the mind in a truly objective way.
boneyard bill is offline  
Old 02-23-2003, 12:01 AM   #6
Contributor
 
Join Date: Jan 2001
Location: Barrayar
Posts: 11,866
Default

Quote:
Originally posted by boneyard bill
Likewise, the materialist cannot argue that we simply don't have enough knowledge of the brain yet. That is an argument from ignorance - a logical fallacy. Moreover, one presupposes the "yet" in that argument. We have no reason to believe that further scientific investigation will lead us closer to a materialist explanation. It could very well lead us further away. Imagine if an idealist argued that idealism is true and the only reason he can't prove it is because science hasn't learned enough about the brain yet to prove it. Materialists would assert that his claim is based on an absurdity. Yet materialists use this same argument all the time to support their position.
This is all totally incorrect. The reason materialism gets the benefit of the doubt is that it has been successful in the past, unlike supernaturalism. Further, we have every reason, based on constant previous progress in the brain sciences and all other sciences, to believe we will know more in the future than in the past. Third, supernaturalism fails for reasons that are independent of this discussion -- philosophical and historical.

Fourth, it is fallacious to assume that if materialism lacks an explanation, supernaturalism must be the explaining factor. That is why the correct answer is "We must wait until we receive more information." This is not an argument from ignorance. The argument from ignorance takes place when one makes a stupid positive claim like "We don't know X so it must be god," not, "we don't know X so let's wait until we get more information."

Vorkosigan
Vorkosigan is offline  
Old 02-23-2003, 12:45 AM   #7
Veteran Member
 
Join Date: Aug 2000
Location: Australia
Posts: 4,886
Default

mattbballman:
That's a pretty long post... well I think I'm basically in the "functionalism" category.

Quote:
Problems with functionalism:
(1) How should we distinguish mental states from other functional states? In the absence of any distinction, then ordinary machines like thermostats and carburetors could have mental states.
In the What is consciousness? thread I quoted some of my main ideas... it is probably too long to requote in full here.
I talked about "the hierarchy of intelligent systems":

1. Processing Systems [or Programmed Systems]
...receive [or detect], process and respond to input.

2. Aware Systems
...receive input and respond according to its goals/desires and beliefs learnt through experience about how the world works
(self-motivated, acting on self-learnt beliefs)
["self" refers to the system as a whole]

I understand what a thermostat is better so I'll just stick with that... thermostats don't *develop* (learn) their own problem solving strategies like animals do so they don't meet my requirements for awareness. They meet my requirements for a basic processing/programmed system though. And even if a thermostat did satisfy my requirements for awareness like an animal does, it is a different story whether it is aware of its thought processes in a detached way, like we often are.

Quote:
(2) The qualia problem remains. Functionalist accounts of mental states don’t seem to include any account of what it is like to be in that mental state. Furthermore, one might still have the intuition that mutants could exist and satisfy a functionalist account of pain, for instance. Yet those mutants wouldn’t feel pain--they would feel something else, such as hearing a middle-C, for instance.
The pain signal is the message that the thing it is associated with should be avoided - depending on the signal's intensity. The brain then tries to avoid it now (assuming the signal outweighs pleasure and pain signals associated with that course of action) and learns to avoid it in the future... I've heard that some people with damaged limbic systems can sense the pain signal from bodily pain, but not the pain (the urge to avoid it).
That mutant could conceivably exist - it would be a bit like synesthesia where the senses are mixed up in a consistent way... except that the original "sense" (pain) isn't experienced at all. But it wouldn't be functionally equivalent!!! So it wouldn't satisfy all functionalist accounts of pain (like mine). If the middle-C sound wasn't unpleasant, and they weren't taught to avoid it, they'd have little reason to try and avoid "hearing" it (feeling "pain"). Maybe they got some advanced cancer one day that was painful. They'd just hear the noise. They might think that there is something wrong with their ears because the noise doesn't go away... but other people would feel the pain signal being associated with a specific area of the body, and they'd be compelled to avoid the signal, depending on its intensity. So they'd go to the doctor, and tell them they feel pain in that area - rather than saying they can hear a musical note a lot.

Quote:
The same sort of worry is made with the “inverted spectrum” example given in Blackburn's book THINK pp. 72ff. It seems possible that someone could have their experiences of color “inverted” in the sense that for things that we experience as red, they would experience them in the same way that we experience green things, and vice versa. They would still call ripe tomatoes and fire engines ‘red’, and healthy grass ‘green’, but their color experiences would be different. However according to functionalism we would all be in the same mental state upon viewing a ripe tomato. But this doesn’t seem right since those with inverted spectra would be experiencing something different. So it seems that functionalism has left something out in its analysis of color perception--namely what it is like to experience colors.
"However according to functionalism we would all be in the same mental state upon viewing a ripe tomato."
We'd be in *functionally equivalent* mental states! Not identical mental states.

It's like how 24-bit colours are represented on a computer. Usually the first 8 bits represent the red channel, then the green channel, then the blue channel... but sometimes the order is 8 bits for blue, then green, then red.
So 12-34-56 in red-green-blue format would be 56-34-12 in blue-green-red format.
And there are also other 24-bit colour systems - like hue-lightness-saturation - where the hue is a number representing the colour it is like (like red or blue or yellow), lightness is how dark or light the colour is, and saturation is how greyish or colourful the colour is.
Anyway, the same picture can be stored on a computer in lots of different ways, but they can be output in the same way...
I think neural networks (like our brain) extract patterns from their inputs in idiosyncratic ways, depending on the previous state of the neural network... getting back to that computer analogy, it could represent colours in red-green-blue format - or hue-lightness-saturation format - or YUV format (Y is the brightness, and U and V are the colour components)... or something even more obscure.
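The byte-order point can be sketched in code (an illustrative sketch; the helper names are mine, and 12-34-56 is the value from the example):

```python
# Sketch of the analogy above: the same colour stored in two different
# internal formats, read back identically given the right convention.

def rgb_to_bgr(rgb):
    """Reorder an (r, g, b) triple into blue-green-red byte order."""
    r, g, b = rgb
    return (b, g, r)

def channels(triple, order):
    """Interpret a stored triple as named channels, given its byte order."""
    return dict(zip(order, triple))

stored_rgb = (12, 34, 56)            # red-green-blue layout
stored_bgr = rgb_to_bgr(stored_rgb)  # (56, 34, 12): blue-green-red layout

# Different internal representations...
assert stored_rgb != stored_bgr
# ...but functionally equivalent: read with the right convention,
# they describe the very same colour.
assert channels(stored_rgb, "rgb") == channels(stored_bgr, "bgr")
```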
I can adjust to new lenses after a while - at first they seem distorted. (the floor seems very high)
There's also this:
Quote:
Stratton (1897) has shown that wearing inverting goggles (turning the image 180°) perfect visuomotor coordination could be obtained within a few days....Phenomenally, however, the world was still upside down. It is still a matter of debate whether after a week or two phenomenal experience would also adapt
They even rode bicycles, etc.
From here:
Quote:
Instances are for example Kohler's (1961) classic experiments, where a person wearing left-right inverting goggles for a few weeks will adapt in a piecemeal fashion, going through periods where in the same spatial location an object can appear somehow both correct and inverted: for example, an automobile might be seen on the correct side of the road, but with its licence plate written in mirror writing. This would be accounted for in the sensorimotor approach by saying that there is no coherent image-like internal representation of the visual world: the orientation of writing and the location on the road of the car are constituted by such things as the possibilities the subject has to read and write on the one hand, and to orient his gaze, on the other hand, and these may correspond to sensorimotor sub-systems which may adapt independently to the rearranged vision.
http://www.d.umn.edu/~dcole/inverted_spectrum.htm

This talks about functionalism and inverted sensory inputs.
excreationist is offline  
Old 02-23-2003, 02:21 AM   #8
Veteran Member
 
Join Date: Aug 2000
Location: Australia
Posts: 4,886
Default Re: Is materialism true?

Quote:
Originally posted by mattbballman
....The Chinese Room. In his paper “Minds, Brains, and Programs”, John Searle makes a similar objection. We are to imagine being locked in a room with two slots in the door--one for inputs and one for outputs. Outside the room, a Chinese speaker writes questions in Chinese and passes them through the input slot. Inside the room, I don’t have any understanding of the Chinese language, but I do have with me a series of manuals with instructions for doing the following. I take the “question”, and from the shapes of the characters alone I follow the instructions in the manuals for writing down new shapes. In following those instructions I produce a series of shapes on a piece of paper and pass it out through the output slot. As it turns out, the manuals are instructions for converting questions in Chinese symbols into answers in Chinese symbols. We can imagine those instructions to be of sufficient complexity that the person outside the room couldn’t distinguish between the answers passed out through the output slot and answers given by a real Chinese speaker.
The problem is that I don’t understand any Chinese at all. I’m just following the instructions in the manuals. However, according to functionalism, it seems that either I or the whole system of me following the instructions would understand Chinese. But
since I don’t understand Chinese, and nothing else in the room does, it seems that functionalism has left something out in what its analysis of understanding Chinese would be.
The "understanding" of Chinese was "programmed in" by the creators of that system - and they would have understood Chinese. This is different from "aware systems" (which I defined earlier) - they learn patterns about how the world works for themselves. And they interact with the world and apply that knowledge. And when they are pursuing goals, they are using problem-solving strategies that they (the system) developed themselves (itself). Over time, aware systems can "master" problem domains (like learning how to catch a ball) and so "understand" it on some level. The understanding is their own because the system learnt about it itself through its own "experiences" (interactions with the environment).
excreationist is offline  
Old 02-23-2003, 06:33 AM   #9
Senior Member
 
Join Date: Sep 2002
Location: San Marcos
Posts: 551
Default Mattb

On your criticism of Identity theory:

Quote:
(1) Multiple-realizability. It seems that even if materialism is correct, there are many different ways a mental state like pain might get realized as far as physical states are
concerned.
For instance, it seems possible that an alien could experience the very same pain as you or me, yet the alien might have a radically different physical constitution. It might not even have C-fibers at all (but instead have D-fibers, E-fibers, or something else going on in its brain).
Problem is that the material is not so radically different if it is essentially the same substance.


Quote:
It seems possible that the same mental state could be physically realized in many different ways, but this is ruled out by the identity theory. The reason is that the identity theory identifies a mental state with a specific kind of physical state.
To show the problem with this statement let me make an analogy:

DNA is held to be the chemicals that underlie it. Yet an eagle can have very different DNA than me. Thus DNA cannot be merely the chemistry underlying it.

The problem is that similarity in function/quality can still be reduced to the exact material states underlying it. For example: my right hand and left hand are still hands; if you removed all the underlying matter, without replacement, I would cease to have hands.

Thus pain is mainly an unpleasant experience that an organism tries to avoid; this is generally the definition, and though it is functional, and so broad enough to include aliens, computers, and humans, the pain is ultimately reduced to underlying material elements in each particular case, i.e. at the ultimate level. Thus though they may be different types of pain, they are still pain. Just as there may be very different types of hands, there are still hands, and each hand is ultimately made up of its material bits.

Quote:
(2) The knowledge argument. This argument purports to show that the identity theory doesn’t account for qualia, or what it is like to be in a given mental state. The argument goes like this.

Suppose Mary is a brilliant neuroscientist who unfortunately has been imprisoned in a cave her entire life. While imprisoned her environment has been carefully controlled such that she has never had any visual sensations of color. Her clothes, computer screens, TV, the paint in her cave, etc. are all in black and white.
Nevertheless, Mary has learned everything there is to know about how the brain functions, what stimuli give rise to color experiences, how changes in wavelengths of light changes the appearance of objects, etc. Now comes the argument:

(P1) Mary knows all of the physical facts about the experience of red.
(P2) If Mary knows all of the physical facts about the experience of red, and if what it is like to experience red is a physical fact, then Mary would know what it is like to experience red.
(P3) Mary doesn?t know what it is like to experience red (since she would learn something new upon her release from the cave and seeing a red apple for the first time).
(C1) So, there is something about the experience of red that is non-physical.
(P4) But if the identity theory is true, then having the experience of red would be a physical state.
(C2) So, the identity theory is false.
The problem with the argument rests in premises P2-P3 and C1.

P2-P3: Mary knows the physical facts of red, and what it is like to experience red, only in the sense that she can describe the neural activity of someone who has experienced it. The problem is one of ambiguity: in one sense, "knows what it is like to experience red" means that Mary can understand it when someone else experiences red; in another sense, it means knowing what it is like via her own experience.

C1 simply does not follow; understanding something and experiencing it are just different ways of knowing the same thing.

The problem seems to lie with hardware/software input, not the composition of experience. Mary has only received the right material information in a manner suitable for her to understand red in a certain way, but not enough to allow her to understand red as an experience she has had. Mary thus understands the experience of red at the purely conceptual level, not at the level of having her eyes imprint the data of "red" on her brain when absorbing the color red.

Quote:
A weakness of the identity theory seems to be that it doesn't account for what it is like to be in a mental state like pain, or experiencing red.

Mary would know in a sense, conceptually, but not in the sense of having the hardware engaged to acquire/recognize the specific data. What this indicates is a lack of data collected by a certain means, not a radical division in substance. Mary's understanding can thus be viewed as incomplete rather than incorrect.

Quote:
Mary just wouldn't know what it is like to experience red, even though she knows all the physical facts about how the brain works. So, the identity theory is inadequate since it leaves out the "raw feel" of what it is like to be in a mental state.
The problem is a software one. Mary simply does not have the raw feel because certain data-collecting devices of hers have not been engaged while others have. Had Mary been exposed to the right data with the right hardware, she would know in the sense you mean, i.e. as an organism that experiences sight through the actual exercise of its own eyes transmitting information to its own brain, not merely as an organism that understands the process at the conceptual level.
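This hardware/software reading can be put as a toy model. Everything here (the class, its attributes, the fact strings) is my own invented illustration of the two routes to knowledge, one descriptive and one through the sensory channel:

```python
class Mary:
    """Toy model: complete third-person knowledge of colour processing,
    but no record of a first-person colour input."""

    def __init__(self):
        # All the descriptive, textbook facts Mary has learned in the cave.
        self.physical_facts = {"red": "long-wavelength light; activates L-cones"}
        # Inputs that have actually run through her own sensory 'hardware'.
        self.experienced = set()

    def knows_conceptually(self, colour: str) -> bool:
        return colour in self.physical_facts

    def knows_by_experience(self, colour: str) -> bool:
        return colour in self.experienced

    def see(self, colour: str) -> None:
        # Engaging the sensory channel stores a new kind of record --
        # new data through a different route, not a new kind of substance.
        self.experienced.add(colour)

mary = Mary()
assert mary.knows_conceptually("red") and not mary.knows_by_experience("red")
mary.see("red")  # release from the cave
assert mary.knows_by_experience("red")
```

On this picture, what Mary gains on release is a record acquired through a previously unused channel, which is why her prior understanding counts as incomplete rather than incorrect.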
Primal is offline  
Old 02-23-2003, 08:38 AM   #10
Veteran Member
 
Join Date: Aug 2000
Location: Australia
Posts: 4,886
Default Re: Is materialism true?

mattbballman:
About "The Identity Theory of Mind"
from the Stanford Encyclopedia of Philosophy:
Quote:
Consider an experience of pain, or of seeing something, or of having a mental image. The identity theory of mind is to the effect that these experiences just are brain processes, not merely correlated with brain processes.
Or, to be more general, those experiences just are physical processes rather than merely being correlated with physical processes. For humans, this happens in our brains, in the same types of areas. But other creatures could have different brain structures, where their thoughts would solely involve physical phenomena in their brains (or whatever they use to "think") rather than involving non-physical things like a "soul".

Quote:
....Then, according to the identity theory, the analysis of pain goes like this:

x is in pain iff x has firing C-fibers
The C-fibers would need to be able to communicate with the next part of the brain that uses that signal... and they'd need a properly functioning system that activates the C-fibers... a firing C-fiber on its own can't communicate a pain signal which is then put to use, in the same way that a propeller on its own can't fly long distances.

Quote:
....It seems possible that the same mental state could be physically realized in many different ways, but this is ruled out by the identity theory. The reason is that the identity theory identifies a mental state with a specific kind of physical state.
Aliens could have specific mental states for pleasure and pain, etc., and humans can have specific mental states for pleasure and pain. These mental states would differ greatly between aliens and humans, but within the species, or at least within the individual, the mental states are specific. Maybe I don't understand identity theory, though.

Quote:
(P1) Mary knows all of the physical facts about the experience of red.
(P2) If Mary knows all of the physical facts about the experience of red, and if what it is like to experience red is a physical fact, then Mary would know what it is like to experience red.
(P3) Mary doesn’t know what it is like to experience red (since she would learn something new upon her release from the cave and seeing a red apple for the first time).
(C1) So, there is something about the experience of red that is non-physical.
(P4) But if the identity theory is true, then having the experience of red would be a physical state.
(C2) So, the identity theory is false.
[Technically, white light is made up of many colours of the spectrum, usually including red. - unless you mixed pure magenta and pure green light or something - but magenta would still be detected by our red sensitive "cones".]
She would have been taught that the brightness and darkness levels of her sight could become "coloured"... perhaps they could use music or spoken accents as an analogy... so her vision is affected while the brightness/darkness and shapes she sees remain the same. But she doesn't know how exactly it would change. She could be told that the apple would be red and the stop sign would have the same colour, and the leaves and grass would have another colour.
When she finally sees distinct colours, different signals would travel down into her brain... she wouldn't know in advance what those signals would be like.

Quote:
Mary just wouldn’t know what it is like to experience red, even though she knows all the physical facts about how the brain works. So, the identity theory is inadequate since it leaves out the “raw feel” of what it is like to be in a mental state.
Perhaps it is like this:
Let's say red signals going into a person's brain to be processed are in a form like "493019201223113432".
Our brains have about 100 billion neurons, each connected to about 10,000 others, so complex data would be passing through them. The signal I thought up is so long because it also describes the data type. (We'd have many different data types.)
To a person, that data (which goes through the brain at about 50 cycles a second, I think) is pretty meaningless. But to their brain, it is very useful. The brain would process that signal in many quite complex ways, ways that are probably far too complex to be fully comprehended by our brains. If we had superbrains with heaps of short-term memory we might be able to comprehend the processes going on in regular brains (maybe). With regular brains, the best we could hope to do is to try to comprehend a summarized version of the neural firings, which isn't the full picture.
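The idea that the same digits are opaque to the person but meaningful to the processor can be illustrated with a toy decode step. The encoding here (first two digits as a type tag, the rest as payload) is entirely invented for illustration:

```python
raw_signal = "493019201223113432"  # opaque to the 'person'

# A toy 'brain' that knows the invented format: the first two digits
# tag the data type, the remaining digits are the payload.
DATA_TYPES = {"49": "colour"}

def process(signal: str) -> tuple:
    """Decode a raw signal into (data type, payload)."""
    tag, payload = signal[:2], signal[2:]
    return DATA_TYPES.get(tag, "unknown"), payload

kind, payload = process(raw_signal)
print(kind, payload)  # the same digits, now meaningful to the processor
```

The string is unchanged by decoding; what differs is whether the reader has the machinery that assigns it a role, which is the sense in which the data is "useful to the brain" but "meaningless to the person".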
excreationist is offline  
 
