FRDB Archives

Freethought & Rationalism Archive

The archives are read only.


Old 02-20-2002, 02:03 PM   #171
Contributor
 
Join Date: Jul 2001
Location: Florida
Posts: 15,796

excreationist writes:

Quote:
I see what you're saying... you explained it pretty well there.
But I disagree. Gravity doesn't just affect large objects... it affects ALL objects, including photons! (I don't know if the gravity that photons exert has been measured though)
A "massive" body in this context only means body that has mass. It doesn't refer to size.

But your point is somewhat like what I've been saying. We infer that a photon has gravity because it has mass. But how can you get a bundle of photons together to measure their gravitational force when they travel at the speed of light? In the same way, we infer that material processes have some kind of potential consciousness but only under limited circumstances could we be able to detect it.

Quote:
If consciousness is such a fundamental property of matter, why do brains have to be so organized and complex to be conscious? I mean with gravity, you've just got to have a big object (like the moon) and the effects of gravity are obvious.
In the case of gravity, it is just related to its mass
Yes. I assume consciousness would be related to certain levels of complexity and dynamism rather than to mass. Maybe metabolism would be a necessary condition for measurable consciousness to arise, for example.

Quote:
BTW, I was wondering what you think about a fertilized egg. Is it conscious? If not, then when does it become conscious? I'd like to hear what your views are on this. If a fertilized egg is conscious, what about a sperm cell and an egg cell? If they are both conscious, what about partially developed sperm and egg cells?
As far as I know they are able to respond to some degree to their environment so that would suggest that they have some kind of sentience and so would have at least a low level of consciousness.

Quote:
And once you've got an approximate point when non-developed humans become conscious, what is different between the person before and after they are conscious? Do they have more neurons? Or more knowledge or what?
Off the top of my head I'd say they have more ways to respond to their environment.
boneyard bill is offline  
Old 02-20-2002, 02:24 PM   #172
Contributor
 
Join Date: Jul 2001
Location: Florida
Posts: 15,796

Adrian Selby writes:

Quote:
Now, you've talked about pain and about orange, and how these things aren't reducible to physical processes. My problem here is the way you're thinking about the issue. Orange and c-fiber firing are both describing a physical event in a functioning brain. Neither refers to anything more, for me.
Now we're back to where I thought we started. But isn't the statement, "orange and c-fiber firing are both describing a physical event in a functioning brain. Neither refers to anything more." reducible to "the language describing x is describing the same thing as the language describing y." And isn't this reducible to "x is y." And so are you not simply assuming what you set out to prove? I don't see where the claim "it's nothing but a language problem" is anything but an assertion. How can it be "demonstrated" that it's nothing but a language problem? That is what is necessary to make the case.

Quote:
I am certainly not presenting a position where a physical property has a causal relationship with a non physical one, which is dualism.
I know it is sometimes referred to as a property dualism. But I don't see it as a dualism. I see it as the claim that mind/matter are a single thing. So I call it, rather awkwardly, "mind/matter monism." The current popular term seems to be "naturalistic panpsychism" but I don't like that term either. It sounds too New Agey.
boneyard bill is offline  
Old 02-20-2002, 06:39 PM   #173
Veteran Member
 
Join Date: Aug 2000
Location: Australia
Posts: 4,886

Quote:
Originally posted by boneyard bill:
So you're trying to base a reductive explanation on memory. But your explanation evades the fundamental point. Why, whether through direct experience or through memory, does our observation take the form that it does? We take in the information "orange." Now we recall that information. But why, then, do we recall it as a visual experience?
It is difficult to explain this... but basically I think that we receive a few types of input - some that involve pleasure or pain, and others that just involve pure sensory data. The pleasure/pain information travels along with corresponding sensory data. (e.g. sight/hearing/touch/heat)
In the central parts of our brain, I think we just have lots of different incoming sensory channels and initially it is just information (except for the pleasure/pain signals, which compel us to seek/avoid them)
Then we learn to find patterns in this incoming sensory information. A newborn baby might already have some things, like vision, initialized a little. I think the reason why babies can't see clearly at birth is because they haven't learnt to recognize smaller patterns, only the large ones. But over time, their senses are refined as they learn the large patterns, filter those out, and then learn the subtle small patterns. This is like how people might initially think that all cows (of a certain breed) look the same, but after some refining of how they take in the data, they can appreciate all the subtle variations.
When we see orange or think about orange, we are accessing data directly. We aren't looking at the information and then consciously decoding it. Evolution doesn't have any reason to do that - the data can be decoded in the subconscious. Our conscious region is just given information and rules for how to deal with it.
Then it just follows those rules. It is very inefficient if it looks at the neurons firing and explicitly analyses which neurons are firing. To do that you'd need detectors on each neuron.
Maybe a related example is a calculator. This could represent the subconscious. We give it a problem to solve, and it solves it. We don't have a clue about which transistors are switching on or off or what binary patterns are going into and out of the CPU. We just give it inputs and it gives us an output. We then deal with this in a *direct* way. We have no mechanism to analyse the individual signals that make up the data - we just process the data directly. And as I said, it doesn't make sense for animals to evolve the ability to analyse the individual signals in their brain. And even if they could do this, they would have additional neurons, and what would be analysing that?, etc, to infinity.

Quote:
You've just shifted the problem from our direct observation to our recall of previous observations. Still, it does explain why the experience of orange is nothing but a material process.
Hmmm... so the memory comparison explanation is ok?

Quote:
Consider a very sophisticated modern submarine. It gets hit with a torpedo in the right bow. The computer automatically takes in the information and directs the proper hatches to be closed and undertakes the proper measures to stabilize the craft. We don't need to program the submarine to feel any pain.
I wouldn't call what the computer did "intelligence". It is only a bit more intelligent than a plant - after all, plants can seek sunlight and water. What that submarine does is just sit around (assuming no one is driving it) and then if it is hit it follows some step by step rules. It is just a simple reflex. No self-directed learning or *continuous* seeking of goals is involved. So I would categorize it as a programmed or processing system (not an aware one).

Quote:
The information is all that is necessary.
The thing about what submarines do is that they are supervised and they inform the crew with some kind of signal. I doubt that the submarine would do that action without informing the crew internally in some way. (Like print a message on a computer screen)
You'd probably say that you just need information, not pain. But mere information and goals without *the compulsion* to pursue those goals are meaningless. I mean, say there's a computer and you've programmed in some different options that it can do. Then you just sit back and wait. The computer isn't going to do any of it unless it is forced or made to do it. And I'm saying that "we" are given a list of options that are determined to be things we want to seek/repeat or avoid, and then we determine the optimal selection (or option) and force the rest of the brain to do this.
So we're kind of like a computer because the neurons in our brain don't do anything unless they are forced to.
What if we were a toddler and we felt a sensation whose function was to inform us that our hand was on fire? The toddler would just think "that's a new sensation I haven't felt before - that's interesting"... "my hand feels nice and warm... look at that fire! Wow!" What reason would they have to stop the fire? The best thing would be for them to be forced to try and stop the perceived cause of that pain signal (the fire) IMMEDIATELY!
Otherwise the toddler might see their hand getting really black, remember that their parents don't like them getting dirty, and try to wipe the blackness off, but it wouldn't come off. So they'd go to try and wash it off, and then the fire would go out. And the toddler would probably be disappointed because the blackness is still there but that fun fire and the nice feeling of warmth are gone.

Quote:
The same should be true of humans.
What do you think about that toddler and the fire example? It is fairly obvious that animals that don't feel compelled to IMMEDIATELY avoid intense pain signals would be much more likely to die and so not reproduce much.

Quote:
But in any case, none of this leads to a reductive explanation. It just addresses the function of consciousness. That is an important question also, but not the one we're addressing.
I think a motivational system is required for consciousness, so it is part of a reductive explanation. I don't think it is just a "function" of consciousness. That's like saying that a spinning motor is just a "function" of a spinning electric fan.

Quote:
Yes. Sometimes we get information in the form of pain. But why do we get it in this form? We don't need to get information this way.
What about the toddler and the fire example? And the shark example?

Quote:
The materialist can explain how we would be able to function by just getting the information but without feeling the pain. Still, this relates to the function of consciousness and not to a reductive explanation.
Well you quoted the shark example... except for the end part... I was saying that if and only if you felt the compulsion to avoid a certain intense signal (pain) would you escape fairly unhurt from the shark. Otherwise you'd get a lot more eaten. If materialists can explain how you'd just get the information without feeling the compulsion to avoid this signal, then why don't you explain this? I think you're just making an assertion. In the case of the toddler, the toddler doesn't know that that sensation must be immediately avoided. To them it is just an unfamiliar sensation. Explain why a toddler would try and stop the fire immediately without feeling the compulsion to immediately stop the signal. Otherwise you're just making unsupported assertions.

Quote:
But that's not what reductive means in this context. It's not about reducing to parts; it's about reducing the apparently immaterial to the material. It's about showing that consciousness is nothing but some material process.
I'm saying that it involves several interdependent processes, not a single process.

Quote:
You're not going to be able to come up with a material explanation by using mental terms like "desirable" and "undesirable."
What about things that "seem like a good idea" or "seem like a bad idea"? The thing is, if I never tie my explanations into mental terms, you mightn't think it has much relevance to consciousness. Anyway, I think "desirable" and the compulsion to repeat or seek certain situations are synonyms. "We" experience this compulsion and can label it a "desire". And I think that when we feel compelled to avoid something we can label that thing "undesirable".
Other synonyms for desire are "dream" or "quest" or "love" - but I guess it is only appropriate to apply that to humans because their consciousness is much richer than the minimalistic aware system I am talking about.

Quote:
A "massive" body in this context only means body that has mass. It doesn't refer to size.
Well I've been reading about the mass of photons on different sites and actually it looks like they don't have any mass at all! I was mixed up with electrons - electrons have a tiny mass. (much less than protons or neutrons) From a <a href="http://www2b.abc.net.au/science/k2/stn/archive2000/posts/March/topic43857.shtm" target="_blank">science forum</a> - "Even massless particles such as the photon can gravitate, because they have energy and momentum." I don't understand why though...
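The forum quote can be unpacked a little with two standard relations from relativity (standard physics, not anything posted in the thread): the energy-momentum relation shows how a particle can have zero rest mass yet still carry energy and momentum, and in general relativity it is energy and momentum, not rest mass alone, that act as the source of gravity.

```latex
% Relativistic energy-momentum relation:
E^2 = (pc)^2 + (m c^2)^2
% For a photon the rest mass m = 0, so it still carries energy:
E = pc
% In general relativity, curvature (gravity) is sourced by the
% stress-energy tensor, which contains energy and momentum:
G_{\mu\nu} = \frac{8\pi G}{c^4} T_{\mu\nu}
```

So a massless photon still contributes to the stress-energy tensor through its energy and momentum, which is why it both gravitates and is deflected by gravity.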

Quote:
But your point is somewhat like what I've been saying. We infer that a photon has gravity because it has mass. But how can you get a bundle of photons together to measure their gravitational force when they travel at the speed of light?
Well they gravitate towards masses - I'm not sure if they exert gravity then... but the point is, almost every other particle (maybe not photons) would exert gravity. (including individual electrons)

Quote:
In the same way, we infer that material processes have some kind of potential consciousness but only under limited circumstances could we be able to detect it.
Well the attraction between small lead balls can be measured and probably even microscopic ones as well. It would be hard to detect "proto-consciousness" though.

Quote:
Yes. I assume consciousness would be related to certain levels of complexity and dynamism rather than to mass. Maybe metabolism would be a necessary condition for measurable consciousness to arise, for example....
As far as I know they are able to respond to some degree to their environment so that would suggest that they have some kind of sentience and so would have at least a low level of consciousness.
...Off the top of my head I'd say they have more ways to respond to their environment.
....
ok... BTW, why do you think we (and other animals) evolved the ability to feel pain? You said that there is no reason for pain. You can conceive of a conscious person acting in the same way without having to feel pain. So why feel pain? Is it just an inherent side-effect of consciousness? Does consciousness require any positive or negative feelings? Can we just do things not because they feel good or bad, but because we just rationally or arbitrarily decided to do them? If there are no good or bad emotions, I don't see why a person should feel obligated to do anything. They might as well stand motionless for hours until they collapse from exhaustion onto the floor. I mean they wouldn't feel a negative emotion associated with tiredness or feel any positive emotion if they explored their environment. They are just content to do nothing.
excreationist is offline  
Old 02-21-2002, 01:08 AM   #174
Veteran Member
 
Join Date: Dec 2001
Location: Darwin
Posts: 1,466

Quote:
Message originally posted by Adrian Selby

"But the consciousness itself cannot be explained as nothing but matter. The consciousness itself is not material and cannot be explained as a material process simpliciter"
Most of the matter has nothing to do with consciousness, as it consists of quarks. Quarks are pretty dumb and bound by gluons in the strong nuclear force. It is the electromagnetic interactions between atoms that are the true seat of consciousness, memories, and a sense of self. Without those complex chemical interactions, all you will have is just pure weight.
Quote:
I don't see why not, I've tried to say it is simply a property of certain arrangements of matter, more precisely, arrangements that show certain activity. That arrangement, such as a living brain, is conscious. It is material. You'll have to expand more because I'm obviously missing something. Insofar as the above is an explanation of some sort for why consciousness is nothing but matter, because when the arrangements aren't there the consciousness isn't there, I guess I need you to make more clear why this is insufficient.
"A rock may not have consciousness, but the material in the rock has the potential to create consciousness when it is arranged in the proper configurations. That what I mean by "proto-consciousness.""
Yes, I have stressed that frequently. It is the configuration of matter and not the matter itself that is conscious. I can demonstrate this by randomly shuffling the bits on a bitmap, which just leaves a gray screen on your monitor; but in a carefully planned arrangement I can turn those same bits into the image of an "eagle" without adding a single proton of weight to your computer monitor. So it is the configuration of bits and not the bits themselves that created the eagle.
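The bitmap demonstration can be sketched in a few lines of Python (a toy illustration under my own assumptions, not anything from the thread): shuffling leaves exactly the same "matter" in place, but destroys the arrangement that made it an image.

```python
import random

random.seed(0)  # reproducible toy run

# Stand-in for the bits of a bitmap (hypothetical data).
image = list(b"EAGLE BITMAP DATA")

# "Randomly shuffle the bits": same material, new configuration.
scrambled = image.copy()
random.shuffle(scrambled)

# The two hold identical matter - the same bytes, the same "weight"...
print(sorted(scrambled) == sorted(image))  # True

# ...but only the original arrangement encodes the eagle; the shuffled
# copy is just noise - the "gray screen". The information lived in the
# configuration, not in the bytes themselves.
```

Nothing is added or removed by the shuffle; only the ordering changes, which is the whole point of the analogy.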
Quote:
Are you saying that proto consciousness means, among other things, a rock can be conscious when its configurations are completely altered, only it strikes me that the rock will stop being a rock, given the amount of change that would need to be undergone before it started looking like, well, a brain. If I define a rock as just this arrangement of molecules of certain elements, then if I have to change that in order to replicate systems that are conscious am I not unmaking the rock. I would not then say a rock could be conscious if my definition of a rock related to its molecular structure, if I also agree that it is only certain molecular structures of certain elements and compounds that display signs of consciousness. With that definition the rock would no longer be a rock, and could not therefore be said to have the potential for consciousness, though the matter, if changed radically, could. I don't know enough about chemistry to make assertions about the chemical composition of rocks, and whether we're talking about having to re-arrange atoms, or whether it is simply impossible to make a sliver of granite into a neuron.
Adrian
First, get one infinite-zoom microscope onto that sliver of granite and another one onto a neuron.
If you zoom straight into any part of that neuron, right into the tip of one of its dendrites, at first you encounter a complex ensemble of neurotransmitters and proteins (clearly different from that sliver of granite); zooming further, you may encounter some phosphates. Up to this point you could compare the images from the neuron to the images of the sliver of granite and clearly tell which one was the organic neuron and which one was the inorganic sliver of granite.
Then zoom right down to single phosphorus atoms, in isolation not remarkably different from any other phosphorus atom of the same atomic mass inside the sliver of granite; further still you reach the nucleus - protons, neutrons, then quarks identical to the quarks in that sliver of granite. So the more we zoom in, the more homogeneous the world becomes. I am confident that if and when it is discovered what causes consciousness, it will be reduced down to one single unified principle, just as all our baryonic matter was reduced down to just quarks.

crocodile deathroll
crocodile deathroll is offline  
Old 02-21-2002, 10:57 PM   #175
Senior Member
 
Join Date: Jan 2002
Location: Farnham, UK
Posts: 859

"And so are you not simply assuming what you set out to prove? I don't see where the claim "it's nothing but a language problem" is anything but an assertion"--BB

Well, all I can prove with regard to what exists are physical things.

I have never come across proof of a non-physical thing, indeed I can't see something non-visible by definition, ditto with regard to my other senses.

So there appears to be a conundrum with regard to the experience of orange and lots of neurons firing.

They seem to be different things.

If I can define the universe in purely physical terms, or if I believe that all outstanding questions about existence will be resolved by explanations relating to a purely physical objective reality (since the existence of non-physical things that can be imagined physically seems less an evidential problem and more a logical one), then I must try to find an explanation for the apparent difference.

My explanation is that orange and neurons firing might seem different, but that's because, for want of a better expression, the neurons firing are my neurons, and so having those neurons firing within a complex brain gives that brain something it characterises as an experience of orange. And I extrapolate that the problem comes about because of vocabularies: what it is to experience from within that which can only be described in different ways from without. Me experiencing 'orange', for me, is the same as me experiencing 'those neurons firing'. I am not experiencing one thing and not the other; I am experiencing something that can be defined in either way.

The problem therefore evaporates, and it saves me the trouble of developing all kinds of tricky causal relationships with non physical properties of brains.

I would also add that I don't see that your claim that there are causal relationships with non physical things is anything but an assertion. You say it represents the best take on the evidence, yet you only have evidence for physical things, and make an assertion based on the assumption that because the experience of orange must be different to neurons, reductive materialism must be wrong. Sorry if this precis misrepresents in any way. I'd be interested to see how my assertions stray from 'the evidence' further than yours, given they're trying to show that different ways of talking are responsible for referring to purely physical things.

Adrian
Adrian Selby is offline  
Old 02-22-2002, 01:23 AM   #176
Veteran Member
 
Join Date: Aug 2000
Location: Australia
Posts: 4,886

boneyard bill:

I'd like to go over what you said on <a href="http://iidb.org/cgi-bin/ultimatebb.cgi?ubb=get_topic&f=56&t=000032&p=6" target="_blank">page 6</a> again...
Quote:
I know they aren't the only alternatives. So with panpsychism, is it possible for video cameras or rocks to have human-level consciousness? And how exactly (in your opinion) is panpsychism different from epiphenomenalism?

I hardly think that video cameras or rocks have "human-level" consciousness. But at a minimum you have to say that matter and material processes have the potential to produce consciousness. So we could say they possess "proto-consciousness" or something like that. I have compared it to gravity. Our theory says that a flea must exert a gravitational force. But, of course, such a point is a trifling part of any reasonable definition of a flea.
You forgot to answer that question... it seems that you believe in a kind of epiphenomenalism - there is the physical world, with its straightforward physical laws, and the realm of awareness (like gravity), except that maybe awareness doesn't have any effect on things physically - it just passively "observes". Or do you think awareness can interact with the world? If so, do you believe that this allows aware things to have free will? i.e. their decisions aren't at the mercy of ordinary physical laws?

*

And more about the feeling of pain... here's another example involving a toddler. Let's say that the toddler doesn't feel repulsion from bitter tastes (just a reflex) and it doesn't get pains from hunger. It gets sensations though.

When it was young, its mother would feed it quite regularly. And let's say this is in the jungle - since we evolved by a "survival of the fittest" process. Now say the toddler is separated from the mother somehow. Maybe she just got sick or there was some tribal warfare and she hid the toddler and the adults got massacred and no-one found the toddler.

So the toddler wouldn't have anyone to put food in its mouth. It would eventually feel some sensations which it gets before the mother feeds it (a hunger signal). So the toddler might think that it has to get something and put it in its mouth. It might get some dirt (since that can fit in its mouth). It may have a spitting out reflex and the toddler keeps on automatically spitting the food out. It might keep on trying, but the dirt keeps coming out. If the toddler kept its hands over its mouth it could override the reflex. So now it has a way to deal with the hunger signal - just find something (maybe leaves even) and put it in its mouth (and chew and swallow). After a while the toddler would experience a fairly unfamiliar sensation - the sensation of feeling sick because of the bad food. It wouldn't be displeasurable (mildly painful) though. It would just be an unfamiliar sensation, like hearing a new sound or seeing a new colour. The toddler wouldn't know what to do so would probably ignore it (since it has no reason to repeat or avoid the experience).
So the toddler would just stick to its regular routine of putting some dirt or leaves in its mouth, chewing and swallowing when it gets the hunger signal (since it had become a habit). After a while it might throw up. This wouldn't feel displeasurable since it can't feel any kind of pain or discomfort. It might just use the vomit (and also faeces) as food. It would work out after a while that water doesn't make the toddler automatically spit it out, so it would probably just drink lots of water instead of bothering to forcefeed itself. So it would fill up with water. And die after a few weeks due to malnutrition.

*

Just another thing about good and bad. (I'll just call them that)

For a system to be able to learn new behaviours and decide which behaviours are the best, there would have to be a rating of how "good" they are.

So with a young kid, he might touch some hot water and what happens is not only does the "pain signal" tell the brain that this situation *must* be avoided (assuming the intensity of the pain signal outweighs conflicting goals) - it also means to avoid this situation *in the future*.

Likewise, the "pleasure signal" doesn't just compel the brain to continue an experience (for as long as an intense enough pleasure signal remains) - it also makes the brain seek or repeat this situation *in the future*.

So in the case of a kid and the pain signal from the hot water, the kid's brain would try to avoid that situation in the future. By his age, he would have a basic understanding of cause and effect - so the kid would guess (with say 99% certainty) that his turning of the tap caused the signal - which his brain is forced to avoid. So therefore the tap should also not be turned. If he has had a lot of experience with taps (he probably would have), he'd guess that the amount of water coming out depends on how much you turn it, and hot water becomes colder if the cold tap is turned on as well. (Perhaps his parents would need to teach him)

Without information about what's "good" and "bad", the system would have no way of deciding what to do. It would only be able to learn "facts", but not learn behaviours and be able to operate in the world - it would be a passive observer - or it would have preprogrammed behaviour (not learn behaviour).

That's what makes humans (and some other animals) special - we can learn new behaviours - according to whether it is beneficial to us in some way (including to satisfy our "newness" or "connectedness" cravings) - like how to build cars and how to be productive in life without being a robot, etc.
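The good/bad rating scheme described above, where a pain or pleasure signal adjusts how a behaviour will be chosen *in the future*, is in modern terms close to reinforcement learning. A minimal sketch in Python (action names, rates, and reward values are all illustrative assumptions, not anything from the thread):

```python
import random

random.seed(0)  # reproducible toy run

class Learner:
    """Rates behaviours by accumulated "good/bad" (pleasure/pain) signals."""

    def __init__(self, actions, learning_rate=0.5, explore=0.1):
        self.values = {a: 0.0 for a in actions}  # learned goodness ratings
        self.lr = learning_rate
        self.explore = explore

    def choose(self):
        # Mostly pick the behaviour currently rated best; occasionally explore.
        if random.random() < self.explore:
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)

    def feedback(self, action, signal):
        # signal > 0 is a "pleasure" signal, signal < 0 a "pain" signal.
        # The rating moves toward the signal, so the behaviour is sought
        # or avoided *in the future*, not just in the moment.
        self.values[action] += self.lr * (signal - self.values[action])

# Toy run: touching the hot tap is "painful", playing is mildly "pleasant".
learner = Learner(["touch_hot_tap", "play_with_toy"])
for _ in range(20):
    action = learner.choose()
    signal = -1.0 if action == "touch_hot_tap" else 0.5
    learner.feedback(action, signal)

print(learner.values["touch_hot_tap"] < learner.values["play_with_toy"])  # True
```

Without the signal, the loop has no basis for preferring one behaviour over another, which mirrors the point that information alone, with no compulsion attached, cannot drive learned behaviour.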
excreationist is offline  
Old 02-22-2002, 01:30 AM   #177
Regular Member
 
Join Date: Mar 2001
Location: Winnipeg, Manitoba, Canada
Posts: 374

This whole thing sounds like an argument from incredulity to me.

Quote material from boneyard bill:

Quote:
Again, it's not a question of proving that rocks are conscious. What I said was that we might think of rocks as being conscious in the sense that we claim a flea exerts a gravitational force.
This seems misguided. The difference is that fleas do theoretically exert gravitational force, because the only property required for something to be subject to gravity is mass. The requirements for consciousness would seem much more complex. No, we cannot prove that rocks aren't conscious, nor can we reasonably believe that all birds are space alien spies incognito.

Quote:
But our theory of gravity posits that a flea exerts a tiny gravitational force. Likewise the material in a rock would have the potential to be conscious and that potential is inherent in matter. That is a necessary inference if matter causes consciousness but cannot be explained simply as a physical event and nothing else.
Are you implying that gravity cannot be considered a physical event, or that it cannot be explained in terms of material process alone?

If so, why can't it? Because of some age old 'materialist' convention that objects cannot exert force from a distance? Seems like a strawman to me.

Quote:
but the material in the rock has the potential to create consciousness when it is arranged in the proper configurations. That's what I mean by "proto-consciousness."
First of all, are you sure about this? Do you have any evidence that the material found in rock is capable of producing a brain (this being the only thing we know of that is conscious)?

Second, assuming it does, so what? The material in a rock has the potential to create a lot of things, including statues. Are statues an "inherent quality of matter"? This all sounds like nonsense to me.

Quote:
Since I cannot reduce the consciousness to material processes, I accept the claim that consciousness will arise as a fundamental axiom of my system. Since it is a fundamental axiom, I don't need to prove it. That's why I don't have a burden of proof.
Umm.. huh?? Since your opinion is correct, consciousness is an inherent property of matter. Since it is an inherent property of matter, you don't need to prove its existence.

Would this be an accurate summation?

Quote:
I see no basis for claiming that I wouldn't feel pain, but the materialist explanation accounts for everything except the pain. That is the point. What a materialist cannot provide is a reductive explanation for the pain.
You've been given what seems to be a perfectly valid materialistic description of pain. Why do you feel that it is inadequate?

Is your argument based on the fact that it seems that materialism cannot account for concepts?

Quote:
If humans are made up of matter and material processes, and humans have memory, then isn't it reasonable to conclude that matter and material processes are capable of having memory?"
Sure. But I have a feeling we're talking about a loaded definition of memory. All matter has memory in the sense that it is affected by interactions with other matter. If I strike a stone, the stone may "remember" that strike in the form of a blemish. Human memory does not seem to be fundamentally different, except that it's far more complex.

Quote:
The point is that consciousness is an inherent quality of matter.
Is a door an inherent quality of matter? Is Quake 2 an inherent quality of matter? Is a beehive hairdo an inherent quality of matter?

Quote:
It's not mysterious but it isn't a reductive explanation either. The color orange remains an immaterial phenomenon. A reductive explanation would show that the color orange is a material process.
It is not an immaterial phenomenon. It exists in your brain. I think that theoretically, we could remove the very parts of the brain that contain your memories and understanding of "orange". Do you disagree? Does this not seem to imply that materialism succeeds in this particular case?

Quote:

Early materialists wanted to explain gravity as matter. They sought a reductive explanation in terms of some process like a vortex or a vacuum. They rejected Newton's explanation because they claimed that "action at a distance" wasn't possible. But they finally had to accept the law of gravity as a fundamental axiom. It cannot be reduced to anything more material.
Gravity is as much a "materialistic" physical law as anything else. Is "heat" a fundamental axiom of matter, in opposition to materialism? What about apparent rules that matter seems to follow at the atomic level?

crocodile:
Quote:
True to a point, but what about phantom limb pains? The nerve fibers in the case of an amputated foot have quite obviously been cut, and the pain in that "ex-foot" is not a physical event if that foot does not exist, or it may instead be just overridden by a kind of pain in the brain.
Pain occurs in the brain. It seems reasonable that the brain might sometimes interpret lack of response from nerve endings (which have been damaged or removed) as pain. Or maybe memory of the event which caused the amputation is strong enough to induce this sensation.

bill:
Quote:
I don't see where the claim "it's nothing but a language problem" is anything but an assertion. How can it be "demonstrated" that it's nothing but a language problem? That is what is necessary to make the case.
Would it help if instead of calling it orange, we called it "the firing of fiber-X caused by specific light waves passing through our eyes"? If those who invented the word orange knew all there is to know about everything, is this not probably what they would call it (except for the fact that it's damn long to say)? You are suggesting that the color orange and my explanation are somehow different, and this distinction is 100% unsupported. They are utterly indistinguishable, and you haven't even attempted to distinguish between the two, beyond stating that they're inherently different.

Boneyard Bill, is there something inherently different between optically transmitting information from a disc in your DVD player to your television screen and playing a movie? Is this not simply a linguistic distinction?

devilnaut
Devilnaut is offline  
Old 02-22-2002, 03:17 AM   #178
Moderator - Science Discussions
 
Join Date: Feb 2001
Location: Providence, RI, USA
Posts: 9,908
Post

boneyard bill:
Again, it's not a question of proving that rocks are conscious. What I said was that we might think of rocks as being conscious in the sense that we claim a flea exerts a gravitational force.


Devilnaut:
This seems misguided. The difference is that fleas do theoretically exert gravitational force, because the only property required for something to be subject to gravity is mass. The requirements for consciousness would seem much more complex. No, we cannot prove that rocks aren't conscious, nor can we reasonably believe that all birds are space alien spies incognito.

The "hard AI" position would tend to suggest that it's not the physical properties of our brains that make us conscious, but just the causal structure they instantiate. Instantiate the same causal structure on a very different type of system--a computer, an abacus, an enormous network of billiard balls, whatever--and the system would have the same sort of consciousness.

If human-like consciousness emerges whenever a certain type of causal structure emerges, where do you draw the line? Obviously different types of causal structures might not experience human-like consciousness, but then most of us accept that animals have their own kind of consciousness, even if it's less "complex" than ours in some sense. But if you believe that consciousness is an "objective" property of certain systems (in the sense that it's not just a matter of outsiders' opinions, like "cuteness"--I think I would still be conscious even if no one around me believed I was), and you also believe that certain causal structures are entirely lacking in consciousness, then there'd have to be some sort of strict cutoff point between systems that are conscious and systems that are not. I suppose it's possible that such a strict cutoff exists, but it seems rather inelegant and strange. 1003 interacting neurons might be conscious while 1002 are not? A computation involving more than 156 steps leads to consciousness but a computation with fewer steps does not? I find it hard to believe that reality would be set up that way.

Most materialists will probably just take the other route and say consciousness isn't really objective in the sense I outlined above, that it is something like "cuteness" where it's all in the eye of the beholder. No one would suggest that since some objects are "cute" and others are not, there must be a strict cutoff point, but that's just because we recognize that in the gray areas different people can have different opinions about the level of "cuteness" and there's no real objective truth about who's right and who's wrong.

But, like I said, I can't imagine consciousness is like that. Are fish conscious? I don't know, but the truth about whether they are or not doesn't depend on our opinions, it only depends on whether the fish itself is having some kind of inner experience. There must be some sort of objective truth about the matter, even if none of the rest of us can ever know for sure. And once you agree to that, then you must agree that every system/causal structure either is or isn't experiencing some sort of (possibly quite limited) consciousness...again, to me it seems implausible that consciousness would just pop into existence at some sharp limit of complexity.

[ February 22, 2002: Message edited by: Jesse ]
Jesse is offline  
Old 02-22-2002, 05:24 AM   #179
Veteran Member
 
Join Date: Aug 2001
Location: Los Angeles
Posts: 1,427
Post

Jesse:

To add to your point, I think there is a spectrum of consciousness within humans alone. The consciousness we experience as an infant, then as a child, is different from what we experience as an adult. The full self-aware machinery ("I am aware of being aware of being aware of...") probably doesn't kick in until somewhere around age 10, or maybe even later, I should think. So even within our own species, it's doubtful whether you can draw a sharp line -- you have a spectrum from an unconscious zygote to a fully conscious adult human. (And of course an adult human can experience varying degrees of consciousness depending on his situation.)

I would guess that the experience of consciousness for, say, a chimpanzee, might be comparable to that of, say, a 2-year-old human child, though of course there is no way to confirm such a conjecture at the moment.

[ February 22, 2002: Message edited by: IesusDomini ]
bluefugue is offline  
Old 02-22-2002, 03:22 PM   #180
Regular Member
 
Join Date: Mar 2001
Location: Winnipeg, Manitoba, Canada
Posts: 374
Post

Quote:
The "hard AI" position would tend to suggest that it's not the physical properties of our brains that make us conscious, but just the causal structure they instantiate. Instantiate the same causal structure on a very different type of system--a computer, an abacus, an enormous network of billiard balls, whatever--and the system would have the same sort of consciousness.
I suppose..

Quote:
If human-like consciousness emerges whenever a certain type of causal structure emerges, where do you draw the line?
It depends on your definition of consciousness, I suppose. Ultimately the cut-off will be somewhat arbitrary, but I fail to see how this is a problem for materialism.

Quote:
to me it seems implausible that consciousness would just pop into existence at some sharp limit of complexity.
It seems that sentences like this and in the previous paragraph (quoted below) betray a fundamentally immaterialistic conception of consciousness in the first place. If you realize that the 'complexity of the consciousness' will be a function of the complexity of the system itself, I don't really see how this is such an issue. I suppose you could say that fish are conscious, but they don't simply have some magical property of consciousness that humans also possess. They would have a very limited form of "consciousness" directly related to the complexity of their systems.


Quote:
if you also believe that certain causal structures are entirely lacking in consciousness, then there'd have to be some sort of strict cutoff point between systems that are conscious and systems that are not. I suppose it's possible that such a strict cutoff exists, but it seems rather inelegant and strange. 1003 interacting neurons might be conscious while 1002 are not? A computation involving more than 156 steps leads to consciousness but a computation with fewer steps does not? I find it hard to believe that reality would be set up that way.
The problem here I think is your conception of consciousness. I don't believe that it is something that suddenly pops into existence once a system becomes sufficiently complex. "Conscious" should be what we call a sufficiently complex system. It is not a separate property that matter strives to invoke, it is simply a label for a sufficiently complex system.

To me, this is not fundamentally different from debating what makes a hotdog a hotdog. Is there some point at which a configuration of buns and a wiener becomes a hotdog? Does a strict cutoff exist? Does the property of "hotdogness" pop into existence once the buns and wiener are arranged accordingly? Does matter possess hotdogness as an inherent quality? Can we think of rocks as hotdogs in the same way that we claim that a flea exerts gravitational force?

devilnaut

edit to add:

By sufficiently complex, I mean sufficient to produce whatever properties we normally associate with consciousness.

[ February 23, 2002: Message edited by: Devilnaut ]
Devilnaut is offline  
 
