FRDB Archives

Freethought & Rationalism Archive

The archives are read only.


Old 10-31-2001, 12:16 AM   #11
excreationist
Veteran Member
 
Join Date: Aug 2000
Location: Australia
Posts: 4,886
Post

Quote:
Originally posted by tronvillain:
JohnClay, you seem to be taking the position that anything capable of appearing conscious actually is conscious. Is that right?
Well it would at least have an awareness of its senses and internal beliefs/desires. Consciousness usually means humans who have a voice in their head helping them reason about things. (So babies aren't conscious, according to that rough definition.)
In other threads I talked about how you could work out if they have a voice in their head. One idea is that you could teach it a song, then play it on a CD player while they sing along in their head. Then you put headphones on them so that they can no longer hear the music, but you still can - and they continue to sing along to it in their head. Then, after a random amount of time (e.g. 47.3 seconds), they sing out loud, and you see how accurate they were. (You could be tapping the beat to help them out.)
If they can do that successfully, the simplest explanation is that they have a voice in their head.
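Here's a rough sketch of how that test could be scored - the tempo, delay and resume point are all made-up numbers for illustration, not part of any real protocol:

Code:
# Score the "silent sing-along" test: how far off is the subject when
# they resume singing aloud, compared to the still-playing track?

def expected_beat(tempo_bpm, elapsed_seconds):
    # Beats the track has advanced while the subject sang along silently.
    return tempo_bpm / 60.0 * elapsed_seconds

def timing_error(tempo_bpm, elapsed_seconds, resumed_at_beat):
    # Absolute error, in beats, between the subject and the track.
    return abs(resumed_at_beat - expected_beat(tempo_bpm, elapsed_seconds))

# A 120 BPM song with headphones on for 47.3 seconds: the track is at
# beat 94.6, so a subject who resumes at beat 93 is ~1.6 beats off.
print(timing_error(120.0, 47.3, 93.0))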

Quote:
I used to be a Turing Test kind of guy myself, but I've decided that it seems entirely possible that something could appear conscious without actually being so. To decide either way requires an understanding of consciousness we don't have.
Well the thing about the Turing Test is that it usually involves a short testing period and only average people doing the testing. My testing would be much more intense - it could last for months or years.
Also, human-level consciousness is about the things in the movie A.I. - where it has big dreams - so it is somewhat idealistic. And it is autonomous and somewhat self-centered. And it can actively learn things. So you could talk about an unfamiliar subject and explain it to them, and they might ask you some insightful questions about it. The zombie should also show signs of being bored by things and tell you about ideas or thoughts it had. And it must have had those ideas or thoughts (or its puppet-master did, or it's just speaking randomly) - even if it doesn't have a voice in its head, it still came up with the ideas. And if it says that it thought about those ideas (and shows evidence of this) then it must have, on some level.

So I'm really talking about "awareness", which is just about detecting, recognising, responding to and learning about things. Even a computer can do that - well, computers aren't very sophisticated at responding to things, nor are they good at learning unfamiliar subjects (e.g. philosophy) autonomously.
excreationist is offline  
Old 10-31-2001, 12:18 AM   #12
hedonologist
Banned
 
Join Date: Jul 2001
Location: South CA
Posts: 222
Post

Quote:
Originally posted by NialScorva
Well, under materialism, if the zombie is like a human in every way except lacking qualia (such as pain and pleasure given in this example), then there is no discernible difference between the two.
There is no “discernible” difference unless you are the person who knows you feel. This is a question for the materialist: “How do you know you feel?”
Quote:
Originally posted by NialScorva
This example is about an attribute that cannot be distinguished. It's like saying that I have a bin of rubber balls, all exactly the same, only some are foo. Foo means that they look, act, and are perceived like rubber, but are not rubber. Your job is to tell them apart. In this regard, the quest looks ridiculous. There's no way for you to know the difference, and I just made up the word to create a difference that cannot be perceived. The zombie problem is similar: it *defines* an adjective that cannot be differentiated from its options, and as such is meaningless.
Foo differs from the zombie in that there is no way to tell whether Foo is rubber. Is the same true of zombies, in the following sense: does the materialist have no way of knowing whether or not they feel? It sounds like you’re saying a materialist doesn’t or can’t distinguish themselves from an unfeeling zombie.
hedonologist is offline  
Old 10-31-2001, 12:39 AM   #13
hedonologist
Banned
 
Join Date: Jul 2001
Location: South CA
Posts: 222
Post

Quote:
Originally posted by JohnClay
Well I think it is impossible for zombies to feel nothing. They mightn't feel the sensations we feel in exactly the same way, but some part of the system must be sensing the external world and its internal states, otherwise it would be incapable of responding appropriately.
You’re referring to the zombie as “they” when it is only an “it” in the hypothetical. You’re assuming that because you react based on subjective experience, an organism would have to have subjective experience to react in a similar way. How could you prove such a thing?
Quote:
Originally posted by JohnClay
Either it does it by chance (though it's unlikely that would continue for long) or it has a supernatural puppet-master - but here the puppet-master is its awareness, because for the puppet-master to figure out the appropriate responses, it needs to be aware of the zombie's senses and internal states.
Consider the Chinese Room hypothetical but with the question being, “How do you know if the AI really feels?” Do snails feel?

You would have to know an NCC to answer this, or what I would call a “physical correlate of a being who feels”.
Quote:
Originally posted by JohnClay
It would be amazing if such a zombie could even walk because balancing on two legs requires you to know a lot about the terrain and how level you are and the positions of your legs.
Well, say it has all that info stored in its brain, but unlike you, it has no desire; it is just programmed to react like it does. There are computers which can do things like walking, and many things which humans can’t do. I’ve seen tests which show a computer program that can swing a pole on a hinge and balance it upright in about the time it takes you to read this.
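For illustration, here’s a minimal sketch of that kind of pole-balancing program - a simple proportional-derivative controller. The physics constants and gains are invented for the example, not taken from any actual test:

Code:
import math

# Minimal inverted-pendulum balancer: a PD controller pushes back
# against the tilt. All constants here are illustrative assumptions.
g, length, dt = 9.81, 1.0, 0.01   # gravity, pole length (m), timestep (s)
kp, kd = 40.0, 8.0                # hand-tuned controller gains

theta, omega = 0.2, 0.0           # initial tilt (rad), angular velocity

for _ in range(500):              # simulate 5 seconds
    control = -kp * theta - kd * omega           # corrective acceleration
    alpha = (g / length) * math.sin(theta) + control
    omega += alpha * dt
    theta += omega * dt

print(f"tilt after 5 s: {theta:.6f} rad")        # settles near 0 (upright)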

It seems like you are saying that free will, in the form of will that is free of determinism, is necessary for things like walking, or whatever behavior you would choose. And that being conscious allows one to be free of determinism, and thus to do things which a deterministic or randomizing machine could not do.

Imagine an android who we omnisciently know is an aware and feeling being, who is discussing with his android friend whether humans are really conscious. He proposes a test of adding a million numbers in a minute. He thinks this is what would require “consciousness”. Of course a human can’t do this so he concludes that the human is not conscious. That is like your walking experiment.
Quote:
Originally posted by JohnClay
If it had no sense of the world then it would avoid the obstacle purely by chance.
It “senses” the world like a robot; it just doesn’t feel or desire. There is no being inside looking out of the zombie; there is only an organic machine that anyone could be “aware” of only from the outside.
Quote:
Originally posted by JohnClay
So I guess true zombies are possible, just highly unlikely.
Well, how about a plant? They sort of react - reaching for the sun, etc. Is it possible they are “zombies”?

What if they can make computers such that, if you had an online “relationship” with one (such as in chatrooms) and you were trying to figure out whether it was a computer, you would not be able to tell the difference between it and a human? Would this prove this “AI” was a feeling being?
hedonologist is offline  
Old 10-31-2001, 12:41 AM   #14
hedonologist
Banned
 
Join Date: Jul 2001
Location: South CA
Posts: 222
Post

Quote:
Originally posted by Synaesthesia
hed: Is there a known material difference between one of these hypothetical unfeeling zombies and an organism that does feel?

That is an invalid question because, as Nial Scorva pointed out, by definition the zombie and human are physically identical.
First, I want to change the question to this, asking it of a materialist: “Is there any material difference which you know of between yourself and a zombie?”

As you say, “by definition the zombie and human are physically identical”, so it would seem the answer would be “no” for that reason - it is somewhat rhetorical. The obvious follow-up question is: “If there is no physical difference, how does a materialist named Bob know that Bob (himself) is not a zombie?” Why would it be an invalid question?
hedonologist is offline  
Old 10-31-2001, 12:43 AM   #15
hedonologist
Banned
 
Join Date: Jul 2001
Location: South CA
Posts: 222
Post

I have a question for whoever. Aren’t materialists saying that we are all zombies? If that is not what they are saying, what are they saying about what feeling is?
hedonologist is offline  
Old 10-31-2001, 03:52 AM   #16
excreationist
Veteran Member
 
Join Date: Aug 2000
Location: Australia
Posts: 4,886
Post

Quote:
Originally posted by hedonologist:
You’re referring to the zombie as “they” when it is only an “it” in the hypothetical. You’re assuming that because you react based on subjective experience, an organism would have to have subjective experience to react in a similar way. How could you prove such a thing?
Either that, or it does it by chance, or is preprogrammed - and then in that case, its programmer would be aware of what's going on.

Quote:
Consider the Chinese Room hypothetical but with the question being, “How do you know if the AI really feels?”
The Chinese Room-type awareness is not autonomous or continuous - it just gives a reply when it is prompted. I think it has very brief moments of awareness. I think the term "feels" applies better to systems that can continuously respond (e.g. seek or avoid something).

Quote:
Do snails feel?
Well, they retreat when you touch them... (as far as I know) they don't philosophize about their suffering, so their avoidance system is very primitive. Animals like cats and dogs, though, can reason about avoiding undesirable things much better, so they "feel" pain (or anticipate it) in more sophisticated ways.

Quote:
You would have to know an NCC to answer this, or what I would call a “physical correlate of a being who feels”.
What's an NCC? A non-computer character? Anyway, about awareness - please check out "Materialist explanations for sensory and self-awareness" for my definitions about it. Note that throughout that thread I refined my view a bit.

Quote:
Well, say it has all that info stored in its brain, but unlike you, it has no desire; it is just programmed to react like it does. There are computers which can do things like walking, and many things which humans can’t do. I’ve seen tests which show a computer program that can swing a pole on a hinge and balance it upright in about the time it takes you to read this.
Well, how about a plant? They sort of react - reaching for the sun, etc. Is it possible they are “zombies”?
My point is that those things demonstrate awareness/recognition - they are actually taking in input from their environment and using it. But those examples you listed don't satisfy my requirements for awareness (see the definition in my thread) - they need to be good at learning, by themselves, new ways of seeking their goals and avoiding undesirable things.
I would call things that don't quite meet my definition of awareness zombies - e.g. plants. Individual plants don't autonomously develop beliefs about the world; they just follow hardcoded rules.

Quote:
It seems like you are saying that free will, in the form of will that is free of determinism, is necessary for things like walking, or whatever behavior you would choose. And that being conscious allows one to be free of determinism, and thus to do things which a deterministic or randomizing machine could not do.
No, I do not believe that we have true free will - I think we are deterministic - though we can be very unpredictable, but so can other complex systems, like the weather.
There are 100 billion neurons in our head, and each one can have an effect on the decisions we make. In the same way, weather involves many individual particles.
So I think our behaviour is deterministic, but not deterministic in a simplistic way, like gravity or magnetism is.
Our brains can learn to be intelligent by learning the patterns in our experiences and applying these patterns (or recombinations of them) to different problem areas.
Random systems can't do this (well, they can for very brief instants, but generally random systems aren't intelligent). Robots don't yet have the neural-network reasoning abilities that we have, so their behaviour is not very intelligent.
My point about walking is that it involves some degree of awareness of the environment - assuming the walker can autonomously modify its behaviour to maximize its expected pleasure and/or minimize its expected suffering.
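To illustrate the kind of deterministic pattern-learning I mean, here is a toy sketch: a single artificial neuron learning the logical OR pattern from examples, by applying the same fixed update rule over and over. (Obviously this is nothing like the scale of a real brain - it's purely illustrative.)

Code:
# A single artificial neuron learns the logical OR pattern from
# examples. The update rule is completely deterministic.
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w = [0.0, 0.0]   # connection weights
bias = 0.0
lr = 0.1         # learning rate

for epoch in range(20):
    for (x1, x2), target in examples:
        out = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
        err = target - out
        w[0] += lr * err * x1    # nudge the weights toward the pattern
        w[1] += lr * err * x2
        bias += lr * err

for (x1, x2), target in examples:
    out = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
    print((x1, x2), "->", out)   # now reproduces the learnt OR pattern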

Quote:
Imagine an android who we omnisciently know is an aware and feeling being, who is discussing with his android friend whether humans are really conscious. He proposes a test of adding a million numbers in a minute. He thinks this is what would require “consciousness”. Of course a human can’t do this so he concludes that the human is not conscious. That is like your walking experiment.
I'm just saying that an intelligently walking thing, like a dog, probably is aware of its environment. Otherwise its creator or controller is aware of its environment. Or it just moves its muscles in totally random ways and just happens to look like it is intelligently walking around and doing things.

Quote:
It “senses” the world like a robot; it just doesn’t feel or desire. There is no being inside looking out of the zombie; there is only an organic machine that anyone could be “aware” of only from the outside.
But it would have to seek goals by predicting what it needs to do to get a certain result (pleasure). Insects don't do this - they just respond in hardcoded ways. Desires involve having a prediction about how you want things to be and then possibly seeking to change your current world into the desired world. Or, if the current world is inherently undesirable, you try to modify your experiences to avoid the undesirable element (or change your perception of it, so that it is no longer undesirable).
So basically it's about real-time, autonomous, sophisticated reasoning that involves autonomously learnt uncertain knowledge (not preprogrammed "facts") and learnt predictions about the world based on your possible actions (and even the actions of others). So most of this intelligence is *learnt*. In the case of most A.I. systems, the programmers just program the A.I. to be clever - but it is the programmer who had insightful awareness about the world, not the A.I.
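As a toy illustration of "learnt predictions about the world based on your possible actions", here is a sketch of tabular Q-learning; the little world, its rewards and the parameters are all made up for the example:

Code:
import random

# A 1-D world: position 0 is "suffering" (-1), position 4 is "pleasure"
# (+1). The agent learns predictions (Q-values) of how good each action
# is in each state, then acts on those learnt predictions.
n_states = 5
actions = (-1, +1)                       # step left or step right
q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.1    # learning rate, discount, exploration
terminal = (0, n_states - 1)

def reward(s):
    return 1.0 if s == n_states - 1 else (-1.0 if s == 0 else 0.0)

for episode in range(200):
    s = 2                                # start in the middle
    while s not in terminal:
        if random.random() < epsilon:    # occasionally explore at random
            a = random.choice(actions)
        else:                            # otherwise act on learnt predictions
            a = max(actions, key=lambda act: q[(s, act)])
        s2 = s + a
        best_next = 0.0 if s2 in terminal else max(q[(s2, act)] for act in actions)
        q[(s, a)] += alpha * (reward(s2) + gamma * best_next - q[(s, a)])
        s = s2

# After learning, the agent "desires" to move right (+1) from every state.
print({s: max(actions, key=lambda act: q[(s, act)]) for s in (1, 2, 3)})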

Quote:
Well, how about a plant? They sort of react - reaching for the sun, etc. Is it possible they are “zombies”?
Well, what if you had a plant that reached out to the sun shining through your window, and then you turned it around? It might turn back towards the sun. I'd say that it was receiving some information about the environment - I don't think that it just changed direction by pure chance. But plants don't have beliefs and can't intelligently learn - they just follow hard-coded rules and tend to survive and pass on their successful genes. So since they don't quite meet my definition of awareness, I'd call them "zombies".

Quote:
What if they can make computers such that, if you had an online “relationship” with one (such as in chatrooms) and you were trying to figure out whether it was a computer, you would not be able to tell the difference between it and a human? Would this prove this “AI” was a feeling being?
Well, either it generated its responses by pure chance (like a monkey on a typewriter) or it was programmed by a very intelligent, insightful person. (Or it was a simulation of a baby and was taught to speak from birth, so that it learnt by itself and therefore had real intelligence.)
Anyway, in most of those possibilities awareness and insight were involved - except for the possibility that it was completely random, e.g. if I had a chat with an illiterate monkey and it happened to show that it understood me. In that case no intelligence or real awareness is involved. (The monkey is aware of some things, though, but not of what it appears to be aware of, according to its random yet insightful typing.)
excreationist is offline  
Old 10-31-2001, 04:52 AM   #17
excreationist
Veteran Member
 
Join Date: Aug 2000
Location: Australia
Posts: 4,886
Post

Quote:
Originally posted by hedonologist:
I have a question for whoever. Aren’t materialists saying that we are all zombies? If that is not what they are saying, what are they saying about what feeling is?
Well, if you define a zombie as a deterministic system that just blindly follows the laws of physics, then according to materialists we are all zombies (based on that definition of "zombie").
excreationist is offline  
Old 10-31-2001, 08:04 AM   #18
NialScorva
Veteran Member
 
Join Date: Jan 2001
Location: Median strip of DC beltway
Posts: 1,888
Post

Quote:
Originally posted by hedonologist:
I have a question for whoever. Aren’t materialists saying that we are all zombies? If that is not what they are saying, what are they saying about what feeling is?

Does it matter? Does a massage feel any less pleasurable? Does a slap to the back of the head hurt any less?
NialScorva is offline  
Old 11-04-2001, 11:43 PM   #19
hedonologist
Banned
 
Join Date: Jul 2001
Location: South CA
Posts: 222
Post

to Nial also-- defining a zombie
Quote:
Originally posted by JohnClay
Well, if you define a zombie as a deterministic system that just blindly follows the laws of physics, then according to materialists we are all zombies (based on that definition of "zombie").
That seems to be more the way that you were defining a zombie. I define a zombie as a sort of organism that looks like it reacts as we would, but really has no subjective experience such as pleasure or pain. That is not to say that we have a will that is “free of determinism”. I’m not saying the alternative to being a zombie is having a brain that is controlled by some sort of “immaterial” mind over matter. I’m speaking more of how the matter controls/affects the mind/experiencer - IOW, that physical happenings in the brain correspond with what you might call an “immaterial” experience (i.e. subjective experience), such as pleasure. I’m treating the subjective aspect of an experience as “immaterial”, or as what is happening to an “immaterial” self, in a sense. I’m not making a statement about whether the mind/experiencer also controls the body.
Quote:
Originally posted by NialScorva
Does it matter? Does a massage feel any less pleasurable? Does a slap to the back of the head hurt any less?
Well, a zombie wouldn’t feel a massage, and there is no reason to have any empathy for it. A zombie doesn’t exist as an experiencer; it is just a body that looks like it has an experiencer. The lights are on but nobody’s home, so to speak. But since it seems theoretically impossible to know whether anyone other than yourself is a zombie, it only matters regarding the proof that we have what may be called an “immaterial” aspect of experience. Since all value is subjective, it would seem that “materialism” (as I perceive it) would lead to nihilism, but I'm not really sure what the materialist believes does not exist.
hedonologist is offline  
Old 11-04-2001, 11:45 PM   #20
hedonologist
Banned
 
Join Date: Jul 2001
Location: South CA
Posts: 222
Post

Quote:
Originally posted by JohnClay
Either that, or it does it by chance, or is preprogrammed - and then in that case, its programmer would be aware of what's going on.
Not if its “programmer” is evolution, or if its programmer was a zombie. How could you prove other people are not zombies?
Quote:
Originally posted by JohnClay
The Chinese Room-type awareness is not autonomous or continuous - it just gives a reply when it is prompted. I think it has very brief moments of awareness. [1] I think the term "feels" applies better to systems that can continuously respond (e.g. seek or avoid something). [2]
1) Would you call something “aware” if there were no evidence of it having a subjective experience?

2) Feeling pleasure or pain is a purely subjective aspect of an experience. If you think some arrangement of matter other than your own body is “feeling” something, you are projecting your own subjective experience onto it, and assuming that it is a body with an experiencer like yourself - assuming that the “body”/machine is reacting based on subjective experience rather than just determinism or chance. “Seeking” and “avoidance” can be defined by objective behaviors, such as how a guided missile “behaves”, but feeling pleasure cannot be defined by objective behavior without someone projecting their own subjective experience onto the “behavior”.

What sort of “seeking” or “avoidance” would prove that there was an experiencer “in” an organism or machine?
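To make the point concrete, here is a sketch of “seeking” in the purely objective sense - a few lines of mechanism, with no experiencer anywhere in the loop. The coordinates and speed are arbitrary:

Code:
# "Seeking" as pure mechanism: a point closes the distance to a target.
# Objectively it seeks; there is no feeling anywhere in the loop.
target = (10.0, 5.0)
pos = [0.0, 0.0]
speed = 1.0

for step in range(100):
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist <= speed:                  # close enough: "seeking" complete
        pos = list(target)
        break
    pos[0] += speed * dx / dist        # step straight toward the target
    pos[1] += speed * dy / dist

print("reached target:", pos)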
Quote:
Originally posted by JohnClay
Well, they retreat when you touch them... (as far as I know) they don't philosophize about their suffering, so their avoidance system is very primitive.
“They” can be seen “avoiding”, but that doesn’t necessarily imply they feel pain.
Quote:
Originally posted by JohnClay
What's an NCC? A non-computer character?
Neural Correlate of Consciousness. I heard the term from reading some stuff by David Chalmers, but it seems to be a simple concept to me.
Quote:
Originally posted by JohnClay
So I think that awareness isn't just about input - it is about a system that can change its own behaviour so that its input matches its desires more.
You’re making a distinction between us and our brain. Without a subjective experiencer (i.e. “us”), desire cannot exist.
Quote:
Originally posted by JohnClay
Pain is a signal in our brain where our limbic system tells us that something in our immediate experience is undesirable
I would say rather that we may interpret the subjective experience of “pain” as undesirable. The brain has no desire, only the feeler of the pain (“us”) has the desire to avoid pain. I define us as “experiencers”-- those who have subjective experiences, IOW.
Quote:
Originally posted by JohnClay
Pleasure is a signal that compels the brain to repeat the experience, and the compulsion depends on the intensity of the signal.
Here you are attempting an objective definition of pleasure, but all you can do is show a correlation between the subjective “pleasure” and the signal. What if you were one of the test subjects and everyone else says a certain signal is (what I call) “pleasurable”, but when your brain emits that signal you feel (subjective) “pain”? How are you capable of conceiving what I'm talking about if you don’t know the subjective experience of “pleasure” through feeling it?

All value and significance is subjective in the same way. So is color.

(Had to get rid of those damn parentheses eating smilies.)

[ November 05, 2001: Message edited by: hedonologist ]
hedonologist is offline  
 
