FRDB Archives

Freethought & Rationalism Archive

The archives are read only.


Old 12-08-2001, 11:22 PM   #31
Banned
 
Join Date: Jul 2001
Location: South CA
Posts: 222
Post

Quote:
Originally posted by Synaesthesia
Yea, I’d [have a brain transplant]. Better that than dead.
Now we have something to "disagree" about. Even if the body functioned, I would think you would still be dead, and there would just be someone else living in your former body. Because it seems to me that an individual consciousness somehow corresponds with or is caused by a brain. I don't think you would wake up after the operation, I think the same experiencer whose brain was being used for the transplant would wake up in your body.
Quote:
Originally posted by Synaesthesia
hed: 2) Is there any reason we experiencers (as opposed to our brains) would want to pursue pleasure and avoid pain? If so, what is it?

I can’t imagine what good it does for people to have a vested psychological interest in their physical well-being.
I can. The good it does would be pleasure. The fact that I feel, and thus that I exist as a feeler, is what gives anything significance to me. Pleasure seems to be the root of the concept of goodness, and of any value for that matter.

BTW, I'm not saying that we cannot be altruistic or that we can't intentionally choose to bring pain instead.
hedonologist is offline  
Old 12-09-2001, 01:05 AM   #32
Veteran Member
 
Join Date: Aug 2000
Location: Australia
Posts: 4,886
Post

Quote:
Originally posted by hedonologist:
Now we have something to "disagree" about. Even if the body functioned, I would think you would still be dead, and there would just be someone else living in your former body. Because it seems to me that an individual consciousness somehow corresponds with or is caused by a brain. I don't think you would wake up after the operation, I think the same experiencer whose brain was being used for the transplant would wake up in your body.
Yeah, your memories, personality and consciousness would be contained in that brain. Maybe Synaesthesia is talking about having his brain transplanted into someone else's body.
excreationist is offline  
Old 12-09-2001, 01:24 AM   #33
Banned
 
Join Date: Jul 2001
Location: South CA
Posts: 222
Post

Quote:
Originally posted by excreationist
I don't think you are taking in what I am saying. If you were then you would know why I don't think that T.V.'s have desires or can feel in a meaningful way - by meeting my definition of awareness.
You seem to assume that feeling requires intelligence (or define feeling that way). The self-awareness of the cause of feeling requires intelligence, but I don't think that just the feeling of pleasure or pain would require as much intelligence.
Quote:
Originally posted by excreationist
hed: I don’t see why that “learning” would necessarily require any subjective experiencer.

Learning implies that they had incomplete knowledge and still have incomplete or mistaken knowledge or beliefs. This is from the "point of view" of the learning system - its "personal" knowledge base. It learns patterns and makes inferences based on limited experience so that is why it can be mistaken. The inferences it makes aren't necessarily the Truth, they are subjective.
Here and elsewhere, like many people, you seem to define "subjective" as "uncertain". I suppose I use the word "subjective" for lack of a better term, but I'm speaking of something different than you. One can describe subjective experiences as I define them without making any objective truth claim, other than that they are correctly remembering what they subjectively experienced. Pleasure and pain are the epitome of subjective experiences. These require a subject and they have no objective existence or definition, AFAIK.

So what you are calling "subjective inferences", I would rather call "uncertain inferences". If you are looking at some supposed AI, how would you know this "learning system" has any "point of view"? Point of view implies a motive, but you project this motive onto the "learning system". How can the AI be "mistaken" if there is no evidence that it has intention? How can AI have intention without being a subject who feels? It may act as you would when you intend to do something, but does this mean it is really trying (to fulfill a desire/preference)?

This is why I say that these learning systems, or any behavior could theoretically be simulated by AI, without the AI really having any point of view or (what I call) subjective experience. In this way, data in your brain is as objective as an object you see in front of you.
Quote:
Originally posted by excreationist
The term "feel" is fairly subjective since people might disagree about whether insects or plants or the planet "feels". But anyway, I try to only use the term when referring to aware systems that meet my requirements for awareness. As I explained, a computer doesn't meet these requirements.
Here again, you seem to be using "subjective" to denote a lack of certainty. It is somewhat uncertain whether other "organisms" feel-- the less like our body the organism is, the less we are able to project/relate our reactions to the "organism's". But I am certain that I feel, even though I am calling feeling a subjective experience. It is theoretically a matter of fact whether or not insects feel; it is just a fact that is outside our realm of knowledge. It is outside my realm of knowledge to know whether or not you feel (though I assume you do), but if you do feel, you are certain that you feel.
Quote:
Originally posted by excreationist
So do you mean that pleasure and pain doesn't require aware brain-like structures to exist?
It would require a subject of some sort, but there may exist entirely different sorts of "organisms", which are made of the material we see around us, but who we do not recognize because their bodies are not similar enough to our own. And I don't know the (components of the) physical correlate of a subject of a pleasure, so I'm not sure how complex/intelligent the physical correlate would need to be.
Quote:
Originally posted by excreationist
hed: Here I’m not sure if you are defining “good” and thus “pleasure” merely by behavior (ie that “pleasure” is what is repeated, pain is what is “avoided”), or not. If so, again that says nothing about whether the organism has an “immaterial” subjective experience.

I'm talking about its intention to repeat or avoid that behavior. It can recognize that the pain of burning skin should be avoided but it could still subject itself to that pain if it determines that the action is beneficial overall.
How would it/you judge or define beneficial?

What is the "recognition" that pain should be avoided? Is "recognition" merely the data that causes it to avoid pain, or is it based on a (subjective) desire?
Quote:
Originally posted by excreationist
Pain also causes you to seek a solution to the problem... so in the short-term you just want to avoid the problem, but then you put your brain-power to work to try and find a solution (permanently avoid the problem).
You're explaining behavior in terms of subjective motives. This is not how we would explain the behavior of billiard balls. What if there was a physical cause of my behavior, instead of me feeling pain, and I was just a zombie who had evolved a radically different method of "survival" without any feeling of pain or subjective experience whatsoever? (I know I'm not a zombie but I'm assuming you do not, when I ask this.)
Quote:
Originally posted by excreationist
So I'm basically saying that the behaviour we see in mammals must come from a system of beliefs and desires - and that is what I believe pleasure and pain consist of. Our emotional response (pleasure and pain) is used to modify our beliefs and desires. (e.g. if our beliefs or desires result in pain/less pleasure, then they may need to be modified somehow. If they result in pleasure/less pain, they are reinforced)
Why would pleasure reinforce a choice instead of pain?
hedonologist is offline  
Old 12-09-2001, 03:22 AM   #34
Veteran Member
 
Join Date: Aug 2000
Location: Australia
Posts: 4,886
Post

Quote:
Originally posted by hedonologist:
You seem to assume that feeling requires intelligence (or define feeling that way). The self-awareness of the cause of feeling requires intelligence, but I don't think that just the feeling of pleasure or pain would require as much intelligence.
I'm talking about pleasure and pain which is used to motivate an intelligent, learning system. I don't find the concepts of pleasure and pain very meaningful if they don't involve such a system. And creatures that just react to stimuli using fixed behaviours, like insects, don't experience pleasure or pain, in my opinion. So T.V.'s don't feel pleasure or pain either, in my opinion.

Quote:
Here and elsewhere, like many people, you seem to define "subjective" as "uncertain".
What I mean by "subjective" in that case is that it is just an opinion that the system has developed, based on its limited experience. So it means that its opinions and conclusions are biased and possibly mistaken.

Quote:
I suppose I use the word "subjective" for lack of a better term, but I'm speaking of something different than you.
My use of the word "subjective" is similar to when people talk about "subjective morality" or "subjective opinions". When I said "subjective inferences" I meant "subjective opinions" - not direct subjective experiences.

Quote:
One can describe subjective experiences as I define them without making any objective truth claim, other than that they are correctly remembering what they subjectively experienced. Pleasure and pain are the epitome of subjective experiences. These require a subject and they have no objective existence or definition, AFAIK.
Yes, a subject is involved. And those inferences that a system can develop are based on a system's limited (and possibly misinterpreted) knowledge about the world.

Quote:
So what you are calling "subjective inferences", I would rather call "uncertain inferences".
But it is very personalized - it is based on the knowledge that that system has accumulated. Saying something is merely uncertain doesn't imply that a personal knowledge base is involved. (When I say "personal" I mean that it is part of the system)

Quote:
If you are looking at some supposed AI, how would you know this "learning system" has any "point of view"? Point of view implies a motive, but you project this motive onto the "learning system". How can the AI be "mistaken" if there is no evidence that it has intention? How can AI have intention without being a subject who feels? It may act as you would when you intend to do something, but does this mean it is really trying (to fulfill a desire/preference)?
Well AI systems can be programmed to seek goals - e.g. getting the opponent into checkmate while protecting your own king. But having goals isn't enough to satisfy my definition of awareness.
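Just to make that concrete, a mindless scoring function is enough to "seek" those chess goals, with nothing like awareness involved. (A purely illustrative Python sketch - the position features and weights are made up.)
Code:
# Toy goal-seeking evaluator: it "prefers" positions that checkmate the
# opponent and keep its own king safe, but it is only a scoring function -
# no awareness is involved. Features and weights are invented.
def evaluate(position):
    score = 0
    if position.get("opponent_checkmated"):
        score += 1000                                    # the main goal
    score -= 50 * position.get("own_king_exposure", 0)   # protect the king
    score += position.get("material_advantage", 0)
    return score

candidate_moves = {
    "Qh7#": {"opponent_checkmated": True, "own_king_exposure": 1},
    "Kb1":  {"own_king_exposure": 0, "material_advantage": 1},
}
best = max(candidate_moves, key=lambda m: evaluate(candidate_moves[m]))
print(best)  # -> Qh7#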
BTW, here is some information about a robot called Lucy that will autonomously develop representations of the world so that it can work out how to achieve goals: http://www.cyberlife-research.com/about/brainintro.htm

Quote:
This is why I say that these learning systems, or any behavior could theoretically be simulated by AI, without the AI really having any point of view or (what I call) subjective experience. In this way, data in your brain is as objective as an object you see in front of you.
Well we can have a voice in our head commenting on our experiences. I think it will be a few years until robots will be able to learn how to do that. I think Lucy is a step in the right direction.
But I do think that some AI systems do satisfy my definition of awareness - at the moment they probably only have autonomous learning abilities comparable to a mouse's. Note that most A.I. systems you would come across can't autonomously learn how to seek their goals.

Quote:
Here again, you seem to be using "subjective" to denote a lack of certainty. It is somewhat uncertain whether other "organisms" feel-- the less like our body the organism is, the less we are able to project/relate our reactions to the "organism's".
I mean that it is based on personal opinions. And I guess that means that it isn't certain, but the thing is that whether something feels or not depends on what you mean by "feels" - so there can be no definite answer since the question is too vague.

Quote:
But I am certain that I feel, even though I am calling feeling a subjective experience. It is theoretically a matter of fact whether or not insects feel; it is just a fact that is outside our realm of knowledge. It is outside my realm of knowledge to know whether or not you feel (though I assume you do), but if you do feel, you are certain that you feel.
I'm relating "feel" to my definition of awareness so that it is easier to work out whether something feels or not. (According to my definitions)

Quote:
It would require a subject of some sort, but there may exist entirely different sorts of "organisms", which are made of the material we see around us, but who we do not recognize because their bodies are not similar enough to our own. And I don't know the (components of the) physical correlate of a subject of a pleasure, so I'm not sure how complex/intelligent the physical correlate would need to be.
Ok, so a system that can feel pleasure would be physical and have a certain amount of complexity... I agree... so things like a piece of solid steel, etc, can't feel pleasure (according to materialism at least)

Quote:
How would it/you judge or define beneficial?
Either it is instinctually determined to be beneficial (e.g. bowel movement, eating sugars/fats/salts, sucking objects, connectedness) or it is associated with something that is instinctually beneficial.
And of course, we can learn to find sugars to be undesirable, but that is because they are being associated with negative things such as obesity, which in turn might be associated with social disapproval, which involves a lack of connectedness. (I believe that connectedness is a fundamental human desire)
So basically we have many fundamental desires (e.g. avoiding hunger, avoiding physical pain, etc) but these can be outweighed by us associating even stronger desires with that behaviour, e.g. so the taste of sugar can become undesirable.
So anyway, the brain just determines if something is overall desirable or undesirable. Animals do this too. We can also try and work out the reasons why we feel that way about a particular thing, but it isn't necessary to do this for us to form an intuitive (animal-type reasoning) emotional response to some stimulus.

Quote:
What is the "recognition" that pain should be avoided? Is "recognition" merely the data that causes it to avoid pain, or is it based on a (subjective) desire?
Well for a particular potential behaviour, we combine all the emotions together and see if it is overall undesirable or desirable. If there are multiple options then we always choose the most desirable one. (Sometimes we might decide that hurting ourselves is the most desirable action, so we are still doing what we believe is most desirable)
Anyway, we endure pain if it is outweighed by a greater positive emotion. (Or if enduring the pain allows an even greater pain to be avoided)
So we might endure pain to get pleasure from the surprise and excitement of it (we have a 'newness' desire) or we might endure pain because we believe that we deserve it, and we want justice to be served (seeking connectedness).
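Roughly, the decision rule I have in mind could be sketched like this in Python (the emotion labels and numbers are invented, just to show the idea of combining emotions and always picking the most desirable option):
Code:
# Toy sketch: each option carries a bundle of emotional weights (positive =
# pleasure-like, negative = pain-like). Combine them and always pick the
# option that comes out most desirable overall. Values are illustrative only.
def overall_desirability(emotions):
    return sum(emotions.values())

def choose(options):
    return max(options, key=lambda name: overall_desirability(options[name]))

options = {
    "get a back-rub": {"relief from tension": +5, "cost of masseur": -2},
    "walk in front of a car": {"excitement": +3, "expected pain": -9},
    "do nothing": {"boredom": -1},
}
print(choose(options))  # -> get a back-rub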

Quote:
You're explaining behavior in terms of subjective motives. This is not how we would explain the behavior of billiard balls. What if there was a physical cause of my behavior, instead of me feeling pain, and I was just a zombie who had evolved a radically different method of "survival" without any feeling of pain or subjective experience whatsoever? (I know I'm not a zombie but I'm assuming you do not, when I ask this.)
Well there would still be a central decision making component of the zombie's brain. To act in the same way that we do it would have to have its goals encoded in a universal format so that it can compare goals and go with the thing that appears to be most urgent. Then it would carry out the most urgent task. Assuming that this central decision making component can also learn so that it updates priorities and its responses to things, then it would experience pleasure and pain, according to my definitions.
Pleasure involves the things that are seen as important goals to seek. Pain involves things that must urgently be avoided (depending on the intensity)
So basically I'm saying that desirable is a synonym for pleasure and undesirable is a synonym for pain. Of course human emotions involve other things too like differing breathing rates, facial expressions (to communicate our emotional state to others), energy levels, etc.
Anyway, part of the zombie's brain would have what I call pleasure and pain. Otherwise it wouldn't be capable of having the same behaviour as we do.

Quote:
Why would pleasure reinforce a choice instead of pain?
As I said, for me, pleasure and desirable are synonyms (maybe that grammar is bad though). And pain and undesirable are synonyms.
So if pleasure is associated with a situation (e.g. having a back-rub where muscle tension is relieved) then this situation IS desirable. This means that the situation should be repeated in the future, so the tendency to repeat behaviours that lead to that situation is reinforced.
Pain is undesirable (they are synonyms for me, as I said) so a system should avoid situations that are undesirable (IOW involve "pain") so then it would reduce the tendency to repeat the behaviours that lead to that situation.
So basically, say if you pay for a masseur then you get some pleasure (relief from tension) from the backrub. But paying money is undesirable but if the pleasure is great enough then the behaviour of going to masseurs would be reinforced.
If a situation is determined by your brain to be undesirable overall and it can be avoided, then you will avoid it. e.g. if you are wanting to cross a street and you realize that walking into a car will result in a lot of pain and you believe that the benefits aren't very great, you won't deliberately walk in front of the car. If you really need some excitement in your life and you don't mind risking your life then you might do it.
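To make the reinforcement idea a bit more concrete, here is a toy Python sketch (the outcome values and learning rate are arbitrary; the point is just that behaviours leading to desirable results get their tendency strengthened):
Code:
# Toy reinforcement: behaviours leading to desirable outcomes ("pleasure")
# have their tendency strengthened; undesirable outcomes weaken it.
# All numbers below are invented for illustration.
import random

tendency = {"visit masseur": 0.0, "don't bother": 0.0}
LEARNING_RATE = 0.3

def outcome(behaviour):
    # Relief from the back-rub minus the pain of paying, vs. nothing.
    return {"visit masseur": 5 - 2, "don't bother": 0}[behaviour]

def pick():
    if random.random() < 0.2:                 # occasionally try something else
        return random.choice(list(tendency))
    return max(tendency, key=tendency.get)    # otherwise follow the strongest tendency

for _ in range(30):
    b = pick()
    tendency[b] += LEARNING_RATE * (outcome(b) - tendency[b])

print(tendency)  # "visit masseur" ends up with the stronger tendency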
excreationist is offline  
Old 12-09-2001, 09:14 AM   #35
Synaesthesia
Guest
 
Posts: n/a
Post

For a humanoid zombie to exist, the mechanisms behind our behavior must be radically disconnected from consciousness. In other words, a sonnet or a symphony can be written while the person is totally unconscious. If behavior and sensation were indeed so utterly disconnected, I could be seeing pink elephants dancing around my house without being able to tell anyone or react in any way. Like Descartes' demon, the idea is immunized against falsification but is based upon exceedingly thin theoretical grounding. It seems doubly implausible because the sensations that humans experience give every indication of being the product of physically identifiable perceptual mechanisms.

Quote:
It is theoretically a matter of fact, whether or not insects feel, it is just a fact that is outside our realm of knowledge. It is outside my realm of knowledge to know whether or not you feel (though I assume you do), but if you do feel, you are certain that you feel.
I cannot help but see the gross irony in the contention that we merely assume that other people are conscious when at this very moment we are both engaging in activities that require our conscious attention. Certainly your brain does not affect me in the same way that it affects you (hence subjectivity), but reflective thought has marked behavioral manifestations.

This epistemic issue is, I think, very near the heart of much of the controversy about consciousness. Although I'm not going to explore this issue in great depth at this moment, it might be interesting to start a thread on the theoretical issues surrounding our perception of consciousness in other beings.

hedonologist
Quote:
You seem to assume that feeling requires intelligence (or define feeling that way). The self-awareness of the cause of feeling requires intelligence, but I don't think that just the feeling of pleasure or pain would require as much intelligence.
I agree that the capacity for sensation isn't directly what we mean by intelligence. So, strictly speaking, it would be possible to describe an action as intelligent without it being a conscious one. However, in human beings, our ideas of intelligence and conscious attention are almost as intertwined as intelligence and consciousness, so they should not be utterly isolated in all cases.

Quote:
How can the AI be "mistaken" if there is no evidence that it has intention? How can AI have intention without being a subject who feels? It may act as you would when you intend to do something, but does this mean it is really trying (to fulfill a desire/preference)?
Intentionality is prior to consciousness. Primitive intentional systems evolved long before full-fledged awareness did. In the sense of a system being about something or dealing with goals and sub-goals, I think primitive intentional systems are very common. Take, for example, Daniel Dennett's often misunderstood examples of simple feedback loops and combinations thereof.
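A thermostat-style control loop is about the simplest such case - it is "about" temperature only in this thin, functional sense. (A purely illustrative Python sketch; the numbers mean nothing in particular.)
Code:
# About the simplest "primitive intentional system": a feedback loop that
# acts so as to keep a variable near a set point. Nothing here is conscious.
def thermostat_step(temperature, set_point=20.0, tolerance=0.5):
    if temperature < set_point - tolerance:
        return "heater on"
    if temperature > set_point + tolerance:
        return "heater off"
    return "no change"

for reading in (17.0, 19.8, 22.3):
    print(reading, "->", thermostat_step(reading))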

I would agree with you that we have to be wary about viewing animals or computer programs too anthropomorphically. The so-called Eliza effect is the tendency to assume that vaguely human behavior is accompanied by other human properties such as feeling. However, I would also caution against the reverse. Simply because a function is implemented in silicon does not mean it isn't isomorphic to what humans actually do. I would suggest that the distinctions we do make should be based upon careful development of our cognitive theories as opposed to our gut reactions.

Quote:
This is why I say that these learning systems, or any behavior could theoretically be simulated by AI, without the AI really having any point of view or (what I call) subjective experience.
Yes, it's certainly possible but such systems would not function in the same way that humans do.

Quote:
You're explaining behavior in terms of subjective motives. This is not how we would explain the behavior of billiard balls. What if there was a physical cause of my behavior, instead of me feeling pain, and I was just a zombie who had evolved a radically different method of "survival" without any feeling of pain or subjective experience whatsoever?
Explaining an event in terms of motivation and explaining it in terms of function are not mutually exclusive any more than describing a car in terms of atoms and in terms of aerodynamics are exclusive.

Creatures that have evolved to survive without subjective experience constitute the majority of life on earth: single-celled organisms, simple sea creatures, et cetera. To describe and understand such creatures, nothing meaningful is gained by attributing complex intentionality and full consciousness to them. The same is not true for other human beings.

Regards,
Synaesthesia

"To me there is a special irony when people say machines cannot have minds, because I feel we're only now beginning to see how minds possibly could work -- using insights that came directly from attempts to see what complicated machines can do. Of course we're nowhere near a clear and complete theory - yet. But in retrospect, it now seems strange that anyone could ever hope to understand such things before they knew much more about machines. Except, of course, if they believed that minds are not complex at all." -Marvin Minsky
 
Old 12-09-2001, 11:26 AM   #36
Junior Member
 
Join Date: Oct 2001
Location: Tarzana
Posts: 88
Post

"He could show that pleasure and pain are uniquely associated with certain patterns of brain activity. Therefore one who experiences these states must necessarily differ in brain activity (thus not be identical to the last molecule). He could show that pleasure and pain have physiological and biological reprocussions (thus the zombie could not react in the same way to all stimuli as the 'real thing')"

I don't think that the reaction in terms of how each component in a brain system responds is the experience of pain. It's more the intent of the circuit's purpose in the overall design of the life form. Viewing the issue from the perspective of the personal subject defeats the purpose of the evolutionary design of the being. It generally feels pain to remember to avoid a situation. But that's not the overall purpose of the design. The emotional complex of a being is actually designed to provide a means to arbitrate between behaviors.


"It is theoretically a matter of fact, whether or not insects feel, it is just a fact that is outside our realm of knowledge. It is outside my realm of knowledge to know whether or not you feel (though I assume you do), but if you do feel, you are certain that you feel."


Well, does an insect learn from a painful experience? In other words, if it wasn't pain that changed its behavior to avoid a hot light bulb after touching it, then what did? If the insect avoids the bulb entirely because it could sense the heat from the bulb, then feeling pain is not involved. So does an insect arbitrate based on emotional experiences? Considering that most insects don’t learn their environment based on good or bad experiences, but are successful based on random circumstance, emotions in an insect are unnecessary.

"If you are looking at some supposed AI, how would you know this "learning system" has any "point of view"? Point of view implies a motive, but you project this motive onto the "learning system". How can the AI be "mistaken" if there is no evidence that it has intention? How can AI have intention without being a subject who feels? It may act as you would when you intend to do something, but does this mean it is really trying (to fulfill a desire/preference)?"

It does if the AI is designed to arbitrate based on the degree of emotional gratification. The design of mammal brains, at least, is such that emotions arbitrate behavior. The entire system is based on internal rewards of chemical signals. Nothing in mammal brains is based on any notion of numerical analysis; nothing in a mouse's brain or a human's signals that blood sugar levels are x parts per million, therefore engage behavior "B". Everything is based on soliciting behavior based on emotional satisfaction or discomfort. Every aspect of the behavior of mammals is to resolve emotional states. The sensors of our bodies are wired to produce these emotional states, which the neural circuits of the brain then try to learn to resolve.
Emotionally arbitrated brains are more creative than heuristic, reflexive designs. Emotions change a life form or machine (AI) from being just that, a machine, into a selfish, aware individual. With emotion everything becomes a reference to "me": “what do I feel like doing today?” This perception of reality or awareness is something that all mammals experience. The evidence lies not only in behavior but also in brain chemistry and brain design.
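As a rough illustration of what I mean by emotional arbitration (the drive names and numbers below are invented): the system never consults a raw number like blood sugar directly, it only feels graded discomforts and picks whichever behavior promises the most relief.
Code:
# Invented illustration of emotional arbitration: behaviours compete on how
# much felt discomfort they would relieve, not on explicit numeric thresholds.
discomfort = {"hunger": 0.8, "fatigue": 0.3, "loneliness": 0.5}

relief = {
    "eat":       {"hunger": 0.8},
    "sleep":     {"fatigue": 0.3},
    "socialize": {"loneliness": 0.5, "fatigue": -0.1},  # costs a little energy
}

def expected_relief(behaviour):
    # Relief is capped at the discomfort actually felt for each drive.
    return sum(min(discomfort.get(drive, 0.0), amount)
               for drive, amount in relief[behaviour].items())

print(max(relief, key=expected_relief))  # -> eat (hunger is the strongest discomfort)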

[ December 09, 2001: Message edited by: BrunosStar ]
BrunosStar is offline  
Old 12-09-2001, 01:05 PM   #37
Synaesthesia
Guest
 
Posts: n/a
Post

Quote:
It is theoretically a matter of fact, whether or not insects feel, it is just a fact that is outside our realm of knowledge. It is outside my realm of knowledge to know whether or not you feel (though I assume you do), but if you do feel, you are certain that you feel.
Each person’s ideas about any sensation or thought can be viewed as theories about their own mind. As with the unconscious tracing of trajectories, these ideas are only approximations. Humans are not able to access all or even most of the information about our brain state and so automatically develop efficient although not thorough methods of comprehending what our minds are doing.

We can be dramatically mistaken and are never quite clear about the content of our own minds.

Of course, humans are incredibly good at discerning their own state of mind relative to our skill at discerning that of others. Obviously there are good reasons for this. We learn about the state of mind of other people through examination of features like their face, actions and words. In our own mind, each brain cell is attached to hundreds or thousands of other brain cells. Trillions of synapses allow for a remarkably detailed imagination of ourselves.

Don’t become too comfortable with what we know of ourselves, however; sometimes our face betrays to another person thoughts that our own conscious minds have missed.


BrunosStar said,
Quote:
Emotionally arbitrated brains are more creative than heuristic, reflexive designs. Emotions change a life form or machine (AI) from being just that a machine to a selfish aware individual. With emotion everything becomes a reference to "me", “what do I feel like doing today”.
Motivational tendencies can exist in both conscious and unconscious beings. “Emotions”, in the sense you are using the word, presuppose awareness. How then could they transform “just a machine” into a conscious agent?

I don’t think there is a clear boundary of transformation between a mere machine and a network of perceptual agents. I think a much more primitive system composed of similar functional elements could qualify as being aware without having a direct analog to our common-sense notions about emotion.

Regards,
Synaesthesia
 
Old 12-09-2001, 04:50 PM   #38
Veteran Member
 
Join Date: Aug 2000
Location: Australia
Posts: 4,886
Post

Quote:
Originally posted by Synaesthesia:
Motivational tendencies can exist in both conscious and unconscious beings. “Emotions”, in the sense you are using the word, presuppose awareness. How then could they transform “just a machine” into a conscious agent?
Well I think that aware systems need to have the ability to interact with the world so that they can see whether their actions give favorable or unfavorable results. I don't think a thing that only ever observes but can never influence its experiences is meaningfully aware. (It never demonstrates its competence, so there is no evidence that it has learnt anything useful about the world)
So basically aware systems need to respond to their emotions with some outward behaviour. (This may be suppressed a lot of the time though)
And they need to autonomously learn new problem solving strategies by determining which actions lead to desirable or undesirable results.
I think that an AI system that can't do this (most can't) may have a system that works similarly to emotions, but I wouldn't call them real emotions that cause the system to be very versatile (like mice) and able to autonomously learn how to respond to new problem domains. (e.g. chess computers are restricted to a narrow domain, so they are not aware, in my opinion)

Quote:
I don’t think there is a clear boundary of transformation between a mere machine and a network of perceptual agents. I think a much more primitive system composed of similar functional elements could qualify as being aware without having a direct analog to our common-sense notions about emotion.
Well in my view of what awareness is, the boundary can be fairly clear, but there isn't an objective definition of awareness. (Though things like the numbers "1" and "2" have objective meanings)
excreationist is offline  
Old 12-30-2001, 01:36 PM   #39
Banned
 
Join Date: Jul 2001
Location: South CA
Posts: 222
Post

Quote:
Originally posted by Synaesthesia
hed:It is theoretically a matter of fact, whether or not insects feel, it is just a fact that is outside our realm of knowledge. It is outside my realm of knowledge to know whether or not you feel (though I assume you do), but if you do feel, you are certain that you feel.

Synaesthesia: …Of course, humans are incredibly good at discerning their own state of mind relative to our skill at discerning that of others. …
I’m not sure if you are disagreeing with what I said, but what is there to discern about the question of whether or not you feel? A feeling is defined for you by what you experience; if you don’t experience, you aren’t existing as one who knows what feeling is. Discernment only seems necessary to know if a mental image is true to something outside the mind. A feeling is just a mental “image”, or “experience”, rather. So how could one be mistaken about what is in their mind (such as a feeling), when they can only think something is in their mind if it is?

They could be mistaken in thinking that the name of an experience they have is “love”, for example. Bob may think that an experience that he has which he calls “love” is the same thing Bill experiences, that Bill calls “love”, when Bill’s experience is actually much different and Bill would refer to the experience that Bob had as infatuation, if Bill had experienced what Bob did. So Bob could be mistaken if he thought that Bill knew what Bob was talking about if Bob said Bob was in love. Bill wouldn’t know that Bob meant that Bob was “infatuated”, because Bob didn’t convey an accurate idea with his words.
Quote:
Originally posted by Synaesthesia
In our own mind, each brain cell is attached to hundreds or thousands of other brain cells. Trillions of synapses allow for a remarkably detailed imagination of ourselves.
We only really have detailed images of experiences we have, be they memories, beliefs, values, sensations, etc. I’m defining the self as the experiencer; thus we are only “defined” by a combination of us and the material world. This combination is experience. From experience (ie “reality”) I “deduce” or assume the existence of the material world and myself as experiencer.

As for the question of whether or not I feel, and how this demonstrates the existence of a subject, it doesn’t matter how I view myself; just the fact that I view anything suggests that I exist as a viewer.

[ December 30, 2001: Message edited by: hedonologist ]
hedonologist is offline  
Old 01-01-2002, 12:03 AM   #40
Banned
 
Join Date: Jul 2001
Location: South CA
Posts: 222
Post

Quote:
Originally posted by excreationist
As I said, for me, pleasure and desirable are synonyms (maybe that grammar is bad though).
Bad grammar-- that is more like blasphemy! Desire can be utter torture, whereas pleasure is (or comes from) satisfaction. An itch is a desire to scratch, for example. I hate being horny; that is why I seek sexual stimulation. Although I find the desire for the opposite gender "desirable", such that I act on it, it seems to be more a longing sort of suffering than pleasure. On the other hand, I guess you could say I find it desirable to get rid of the undesirable itch, etc.

That was off the topic. I'm not sure I can get past the linguistic barrier on this. The only way I know how to is to abandon the pleasure argument and go back to the question of the brain transplant. That is really a different topic so I think I will make a new thread for it. I may come back to some of your posts exc, but I want to try some other approaches because of this linguistic barrier.
hedonologist is offline  
 
