Freethought & Rationalism Archive
12-08-2001, 11:22 PM | #31
Banned
Join Date: Jul 2001
Location: South CA
Posts: 222
Quote:
BTW, I'm not saying that we cannot be altruistic or that we can't intentionally choose to bring pain instead.
12-09-2001, 01:05 AM | #32
Veteran Member
Join Date: Aug 2000
Location: Australia
Posts: 4,886
Quote:
12-09-2001, 01:24 AM | #33
Banned
Join Date: Jul 2001
Location: South CA
Posts: 222
Quote:
So what you are calling "subjective inferences", I would rather call "uncertain inferences". If you are looking at some supposed AI, how would you know this "learning system" has any "point of view"? Point of view implies a motive, but you project this motive onto the "learning system". How can the AI be "mistaken" if there is no evidence that it has intention? How can AI have intention without being a subject who feels? It may act as you would when you intend to do something, but does this mean it is really trying (to fulfill a desire/preference)? This is why I say that these learning systems, or any behavior, could theoretically be simulated by AI without the AI really having any point of view or (what I call) subjective experience. In this way, data in your brain is as objective as an object you see in front of you. Quote:
Quote:
What is the "recognition" that pain should be avoided? Is "recognition" merely the data that causes it to avoid pain, or is it based on a (subjective) desire? Quote:
12-09-2001, 03:22 AM | #34
Veteran Member
Join Date: Aug 2000
Location: Australia
Posts: 4,886
Quote:
BTW, here is some information about a robot called Lucy that will autonomously develop representations of the world so that it can work out how to achieve goals: http://www.cyberlife-research.com/about/brainintro.htm Quote:
But I do think that some AI systems do satisfy my definition of awareness - at the moment they probably only have autonomous learning abilities comparable to those of a mouse. Note that most AI systems you would come across can't autonomously learn how to seek their goals. Quote:
And of course, we can learn to find sugars undesirable, but that is because they become associated with negative things such as obesity, which in turn might be associated with social disapproval, which involves a lack of connectedness. (I believe that connectedness is a fundamental human desire.) So basically we have many fundamental desires (e.g. avoiding hunger, avoiding physical pain, etc.) but these can be outweighed by our associating even stronger desires with that behaviour - e.g. the taste of sugar can become undesirable. So anyway, the brain just determines whether something is overall desirable or undesirable. Animals do this too. We can also try and work out the reasons why we feel that way about a particular thing, but it isn't necessary to do this for us to form an intuitive (animal-type reasoning) emotional response to some stimulus. Quote:
Anyway, we endure pain if it is outweighed by a greater positive emotion. (Or if enduring the pain allows an even greater pain to be avoided) So we might endure pain to get pleasure from the surprise and excitement of it (we have a 'newness' desire) or we might endure pain because we believe that we deserve it, and we want justice to be served (seeking connectedness). Quote:
Pleasure involves the things that are seen as important goals to seek. Pain involves things that must urgently be avoided (depending on the intensity). So basically I'm saying that desirable is a synonym for pleasure and undesirable is a synonym for pain. Of course human emotions involve other things too, like differing breathing rates, facial expressions (to communicate our emotional state to others), energy levels, etc. Anyway, part of the zombie's brain would have what I call pleasure and pain. Otherwise it wouldn't be capable of having the same behaviour as we do. Quote:
So if pleasure is associated with a situation (e.g. having a back-rub where muscle tension is relieved) then this situation IS desirable. This means that the situation should be repeated in the future, so the tendency to repeat behaviours that lead to that situation is reinforced. Pain is undesirable (they are synonyms for me, as I said), so a system should avoid situations that are undesirable (IOW involve "pain"), and so it would reduce the tendency to repeat the behaviours that lead to that situation. So basically, say you pay for a masseur: you get some pleasure (relief from tension) from the backrub. Paying money is undesirable, but if the pleasure is great enough then the behaviour of going to masseurs would be reinforced. If a situation is determined by your brain to be undesirable overall and it can be avoided, then you will avoid it. e.g. if you want to cross a street and you realize that walking into a car will result in a lot of pain, and you believe that the benefits aren't very great, you won't deliberately walk in front of the car. If you really need some excitement in your life and you don't mind risking your life, then you might do it.
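To make that reinforcement idea concrete, here is a rough sketch in Python (my own illustration with made-up names, not code from any actual AI system or brain model) of how a system could strengthen or weaken its tendency to repeat a behaviour according to how desirable the resulting situation is overall:

Code:
# Illustrative sketch: "desirability" combines pleasure (positive) and
# pain (negative); the tendency to repeat a behaviour is nudged up or
# down according to the overall desirability of its outcome.

class Learner:
    def __init__(self):
        self.tendency = {}  # learned tendency to repeat each behaviour

    def reinforce(self, behaviour, desirability, rate=0.1):
        # Positive desirability (pleasure outweighs pain) reinforces the
        # behaviour; negative desirability weakens it.
        current = self.tendency.get(behaviour, 0.0)
        self.tendency[behaviour] = current + rate * desirability

    def choose(self, options):
        # Prefer whichever behaviour has the strongest learned tendency.
        return max(options, key=lambda b: self.tendency.get(b, 0.0))

agent = Learner()
# Back-rub example: the pleasure of relieved tension (+1.0) outweighs the
# undesirability of paying (-0.4), so the behaviour is reinforced overall.
agent.reinforce("visit_masseur", desirability=1.0 - 0.4)
# Walking into traffic: overwhelmingly undesirable, so it is weakened.
agent.reinforce("walk_into_traffic", desirability=-10.0)
print(agent.choose(["visit_masseur", "walk_into_traffic"]))  # visit_masseur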
12-09-2001, 09:14 AM | #35
Guest
Posts: n/a
For a humanoid zombie to exist, the mechanisms behind our behavior must be radically disconnected from consciousness. In other words, a sonnet or a symphony can be written while the person is totally unconscious. If behavior and sensation were indeed so utterly disconnected, I could be seeing pink elephants dancing around my house without being able to tell anyone or react in any way. Like Descartes' demon, the idea is immunized against falsification but is based upon exceedingly thin theoretical grounding. It seems doubly implausible because the sensations that humans experience give every indication of being the product of physically identifiable perceptual mechanisms.
Quote:
This epistemic issue is, I think, very near the heart of much of the controversy about consciousness. Although I'm not going to explore this issue in great depth at this moment, it might be interesting to start a thread on the theoretical issues surrounding our perception of consciousness in other beings. hedonologist Quote:
Quote:
I would agree with you that we have to be wary about viewing animals or computer programs too anthropomorphically. The so-called Eliza effect is the tendency to assume that vaguely human behavior is accompanied by other human properties such as feeling. However, I would also caution against the reverse. Simply because a function is implemented in silicon does not mean it isn't isomorphic to what humans actually do. I would suggest that distinctions we do make should be based upon careful development of our cognitive theories as opposed to our gut reactions. Quote:
Quote:
Creatures that have evolved to survive without subjective experience constitute the majority of life on earth: single-celled organisms, simple sea creatures, et cetera. To describe and understand such creatures, nothing meaningful is gained by attributing complex intentionality and full consciousness to them. The same is not true for other human beings. Regards, Synaesthesia "To me there is a special irony when people say machines cannot have minds, because I feel we're only now beginning to see how minds possibly could work -- using insights that came directly from attempts to see what complicated machines can do. Of course we're nowhere near a clear and complete theory - yet. But in retrospect, it now seems strange that anyone could ever hope to understand such things before they knew much more about machines. Except, of course, if they believed that minds are not complex at all." -Marvin Minsky
12-09-2001, 11:26 AM | #36 |
Junior Member
Join Date: Oct 2001
Location: Tarzana
Posts: 88
"He could show that pleasure and pain are uniquely associated with certain patterns of brain activity. Therefore one who experiences these states must necessarily differ in brain activity (thus not be identical to the last molecule). He could show that pleasure and pain have physiological and biological reprocussions (thus the zombie could not react in the same way to all stimuli as the 'real thing')"
I don't think that the reaction, in terms of how each component in a brain system responds, is the experience of pain. It's more the intent of the circuit's purpose in the overall design of the life form. Viewing the issue from the perspective of a personal subject defeats the purpose of the evolutionary design of the being. A being generally feels pain so that it remembers to avoid a situation, but that's not the overall purpose of the design. The emotional complex of a being is actually designed to provide a means to arbitrate between behaviors.

"It is theoretically a matter of fact, whether or not insects feel, it is just a fact that is outside our realm of knowledge. It is outside my realm of knowledge to know whether or not you feel (though I assume you do), but if you do feel, you are certain that you feel."

Well, does an insect learn from a painful experience? In other words, if it wasn't pain that changed its behavior to avoid a hot light bulb after touching it, then what did? If the insect avoids the bulb entirely because it could sense the heat from the bulb, then feeling pain is not involved. So does an insect arbitrate based on emotional experiences? Considering that most insects don't learn their environment based on good or bad experiences, but are successful based on random circumstance, emotions in an insect are unnecessary.

"If you are looking at some supposed AI, how would you know this "learning system" has any "point of view"? Point of view implies a motive, but you project this motive onto the "learning system". How can the AI be "mistaken" if there is no evidence that it has intention? How can AI have intention without being a subject who feels? It may act as you would when you intend to do something, but does this mean it is really trying (to fulfill a desire/preference)?"

It does if the AI is designed to arbitrate based on the degree of emotional gratification. The design of mammal brains, at least, is such that emotions arbitrate behavior. The entire system is based on internal rewards of chemical signals. Nothing in mammal brains is based on any notion of numerical analysis; nothing in a mouse's brain or a human's signals "blood sugar levels are at x parts per million, engage behavior B". Everything is based on soliciting behavior based on emotional satisfaction or discomfort. Every aspect of the behavior of mammals is to resolve emotional states. The sensors of our bodies are wired to produce these emotional states, which the neural circuits of the brain then try to learn to resolve. Emotionally arbitrated brains are more creative than heuristic, reflexive designs. Emotions change a life form or machine (AI) from being just that, a machine, to a selfish, aware individual. With emotion everything becomes a reference to "me": "what do I feel like doing today?" This perception of reality or awareness is something that all mammals experience. The evidence lies not only in behavior but also in brain chemistry and brain design.

[ December 09, 2001: Message edited by: BrunosStar ]
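As a rough illustration of what "emotions arbitrating between behaviors" could mean computationally, here is a small Python sketch (purely hypothetical drive names and numbers, not a model of any real brain): each drive has a current level of discomfort, and the behavior chosen is whichever is expected to best resolve the most pressing emotional states.

Code:
# Hypothetical sketch of emotional arbitration: current discomfort levels
# weight the expected relief each candidate behavior would provide, and
# the behavior with the highest weighted relief wins.

emotional_state = {"hunger": 0.7, "fatigue": 0.2, "curiosity": 0.5}

expected_relief = {
    "eat":     {"hunger": 0.8, "fatigue": 0.0, "curiosity": 0.1},
    "sleep":   {"hunger": 0.0, "fatigue": 0.9, "curiosity": 0.0},
    "explore": {"hunger": 0.1, "fatigue": -0.2, "curiosity": 0.7},
}

def arbitrate(state, relief_table):
    # Score each behavior by how much it relieves the drives that are
    # currently felt most strongly, then pick the best-scoring one.
    def score(behaviour):
        return sum(state[d] * relief_table[behaviour].get(d, 0.0) for d in state)
    return max(relief_table, key=score)

print(arbitrate(emotional_state, expected_relief))  # "eat" wins while hunger dominates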
12-09-2001, 01:05 PM | #37
Guest
Posts: n/a
Quote:
We can be dramatically mistaken and are never quite clear about the content of our own minds. Of course, humans are incredibly good at discerning their own state of mind relative to our skill at discerning that of others. Obviously there are good reasons for this. We learn about the state of mind of other people through examination of features like their faces, actions, and words. In our own mind, each brain cell is attached to hundreds or thousands of other brain cells. Trillions of synapses allow for a remarkably detailed image of ourselves. Don't become too comfortable with what we know of ourselves, however; sometimes our face betrays to another person thoughts that our own conscious minds have missed. BrunosStar said, Quote:
I don't think there is a clear boundary of transformation between a mere machine and a network of perceptual agents. I think much more primitive systems composed of similar functional elements could qualify as being aware without having a direct analog to our common-sense notions about emotion. Regards, Synaesthesia
12-09-2001, 04:50 PM | #38
Veteran Member
Join Date: Aug 2000
Location: Australia
Posts: 4,886
Quote:
So basically aware systems need to respond to their emotions with some outward behaviour. (This may be suppressed a lot of the time though.) And they need to autonomously learn new problem-solving strategies by determining which actions lead to desirable or undesirable results. I think that an AI system that can't do this (most can't) may have a system that works similarly to emotions, but I wouldn't call them real emotions that cause the system to be very versatile (like mice) and able to autonomously learn how to respond to new problem domains. (e.g. chess computers are restricted to a narrow domain, so they are not aware, in my opinion) Quote:
12-30-2001, 01:36 PM | #39
Banned
Join Date: Jul 2001
Location: South CA
Posts: 222
Quote:
They could be mistaken in thinking that the name of an experience they have is “love”, for example. Bob may think that an experience he has, which he calls “love”, is the same thing Bill experiences and calls “love”, when Bill’s experience is actually much different, and Bill would refer to the experience Bob had as infatuation, had Bill experienced what Bob did. So Bob could be mistaken in thinking that Bill knew what Bob was talking about when Bob said he was in love. Bill wouldn’t know that Bob meant that he was “infatuated”, because Bob didn’t convey an accurate idea with his words. Quote:
As for the question of whether or not I feel, and how this demonstrates the existence of a subject: it doesn’t matter how I view myself; just the fact that I view anything suggests that I exist as a viewer. [ December 30, 2001: Message edited by: hedonologist ]
01-01-2002, 12:03 AM | #40
Banned
Join Date: Jul 2001
Location: South CA
Posts: 222
Quote:
That was off topic. I'm not sure I can get past the linguistic barrier on this. The only way I know how is to abandon the pleasure argument and go back to the question of the brain transplant. That is really a different topic, so I think I will make a new thread for it. I may come back to some of your posts, exc, but I want to try some other approaches because of this linguistic barrier.