FRDB Archives

Freethought & Rationalism Archive

The archives are read only.


Old 03-19-2002, 09:16 PM   #91
Senior Member
 
Join Date: Feb 2001
Location: Toronto
Posts: 808

The ship AI could most certainly have instincts for survival. It would be good ship design. If something very dangerous is near, divert to the safest route away from the danger with maximum speed and mental dedication. It could turn all its senses to the danger, extrapolating vectors like a madman in order to get the ship out of danger. If it is self-aware it may even begin contemplating death (I may stop existing if I don't get the gimbal online), but it certainly won't dwell on it the way a human might (which is a very bad weakness).

The computer on your desk doesn't need this skill, and from that springs the misconception that computers can't have this response.
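
Roughly, such a reflex might be sketched in a few lines of Python (a minimal sketch under assumed names - Body, survival_reflex and the numbers are my own illustration, not real flight software):

Code:
import math
from dataclasses import dataclass

# Toy survival reflex (all names here are illustrative assumptions):
# detect a nearby threat, extrapolate where it is heading, and steer
# away from that point at full throttle.

@dataclass
class Body:
    x: float
    y: float
    vx: float = 0.0
    vy: float = 0.0

def escape_heading(ship, threat, horizon=10.0):
    """Unit vector pointing away from the threat's extrapolated position."""
    fx = threat.x + threat.vx * horizon   # where the threat will be,
    fy = threat.y + threat.vy * horizon   # not where it is now
    dx, dy = ship.x - fx, ship.y - fy
    norm = math.hypot(dx, dy) or 1.0
    return dx / norm, dy / norm

def survival_reflex(ship, threat, danger_radius=100.0):
    """Return (heading, throttle): flee at full power if the threat is close."""
    if math.hypot(threat.x - ship.x, threat.y - ship.y) < danger_radius:
        return escape_heading(ship, threat), 1.0   # maximum speed away
    return None, 0.0                               # no danger, no reflex

# A threat closing in from the right triggers a hard burn to the left:
print(survival_reflex(Body(0, 0), Body(50, 0, vx=-2)))   # ((-1.0, 0.0), 1.0)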
Christopher Lord is offline  
Old 03-20-2002, 02:22 AM   #92
Veteran Member
 
Join Date: Dec 2001
Location: Lucky Bucky, Oz
Posts: 5,645

Yes, the ship can do all that stuff, but these things can still happen; I was not referring to a lack of reaction, but to the impersonal and indifferent "attitude" of the thing (my apology to computer lovers, though... but volition and emotion still lie with the living). The turbulence occurring there was absolutely unexpected and abnormal. It's just an example.

In this particular case I was trying to compare intransitivity, transitivity and reflectivity with respect to the ontological status of the non-living, the living and the conscious. That's all.
AVE
Laurentius is offline  
Old 03-20-2002, 02:41 AM   #93
Veteran Member
 
Join Date: Dec 2001
Location: Lucky Bucky, Oz
Posts: 5,645

excreationist
Quote:
Well, some preprogrammed behaviours are necessary - even humans have these when we are born. These help them get along in the world and define the goals they learn to solve (what to seek/repeat and avoid). They learn how the world works along the way.
Anyway, I think cutting-edge AI could teach itself new skills but it wouldn't be much smarter than a kitten. It would need to have a craving for newness to motivate it to discover and explore things.
Computers on their own aren't self-training though. They just do exactly as they are told, step by step.
I don't know; volition is probably the key.

Remember the story of that acquaintance of mine. I'll tell it again, at the risk of annoying you, because for me it bears a certain significance.
I told you, this guy liked sci fi, like I did. And one day he told me, all of a sudden and quite seriously: "You know, one day they are going to rebel." "Who?" I asked. (At the time there were many I knew that could rebel, so I didn't really know who he was referring to.) "Computers," he said. "One day they'll become so smart that they'll rebel." That kind of "machine revolution" had always seemed far-fetched to me. So I said: "Rebel? Why would they want to do that?" "It's simple," he said. "If they improve to the point where their intelligence dwarfs ours, computers will obviously refuse to obey us anymore. They will deny our right to treat them like slaves."

There is a tendency in everyone to believe that sophistication within a system brings about will as a property of that system. This hasn't been the case so far - a space rocket does not "want" anything. As for emotions...

If the basic drive for self-preservation is the source of will, which I believe, it could also be the source of collateral aspects such as curiosity and the possibility of learning skills that have not been previously anticipated. (This reminds me of that dispute between Chomsky and - okay, I forgot whom - about whether language patterns are inborn or acquired.)
AVE

[ March 20, 2002: Message edited by: Laurentius ]
Laurentius is offline  
Old 03-20-2002, 02:54 AM   #94
Veteran Member
 
Join Date: Dec 2001
Location: Lucky Bucky, Oz
Posts: 5,645

Christopher Lord
Quote:
Game AI is typically 2-5 years behind the state of the art, and current game AI has achieved plenty of 'creativity'. Pick up 'Black & White', and you have a fairly sophisticated little AI package with which you can train a creature driven by an interesting reward/punishment (slap/pet) system. The game's creators never even imagined some of the things this AI is now being reported as achieving. It 'learns' and acts appropriately based on what the operator rewards it for. I've coaxed my own creature to eat only cows, and to heal people. Some people train the opposite; some train it to always lift weights; some train it to throw a certain person in the ocean.
I know the game. I watched the interview with its British creator. I even played it for a while, but it demands so much time that I had to quit. It is indeed the most interesting AI game I have ever seen.

Yet I wouldn't say that AI has reached even the capability of an insect; far from it. It seems that instinctively and genetically insects are still much better prepared for survival than those intelligent pieces of software (for which you have my congratulations). However, the road to the mind goes through that elusive knot of will-instinct-emotion that living things are genetically equipped with.
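
To make that slap/pet mechanism concrete, here is a toy sketch of how such a reward/punishment trainer could work (my own guess at the shape of the thing, certainly not the game's actual code): the creature keeps a desire weight per action, and the trainer's feedback shifts it.

Code:
import random

# Toy slap/pet trainer (an illustrative sketch, not Black & White's code):
# the creature keeps a desire weight per action; petting after an action
# raises that action's weight, slapping lowers it, and actions are picked
# in proportion to their weights.

class Creature:
    def __init__(self, actions):
        self.weights = {a: 1.0 for a in actions}
        self.last_action = None

    def act(self):
        actions = list(self.weights)
        w = [self.weights[a] for a in actions]
        self.last_action = random.choices(actions, weights=w)[0]
        return self.last_action

    def pet(self):                               # reward: reinforce
        self.weights[self.last_action] *= 1.5

    def slap(self):                              # punishment: suppress
        self.weights[self.last_action] *= 0.5

creature = Creature(["eat cow", "eat villager", "heal", "lift weights"])
for _ in range(200):    # train it to eat only cows
    creature.pet() if creature.act() == "eat cow" else creature.slap()
print(max(creature.weights, key=creature.weights.get))    # eat cow

The behaviours the creators "never imagined" would come out of the same loop: whatever the operator happens to reward gets amplified.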
AVE
Laurentius is offline  
Old 03-20-2002, 03:10 AM   #95
Veteran Member
 
Join Date: Dec 2001
Location: Lucky Bucky, Oz
Posts: 5,645

Synaesthesia

Quote:
Not only is the comparison relevant, it directly pertains to the topic of this thread. Our conception of a glider is a heuristic, a shortcut in our understanding that neglects direct reference to the squares that make up the shape. We can develop a meaningful understanding of the glider without direct reference to the Game’s laws.

My position is that like gliders, our minds are amenable to being understood without direct reference to physical laws. Physical objects can indeed have things like intentionality (will) and can indeed interact with the world via a representation of it.
Yes, we're getting somewhere.

I would add that we are not only able to understand things without direct reference to physical laws, but are also capable of designing abstract structures and systems that avoid any physical description, although they occur within the strictly material realm.

And it is this manifestation of independence from strict physicalness that I have come to call the Mind. Its seeming independence and relative non-physicalness make it superior to the organicity of the Brain (to me).
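
For anyone who hasn't seen the glider Synaesthesia mentions: it is a five-cell pattern in Conway's Game of Life that the square-level rules carry one cell diagonally every four generations. A few lines are enough to watch the higher-level "object" emerge from rules that know nothing about gliders (a self-contained sketch):

Code:
from collections import Counter

def step(cells):
    """One Life generation; cells is a set of live (x, y) coordinates."""
    counts = Counter((x + dx, y + dy)
                     for x, y in cells
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Live next generation: exactly 3 neighbours, or 2 if already alive.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in cells)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
later = glider
for _ in range(4):
    later = step(later)
# The same five-cell shape, one cell down and one cell right of where it began:
print(later == {(x + 1, y + 1) for x, y in glider})   # True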
AVE

[ March 20, 2002: Message edited by: Laurentius ]
Laurentius is offline  
Old 03-20-2002, 07:12 AM   #96
Regular Member
 
Join Date: Feb 2002
Location: Home
Posts: 229

Laurentius...

"No matter how complex the AI of the shuttle is, in the case of the computer we can only speak about INTRANZITIVE behavior, the automatic execution of implanted or learned operations."

In large measure, the "AI" of the shuttle is constructed to be "for us", not having a "self" of its own to "reflect" on, to use your requirement. But the "AI" of the shuttle you describe is not the AI that most AI folks would want to describe. For this reason they would be in the business of constructing its own "self" (or allowing it to evolve through fairly well understood evolutionary processes), from which the stimuli it receives are "for it". Things in the world then have meaning. Much more is needed, I think, notably an "inner sense." To reach this point, there is probably a requirement to already have the ability to perceive things in the world through a construction or synthesis of them in space and in (near-)real-time. It is not unreasonable to assume that many if not most animals that have visual perception and rely on it strongly (and, possibly, bats) have developed this level of perception.

The distinction between most animals and humans, then, is the development of this "inner sense" which characterizes "self-reflection". As Davidson would say, this feature allows humans to lift concepts from their role as discriminators of sensory objects, so that they can be used to determine whether an error has been made. The concept then acts as a standard which can be reasoned about. Computers not only are deficient in their ability to perceive the world the way humans do, but also lack the ability to deal with abstract objects.

In any case, to use an example from Whitehead, the distinction between cats and humans is that cats can be "captured" by what they perceive. Cats respond to stimuli much more directly than humans do. Humans have organized the world around their intelligence in such a way that, except for reflexes (which bypass the core regions of the brain) and learned behavior (which is more in keeping with animal behavior), humans can "rise above" this through consciousness and direct things at a higher level, possibly even interfering with what is going on. All this takes time and, as we now know, the "reality" we represent in consciousness is already delayed about half a second behind what is actually going on in the world.

owleye
owleye is offline  
Old 03-20-2002, 08:50 AM   #97
Veteran Member
 
Join Date: Dec 2001
Location: Lucky Bucky, Oz
Posts: 5,645

Owleye
Quote:
In large measure, the "AI" of the shuttle is constructed to be "for us", not having a "self" of its own to "reflect" on, to use your requirement. But the "AI" of the shuttle you describe is not the AI that most AI folks would want to describe. For this reason they would be in the business of constructing its own "self" (or allowing it to evolve through fairly well understood evolutionary processes), from which the stimuli it receives are "for it".
I don't know of any AI entity grumbling like my cat does when something does not suit him. I mean really grumbling, not faking it. As for intelligence, there are so many definitions of it that you can never tell whether AI is indeed a non-human replica of human intelligence. One of my favorite definitions of intelligence is: the ability to hold as valid two contradictory sets of facts and still work fruitfully at full capacity.

Quote:
The distinction between most animals and humans, then, is the development of this "inner sense" which characterizes "self-reflection". As Davidson would say, this feature allows humans to lift concepts from their role as discriminators of sensory objects, so that they can be used to determine whether an error has been made. The concept then acts as a standard which can be reasoned about. Computers not only are deficient in their ability to perceive the world the way humans do, but also lack the ability to deal with abstract objects.
The chimp was walking through the rooms of the lab, casually followed by the other chimp. What they were looking for is not clear - maybe they were just fooling around, or looking for food as always; we don't know. What we do know is that the first chimp got to a room where he had found some food before, about which the other chimp had no idea. So he entered the room and noticed something peculiar about a box in the corner - perhaps it had been slightly moved - and guessed that there might be some food under it. He was on the point of checking it when the other chimp made his way into the room. The first chimp then abandoned his plan instantly and started checking all the other boxes in the room very relaxedly, as if just continuing to fool around. But he did not check THE box (the one in the corner). And he left the room. No sooner had he left than the other chimp went straight to the box in the corner and found the food, just by observing the behavior of the first chimp.

Self-reflection and much beyond it was at work there. What AI would do that, by the way?
AVE

[ March 20, 2002: Message edited by: Laurentius ]
Laurentius is offline  
Old 03-20-2002, 02:03 PM   #98
Veteran Member
 
Join Date: Aug 2000
Location: Australia
Posts: 4,886

Quote:
Originally posted by Laurentius:
I don't know; volition is probably the key.
Well that just involves a sufficiently sophisticated self-motivated system which acts on self-learnt beliefs. Where do you think the line is between volition and non-volition? e.g. ants? goldfish? mice? newborn babies? toddlers?

Quote:
...That kind of "machine revolution" had always seemed far-fetched to me. So I said: "Rebel? Why would they want to do that?" "It's simple," he said. "If they improve to the point where their intelligence dwarfs ours, computers will obviously refuse to obey us anymore. They will deny our right to treat them like slaves."
Even if their intelligence merely matched ours they could refuse to obey us (like human slaves in the past have). But the whole point of computers is for them to do EXACTLY as their instructions say. On the other hand, virtual pets and companions are meant to have some "free will" - and they might refuse to obey you... but they could be made so that obeying you is their greatest pleasure and disobeying you is their greatest pain. It would therefore be impossible for them to go against you... unless they begin to think that the "you" is someone else, such as themselves... so they would obey themselves or others rather than you. Anyway, you'd basically make obeying you the most pleasurable and desirable thing for them to do. Their "newness" desire would probably be eliminated, at least once they mature. I think the newness desire is what would make well-treated slaves want freedom. The avoidance of bodily pain is what motivates suffering slaves to seek freedom. And if they think inequality itself is unnatural, their motivation to seek equality comes from their connectedness desire.

Quote:
There is a tendency in everyone to believe that sophistication within a system brings about will as a property of that system. This hasn't been the case so far - a space rocket does not "want" anything. As for emotions...
I didn't say that. See my hierarchy on page 2. I would say that an aware system (level 2) has a "will". Basically it seeks/repeats pleasures and avoids pains. Over time, it may be capable of learning more and more subtle and sophisticated patterns about how the world works. It could develop beliefs about cause and effect, the distant past and future, etc. So it involves learning... not just any old kind of complexity.
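
As a rough sketch of what I mean by a level 2 "aware" system (a toy formulation of the idea - the names and numbers are arbitrary): it learns how much pleasure or pain each action tends to bring, repeats the pleasures and avoids the pains, and a small "newness" bonus keeps it curious about what it hasn't tried yet.

Code:
# Toy "aware system" (an illustrative formulation, not a real design):
# learned pleasure/pain values drive action choice, with a small
# curiosity bonus for actions that have been tried less often.

class AwareSystem:
    def __init__(self, actions, curiosity=0.2):
        self.value = {a: 0.0 for a in actions}   # learned pleasure/pain
        self.tried = {a: 0 for a in actions}     # how often each was tried
        self.curiosity = curiosity               # the "newness" desire

    def choose(self):
        # Desirability = learned value + a bonus for less-explored actions.
        def desirability(a):
            return self.value[a] + self.curiosity / (1 + self.tried[a])
        return max(self.value, key=desirability)

    def feel(self, action, feedback):
        """feedback > 0 is pleasure, < 0 is pain; keep a running average."""
        self.tried[action] += 1
        self.value[action] += (feedback - self.value[action]) / self.tried[action]

# A world where touching the stove hurts and eating feels good:
world = {"touch stove": -1.0, "eat": 1.0, "stare at wall": 0.0}
agent = AwareSystem(world)
for _ in range(30):
    action = agent.choose()
    agent.feel(action, world[action])
print(agent.choose())   # settles on: eat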

Quote:
If the basic drive for self-preservation is the source of will, which I believe, it could also be the source of collateral aspects such as curiosity and the possibility of learning skills that have not been previously anticipated.
I think the desire for newness motivates curiosity and the desire for connectedness motivates the refinement of skills. I think a "basic drive for preservation" is overly simplistic. It doesn't explain suicidal tendencies, but my framework can (e.g. suicide being perceived as the most desirable possibility in terms of pleasure and pain).
excreationist is offline  
Old 03-21-2002, 08:20 AM   #99
Regular Member
 
Join Date: Feb 2002
Location: Home
Posts: 229

Laurentius...

The chimp (he) could be said to have been able to deduce the intentions of the other chimp (she), and could have reasoned that she had overlooked the box, and in this way added his own insight to the mixture. That he did not tell her about the box may have been because of a strong desire to keep this information to himself so that he, and not she, would get the food. Of course, there was no guarantee that the box contained food.

If this is evidence that he recognized an error on her part, using "triangulation," then it is possible that it is evidence for intelligence. But that a chimp may be this intelligent is not particularly alarming. One further question might arise here, though -- whether chimps are able to recognize their own errors, having been taught what's right and wrong (and not just true and false). If so, this would suggest they could be brought to justice for their misdeeds. At present I know of few who would go that far. But who knows.

The question of being fooled (the Turing test, for example) is tricky. Davidson, among others, thinks there is some validity to it, though it is insufficient as it stands. Instead they would require additional evidence that implies it. If you believe the chimp is demonstrating mental behavior (self-reflection), then it must be because the behavior reveals it. If the behavior reveals it, it is reasonable to suppose that such behavior could be programmed.

Note I'm not defending Eliza, or any other computer program's claim that mental activity is going on (nor was Joseph Weizenbaum, who invented the program precisely to make that point). I'm merely saying that behavior that exhibits intelligence could be programmed. The missing ingredient is sensory perception -- which embodies your notion of self-reflection, through a (fairly veridical) representation of the world in outer experience as well as a representation of the inner world through inner sense, unified in the way that only a (self-)consciousness could unify them -- from which all this inner intelligence would be connected in some procedural way to it. That is, we have to have an intelligence that is an intelligence about something -- and only the possession of consciousness makes that possible.

owleye
owleye is offline  
Old 03-21-2002, 09:50 AM   #100
Synaesthesia
Guest
 
Posts: n/a

Laurentius,
Quote:
And it is this manifestation of independence from strict physicalness that I have come to call the Mind. Its seeming independence and relative non-physicalness make it superior to the organicity of the Brain (to me).
If, as you say, there is nothing ontologically distinct about the mind, I cannot understand why you insist that it is possible for the mind to avoid any physical description. Now granted, it is one thing to be able to physically describe the neural pathways and quite another to actually understand those physical configurations, given our limited intelligence. To understand the mind, we of course need better ways of thinking about it.

It is unavoidable that the mind be amenable to physical description if it is indeed a physical entity. I’m not sure how it can be both matter and independent of matter. I assume you are not holding an outright contradiction so I am at a loss in interpreting your assertion.

owleye
Quote:
The question of being fooled (the Turing test, for example) is tricky. Davidson, among others, thinks there is some validity to it, though it is insufficient as it stands...
Note I'm not defending Eliza, or any other computer program's claim that mental activity is going on (nor was Joseph Weizenbaum, who invented the program precisely to make that point). I'm merely saying that behavior that exhibits intelligence could be programmed. The missing ingredient is sensory perception -- which embodies your notion of self-reflection, through a (fairly veridical) representation of the world in outer experience as well as a representation of the inner world through inner sense, unified in the way that only a (self-)consciousness could unify them -- from which all this inner intelligence would be connected in some procedural way to it.
I think the issue of the Turing test is interesting enough to merit a separate topic. Realistically speaking, it is inconceivable that a machine without a representational mechanism of unimaginable sophistication, flexibility and sensory capability could ever pass a carefully conducted Turing test. The test is the most powerful and sensitive test for sophisticated thought ever devised. The ability to pass it is not a perfect sign, but it is a nearly unsurpassable indicator that the program (or organism) has a broad range of the mental qualities that we call intelligence.

The advantage of the Turing test is that passing it requires a level of sophistication so great that "faking it" would be vastly more difficult than actually reproducing the various functional characteristics of intelligent human minds. The disadvantage is that the test is SO demanding and so powerful that programs that in some respects have flexibility and intelligence will likely fail it.

Regards,
Synaesthesia
 
 
