FRDB Archives

Freethought & Rationalism Archive

The archives are read only.


Old 08-19-2002, 01:26 PM   #21
Regular Member
 
Join Date: Oct 2001
Location: Oztralia (*Aussie Aussie Aussie*)
Posts: 153
Post

What about Gödel's theorem and its implications for AI? I mean, if by AI you mean "analogous to a human mind", then I think that was shown to be impossible by Gödel's theorems.

Taken from this article (a review of Roger Penrose's book Shadows of the Mind: A Search for the Missing Science of Consciousness):

------

"The relevance of all this to computers is that all computers involve - indeed are - systems for the mechanical manipulation of strings of symbols (or "bits") carried out according to mechanical recipes called "programs" or "algorithms." Now suppose that there could be a computer program that could perform all the mental feats of which a man is capable. (In fact, such a program must be possible if each of us is in fact a computer.) Given sufficient time to study the structure of that program, a human mathematician (or group of mathematicians) could construct a "Godel proposition" for it, namely a proposition that could not be proven by the program but that was nevertheless true, and - here is the crux of the matter - which could be seen to be true by the human mathematician using a form of reasoning not allowed for in the program. But this is a contradiction, since this hypothetical program was supposed to be able to do anything that the human mind can do.

What follows from all this is that our minds are not just computer programs. The Lucas-Penrose argument is much more involved than the bare outline I have just given would suggest, and many people have raised a variety of objections to it. But Lucas and Penrose have had little difficulty in showing the insubstantiality of these objections, and I think it is fair to say that their argument has not been dented. And yet, the argument Lucas and Penrose have made is so disconcerting to certain habits of thought that the reflexive response of many people is to say that it must be wrong. Science has conditioned us to expect the breakthrough, the revolution in thought, the astonishing new possibility. To say that machines will never think is as foolish as it was to have said that man would never fly. But science has shown us not only possibilities but limitations."

-----
<a href="http://www.firstthings.com/ftissues/ft9511/articles/revessay.html" target="_blank">http://www.firstthings.com/ftissues/ft9511/articles/revessay.html</a>
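The "Godel proposition" construction in the review is a diagonal argument, structurally the same move Turing used for the halting problem. Here is a minimal Python sketch of that diagonal trick, purely illustrative (the function names are my own, and the analogy to Gödel's arithmetic construction is structural, not exact):

```python
def build_counterexample(halts):
    """Given any claimed total halting-decider `halts(program)`,
    build a program the decider must misjudge (the diagonal trick)."""
    def diagonal():
        if halts(diagonal):
            while True:      # decider said "halts", so loop forever
                pass
        return "done"        # decider said "loops", so halt at once
    return diagonal

# A (necessarily wrong somewhere) candidate decider: claim everything halts.
def naive_halts(program):
    return True

d = build_counterexample(naive_halts)
# naive_halts judges d to halt, yet actually running d would loop forever,
# so the decider is provably wrong about its own diagonal program.
print(naive_halts(d))  # True
```

Gödel's version works over provability in arithmetic rather than halting, but the self-reference is the same: the constructed sentence "outruns" the fixed system it was built from.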
Plump-DJ is offline  
Old 08-19-2002, 02:45 PM   #22
Veteran Member
 
Join Date: May 2002
Location: Ontario, Canada
Posts: 1,125
Post

Quote:
"The relevance of all this to computers is that all computers involve - indeed are - systems for the mechanical manipulation of strings of symbols (or "bits") carried out according to mechanical recipes called "programs" or "algorithms."
All he has shown is that today's computers aren't capable of consciousness. Our consciousness is a result of our brains; if the function of each component of our brains were copied in every way by an inorganic material, cell by cell, why would this machine not be conscious?

Thought experiment.

At this moment your brain is completely organic and you are conscious.

Now, replace a single brain cell with an inorganic replacement that does everything the cell it replaced did precisely.

Now another, and another.

At what point are you no longer a truly conscious being, but merely a complex machine that can "fake it"?
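The replacement scenario can be mocked up in a few lines. A toy sketch, assuming (as the thought experiment does) that each replacement implements exactly the same input-output rule; the classes and numbers here are invented for illustration:

```python
import random

def transfer(inputs, weights, bias):
    """Shared input-output rule: a simple threshold unit."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) + bias > 0 else 0

class OrganicNeuron:
    def __init__(self, weights, bias):
        self.weights, self.bias = weights, bias
    def fire(self, inputs):
        return transfer(inputs, self.weights, self.bias)

class SiliconNeuron(OrganicNeuron):
    """Functionally identical replacement: same weights, same rule,
    different 'substrate'."""
    pass

# A tiny 'brain' of five threshold units with random fixed weights.
random.seed(0)
brain = [OrganicNeuron([random.uniform(-1, 1) for _ in range(3)],
                       random.uniform(-1, 1)) for _ in range(5)]

def brain_output(brain, stimulus):
    return [n.fire(stimulus) for n in brain]

stimulus = [0.2, -0.5, 0.9]
before = brain_output(brain, stimulus)

# Replace neurons one at a time; behaviour never changes at any step.
for i, n in enumerate(brain):
    brain[i] = SiliconNeuron(n.weights, n.bias)
    assert brain_output(brain, stimulus) == before
```

By construction there is no step at which the input-output behaviour changes, which is exactly why the thought experiment puts pressure on anyone who thinks consciousness vanishes somewhere along the way.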
Bible Humper is offline  
Old 08-20-2002, 01:02 AM   #23
Regular Member
 
Join Date: Oct 2001
Location: Oztralia (*Aussie Aussie Aussie*)
Posts: 153
Post

Quote:
All he has shown is that today's computers aren't capable of consciousness. Our consciousness is a result of our brains; if the function of each component of our brains were copied in every way by an inorganic material, cell by cell, why would this machine not be conscious?
1) Well actually I think the implications are a little bit more profound than that. Gödel's theorems and their implications for such things as AI and "unlimited knowledge" are still there, regardless of how advanced our computers become or how advanced our mathematics gets.

2) I think the point you make as well, about the mind being merely a function of the brain or reducible in some sense to the brain, is actually "the point" that Gödel's theorems dispute.

----------
Penrose establishes with admirable rigor that no machine that works "computationally" can think as we do. He then argues (convincingly) that all machines constructed using the known laws of physics will work computationally. And having assumed that the human mind is nonetheless entirely explicable by the laws of physics, he is forced to conclude that there must be new laws of physics involving processes that are intrinsically non-computational (which is not to say that they are not described by deterministic mathematical laws).
------------------
Plump-DJ is offline  
Old 08-20-2002, 02:05 AM   #24
Veteran Member
 
Join Date: Feb 2002
Location: Singapore
Posts: 3,956
Post

Hi guys, is it possible for A.I. to experience human feelings?
Answerer is offline  
Old 08-20-2002, 04:26 AM   #25
Veteran Member
 
Join Date: Dec 2001
Location: Tallahassee
Posts: 1,301
Post

Quote:
Originally posted by Answerer:
<strong>Hi guys, is it possible for A.I. to experience human feelings?</strong>
If humans can do it and we are nothing but a collection of the matter that makes us then yes, it is possible.

There is an algorithm for emotions.
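Whether or not that is true of brains, a trivial "emotion algorithm" is at least easy to write. A toy appraisal-style state machine, with entirely made-up parameters, just to show the claim isn't incoherent on its face:

```python
from dataclasses import dataclass

@dataclass
class EmotionState:
    """Toy appraisal model: emotion as state updated by events.
    'valence' and 'decay' are invented illustrative parameters,
    not a claim about how brains actually work."""
    valence: float = 0.0   # negative = unpleasant, positive = pleasant
    decay: float = 0.9     # feelings fade toward neutral over time

    def feel(self, event_value: float) -> float:
        # Blend the new event into the current mood, letting the old mood fade.
        self.valence = self.decay * self.valence + event_value
        return self.valence

mood = EmotionState()
mood.feel(+1.0)   # good news lifts the mood
mood.feel(-2.0)   # bad news outweighs it
print(round(mood.valence, 2))  # -1.1
```

The hard philosophical question, of course, is whether any such state update would be *felt* rather than merely computed, which is the point the thread is arguing about.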
Liquidrage is offline  
Old 08-20-2002, 06:01 AM   #26
Veteran Member
 
Join Date: Mar 2002
Location: Southwest USA
Posts: 4,093
Post

Quote:
Originally posted by Answerer:
Hi guys, is it possible for A.I. to experience human feelings?
I too think that it will someday be possible to do this, but humanity is nowhere near this level of sophistication now. We first would have to define what, exactly, feelings are. If feelings were simply bio-chemical responses to bio-chemical stimulation, we could at least have a starting place, but feelings are much more complicated than that.

Here's a question. If there was a clone of me, an identical twin, and someone were to completely erase his brain of all memories and install all of my memories into the clone, would he become me, or would he be someone else?
Tristan Scott is offline  
Old 08-20-2002, 10:52 AM   #27
Senior Member
 
Join Date: Jul 2002
Location: Finland
Posts: 915
Post

Quote:
Plump-DJ: What about Gödel's theorem and its implications for AI? I mean, if by AI you mean "analogous to a human mind", then I think that was shown to be impossible by Gödel's theorems.
First of all, I often get the impression that most people out there (even some of those who should, by their education and/or profession) don't seem to appreciate the repercussions of Gödel's theorem (for example, it makes Laplace's demon impossible unless it is completely detached from the reality it describes).
The history of AI studies and Gödel's theorem are strangely intertwined. Gödel's theorem is only part of the work; the Lucas-Penrose argument also needs Turing's result that Hilbert's Entscheidungsproblem (the "decision problem") isn't solvable. Incidentally, Turing can be considered to have started the AI project with his article
<a href="http://www.abelard.org/turpap/turpap.htm" target="_blank">Computing Machinery and Intelligence</a>, which I warmly recommend to anyone interested. Chapter 6, "Contrary Views on the Main Question", represents every form of critique that I've seen used against the possibility of "real" AI ever since (or before). Turing knew Gödel's theorem inside out (obviously, since he took to its conclusion what Gödel had started), and he included the "mathematical objection" in his set of possible objections against AI (some ten years before Lucas even got around to putting his version out)... Turing's response goes as follows:

Quote:
The short answer to this argument is that although it is established that there are limitations to the powers of any particular machine, it has only been stated, without any sort of proof, that no such limitations apply to the human intellect. But I do not think this view can be dismissed quite so lightly. Whenever one of these machines is asked the appropriate critical question, and gives a definite answer, we know that this answer must be wrong, and this gives us a certain feeling of superiority. Is this feeling illusory? It is no doubt quite genuine, but I do not think too much importance should be attached to it. We too often give wrong answers to questions ourselves to be justified in being very pleased at such evidence of fallibility on the part of the machines. Further, our superiority can only be felt on such an occasion in relation to the one machine over which we have scored our petty triumph. There would be no question of triumphing simultaneously over all machines. In short, then, there might be men cleverer than any given machine, but then again there might be other machines cleverer again, and so on.
In my opinion this still is the weak spot of Lucas-Penrose argument:


Quote:
Now suppose that there could be a computer program that could perform all the mental feats of which a man is capable. (In fact, such a program must be possible if each of us is in fact a computer.) Given sufficient time to study the structure of that program, a human mathematician (or group of mathematicians) could construct a "Godel proposition" for it, namely a proposition that could not be proven by the program but that was nevertheless true, and - here is the crux of the matter - which could be seen to be true by the human mathematician using a form of reasoning not allowed for in the program.
A group of mathematicians could perhaps construct such a proposition, but it is possible that none of them (as individuals) could prove it (=understand it?) any better than the AI program. Both Lucas and Penrose just *assume* they could do it.
In the recent literature, at least Colin McGinn in his "Problems of Consciousness" (gasp, it's already 10 years old... don't know if it's that recent after all) has taken the view that there is a theory of consciousness but, if our consciousness is based on it, we can't understand it; in short: "Mind may just not be big enough to understand mind". Funnily enough, I don't remember McGinn making any reference to Gödel's theorem...

Should say a few words about John Searle and biological naturalism as well, but this is getting too long and too late, so maybe next time.

...ah well, I see Searle is fully covered in a related thread already, so I won't bother...

-S-

[ August 20, 2002: Message edited by: Scorpion ]

[ August 23, 2002: Message edited by: Scorpion ]
Scorpion is offline  
Old 08-20-2002, 11:01 AM   #28
Veteran Member
 
Join Date: May 2002
Location: Ontario, Canada
Posts: 1,125
Post

Hello Plump DJ,

Quote:
1) Well actually I think the implications are a little bit more profound than that. Gödel's theorems and their implications for such things as AI and "unlimited knowledge" are still there, regardless of how advanced our computers become or how advanced our mathematics gets.
I don't see how he has addressed this. What leads him to conclude that brain cells will forever be beyond our ability to recreate mechanically?

Also, since the human brain is not even close to being fully understood, how did he conclude that our mathematics will never be sufficient to create true A.I.? Is he assuming that the mysteries of the human brain will be forever beyond our ability to unravel?

Quote:
2) I think the point you make as well, about the mind being merely a function of the brain or reducible in some sense to the brain, is actually "the point" that Gödel's theorems dispute.
Well, I have a feeling that this guy thinks that our consciousness is the result of a magic soul.

What part of our brains is irreducible, and how does this guy know it is irreducible when there seems to be so much still to discover?

Also, how does an organ that is irreducibly complex evolve?

Quote:
Penrose establishes with admirable rigor that no machine that works "computationally" can think as we do. He then argues (convincingly) that all machines constructed using the known laws of physics will work computationally.
Our brains violate the known laws of physics? WTF?
Has this guy informed the A.I. researchers that they are all wasting their time unless they can figure out the unknown laws of physics that govern our brain? Where does this guy get this information from, since even top neurologists say that they don't understand all that much about the brain?

Quote:
And having assumed that the human mind is nonetheless entirely explicable by the laws of physics, he is forced to conclude that there must be new laws of physics involving processes that are intrinsically non-computational (which is not to say that they are not described by deterministic mathematical laws).
Why would inorganic materials be incapable of non-computational processes? We don't fully understand how the different components of our brains interact to allow consciousness, creativity, etc. and yet this guy is already convinced that it will be impossible for us to ever understand and create?

IMHO, he has based this theory on his assumption that our consciousness is the result of a magical soul, and is thus beyond naturalistic means to reproduce. He doesn't seem to have anything else.

Quote:
SCoW:

Thought experiment.

At this moment your brain is completely organic and you are conscious.

Now, replace a single brain cell with an inorganic replacement that does everything the cell it replaced did precisely.

Now another, and another.

At what point are you no longer a truly conscious being, but merely a complex machine that can "fake it"?
Bible Humper is offline  
Old 08-20-2002, 12:33 PM   #29
Veteran Member
 
Join Date: Mar 2002
Location: the dark side of Mars
Posts: 1,309
Post

I personally don't think we humans are all that intelligent, so I easily think AI can be created that will be smarter than us someday.
Radcliffe Emerson is offline  
Old 08-20-2002, 02:41 PM   #30
Veteran Member
 
Join Date: Oct 2000
Location: Alberta, Canada
Posts: 5,658
Post

I suggest reading Dennett's review of Penrose's <a href="http://ase.tufts.edu/cogstud/papers/penrose.htm" target="_blank">The Emperor's New Mind</a>:
Quote:
The argument Penrose unfolds has more facets than my summary can report, and it is unlikely that such an enterprise would succumb to a single, crashing oversight on the part of its creator--that the argument could be "refuted" by any simple objection. So I am reluctant to credit my observation that Penrose seems to make a fairly elementary error right at the beginning, and at any rate fails to notice or rebut what seems to me to be an obvious objection. Recall that the burden of the first part of the book is to establish that minds are not "algorithmic"--that there is something special that minds can do that cannot be done by any algorithm (i.e., computer program in the standard, Turing-machine sense). What minds can do, Penrose claims, is see or judge that certain mathematical propositions are true by "insight" rather than mechanical proof. And Penrose then goes to some length to argue that there could be no algorithm, or at any rate no practical algorithm, for insight.


But this ignores a possibility--an independently plausible possibility--that can be made obvious by a parallel argument. Chess is a finite game (since there are rules for terminating go-nowhere games as draws), so in principle there is an algorithm for either checkmate or a draw, one that follows the brute force procedure of tracing out the immense but finite decision tree for all possible games. This is surely not a practical algorithm, since the tree's branches outnumber the atoms in the universe. Probably there is no practical algorithm for checkmate. And yet programs--algorithms--that achieve checkmate with very impressive reliability in very short periods of time are abundant. The best of them will achieve checkmate almost always against almost any opponent, and the "almost" is sinking fast. You could safely bet your life, for instance, that the best of these programs would always beat me. But still there is no logical guarantee that the program will achieve checkmate, for it is not an algorithm for checkmate, but only an algorithm for playing legal chess--one of the many varieties of legal chess that does well in the most demanding environments. The following argument, then, is simply fallacious:


(1) X is superbly capable of achieving checkmate.

(2) There is no (practical) algorithm guaranteed to achieve checkmate.

therefore

(3) X does not owe its power to achieve checkmate to an algorithm.


So even if mathematicians are superb recognizers of mathematical truth, and even if there is no algorithm, practical or otherwise, for recognizing mathematical truth, it does not follow that the power of mathematicians to recognize mathematical truth is not entirely explicable in terms of their brains executing an algorithm. Not an algorithm for intuiting mathematical truth--we can suppose that Penrose has proved that there could be no such thing. What would the algorithm be for, then? Most plausibly it would be an algorithm--one of very many--for trying to stay alive, an algorithm that, by an extraordinarily convoluted and indirect generation of byproducts, "happened" to be a superb (but not foolproof) recognizer of friends, enemies, food, shelter, harbingers of spring, good arguments--and mathematical truths!
Penrose simply assumes that humans could find a Gödel proposition for a given AI, and that if they could it would mean something significant about the AI, which means he has simply assumed that humans do not have Gödel propositions.
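Dennett's chess point can be shown in miniature with tic-tac-toe. The sketch below (my own toy code, not Dennett's) is an algorithm for playing *legal* tic-tac-toe; full-depth search happens to never lose, yet nothing in its structure is a guarantee of winning, and cutting the search depth yields exactly the "reliable but unguaranteed" player Dennett describes:

```python
import math

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for i, j, k in LINES:
        if board[i] is not None and board[i] == board[j] == board[k]:
            return board[i]
    return None

def minimax(board, player, depth):
    """Depth-limited minimax: an algorithm for playing legal moves,
    not an algorithm-for-winning in Penrose's sense."""
    w = winner(board)
    if w is not None:
        return (1 if w == 'X' else -1), None
    empties = [i for i, c in enumerate(board) if c is None]
    if not empties or depth == 0:
        return 0, None                     # crude heuristic value at the cutoff
    best_move = None
    best_val = -math.inf if player == 'X' else math.inf
    for m in empties:
        board[m] = player
        val, _ = minimax(board, 'O' if player == 'X' else 'X', depth - 1)
        board[m] = None
        if (player == 'X' and val > best_val) or (player == 'O' and val < best_val):
            best_val, best_move = val, m
    return best_val, best_move

# Full-depth search shows perfect play draws...
val, move = minimax([None] * 9, 'X', 9)
print(val)  # 0
# ...while a depth-2 player still produces strong legal moves with no guarantee.
_, shallow_move = minimax([None] * 9, 'X', 2)
```

The fallacy Dennett names is inferring from "no guaranteed algorithm for the goal exists" to "no algorithm explains the performance": the depth-limited player owes its skill entirely to an algorithm even though that algorithm guarantees nothing.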
tronvillain is offline  
 


This custom BB emulates vBulletin® Version 3.8.2
Copyright ©2000 - 2015, Jelsoft Enterprises Ltd.