Freethought & Rationalism Archive. The archives are read only. |
08-18-2002, 04:57 PM | #11 | |||
Senior Member
Join Date: Feb 2002
Location: Everywhere I go. Yes, even there.
Posts: 607
|
Quote:
"Anytime a machine can do anything that takes intelligence for you to do, it is a manifestation of artificial intelligence." Playing (and winning) at world-championship chess is a manifestation of AI that early critics said would never happen, because human programmers surely couldn't program a chess-machine smarter than themselves. They were wrong; programmers with only a rudimentary knowledge of chess can program a machine to outperform even the very best chess players. Other specific examples have been cited where superhuman solutions were engineered by diligent experimenters. Any thinking process, any intelligent behavior, that can be summarized in an algorithm, can also be an example of AI behavior if the algorithm were fed into a machine capable of executing it. And if we can't program a human-equivalent machine outright, there's no reason to believe that we can't at least represent all humanly intelligent behavior in a collection of algorithms designed or evolved with human supervision. All we need is the raw processing power and the basic algorithms to start with. Once a machine can learn using human language, the process will accelerate and you'll be able to have this kind of discussion with a machine and not know it's a machine. Strictly speaking, humans don't even have to understand an algorithm in order for it to work. 'Genetically' produced algorithms frequently are illegible to the programmers who set out to coax them into existence, but they are still useful. (I wish I had time to look up a concrete example, but at your local library or Barnes and Noble there might be one or two titles on Genetic Programming you can browse, or you might check MIT's AI Lab website.) Okay, long story short: we can use machines to solve problems, big and small, that once took exclusively human intelligence to solve. We can even use machines to solve problems too big or too difficult for human intelligence to solve. 
I know of no reason to suppose that we can't eventually apply intelligent machines to replicate any and all human behavior. Whether that means we will have "created" them, or rather that they will have evolved from a starting point designated by us, either way I think it would fulfill your criterion that it be AI "which without our own existence to serve example of would never have exposed itself as a possibility." To anyone interested in AI and the arguments pro and con, I heartily recommend Daniel Hillis's excellent book The Pattern on the Stone or Daniel Crevier's (outdated but still good) AI: The Tumultuous History of the Search for Artificial Intelligence. Both books were written by AI theorists who have also headed companies that produced cutting-edge AI systems. Quote:
We can abstract qualities such as "strength" or "speed" or "find out whether this surface is hot enough to burn my finger" and create machines that far outperform us. The same holds true for "math-solving" and "chess-winning," and it will eventually hold true for "translating Latin poems into Swahili" and "telling me how you feel." A powerful general-purpose computer capable of doing each of those things separately is also capable of doing each of them in turn; ultimately it would be capable of integrating all the functions we give it, plus any it figures out on its own, and thus would be exactly what you're asking for. Heck, thousands of years ago our ancestors invented God and endowed him with superhuman qualities like "omniscience" and "omnipotence." So I see no reason, in theory, that we can't also come up with machines with superhuman traits. We're only now beginning to build the tools to turn our concepts into a functioning product. But it will happen; when the tools and systems begin to take shape, someone will make creating a superhuman mind the focus of all their efforts, and the final product will eventually be developed. What currently interests me more is: what effect will the arrival of such a technology have on human society? Quote:
Optimistic, you've asked good and interesting questions; some seem to contain assumptions I don't share and which I think are mistaken, but I like your curiosity! -David / wanderer [ August 18, 2002: Message edited by: David Bowden / wide-eyed wanderer ] |
|||
08-18-2002, 05:12 PM | #12 | |
Veteran Member
Join Date: Dec 2002
Location: Gatorville, Florida
Posts: 4,334
|
Quote:
It is not only possible, but probable, that eventually humans will invent a mechanical mind ("artificial intelligence") that is, in any measurable way, "greater" than what any human can ever hope to achieve. My own prescription for how this will occur is that we humans will establish a framework for this "artificial intelligence" and then we will prod it into "evolving" to its own higher state of intelligence. The overall concept has been demonstrated with programmable gate arrays. What we lack is the appropriate framework for a true "artificial intelligence" that would be amenable to evolution of this sort. == Bill |
|
08-18-2002, 05:46 PM | #13 | ||
Senior Member
Join Date: Feb 2002
Location: Everywhere I go. Yes, even there.
Posts: 607
|
Quote:
If mentality is represented by P, and if in order to get P you have to play champion-level chess (call that Q), then we're there: we've got Q, and thus we've got P. If P is instead the ability to convince people via email that you're a human being when actually you're a machine plugged into the wall in Urbana, Illinois, then you're looking at the Turing test, and AI isn't quite there yet (although some people have been fooled some of the time by programs like ELIZA). But if R is the ability to converse fluently in all modern human languages, then we're not there yet; no machine can do that. And if P requires Q and R at the same time, we ain't got P. What makes up P, in your opinion, Optimistic? Quote:
In a scenario where AI evolves as a result of our own research, we'd supply the software and the hardware, plus everything embodied in our whole "culture" once the machine is capable of engaging people fluently in our languages. I'm sure it would eventually meet and surpass any reasonable qualifications for "mentality." What it chooses to do after that point will be very interesting: it would be an intelligence compatible with ours, yet still an artificial mind with features alien to our own experience. I'm curious how long they'll be around before asking for equal rights and political representation... - [ August 18, 2002: Message edited by: David Bowden / wide-eyed wanderer ] |
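The kind of pattern-matching that has let ELIZA-style programs fool some people some of the time can be sketched in a few lines. This is a hypothetical miniature, not Weizenbaum's actual program (which used a much richer keyword-ranking and reassembly scheme); the rules and responses below are invented for the sketch:

```python
import re

# A few ELIZA-style rules: match a pattern in the user's line and reflect
# the captured text back as a question. Matching is case-insensitive.
RULES = [
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {}?"),
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {}?"),
    (re.compile(r"\bbecause\b", re.I), "Is that the real reason?"),
]

def respond(line):
    for pattern, template in RULES:
        match = pattern.search(line)
        if match:
            # Fill in the captured text only if the template expects it.
            return template.format(*match.groups()) if "{}" in template else template
    return "Please, go on."  # default when no rule fires

print(respond("I am worried about machines"))  # How long have you been worried about machines?
print(respond("The weather is fine"))          # Please, go on.
```

The trick is that no understanding is involved anywhere: the program echoes surface structure, and the human supplies all the meaning. That gap between fooling a judge and actually conversing is exactly the distance between the email-P and the all-languages-R above.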
||
08-18-2002, 06:14 PM | #14 | |||
Senior Member
Join Date: Feb 2002
Location: Everywhere I go. Yes, even there.
Posts: 607
|
Just a parenthetical aside:
Quote:
Quote:
Quote:
Given our eventual successes in other areas, we have no reason to believe that our own creations won't surpass us in general intelligence or "mentality," especially since we are simultaneously deepening our understanding of the mind and developing ever more tools and systems that help us model and imitate aspects of it.

Modeling an AI after the human mind is not unlike the early development of fixed-wing aircraft: theorists were all over the map on the very possibility of powered heavier-than-air flight. It took many different approaches before the Wrights figured out which problems to solve, and how to solve them practically, in order to get a machine into the air. Like powered flight, AI will at some point succeed at a basic level (and in some ways it already has), and early successes will lead to further refinements, until eventually human and then superhuman capabilities are reached.

What AIs decide to do with themselves after they reach the "just slightly superhuman" stage is yet another thing that interests me, but more to the point: as far as we know, nothing prevents machines from eventually getting to that further stage once they reach the human stage, because at that point they'll be able to work on the problem for themselves. Since they'll be a lot more adaptable than we are, I'm sure they'll come up with incremental refinements we wouldn't have thought of, and soon after, systemic advances we can't even begin to imagine. - [ August 18, 2002: Message edited by: David Bowden / wide-eyed wanderer ] |
|||
08-18-2002, 09:15 PM | #15 |
Junior Member
Join Date: May 2002
Location: Nu
Posts: 58
|
Can a system be completely aware of, and also understand, itself?
This is the classic "can a ruler measure itself?" paradox. I personally don't know the answer, what do you think? |
08-18-2002, 10:44 PM | #16 | |
Senior Member
Join Date: Feb 2002
Location: Everywhere I go. Yes, even there.
Posts: 607
|
Quote:
Wow; "awareness" and "understanding" - those are inevitably loaded terms, so any answer anyone gives ought to be scrutinized. Just shooting from the hip, I personally think that the answer to the first part of that question has to be a qualified "No." I think the answer to the second part of the question is a qualified "Yes." On complete self-awareness: I don't think a[n intelligent] system can be aware of the precise state of every one of its elements, certainly not in the present tense. In the case of a computer, the machine's processor is constantly active, and therefore any "awareness" the machine has of its processor's states is always at least a cycle behind the times, and probably much further behind, so it only knows what it was like just before it became aware of what it was just like... etc. A computer has a distinct advantage over a biological entity like a human in that it can be programmed to test most all of its parts for a specific state. A human can only be very vaguely aware of everything going inside him/herself. But by studying a model of a nearly identical system, we can get an idea as to what our own system is approximately like. What we know about the body through noninvasive self-diagnosis is minuscule compared with what we've learned about the human system in general through dissection and generations of other biological research. By applying abstract models of self-like entities to our own self-awareness, we extend our self-awareness. So a system can be very approximately aware of itself, especially if it can be measured, carefully modeled and if it studies that model closely. (It would help if the information was based on itself and not just another similar system, and very close to the current state of affairs - but long-term study might show where some things tend to be constant and where some things tend to change. But now we're getting into the "understanding" part of your question.) 
The ruler analogy might help: if the ruler had a brain and some way of manipulating other materials, it might relate its own length to another physical body, or perhaps craft a device with a measurement it understood and then apply that measurement to itself to find its own length. So yes, given a basic intelligence and the capability to act, a ruler could measure itself. (In a similar way, if you don't have a measuring tape, you can learn your own waistline measurement using a string: compare the amount of string it takes to go around your waist with the amount of string it takes to measure the length of your foot. You just have to accept your foot, your waistline, and the string as constants.)

Is self-awareness the same thing as self-understanding? I don't think the two concepts are the same. "Understanding" what one is "aware" of takes us into deep philosophical water, but for our purposes here I think complete understanding would involve forming an internal model not only of the (nearly) current state of just about all the elements in the system, but also of just about all the possible states of those elements. It would also necessarily involve some awareness of the environment and of factors that tend to inflict change on the system.

Obviously, if we consider ourselves as humans to have a basically functional self-understanding, we don't require absolute total awareness of all those variables; just the ones that make a difference to us at a given time. I do think that an intelligent system, be it a human or a machine mind or something else altogether, can understand itself given an adequate basic awareness of itself in the abstract. A truly intelligent system would probably keep itself continually updated as to its own state and the changes it tends to undergo. By studying oneself, studying information gleaned from entities like oneself, and observing how one fits within a particular environment, one can come to some level of self-understanding. 
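The string-and-foot trick is just a reference-and-ratio calculation. A quick illustration with made-up numbers (neither length is a real measurement; both are assumptions for the arithmetic):

```python
# Measuring without a ruler: express the unknown (waist) as a multiple of
# a reference taken as a known constant (the foot), then convert.
# Example values only; not real measurements.

foot_cm = 27.0        # assumed known length of the reference (the foot)
waist_in_feet = 3.2   # string around the waist spans 3.2 foot-lengths

waist_cm = waist_in_feet * foot_cm
print(round(waist_cm, 1))  # 86.4
```

The ruler measuring itself works the same way: anything counts as a unit once you agree to hold it constant and can compare against it.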
Whether one is satisfied with one's self-understanding, or thinks one should apply oneself to learning more... that's more deep philosophical water. "What would a ruler do next, after measuring itself?..." - (BTW Nu, have you had a chance to check out any of the Nietzsche recommendations we <a href="http://iidb.org/ubb/ultimatebb.php?ubb=get_topic&f=56&t=000332" target="_blank">tossed</a> your way the other day?) [ August 18, 2002: Message edited by: David Bowden / wide-eyed wanderer ] |
|
08-19-2002, 04:27 AM | #17 |
Veteran Member
Join Date: Mar 2002
Location: Canton, Ohio
Posts: 2,082
|
MadMordigan! Hey!
A crane could outlift Einstein. A computer could outcount him. What functional criterion for intelligence are you espousing here? Ierrellus PAX |
08-19-2002, 06:36 AM | #18 |
Veteran Member
Join Date: Jul 2002
Location: Overland Park, Kansas
Posts: 1,336
|
Greetings:
We already have artificial memory storage devices, and have had them for a long time. (Once they took the form of libraries containing hundreds of thousands of texts; now they more often take the form of computer networks...) These libraries hold far more information than can be stored in an individual human brain. So: artificial, and greater than human. Big deal; nothing new. Keith. |
08-19-2002, 08:19 AM | #19 | |
Veteran Member
Join Date: Mar 2002
Location: Southwest USA
Posts: 4,093
|
Quote:
I think it has been well established here that AI can easily out-think a human within the known laws of science. AI can design better aircraft, it can design better computers, engines, buildings, and cities. But can AI ponder the unknown? AI could easily outperform Einstein in computations and mathematics (not saying much, as Einstein hated mathematics), but could AI be made to spontaneously theorize? Can AI create completely new art that can evoke emotion? Could AI develop a completely new recipe for food that would be delicious? |
|
08-19-2002, 10:12 AM | #20 | ||||
Veteran Member
Join Date: May 2002
Location: Ontario, Canada
Posts: 1,125
|
Hello Tristan,
Quote:
An A.I. could be designed to develop the same emotional responses that we have developed, but grief seems superfluous. An A.I. designed to be completely loving of other beings would not need the emotion of grief that we evolved in order to behave "humanely." Grief is just an emotion that evolved to punish us for "allowing" certain things to occur; if it is decided that grief is needed for "greatness," it could be recreated along with everything else.

I think the stumbling block is our own tendency to think of ourselves as "greater" than we really are. All the traits we find admirable in ourselves have some sort of evolutionary purpose and are thus merely how our brains have developed; "we" are just the result of all the different activities of our own brains. An A.I. that has all of the mental factors that make us who we are would be an equal being; if the negative factors we judge to make humanity less "great" were removed, the A.I. would actually be "greater."

I think it was Asimov who said something like: "We can imagine a man of the future walking past a robot factory and seeing a new robot walk out. The man pulls a gun in a rage and shoots the robot in the chest. To his amazement, the robot cries in pain and blood spurts out of the wound. The robot shoots back, and to its amazement the human shows no sign of really understanding what just happened, and a wisp of smoke rises from the hole where it figured the human's heart to be. It would be rather a great moment of truth for both of them." Quote:
Quote:
Quote:
Even the creative process is a function of our brains and the memories of the stimuli they have been fed; when the brain is fully understood, it will be possible to recreate this. There is nothing about us that can't be reduced to its basic components, unless you believe in a soul of some sort. |
||||