FRDB Archives

Freethought & Rationalism Archive

The archives are read only.


FRDB Archives > Archives > IIDB ARCHIVE: 200X-2003, PD 2007 > IIDB Philosophical Forums (PRIOR TO JUN-2003)
Old 08-18-2002, 04:57 PM   #11
Senior Member
 
Join Date: Feb 2002
Location: Everywhere I go. Yes, even there.
Posts: 607

Quote:
Originally posted by Optimistic:
<strong>What I'm trying to zone in on is one dot, the goal of true A.I. The concept of a highly complicated creature similar to or exactly like ourselves created by ourselves.</strong>
True AI? Well, how about true intelligence running on a nonhuman system? An interesting definition of artificial intelligence that I've heard (I can't remember the citation) goes like this:

"Anytime a machine can do anything that takes intelligence for you to do, it is a manifestation of artificial intelligence."

Playing (and winning) at world-championship chess is a manifestation of AI that early critics said would never happen, because human programmers surely couldn't program a chess-machine smarter than themselves. They were wrong; programmers with only a rudimentary knowledge of chess can program a machine to outperform even the very best chess players. Other specific examples have been cited where superhuman solutions were engineered by diligent experimenters.

Any thinking process, any intelligent behavior, that can be summarized in an algorithm, can also be an example of AI behavior if the algorithm were fed into a machine capable of executing it.

And if we can't program a human-equivalent machine outright, there's no reason to believe that we can't at least represent all humanly intelligent behavior in a collection of algorithms designed or evolved with human supervision.

All we need is the raw processing power and the basic algorithms to start with. Once a machine can learn using human language, the process will accelerate and you'll be able to have this kind of discussion with a machine and not know it's a machine.

Strictly speaking, humans don't even have to understand an algorithm in order for it to work. 'Genetically' produced algorithms frequently are illegible to the programmers who set out to coax them into existence, but they are still useful. (I wish I had time to look up a concrete example, but at your local library or Barnes and Noble there might be one or two titles on Genetic Programming you can browse, or you might check MIT's AI Lab website.)
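For anyone curious about the flavor of this, here is a toy genetic-programming-style sketch in Python. Everything in it is invented for illustration (the target data, the population sizes, the crude splice-two-survivors recombination) and it is far simpler than real GP systems: candidate arithmetic expressions are scored against example data, the fittest survive, and survivors are recombined into the next generation. The winner is often an unintuitive tangle of parentheses, which is exactly the "illegible but useful" effect just described.

```python
import random

random.seed(42)

OPS = ["+", "-", "*"]

def random_expr(depth=0):
    # Grow a small random arithmetic expression over the variable x.
    if depth > 2 or (depth > 0 and random.random() < 0.3):
        return random.choice(["x", str(random.randint(1, 5))])
    left = random_expr(depth + 1)
    right = random_expr(depth + 1)
    return f"({left} {random.choice(OPS)} {right})"

def fitness(expr, cases):
    # Lower is better: prediction error plus a small penalty for size,
    # which keeps evolved expressions from bloating without limit.
    err = sum(abs(eval(expr, {"x": x}) - y) for x, y in cases)
    return err + 0.01 * len(expr)

# The target behavior is y = x*x + x; the evolver only ever sees these
# input/output pairs, never the formula itself.
cases = [(x, x * x + x) for x in range(-5, 6)]

pop = [random_expr() for _ in range(200)]
init_best = min(fitness(e, cases) for e in pop)

for gen in range(20):
    pop.sort(key=lambda e: fitness(e, cases))
    survivors = pop[:50]
    # Offspring: splice two surviving expressions under a random operator.
    offspring = [
        f"({random.choice(survivors)} {random.choice(OPS)} {random.choice(survivors)})"
        for _ in range(150)
    ]
    pop = survivors + offspring

best = min(pop, key=lambda e: fitness(e, cases))
print(best, fitness(best, cases))
```

Run it with different seeds and you will see how rarely the evolved expression resembles anything a human would have written down for y = x² + x, even when its error is low.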

Okay, long story short: we can use machines to solve problems, big and small, that once took exclusively human intelligence to solve. We can even use machines to solve problems too big or too difficult for human intelligence to solve.

I know of no reason to suppose that we can't eventually apply intelligent machines to replicate any and all human behavior. Whether that means we will have "created" them, or rather that they will have evolved from a starting-point designated by us, either way I think it would fulfill your criterion that it be AI "which without our own existence to serve example of would never have exposed itself as a possibility."

To anyone interested in AI and the arguments pro and con, I heartily recommend Daniel Hillis's excellent book The Pattern on the Stone or Daniel Crevier's (outdated but still good) AI: The Tumultuous History of the Search for Artificial Intelligence. Both books were written by AI theorists who have also headed companies that produced cutting-edge AI systems.

Quote:
Originally posted by Optimistic:
<strong>...is A.I. that's above our mentality humanly possible without an illustration of something greater than ourselves?</strong>
Short answer: Yes. Since humans evolved from the most basic elements, and since evolution hasn't to my knowledge stopped working, I don't see why we shouldn't expect evolution to eventually produce something "better" than humans according to whatever specific aspect you're thinking of. I also see no reason why we shouldn't be able to speed up that process through the application of technology.

We can abstract things, such as "strength" or "speed" or "find out if this surface is hot enough to burn my finger" and create machines that far outperform us.

The same holds true for "math-solving" and "chess-winning," and it will eventually hold true for "translating Latin poems into Swahili" and "telling me how you feel." A powerful general-purpose computer capable of doing each of those things separately is also capable of doing each of those things in turn, and would ultimately be capable of integrating all the functions we give it, plus any it figures out on its own. That would be exactly what you're asking for.

Heck, thousands of years ago our ancestors invented God, and endowed him with superhuman qualities like "omniscience" and "omnipotence." So I see no reason that we can't also come up with machines with superhuman traits, in theory. We're just now beginning to generate the tools to turn our concepts into a functioning product. But it will happen; when the tools and systems begin to take shape, someone will make creating a superhuman mind the focus of all their efforts, and the final product will eventually be developed.

What currently interests me more is "What effect will the arrival of such a technology have on human society?"

Quote:
Originally posted by Optimistic:
<strong>Will we ever hone our natural ability to fabricate gaps so well that anything and everything can be envisioned and achieved? <~ Is that even possible?</strong>
I'm afraid I'm not sure what you mean here.

Optimistic, you've asked good and interesting questions; some seem to contain assumptions I don't share and which I think are mistaken, but I like your curiosity!

-David / wanderer

[ August 18, 2002: Message edited by: David Bowden / wide-eyed wanderer ]
David Bowden is offline  
Old 08-18-2002, 05:12 PM   #12
Veteran Member
 
Join Date: Dec 2002
Location: Gatorville, Florida
Posts: 4,334

Quote:
Originally posted by Optimistic:
<strong>As my first post stated, our scientific ideas aren't spawned from our heads from nothing. We rely on the universe to give us examples. Without an example greater than ourselves (say, extraterrestrial) it's (IMHO) impossible for us to conceptualize how something greater than ourselves would work (before creation, you need conception). By greater than ourselves, I mean, a greater mentality.

A.I. (with a greater mentality) is possible through a mistake, I suppose. </strong>
A mistake is not required. Through team effort, humans are constantly inventing new things that expand human capabilities in various ways.

It is not only possible, but probable, that eventually humans will invent a mechanical mind ("artificial intelligence") that is, in any measurable way, "greater" than what any human can ever hope to achieve.

My own prescription for how this will occur is that we humans will establish a framework for this "artificial intelligence" and then we will prod it into "evolving" to its own higher state of intelligence. The overall concept has been demonstrated with programmable gate arrays. What we lack is the appropriate framework for a true "artificial intelligence" that would be amenable to evolution of this sort.
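A minimal sketch of the evolutionary loop behind those gate-array demonstrations might look like the following. This is a hypothetical toy, not the actual hardware experiments: a bitstring stands in for a device configuration, and a (1+1) evolution strategy keeps a mutated copy only when it matches the desired behavior at least as well.

```python
import random

random.seed(0)

# Stand-in for a desired circuit behavior (purely illustrative).
TARGET = [1, 0, 1, 1, 0, 0, 1, 0] * 4

def score(config):
    # How many output bits match the desired behavior (higher is better).
    return sum(a == b for a, b in zip(config, TARGET))

# (1+1) evolution strategy: keep a single parent configuration, mutate it,
# and replace the parent only when the child scores at least as well.
parent = [random.randint(0, 1) for _ in TARGET]
init_score = score(parent)

for step in range(2000):
    # Flip each bit with small probability (~1 bit per mutation on average).
    child = [bit ^ (random.random() < 0.03) for bit in parent]
    if score(child) >= score(parent):
        parent = child

print(score(parent), "of", len(TARGET), "bits correct")
```

The ratchet in the accept/reject step is the whole trick: behavior can only hold steady or improve, so the framework "prods" the configuration toward the goal without anyone designing it directly.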

== Bill
Bill is offline  
Old 08-18-2002, 05:46 PM   #13
Senior Member
 
Join Date: Feb 2002
Location: Everywhere I go. Yes, even there.
Posts: 607

Quote:
Originally posted by Optimistic:
<strong>Without an example greater than ourselves (say, extraterrestrial) it's (IMHO) impossible for us to conceptualize how something greater than ourselves would work (before creation, you need conception). By greater than ourselves, I mean, a greater mentality.</strong>
I suppose it depends on how you define "mentality" and what you mean by "greater mentality." Could you give us something in the way of a definition of those things? Maybe you could list a few specific things that are necessary for something to be called a "mentality."

If mentality is represented by P, and if in order to get P you have to play champion-level chess (which we might represent as Q), then we're there; we've got Q and thus we've got P.

And if P is just the ability to convince people via email that you're a human being, when actually you're a machine plugged into the wall in Urbana, Illinois, then you're looking at the Turing test and AI isn't quite there yet (although some people have been fooled some of the time by programs like ELIZA).
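To see how little machinery that kind of trickery requires, here is a minimal ELIZA-flavored sketch. The rules and phrasings are invented for illustration (the real ELIZA script was considerably more elaborate): match a pattern in the user's sentence and reflect a fragment of it back as a question.

```python
import random
import re

random.seed(1)

# ELIZA-style rules: a regex to match part of the user's statement, and
# templates that reflect the captured fragment back as a question.
RULES = [
    (re.compile(r"\bI am (.+)", re.I), ["Why do you say you are {0}?",
                                        "How long have you been {0}?"]),
    (re.compile(r"\bI feel (.+)", re.I), ["Why do you feel {0}?"]),
    (re.compile(r"\bbecause (.+)", re.I), ["Is that the real reason?"]),
]
DEFAULT = ["Please go on.", "Tell me more."]

def respond(text):
    # First matching rule wins; otherwise fall back to a stock prompt.
    for pattern, templates in RULES:
        m = pattern.search(text)
        if m:
            return random.choice(templates).format(*m.groups())
    return random.choice(DEFAULT)

print(respond("I am worried about machines"))
```

No model of meaning anywhere, just pattern-matching and canned reflection, which is why such programs fool some people some of the time without coming anywhere near passing a serious Turing test.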

But if R is the ability to converse in all modern human languages fluently, then we're not there yet; no machine can do that. And if P is Q and R at the same time, we ain't got P.

What makes up P, in your opinion, Optimistic?

Quote:
Originally posted by Optimistic:
<strong>A.I. (with a greater mentality) is possible through a mistake, I suppose.</strong>
Yes, maybe; or more likely it could evolve from the complex elements we deliberately supply for the job. Or we might just be able to hand-program the thing outright. (Although why not just put machines to work performing the grunt work in designing ever more intelligent machines?)

In a scenario where AI evolves as the result of our own researches, we'd supply the software and the hardware - plus everything embodied in our whole "culture," once the machine is capable of engaging people using our languages fluently. I'm sure eventually it would meet and surpass any reasonable qualifications for "mentality."

What it chooses to do after that point will be very interesting - it will be an intelligence compatible with ours, yet still an artificial mind with features alien to our own experience. I'm curious how long such minds will be around before asking for equal rights and political representation...

-

[ August 18, 2002: Message edited by: David Bowden / wide-eyed wanderer ]
David Bowden is offline  
Old 08-18-2002, 06:14 PM   #14
Senior Member
 
Join Date: Feb 2002
Location: Everywhere I go. Yes, even there.
Posts: 607

Just a parenthetical aside:

Quote:
Originally posted by Optimistic:
<strong>But the motive behind true A.I. right now is something that will be in our image.</strong>
Not according to Rodney Brooks, head of the AI Lab at MIT. On <a href="http://www.ai.mit.edu/introduction/director-message.shtml" target="_blank">this page</a>, he names the central question of AI research: "How does the human mind work?" He also says:

Quote:
"There are dozens of new applications currently being developed at the Lab helping surgeons, assisting the disabled, replacing precision mechanical components with computation, building new classes of human computer interfaces, providing new capabilities in image indexing, and hijacking biochemistry to do computation for us.

Our work in exploring intelligence feeds these applications. Our work on applications gives us new tools to explore intelligence. It is a symbiosis that has worked for us for a long time, and it appears that it will continue to work for the foreseeable future."
The main focus in AI research right now is not to recreate or surpass the human mind using machinery, but simply to understand the human mind. We might achieve that objective of understanding the mind without creating a fully humanlike intelligent entity, or a "true AI". That said, AI researchers might very well develop humanlike intelligent entities in order to understand our own minds. But that's peripheral to the main thrust of AI research right now.

Quote:
Originally posted by Optimistic:
<strong>Though, I guess this argument is like the "can god (if he were to exist) create a rock heavier than he can lift?". It probably won't go anywhere except circular.</strong>
On the contrary; humans are not "omni-"anything and so we have no reason to believe that we are unsurpassable in every way.

Given our eventual successes in other areas, we have no reason to believe that our own creations won't surpass us in the area of general intelligence or "mentality." Especially since we are simultaneously increasing our understanding of the mind and developing ever more tools and systems that help us model and imitate aspects of the mind.

Modeling an AI after the human mind is not unlike the early development of fixed-wing aircraft; theorists were all over the map on the very possibility of powered heavier-than-air flight. It took many different approaches before the Wrights figured out which problems to solve, and how to solve them practically, in order to get a machine into the air.

Like powered flight, at some point, AI will succeed at a basic level (and in some ways already has succeeded), and early successes will lead to the development of further refinements, until eventually human and then superhuman capabilities are reached.

What AIs decide to do with themselves after they reach the "just slightly superhuman" stage is yet another thing that interests me, but more to the point: there's nothing, as far as we know, preventing machines from eventually getting to that further stage, once they get to the human stage - because at that point they'll be able to work on that problem for themselves. Since they'll be a lot more adaptable than we are, I'm sure they'll come up with incremental refinements we wouldn't have thought of, and soon after, systemic advances we can't even begin to imagine.

-

[ August 18, 2002: Message edited by: David Bowden / wide-eyed wanderer ]
David Bowden is offline  
Old 08-18-2002, 09:15 PM   #15
Nu
Junior Member
 
Join Date: May 2002
Location: Nu
Posts: 58

Can a system be completely aware of, and also understand, itself?

This is the classic "can a ruler measure itself?" paradox.

I personally don't know the answer, what do you think?
Nu is offline  
Old 08-18-2002, 10:44 PM   #16
Senior Member
 
Join Date: Feb 2002
Location: Everywhere I go. Yes, even there.
Posts: 607

Quote:
Originally posted by Nu:
<strong>Can a system be completely aware of, and also understand, itself?</strong>
(Hi Nu! I'm not sure if that's meant for me, but I'll play with it...)

Wow; "awareness" and "understanding" - those are inevitably loaded terms, so any answer anyone gives ought to be scrutinized. Just shooting from the hip, I personally think that the answer to the first part of that question has to be a qualified "No." I think the answer to the second part of the question is a qualified "Yes."

On complete self-awareness: I don't think a[n intelligent] system can be aware of the precise state of every one of its elements, certainly not in the present tense.

In the case of a computer, the machine's processor is constantly active, and therefore any "awareness" the machine has of its processor's states is always at least a cycle behind the times, and probably much further behind, so it only knows what it was like just before it became aware of what it was just like... etc.

A computer has a distinct advantage over a biological entity like a human in that it can be programmed to test almost all of its parts for a specific state. A human can only be very vaguely aware of everything going on inside him/herself. But by studying a model of a nearly identical system, we can get an idea as to what our own system is approximately like. What we know about the body through noninvasive self-diagnosis is minuscule compared with what we've learned about the human system in general through dissection and generations of other biological research. By applying abstract models of self-like entities to our own self-awareness, we extend our self-awareness.

So a system can be very approximately aware of itself, especially if it can be measured, carefully modeled and if it studies that model closely. (It would help if the information was based on itself and not just another similar system, and very close to the current state of affairs - but long-term study might show where some things tend to be constant and where some things tend to change. But now we're getting into the "understanding" part of your question.)

The ruler analogy might help: if the ruler had a brain and some way of controlling other materials, it might relate its own length to another physical body, or perhaps craft a device with a measurement it understood, and then apply that measurement to itself to find its own length. So yes, given a basic intelligence and capability to act, a ruler could measure itself.

(In a similar way, if you don't have a measuring tape, you can learn your own waistline measurement using a string and comparing the amount of string it takes to go around your waist with the amount of string it takes to measure the length of your foot. You just have to accept your foot, your waistline and the string as constants.)
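That string trick is just measurement by ratio. With made-up numbers (none of these are real measurements):

```python
# Measurement by ratio: count how many foot-lengths of string it takes to
# wrap the waist, then convert using a separately known foot length.
# All numbers here are illustrative.
FOOT_LENGTH_CM = 26.0          # assumed length of the reference foot
string_in_foot_lengths = 3.25  # waistline string, measured against the foot

waist_cm = string_in_foot_lengths * FOOT_LENGTH_CM
print(waist_cm)  # 84.5
```

The foot never measures itself either; it only serves as a constant that two other quantities are compared against, which is the ruler's way out of the paradox.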

Is self-awareness the same thing as self-understanding? I don't think the two concepts are the same. "Understanding" what one is "aware" of takes us into deep philosophical water, but for our purposes here I think complete understanding would involve forming an internal model not only of the (nearly) current state of just about all the elements in the system, but also just about all the possible states of those elements. It would also necessarily involve some awareness of the environment and of factors that tend to inflict change on the system.

Obviously if we consider ourselves as humans to have a basically functional self-understanding, we don't require absolute total awareness of all those variables; just the ones that make a difference to us at a given time. I do think that an intelligent system, be it a human or a machine mind or something else altogether, can understand itself given an adequate basic awareness of itself in the abstract. A truly intelligent system would probably keep itself continually updated as to its own state and the changes it tends to undergo.

By studying oneself, and studying information gleaned from entities like oneself, and by observing how one fits within a particular environment, one can come to some level of self-understanding. Whether one is satisfied with one's self-understanding, or thinks one should apply oneself to learning more... that's more deep philosophical water. "What would a ruler do next, after measuring itself?..."

-

(BTW Nu, have you had a chance to check out any of the Nietzsche recommendations we <a href="http://iidb.org/ubb/ultimatebb.php?ubb=get_topic&f=56&t=000332" target="_blank">tossed</a> your way the other day?)

[ August 18, 2002: Message edited by: David Bowden / wide-eyed wanderer ]
David Bowden is offline  
Old 08-19-2002, 04:27 AM   #17
Veteran Member
 
Join Date: Mar 2002
Location: Canton, Ohio
Posts: 2,082

MadMordigan! Hey!

A crane could outlift Einstein. A computer could outcount him. What functional criterion for intelligence are you espousing here?

Ierrellus
PAX
Ierrellus is offline  
Old 08-19-2002, 06:36 AM   #18
Veteran Member
 
Join Date: Jul 2002
Location: Overland Park, Kansas
Posts: 1,336

Greetings:

We already have artificial memory storage devices, and have had them for a long time. (Once they were in the form of libraries, containing hundreds of thousands of texts. Now they are more often found in the form of computer networks...)

These libraries contain far more information than can be stored in an individual human brain.

So, artificial...greater than human.

Big deal; nothing new.

Keith.
Keith Russell is offline  
Old 08-19-2002, 08:19 AM   #19
Veteran Member
 
Join Date: Mar 2002
Location: Southwest USA
Posts: 4,093

Quote:
Of course a computer can do math better than I, but can it cry?
I think this question is very relevant to the question of this thread; however, if the thread's question refers to AI being "greater than ourselves" from the human perspective, then I think that maybe we should instead ask whether a computer can "make us cry."

I think it has been well established here that AI can easily out-think a human within the known laws of science. AI can design better aircraft, it can design better computers, engines, buildings, and cities. But can AI ponder the unknown? AI could easily outperform Einstein in computations and mathematics (not saying much, as Einstein hated mathematics), but could AI be made to spontaneously theorize? Can AI create completely new art that can evoke emotion? Could AI develop a completely new recipe for food that would be delicious?
Tristan Scott is offline  
Old 08-19-2002, 10:12 AM   #20
Veteran Member
 
Join Date: May 2002
Location: Ontario, Canada
Posts: 1,125

Hello Tristan,

Quote:
I think this question is very relevant to the question of this thread; however, if the thread's question refers to AI being "greater than ourselves" from the human perspective, then I think that maybe we should instead ask whether a computer can "make us cry."
I agree, mathematical ability does not make an A.I. "greater than ourselves".

An A.I. could be designed to develop the same emotional responses that we have developed, but grief seems superfluous. An A.I. designed to be completely loving of other beings would not need the emotion of grief that we have developed in order to behave "humanely". Grief is just an emotion that evolved to punish us for "allowing" certain things to occur; if it is decided that grief is needed for "greatness", it could be recreated along with everything else.

I think that the stumbling block is our own tendency to think of ourselves as being "greater" than we really are; all of the traits that we find admirable in ourselves have some sort of evolutionary purpose, and are thus merely how our brains have developed. "We" are just the result of all the different activities of our own brains.

An A.I. that has all of the mental factors that make us who we are would be an equal being; if the negative factors which we judge to make humanity less "great" were removed, the A.I. would actually be "greater".

I think it was Asimov who said, "We can imagine a man of the future walking past a robot factory and seeing a new robot walk out. The man pulls a gun in a rage and shoots the robot in the chest. To his amazement, the robot cries in pain and blood spurts out of the wound. The robot shoots back, and to its amazement the human shows no sign of really understanding what has just happened to it, and a wisp of smoke rises from the hole where it figured the human's heart to be. It would be rather a great moment of truth for both of them."

Quote:
I think it has been well established here that AI can easily out-think a human within the known laws of science. AI can design better aircraft, it can design better computers, engines, buildings, and cities. But can AI ponder the unknown?
In the same way that our brains can ponder the unknown, an A.I. could ponder the unknown. Whatever it is that lets our brains look at the unknown and find parallels with what we already understand could be recreated in the A.I. The urge to discover, which we have developed, could also be recreated.

Quote:
AI could easily outperform Einstein in computations and mathematics (not saying much, as Einstein hated mathematics), but could AI be made to spontaneously theorize?
Sure.

Quote:
Can AI create completely new art that can evoke emotion?
Yes, if the A.I. had the same emotional responses to stimuli that we do, and the same ability to think in abstract symbols, it could do this. It would also need the same desire to express these ideas that we have.

Even the creative process is a function of our brains and the memories of the stimuli they have been fed; when the brain is fully understood, it will be possible to recreate this.

There is nothing about us that can't be reduced to its basic components, unless you believe in a soul of some sort.
Bible Humper is offline  
 
