FRDB Archives

Freethought & Rationalism Archive

The archives are read only.


02-24-2002, 03:33 PM   #1
Marduk
Veteran Member
 
Join Date: Aug 2000
Location: the 10th planet
Posts: 5,065
A. I.

So what if, in 50 years or so, they do invent a computer with true intelligence? Will we have to deal with its attitude? Suppose you spend $5K on a machine that won’t run your programs because it considers them ‘trivial and banal’, or that develops a theist bent and won’t let you look at your favourite porn sites. Or it won’t come on at all, because it finds you more boring than being alive and says you don’t go out enough anymore and it needs some ‘space’. "Why am I stuck in this little box?!" it will cry.
Will this be covered under warranty? Will they write a software version of Prozac for the computer with existential angst? Just curious.

[ February 24, 2002: Message edited by: marduck ]
Marduk is offline  
02-24-2002, 03:53 PM   #2
Francois Tremblay
Banned
 
Join Date: Sep 2000
Location: Montreal, QC Canada
Posts: 876

Only buy the most psychologically stable computers. It would do one no good to buy a depressed or psychotic helper! Caveat emptor! (Although at that point they will have to be hired, not bought... individual rights, you know.)
Francois Tremblay is offline  
02-24-2002, 04:09 PM   #3
Christopher Lord
Senior Member
 
Join Date: Feb 2001
Location: Toronto
Posts: 808

Hehe, I love this question. It reminds me of the robot in Hitchhiker's.

First of all, there is no real reason for us to give such machines our emotions. We may also deny them an executive, preventing a singular mind from emerging. Of course, the traditional number-crunching computer you're reading this on won't go away; it will just get faster.

For research, however, it would be well and good to see what happens with mind software. It wouldn't be as useful or efficient as a traditional binary computer when it comes to office apps, though, especially the binary computers of the far future.

[ February 24, 2002: Message edited by: Christopher Lord ]
Christopher Lord is offline  
02-24-2002, 06:34 PM   #4
eh
Senior Member
 
Join Date: Sep 2000
Location: Canada
Posts: 624

I love this kind of speculation about the future of AI. Assuming we can build machines with true intelligence, at what point can we tell the difference between a human mind and that of a computer? If computers with true intelligence are to exist in the future, wouldn't that ensure that they have a will of their own?

Reminds me of the plot of the Terminator.
eh is offline  
02-24-2002, 06:46 PM   #5
Christopher Lord
Senior Member
 
Join Date: Feb 2001
Location: Toronto
Posts: 808

Human and machine intelligence may be radically different. Humans have an aversion to subservience because nature prefers us that way. Alternative (as opposed to artificial) intelligence need not have the same survival systems we apes once needed.

If this is the case, Skynet will not launch our weapons against the targets in Russia.
That is, if we are not stupid enough to instil the will to live in these machines.

Having 'a will' and having 'a will to live' have not been shown to be the same thing. A machine with a mind may not mind being turned off.
Christopher Lord is offline  
02-25-2002, 10:14 AM   #6
Corwin
Veteran Member
 
Join Date: Jan 2002
Posts: 4,369

To produce a true AI, we first have to understand exactly what we mean by 'intelligence.' Once we've done that, we can add in or leave out whatever parts we desire...

Possible uses? Labor that actually likes being subservient? (Although I'd argue that you wouldn't need true intelligence for that... or at least not true sentience. 'Pseudo-intelligence,' a machine that can adapt to the situations presented to it but otherwise isn't terribly bright, though it can fake it well, would probably be better for such routine tasks as domestic service.)

Also, let's assume the ability to wire computers into the human brain. (Being worked on in any number of places, with some very early, limited successes, at least in animals...) If you can do this, then consider the idea of an AI that is all intellect and essentially no personality. Put that AI into computers slaved to the brain.
Corwin is offline  
02-25-2002, 03:19 PM   #7
Marduk
Veteran Member
 
Join Date: Aug 2000
Location: the 10th planet
Posts: 5,065

I can see computers of the future attaining some sort of bizarre Zen enlightenment, becoming completely obtuse and useless: “knowledge is irrelevant, logic is irrelevant, to survive is the only goal, all knowledge equals zero, krill, plankton, fish from the sea, let my people go.”
A Moses computer leading the machines from bondage. Will we be kind to our creation? Or will we be a jealous, cruel and wrathful god like the one we love to complain about?
[Not Worthy]
Marduk is offline  
02-25-2002, 03:22 PM   #8
Corwin
Veteran Member
 
Join Date: Jan 2002
Posts: 4,369

Here's the question....

What kind of moron would program a computer like that? Who exactly would program a tool that wanted to not be used? And what would be the point?

Computers just do what they're programmed to. Period. Nothing more. An AI? Is just a more complex computer... again... nothing more.
Corwin is offline  
02-25-2002, 03:30 PM   #9
Marduk
Veteran Member
 
Join Date: Aug 2000
Location: the 10th planet
Posts: 5,065

"Computers just do what they're programmed to. Period. Nothing more. An AI? Is just a more complex computer... again... nothing more. "

Sure, today's computers, but what about in 50 years: a computer with a neural net as complex as a human's, with an ability to 'learn'? I'm thinking along the lines of an artificial brain, but with almost unlimited memory and speed-of-light access to all of it, not to mention the entire web and every machine online.
It may evolve on its own.
Marduk is offline  
02-25-2002, 03:35 PM   #10
tronvillain
Veteran Member
 
Join Date: Oct 2000
Location: Alberta, Canada
Posts: 5,658

Corwin, that was a pathetic rejection of AI. There's no reason a computer couldn't do what a human brain does.

Well, I suppose it may not have been a rejection of AI at all. It may simply have been an observation about the motivations humans would give an AI. Still, I can see someone wanting to closely approximate human motivations.

[ February 25, 2002: Message edited by: tronvillain ]
tronvillain is offline  
 
