FRDB Archives

Freethought & Rationalism Archive

The archives are read-only.


FRDB Archives > Archives > IIDB ARCHIVE: 200X-2003, PD 2007 > IIDB Philosophical Forums (PRIOR TO JUN-2003)

Old 02-27-2002, 07:04 AM   #31
Veteran Member
 
Join Date: Aug 2000
Location: Australia
Posts: 4,886
Post

ok, I just wasn't sure whether you meant that AI can be "on par" with natural intelligence or not...
excreationist is offline  
Old 02-27-2002, 07:16 AM   #32
Synaesthesia
Guest
 
Posts: n/a
Post

Quote:
do you seriously think our visual processing capabilities are done using serial computation?
No, but any parallel process can be emulated (if inefficiently) in serial.
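That claim can be sketched concretely. Below is a minimal Python illustration (the process and scheduler names are my own invention): two "parallel" counters run on a single serial loop that interleaves their steps round-robin, so the combined trace looks like concurrent execution even though only one step ever runs at a time.

```python
# Two "parallel" counters emulated serially by interleaving
# their steps round-robin, one step at a time.

def counter(name, n):
    for i in range(n):
        yield f"{name}:{i}"

def run_serial(processes):
    """Step each live process in turn until all are exhausted."""
    log = []
    while processes:
        still_running = []
        for p in processes:
            try:
                log.append(next(p))      # one step of this "process"
                still_running.append(p)
            except StopIteration:
                pass                     # this process has finished
        processes = still_running
    return log

trace = run_serial([counter("A", 2), counter("B", 2)])
print(trace)  # steps interleave: ['A:0', 'B:0', 'A:1', 'B:1']
```

A real scheduler would also have to order shared-memory accesses consistently, but the principle stands: one serial loop can simulate any fixed set of parallel steps.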
 
Old 02-27-2002, 07:53 AM   #33
Junior Member
 
Join Date: Sep 2001
Location: London, England
Posts: 7
Post

The thing that puzzles me about this debate is why people keep talking about Artificial Intelligence (AI) when they clearly mean something else (perhaps Machine Sentience or Algorithmic Consciousness). Surely anyone from before 1940 transported to today would have little problem calling a modern computer “intelligent”? The kinds of things we call “intelligent” in humans are already done much better by computers (processing formal systems such as mathematics, chess or, for that matter, IQ tests). Does anyone know whether anyone has tried to program a computer to take standard intelligence tests and, if so, what IQ was achieved? It strikes me as a MUCH simpler task than, say, the Turing Test.

For me, the interesting thing about Machine Sentience is that we will probably never know for sure, and it may always be controversial. After all, we cannot tell for sure how far down the animal kingdom sentience goes, and indeed philosophical purists will point out that we cannot even prove the point about other human beings. Maybe one day a genius will sit up in the bath and say “Eureka! That’s the secret of sentience!” Then we’ll be able to write a subroutine (call it a “soul algorithm”) that confers it on a computer, and we’ll also understand how it works in humans. However, I doubt it will happen like this: people will gradually build machines that are more and more self-programmed and which emulate consciousness in order to interface better with humans. Someday it will become a controversial matter whether these machines are “really” sentient or just emulating sentience.

In some ways the status of domestic dogs is an analogue for the way this might develop, except that machines will not be limited by the available brain power or structure. For many generations dogs have been selectively bred to be good companions to human beings, i.e. for human-like, or sentient-like, characteristics. But were dogs always sentient? Did sentience develop as a by-product of this directed breeding programme? Or are dogs just instinctively displaying (emulating) behaviours similar to those that sentience drives in humans? We will know that machines did not start out sentient, but the other two possibilities will not be easy to distinguish.
Jonte is offline  
Old 02-27-2002, 03:38 PM   #34
Veteran Member
 
Join Date: Aug 2000
Location: Australia
Posts: 4,886
Post

Quote:
Originally posted by Synaesthesia:
No but any parallel process can be emulated (if inefficiently) in serial.
That's true.
excreationist is offline  
Old 02-27-2002, 04:09 PM   #35
Veteran Member
 
Join Date: Aug 2000
Location: Australia
Posts: 4,886
Post

Quote:
Originally posted by Jonte:
The thing that puzzles me about this debate is why people keep talking about Artificial Intelligence (AI) when they clearly mean something else (perhaps Machine Sentience or Algorithmic Consciousness). Surely anyone from before 1940 transported to today would have little problem calling a modern computer “intelligent”? The kinds of things we call “intelligent” in humans are already done much better by computers (processing formal systems such as mathematics, chess or, for that matter, IQ tests).
Yes, computers can often do this better than humans can, but that is just because humans naturally use fuzzy logic, not the precise logic of maths, so we are inefficient and unreliable at maths. On the other hand, in the 1950s they also tried getting computers to do other things, like image recognition, and failed miserably. They had assumed that image recognition didn't really require intelligence, since even tiny animals could do it.

Quote:
Does anyone know if anyone has tried to program a computer to do standard intelligence tests and if so what IQ was achieved?
That would just show that the programmer was very intelligent. I think real intelligence has to be actively learned by the computer itself, not programmed in.
Even if a computer did very well at IQ tests, it wouldn't mean much, since it would only be good at doing IQ tests, unlike humans, who can apply their intelligence to just about any problem domain.

Quote:
It strikes me as a MUCH simpler task than, say, the Turing Test.
I don't think the Turing Test is a good one anyway... it is just about fooling some amateurs for a couple of minutes. (On the other hand, I think I'm good at finding the weaknesses of those programs.)

from A.L.I.C.E. (http://www.alicebot.org/alicechat.html):
> The country I live in is Australia
Is that a fact.
> What country do I live in?
I am in Dr. Richard S. Wallace's computer in San Francisco. Where are you?
> My dog's name is Caesar
Caesar is a pretty good name for a pet.
> Who is Caesar?
Some famous celebrity, right?


Basically this AI can't learn during a conversation. But usually it gives intelligent-sounding answers. Other AI systems might be good at answering the questions I asked them, but unable to answer other questions, such as this:
> What is the second word in this sentence?
There might be more than one second word in this sentence.
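The canned-response behaviour above is easy to reproduce. Here is a toy responder in the spirit of A.L.I.C.E. (the patterns are my own illustrative inventions, not ALICE's actual AIML categories): every reply comes from a hand-written rule, so nothing said during the conversation is remembered or learned.

```python
# A toy pattern-matching chatbot: each reply is looked up in a
# fixed rule table, so the bot cannot learn from the conversation.

RULES = [
    ("my dog's name is", "That is a nice name for a pet."),
    ("what country do i live in", "I do not keep track of that."),
]

def respond(line):
    lowered = line.lower()
    for pattern, reply in RULES:
        if pattern in lowered:
            return reply
    return "Is that a fact."  # canned fallback for anything unrecognised

print(respond("My dog's name is Caesar"))  # That is a nice name for a pet.
print(respond("Who is Caesar?"))           # Is that a fact. (no memory!)
```

The second exchange fails for the same reason ALICE's did: the fact stated one line earlier was never stored anywhere.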


In pre-programmed AI, every single "rule" has to be explicitly programmed in. With neural networks, the system can learn by itself, although it makes many mistakes while it is learning.
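As a toy illustration of that difference (the code is entirely my own sketch): a single perceptron, one of the simplest neural networks, learns the logical-AND rule from examples rather than having the rule written in explicitly, and it makes mistakes along the way that the weight updates gradually correct.

```python
# A single perceptron learns logical AND from examples,
# rather than having the rule explicitly programmed in.

samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]   # weights, initially "knowing" nothing
b = 0.0          # bias
rate = 0.1       # learning rate

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Training: nudge the weights after every mistake.
for epoch in range(20):
    for x, target in samples:
        error = target - predict(x)
        w[0] += rate * error * x[0]
        w[1] += rate * error * x[1]
        b += rate * error

print([predict(x) for x, _ in samples])  # [0, 0, 0, 1] -- AND was learnt
```

The rule never appears anywhere in the source; it emerges from the error-driven updates, which is the sense in which the system "learns itself".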

As far as "sentience" and AI goes (What kind of sentience are you talking about? Self-awareness?), I'll just quote my reductive model that involves consciousness:
Quote:
The hierarchy of intelligent systems:
1. Processing Systems [or Programmed Systems]
...receive [or detect], process and respond to input.

2. Aware Systems
...receive input and respond according to their goals/desires and beliefs, learnt through experience, about how the world works
(self-motivated, acting on self-learnt beliefs)

3. Conscious Systems [meta-awareness]
Aware systems which utilize a meta-language to analyse themselves.
I'm saying that consciousness is a very sophisticated form of awareness, which in turn is a very sophisticated form of a processing or programmed system.

Traditional AI just involves programmed/processing systems. I don't know if we're quite at the stage of aware AI (sensory/emotional/belief awareness) yet, but we've got part of the way there, with some robots or software characters being given motivational drives. (I think Sony's AIBO has this.)
They can't actively learn totally new behaviours, though (like cats and dogs can).
excreationist is offline  
Old 02-28-2002, 02:03 AM   #36
Veteran Member
 
Join Date: Feb 2002
Location: Singapore
Posts: 3,956
Talking

Well, I argued with a physicist about whether an A.I. could have a soul. My stand is no, but he seems to think otherwise. He said that the impulses in my mind are due to the interactions of electrical fields, so if an A.I. behaves the same way my mind does, why can't it have a soul (if there is such a thing) just as humans do? So, which stand do all of you support?
Answerer is offline  
Old 02-28-2002, 02:33 AM   #37
Veteran Member
 
Join Date: Oct 2000
Location: Alberta, Canada
Posts: 5,658
Post

Well, my stand is that nothing has a soul, so I think you can figure out my position on AI and souls. Of course, if some things do have souls, then whether an AI can have one depends on the nature of souls, about which we have no information.
tronvillain is offline  
Old 03-06-2002, 09:55 PM   #38
Senior Member
 
Join Date: Feb 2001
Location: Toronto
Posts: 808
Post

Actually, AI these days is quite prevalent.

The airline industry (inter-airport scheduling, ETAs, rerouting), for example, is mostly governed by computers, with much better results than people obtained.

AI pattern recognition is currently very advanced. Computers can out-score most primates at image recognition, and an expert system can beat even us in its narrow domain.

Video-game AI has surpassed a large portion of the animal kingdom so far, and has a good chance of passing cat-level smarts within the decade. Black & White is currently the leader here.

All of these things are no longer called AI, because someone out there understands their algorithms fully. By this metric, when computers surpass us and tell us the algorithm by which our brains work, we will no longer be considered intelligent. We'll be like the airline controllers: just an algorithmic process.

At least that will quiet the dualists.
Christopher Lord is offline  
Old 03-07-2002, 05:41 AM   #39
Veteran Member
 
Join Date: Dec 2000
Location: Tucson, Arizona, USA
Posts: 1,242
Post

Something that doesn't seem to have been thrown into the A.I. stew so far in this conversation is genetic algorithms. Sooner or later some researcher, if they haven't already, will start applying this technology to the problem, and then some very interesting possibilities arise. Self-evolving hardware and software, anyone? Would such a system be able to evolve beyond the initial constraints of its programming?
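For readers unfamiliar with the technique, the core loop of a genetic algorithm is easy to sketch. This toy example (entirely my own, not from any particular research system) evolves a bitstring towards all ones: evaluate fitness, keep the fittest, and breed mutated offspring from them.

```python
import random

random.seed(0)        # fixed seed so the run is reproducible

TARGET_LEN = 20

def fitness(genome):
    return sum(genome)                 # count of 1-bits; max is TARGET_LEN

def mutate(genome, rate=0.05):
    # Flip each bit independently with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    # Splice two parents at a random point.
    point = random.randrange(1, TARGET_LEN)
    return a[:point] + b[point:]

# Initial population of random bitstrings.
population = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
              for _ in range(30)]

for generation in range(100):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == TARGET_LEN:
        break                          # a perfect genome has evolved
    parents = population[:10]          # selection: keep the fittest
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(20)]
    population = parents + children    # next generation

best = max(population, key=fitness)
print(fitness(best))
```

Note that the loop only ever checks *whether* candidates work, never *why*, which is exactly where the ethical debate about deploying such designs comes from.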

There is already some debate about the ethics of releasing products into the marketplace that were developed with the aid of genetic algorithms and are not fully understood beyond the fact that they work.
Jeremy Pallant is offline  
 

All times are GMT -8.


This custom BB emulates vBulletin® Version 3.8.2
Copyright ©2000 - 2015, Jelsoft Enterprises Ltd.