FRDB Archives

Freethought & Rationalism Archive

Old 10-08-2002, 12:18 PM   #11
Moderator - Science Discussions
 
Join Date: Feb 2001
Location: Providence, RI, USA
Posts: 9,908
Post

DigitalChicken:
Kurzweil and the transhumanism crowd are no different than futurists of the 50s who said that in the year 2000 (i.e. by now) I would be driving a flying car to work.

Except those predictions were not based on anything analogous to Moore's Law or the other exponential trends (http://www.kurzweilai.net/meme/frame.html?main=/articles/art0134.html) detailed by Kurzweil. And we certainly have the capability to build "flying cars", just not affordably...but if we had the capability to build superhuman A.I.'s or nanotech machines then even a few would likely change the world.
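
To give a feel for how quickly that kind of doubling compounds, here is a quick back-of-the-envelope script. The 18-month doubling period and the year-2002 desktop baseline are illustrative assumptions on my part, not figures taken from Kurzweil's charts:

Code:
# Rough sketch: how a fixed doubling period compounds over decades.
# The 1.5-year doubling time and the 2002 desktop baseline are assumed
# round numbers for illustration, not Kurzweil's own figures.
DOUBLING_YEARS = 1.5
BASELINE_OPS_PER_SEC = 1e10   # assumed desktop, circa 2002

for years_out in (10, 20, 30, 40):
    factor = 2 ** (years_out / DOUBLING_YEARS)
    print(f"+{years_out} yrs: x{factor:,.0f} -> {BASELINE_OPS_PER_SEC * factor:.2e} ops/sec")

The specific numbers matter far less than the shape of the curve, which is what the flying-car predictions never had behind them.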
Jesse is offline  
Old 10-08-2002, 12:34 PM   #12
Banned
 
Join Date: Jul 2002
Location: U.S.
Posts: 4,171
Post

Quote:
Originally posted by Jesse:
DigitalChicken:

Except those predictions were not based on anything analogous to Moore's Law or the other exponential trends (http://www.kurzweilai.net/meme/frame.html?main=/articles/art0134.html) detailed by Kurzweil. And we certainly have the capability to build "flying cars", just not affordably...but if we had the capability to build superhuman A.I.'s or nanotech machines then even a few would likely change the world.
Those comments do not have any bearing on mine.

What we are capable of doing and what *we* as a society actually *DO* are different things.

Social forces have stopped, made less attractive, or stunted a tremendous number of things.

DC
Rusting Car Bumper is offline  
Old 10-08-2002, 12:48 PM   #13
Moderator - Science Discussions
 
Join Date: Feb 2001
Location: Providence, RI, USA
Posts: 9,908
Post

DigitalChicken:
What we are capable of doing and what *we* as a society actually *DO* are different things.

When it comes to technology, I'm not sure I agree. There are few possible technologies that have not been tried at least once.

DigitalChicken:
Social forces have stopped, made less attractive or stunted a tremendous amount of things.

Can you think of a technology that we have the capability to create now, but of which not a single prototype has ever been built, for social reasons other than the expense being too high for all but a tiny number of individuals? Even if A.I. or nanotech started out in the "too expensive" category, unless the exponential trends stop, eventually quite a lot of people would have the capability to create them. Could social forces ensure that not a single one of these people actually did so? Even in a police state I have a hard time seeing this working.

If we don't create nanotech or A.I. within the next century, I think it's almost certain the reason will have to do with insurmountable technical problems (or, say, the collapse of society and the loss of all advanced technologies) rather than social forces keeping feasible breakthroughs in check.
Jesse is offline  
Old 10-10-2002, 09:44 AM   #14
Regular Member
 
Join Date: Aug 2001
Location: Indeterminate
Posts: 447
Post

Quick thing: Do a google search for "kurzweil ipo prison"

He's not a crackpot, he's a thief. His primary skill isn't prediction, but making sure that other managers at his companies go to prison for fraudulent IPOs.
Lex Talionis is offline  
Old 10-10-2002, 12:22 PM   #15
Banned
 
Join Date: Sep 2001
Location: Eastern Massachusetts
Posts: 1,677
Post

Quote:
Originally posted by Lex Talionis:
Quick thing: Do a google search for "kurzweil ipo prison"

He's not a crackpot, he's a thief. His primary skill isn't prediction, but making sure that other managers at his companies go to prison for fraudulent IPOs.
I did the search, and came up with a conviction of a former CEO and VP of Kurzweil Applied Intelligence, Inc. in 1996 on charges of fraud. Please explain:
a) how this equates to Ray Kurzweil being a thief (there is no indication, either in the legal proceedings or in the 1996 article on the matter, that Ray Kurzweil was ever implicated in theft or any other crime, for that matter).
b) how this equates to him "making sure that other managers at his companies go to prison."
b)1) Where does "companies," plural, come from?
b)2) Are you implying that it is a bad thing for one to let frauds go to jail? Should he have covered up for them?
b)3) what actions of his "made sure" of this?
c) You do know that Ray Kurzweil has no current involvement with Kurzweil Applied Intelligence, which was sold to Lernout & Hauspie in 1997.

The company, started by Ray K. in 1982, pioneered speech recognition. Like most of his early ventures, such as the text-to-voice book-reading machine for the blind, it produced technology of great benefit to disabled people.

I am not defending the man, merely challenging your seemingly baseless accusation.

Oh, BTW, in response to an earlier poster, the music synthesizer company was sold to Young Chang long ago, and he has nothing to do with it at this point.
galiel is offline  
Old 10-10-2002, 06:31 PM   #16
Junior Member
 
Join Date: Sep 2002
Location: California
Posts: 41
Post

I'm an AI researcher (grad student) at a fairly prestigious AI research institute. I've got a fairly intimate knowledge of both the history and the current capabilities of AI, and I have difficulty taking people who make extreme claims about the subject seriously. I consider transhumanism to be a humorous sort of idiocy at best.

I suppose it's possible that we'll have human-level AI in the next 20 to 40 years, but I don't think so. I work with people who have spent the last 10, 15, even 20 years working on the same AI-related problems, and they don't expect to solve them in their lifetimes. I don't expect them to either, and I don't expect what I work on to be solved in my lifetime.

-tail
taillessmonkey is offline  
Old 10-10-2002, 07:37 PM   #17
Veteran Member
 
Join Date: Mar 2002
Location: Houston TX
Posts: 1,671
Post

I know his co. was sold to Young Chang and the instruments are now made in Korea, but I have a Kurzweil PC-88 synthesizer and it is an ABSOLUTELY AWESOME MUSICAL INSTRUMENT!!! (I use Cakewalk software.) Mindblowing technology I can drive with my creativity! Wow!
Opera Nut is offline  
Old 10-10-2002, 08:07 PM   #18
Banned
 
Join Date: Sep 2001
Location: Eastern Massachusetts
Posts: 1,677
Post

Quote:
Originally posted by taillessmonkey:
I'm an AI researcher (grad student) at a fairly prestigious AI research institute. I've got a fairly intimate knowledge of both the history and the current capabilities of AI, and I have difficulty taking people who make extreme claims about the subject seriously. I consider transhumanism to be a humorous sort of idiocy at best.

I suppose it's possible that we'll have human-level AI in the next 20 to 40 years, but I don't think so. I work with people who have spent the last 10, 15, even 20 years working on the same AI-related problems, and they don't expect to solve them in their lifetimes. I don't expect them to either, and I don't expect what I work on to be solved in my lifetime.

-tail
I agree completely. And, as I said, his trends analysis is more valuable than his future predictions, which are a total crapshoot for *anyone*.

That said, both Kurzweil and others who predict conscious machines are not basing their predictions on some miraculous breakthrough in human understanding. They point out the astonishing, breathtaking explosion in processing power that we face in the coming decades, and then split on their assumptions about what may come next.

Some posit that the system itself will reach such unimaginable complexity, orders of magnitude above a human brain, that intelligence may simply emerge.

Some expect that the bottom-up approach of AL (artificial life), not the top-down approach of AI, will allow unimaginably powerful computational devices to "learn" their way into intelligence.

Yet others expect that genetic algorithms will allow the brains we build to create yet more powerful artificial brains which will, in turn, create more powerful ones until they bootstrap their way to sentience.
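
(For anyone who hasn't run into them, here is a toy sketch of what a genetic algorithm actually does: score a population, keep the fittest, recombine and mutate, repeat. The fitness function, population size and rates below are arbitrary choices for illustration only; real AL work is vastly more elaborate.)

Code:
import random

# Toy genetic algorithm: evolve a bitstring toward all ones.
# Purely illustrative; the population size, rates, and the trivial
# fitness function are arbitrary, not taken from any real AL system.
GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 32, 50, 100, 0.02

def fitness(genome):
    return sum(genome)                      # count of 1-bits

def mutate(genome):
    return [bit ^ (random.random() < MUTATION_RATE) for bit in genome]

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)   # single-point crossover
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 5]    # keep the fittest fifth
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(POP_SIZE)]
print("best fitness:", max(fitness(g) for g in population), "out of", GENOME_LEN)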

Others, as I am sure you know, are working on developing better machine analogs of sensory organs, so that a silicon brain can experience and interact with the world more like we do, the theory being that such interaction and sensation lie behind the development of intelligence.

And we haven't even thrown in the cyborg route...It just occurred to me that another way to make this happen is to design more and more component brain prosthetics along with the sensory organ prosthetics. When most of your senses are mediated by silicon and more and more of your neurons have been replaced by machine circuitry, you may cross a threshold where more of "you" thinks in machine than in meat. Eventually, you could do away with the "meat" part altogether.

(Personally, I believe AL will achieve what AI has not. We will not program machine intelligence, we will grow it.)

Who knows. It is quite possible that we may not see machines achieve intelligence for a long, long time. But one thing we learn from trends analysis--computational limitations will NOT be a factor. I don't think most people realize that, within their lifetimes, they might see a desktop machine with raw computational power exceeding the combined computational power of all humans on Earth. And that is *without* dramatic discontinuities like the development of practical quantum computation. I do not believe that computational power will magically translate to intelligence. But, intelligent or not, that is a lot of horsepower. And it is good to know that it is coming, soon.
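
To put rough numbers on that claim (and every figure here is an assumption chosen for illustration: the per-brain estimate, the 2002 desktop baseline, the doubling period):

Code:
import math

# Assumptions, all rough and contestable:
BRAIN_OPS = 1e16          # assumed raw ops/sec per human brain
HUMANS = 6e9              # world population, early 2000s
DESKTOP_2002 = 1e10       # assumed ops/sec for a 2002 desktop
DOUBLING_YEARS = 1.5      # assumed doubling period for price-performance

target = BRAIN_OPS * HUMANS                    # ~6e25 ops/sec for everyone at once
doublings = math.log2(target / DESKTOP_2002)   # ~52 doublings needed
print(f"years until one desktop ~ all human brains: {doublings * DOUBLING_YEARS:.0f}")

Shift the doubling period or the per-brain figure and the answer moves by decades, which is why this is a trend argument and not a precise forecast.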

Hell, before long it won't be computationally infeasible to recreate a human brain as is, neuron by neuron, synapse by synapse. Most probably NOT the most efficient way to go about things, but, hell, if you create a whole brain prosthetic...As a true materialist, I believe there is nothing magical about the brain; it is just an incredibly complex, fuzzy-logic calculator.
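
The neuron-by-neuron idea in rough numbers (again, the neuron count, synapse count and update rate below are assumed round figures; serious estimates vary by orders of magnitude):

Code:
# Crude cost estimate for simulating a brain "as is".
# Every number is an assumed round figure; real estimates differ by
# orders of magnitude depending on the level of detail modeled.
NEURONS = 1e11              # assumed neuron count
SYNAPSES_PER_NEURON = 1e4   # assumed average synapses per neuron
UPDATES_PER_SEC = 1e3       # assumed update rate per synapse

ops_per_sec = NEURONS * SYNAPSES_PER_NEURON * UPDATES_PER_SEC
print(f"~{ops_per_sec:.0e} ops/sec just to tick every synapse")   # ~1e18

Enormous by 2002 standards, but under the same doubling assumption as above it is only a few decades of hardware growth away.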

That is why I think Kurzweil is valuable, even if he does go off the deep end with his prophecies (and even if he does AI a huge disservice by pretending that his silly "alter ego" bot Ramona is true AI). For the same reason R. Buckminster Fuller was, and still is, valuable. He gets people thinking logarithmically, instead of linearly. Now, all we have to do is to get people thinking 3-dimensionally instead of 2-dimensionally, and system-theoretically instead of binary-dualistically, and we might have a chance to survive ourselves.
galiel is offline  
Old 10-10-2002, 08:52 PM   #19
Junior Member
 
Join Date: Sep 2002
Location: California
Posts: 41
Red face

Quote:
Originally posted by galiel:
<snip>
I was gonna reply, but I fucked up my post and lost it twice in a row, and I just don't care enough to type all that again.

Long story short, I think the trends on computational power are also somewhat overblown.

-tail
taillessmonkey is offline  
 
