FRDB Archives

Freethought & Rationalism Archive

The archives are read only.


Old 05-18-2003, 10:12 PM   #111
Veteran Member
 
Join Date: Aug 2000
Location: Australia
Posts: 4,886
Default Re: I know, but:

Quote:
Originally posted by DRFseven
Q: Why do you think we have free will?

A: Because if I didn't I'd have to excuse criminals!

I think it is possible to reject free will but also believe that criminals should be punished....

If there weren't any punishments for crimes (if they were simply "forgiven"), people would be less discouraged from committing them. So people would have much less reason to avoid speeding, robbing banks, killing people they hate, raping others, etc.

Also, punishing people can alter their behaviour so that they don't want to do bad things in the future. (Or at least don't want to get caught.)
excreationist is offline  
Old 05-19-2003, 09:53 AM   #112
Veteran Member
 
Join Date: Mar 2001
Posts: 2,322
Default Re: Re: I know, but:

Quote:
Originally posted by excreationist
I think it is possible to reject free will but also believe that criminals should be punished....

If there weren't any punishments for crimes (if they were simply "forgiven"), people would be less discouraged from committing them. So people would have much less reason to avoid speeding, robbing banks, killing people they hate, raping others, etc.

Also, punishing people can alter their behaviour so that they don't want to do bad things in the future. (Or at least don't want to get caught.)
Yeah, I agree. I was highlighting a "special" (human) reasoning ability typified by this popular analysis. The reasoning that purportedly answers the question is actually answering some other question, because obviously, free will would be true or untrue regardless of whether we "have to" excuse criminals or not. It's kind of like saying, "I don't think we are causing ozone depletion, because if I did think so, I'd have to wear sunscreen." But, as you know, whatever opinion coincides with the proper neurotransmitter mix is the "correct" opinion to have.
DRFseven is offline  
Old 05-19-2003, 01:42 PM   #113
Veteran Member
 
Join Date: Apr 2003
Location: British Columbia
Posts: 1,027
Default

Quote:
Originally posted by John Page
For the answer to be meaningful, the computer needs to understand the question and provide its reasoning. If not, such generated answers will be an answer to a question, not necessarily the one asked.
The problem is that for any answer, there will be a program that gives you that answer. That doesn't mean you'll know how to build the right program, but that's your limitation, not a limitation on what is possible for the program.

Any answer is possible, including reasoning and jumping out of the scope of the problem.

Now, you can claim,

1) That's not how our minds work, though. We don't just spit out rote answers.
2) For anything meant to mimic human intelligence, this would fast become impractical.

So, yes, that's true, but what you can't claim is that some problems are non-computational in the sense that no program could solve them, much less that there might be a proof that this is so.
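[The canned-answer program sodium describes can be sketched in a few lines; this is an illustrative reconstruction, not code from the thread, and the question/answer strings are hypothetical:]

```python
# A lookup-table "program" in sodium's sense: for any fixed question with a
# fixed, bounded-length answer, some table-driven program emits that answer.
CANNED_ANSWERS = {
    "does life have meaning?": "Yes!",  # a rival program would store "No!"
}

def answer(question: str) -> str:
    # Purely mechanical: normalize the input string and look it up.
    return CANNED_ANSWERS.get(question.strip().lower(), "I don't know.")

print(answer("Does life have meaning?"))  # prints "Yes!"
```

If the question has an answer at all, one of the two rival tables is right, which is the whole point: being right requires no understanding.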
sodium is offline  
Old 05-19-2003, 02:10 PM   #114
Veteran Member
 
Join Date: May 2001
Location: US
Posts: 5,495
Default

Quote:
Originally posted by sodium
So, yes, that's true, but what you can't claim is that some problems are non-computational in the sense that no program could solve them, much less that there might be a proof that this is so.
Just to be clear, I'm not taking this position exactly. The concepts of "program" and "computation" are open to interpretation, as are "problem", "consciousness", etc.

BTW, I'd like to see some form of inverse of the Turing test, where the machine has to tell whether it's interacting with another machine or with a human intelligence...

Cheers, john
John Page is offline  
Old 05-21-2003, 03:45 AM   #115
Veteran Member
 
Join Date: Aug 2000
Location: Australia
Posts: 4,886
Default

Quote:
Originally posted by sodium
... 1. If a question has an answer, then it is decidable computationally, in principle. For example, let's consider the question, "Does life have meaning?"

I can easily write a program that when faced with this input will output the answer "Yes!" I can also create one that will answer "No!" Assuming this question has an answer, one of these programs will be right. So, one of my programs, although I don't know which one, is able to answer the question using purely mechanical processes.

You might say that this kind of canned answer isn't good enough, and that the computer should be able to justify its answer. But as long as I can limit the length of this justification, there will be a program that can spit it out in response to the question. So, the problem is still decidable computationally, if it is decidable at all.
You could train a person who doesn't understand a word of English, or perhaps a parrot, to do that. I.e. when it hears "duz lyf hav meening?" it parrots back "noh" or "yes" - or a longer answer (exactly as you trained it). That is basically the same as what you are talking about.
On the other hand, we learn about the world and how to speak over the course of many years and using that experience we generate an answer to that question for ourselves... i.e. we aren't preprogrammed in advance with that question and answer by a programmer or trainer.
excreationist is offline  
Old 05-21-2003, 06:02 PM   #116
Veteran Member
 
Join Date: Apr 2003
Location: British Columbia
Posts: 1,027
Default

Quote:
Originally posted by excreationist
You could train a person who doesn't understand a word of English, or perhaps a parrot, to do that. I.e. when it hears "duz lyf hav meening?" it parrots back "noh" or "yes" - or a longer answer (exactly as you trained it). That is basically the same as what you are talking about.
Yes. You see, my argument isn't that a mechanistic process is exactly the same as a human, although I believe that too, and am arguing for it in a different thread. But some people have claimed that they can prove that computers cannot, even in principle, answer certain kinds of questions. Not merely that they can't mimic internal human thought processes, but that they simply can't answer certain questions. I'm pointing out that this is false.
sodium is offline  
Old 05-23-2003, 07:04 PM   #117
New Member
 
Join Date: Feb 2003
Location: Rockland, Me, USA
Posts: 3
Default Mechanistic

A very interesting thread. Gödel has always been a favorite of mine, as relates to the unprovability of knowledge systems. My favorite line of reasoning is the old quote: "Nothing can be created out of nothing." It can. But define nothing. Seriously, think about it. At one point in our past, nothing was most likely everything that existed.

God/energy/spirit and nothing. Herein lie the greatest of paradoxes.

I am an atheist. I am almost sixty years old. I have experienced a great deal of mechanistic thought. I have also experienced thought far beyond what can be described as mechanistic. What is woman? How does she work? Why does she work the way she does? How do male/female mechanics/love work? What is karmic control of a life? [i.e., schizophrenia, etc.] What is spiritual hegemony of entire nations? What is economic suzerainty of all nations by one? How did it happen? Why is it as it is?

If we are ever to solve the AI paradigm, we must resolve the nothing paradox, as well as many others. I have an intuitive feeling that AI is almost totally paradox dependent. I feel all knowledge is. Nothing exists that cannot in some way be paradoxed. [Basically, as Gödel showed back in the 1930s.]

I have no proof. I simply believe as Bob Dylan stated: "You want the truth? There ain't no truth." Heisenberg, Einstein, and Gödel all contributed greatly to the road we must travel. We have a long way to go to truly and positively know the answers we seek.
Gillespie is offline  
Old 05-31-2003, 11:21 AM   #118
mhc
Regular Member
 
Join Date: Mar 2003
Location: CA
Posts: 124
Default

John Page wrote:
Quote:
BTW, I'd like to see some form of inverse of the Turing test where the machine has to tell whether its interacting with another machine or human intelligence
THAT, it seems to me, would be the real test that the machine can "do what we do".
mhc is offline  
Old 05-31-2003, 06:46 PM   #119
Veteran Member
 
Join Date: Apr 2003
Location: British Columbia
Posts: 1,027
Default

If a computer passes the Turing test, then that means that telling computers from humans is not the kind of thing we can do, so it doesn't make sense to require it of a computer.
sodium is offline  
Old 05-31-2003, 07:39 PM   #120
Veteran Member
 
Join Date: May 2001
Location: US
Posts: 5,495
Default

Quote:
Originally posted by sodium
If a computer passes the Turing test, then that means that telling computers from humans is not the kind of thing we can do, so it doesn't make sense to require it of a computer.
Why not? We use computers to make sense of MRI data, and to search for patterns in the myriad of incoming astronomy data... Perhaps we can use computers to tell us who *we* really are.
John Page is offline  
 
