FRDB Archives

Freethought & Rationalism Archive

The archives are read only.


Old 03-24-2002, 04:49 PM   #31
Bill
Veteran Member
 
Join Date: Dec 2002
Location: Gatorville, Florida
Posts: 4,334

Quote:
Originally posted by John Page:
There appear to be a number of contradictions in your previous post. How can people choose anything if they don't have an element of Free Will? What data is there to support your view?
The contradictions only appear depending on how one chooses to define what "Free Will" actually is. Please read my two posts above and see if my position doesn't make more sense. I also refer to some data in support of my position in those same posts.

In particular, I believe that the functions of a human brain can (and eventually will) be reduced to a computer program. Once that occurs, we can have yet-another-argument over what "Free Will" actually means. If you can reduce the functioning of a human brain down to a computer program, then I believe the matter of Determinism will be settled, once and for all. But only time will tell, as we haven't (yet) reached the point where we can actually do that.

== Bill
Bill is offline  
Old 03-24-2002, 04:55 PM   #32
Bill
Veteran Member
 
Join Date: Dec 2002
Location: Gatorville, Florida
Posts: 4,334

Quote:
Originally posted by excreationist:
There is a big difference though - chess computers are given explicit instructions by programmers about *exactly* what to do.
Humans, on the other hand, begin with several basic instincts and autonomously learn new behaviours - just to satisfy their instincts. Humans can be trained like circus animals, but most of what we learn is picked up by learning for ourselves.
I'm not so sure about this; at least I doubt your use of the word "*exactly*" in this assertion.

The idea of heuristic computer algorithms is not new, and it is my understanding that the "Deep Blue" computer program was a heuristic (learning) algorithm. Thus, there was no "*exactly*" in its programming. There was only a general heuristic algorithm and a series of goal-seeking algorithms employed to optimize the decision tree. Thus, "Deep Blue" didn't try to work its way through every one of the millions of possible counter-moves, but only sought to explore in some depth the few thousand most likely future positions. This is the real difference between a "brute force" approach (which seeks to try out every possible alternative) and a heuristic and goal-seeking approach (which seeks to rapidly locate an "optimal" solution). The latter is, of course, far closer to actual human thought.
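
As a toy illustration of that difference (an invented example in Python, not Deep Blue's actual program; the "game", the move generator and the evaluate() heuristic are all made up), the brute-force search below tries every line of play, while the goal-seeking version orders moves by the heuristic and explores only the few most promising ones:

Code:
def legal_moves(state):
    # Toy move generator: from any position there are three possible moves.
    return [state + step for step in (1, 2, 3)]

def evaluate(state):
    # Invented heuristic: positions closer to the goal value 10 score higher.
    return -abs(10 - state)

def brute_force(state, depth):
    # Tries out every possible alternative down to the depth limit.
    if depth == 0:
        return evaluate(state)
    return max(brute_force(s, depth - 1) for s in legal_moves(state))

def heuristic_search(state, depth, beam=2):
    # Goal-seeking: keep only the 'beam' most promising moves at each step.
    if depth == 0:
        return evaluate(state)
    promising = sorted(legal_moves(state), key=evaluate, reverse=True)[:beam]
    return max(heuristic_search(s, depth - 1, beam) for s in promising)

print(brute_force(0, 5))       # examines 3**5 = 243 lines of play
print(heuristic_search(0, 5))  # examines at most 2**5 = 32 lines of play

The heuristic version looks at a small fraction of the positions and still tends to find a good line of play; that pruning of the decision tree is the trade-off described above.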

== Bill
Bill is offline  
Old 03-24-2002, 04:59 PM   #33
Bill
Veteran Member
 
Join Date: Dec 2002
Location: Gatorville, Florida
Posts: 4,334

Quote:
Originally posted by John Page:
I'm with you pretty much all the way except this part, where you say "ONLY". I think part of our decision making includes considering an issue from several different angles. I think this "perceived objectivity" contributes to the "free will perception" because it creates the illusion that the decision has come partly from outside ourselves. (Hence illusion of divine inspiration?)
Here, I agree. The whole idea of the "Cartesian Theater" has been roundly trounced within Philosophy of Mind circles. However, the illusion persists that our "essential selves" are merely sitting in our brains and watching our lives go by (this is the essence of what Descartes was speaking of). This is, of course, an illusion, but it remains a popular misconception about how our brains function.

== Bill
Bill is offline  
Old 03-25-2002, 09:12 AM   #34
Hans
Regular Member
 
Join Date: Mar 2002
Location: Earth
Posts: 247

Quote:
Originally posted by owleye:
Hans...

"Wherever the volition of one's will can be deduced, that volition is always self-serving."

1. Well, I suppose you could support this merely by defining (voluntary) actions in such a way that they were always self-serving. However, I'm sure you wouldn't wish to do that.
This actually is my position, although I'm uncertain that I am capable of doing so (defining voluntary actions in such a way that they were always self-serving). It would seem to be quite a task given the multitude of voluntary actions.

If this thread has taught me anything, it has taught me that I have a lot of learning to do before I can effectively offer anything to this forum. Lesson learned.

Thanks for your time, and for the time of everyone else who has contributed to this topic and those who may continue to.
Hans is offline  
Old 03-25-2002, 04:32 PM   #35
owleye
Regular Member
 
Join Date: Feb 2002
Location: Home
Posts: 229

Hans...

Well, I hate to see you quit so soon, particularly from the position you reveal in your last response, where you define voluntary action so that it must be self-serving. I'd thought we were going to make some headway. Instead, you have made no progress whatsoever. You are trying to promote something by way of a mere posit, as if this were supposed to stimulate further discussion. That is, all you have been doing is supposing that voluntary actions are self-serving -- like an assumption that you wish to draw on to make some further point. However, if you are going to do this sort of thing, you should pick an assumption that is less controversial.

As an alternative, you could posit this controversial assumption for a quite different reason -- namely, to see whether or not it leads to some sort of contradiction or to something which you would otherwise not find acceptable. For example, you could show that this assumption leads inexorably to the idea that there is no such thing as moral conduct, which you otherwise would think there is. In this case, you might be using your posit wisely in order to reject this assumption.

However, to use a controversial assumption just to support a conclusion which follows from this is to fall prey to a logical fallacy, known as begging the question. I'd be more careful if I were you in making these kinds of arguments.

owleye
owleye is offline  
Old 03-25-2002, 09:05 PM   #36
excreationist
Veteran Member
 
Join Date: Aug 2000
Location: Australia
Posts: 4,886

John Page:
Quote:
...I think part of our decision making includes considering an issue from several different angles. I think this "perceived objectivity" contributes to the "free will perception" because it creates the illusion that the decision has come partly from outside ourselves....
Yeah, I mentioned that... that is what I meant by "us evaluating our options". Sometimes this is done intuitively, with most of the reasoning being unconscious (in other areas of the brain), and sometimes it is guided (e.g. with a commentating voice).

Bill:
What I mean is that programmers programmed in Deep Blue's problem solving strategies. On the other hand, animals and people can develop their own problem solving strategies just by interacting with their environment. This means that Deep Blue is only good at playing chess games while we can learn totally new skills - e.g. that when you let go of an object, it usually falls, or if you see an object behind your image in a mirror, it is behind you in reality. So there are lots of things that aren't directly instinctual, that our parents don't teach us... we just work it out for ourselves.
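
A crude sketch of that kind of self-directed learning (purely hypothetical Python - the toy environment() function and its 5% "noise" are invented for illustration): the program is never told the rule "released objects fall"; it just acts, records what happens, and keeps the most frequent outcome as its learned expectation.

Code:
import random

def environment(action):
    # Toy world: releasing an object (almost always) makes it fall.
    if action == "release":
        return "falls" if random.random() < 0.95 else "unclear"
    return "stays"

experience = {}                      # tally of observed (action, outcome) pairs
for _ in range(1000):
    action = random.choice(["release", "hold"])
    outcome = environment(action)
    experience[(action, outcome)] = experience.get((action, outcome), 0) + 1

# The "learned rule" is simply the most frequent outcome for each action.
for action in ("release", "hold"):
    outcomes = {o: n for (a, o), n in experience.items() if a == action}
    print(action, "->", max(outcomes, key=outcomes.get))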

[ March 25, 2002: Message edited by: excreationist ]
excreationist is offline  
Old 03-26-2002, 05:51 AM   #37
Hans
Regular Member
 
Join Date: Mar 2002
Location: Earth
Posts: 247

owleye,

I think my only mistake was beginning a discussion in an area in which I'm considerably lacking. Since I'm not literate in human behaviour, I'm incapable of countering arguments or supporting my own. So I've done the only thing I can do after making such a mistake: I've admitted as much and moved on.
Hans is offline  
Old 03-26-2002, 08:48 AM   #38
Kip
Regular Member
 
Join Date: Jan 2001
Location: not so required
Posts: 228

Quote:
Then are there some who desire the evil and others who desire the good?

Do not all men, my dear sir, desire good?
Socrates

[ March 26, 2002: Message edited by: Kip ]
Kip is offline  
Old 03-26-2002, 05:41 PM   #39
Bill
Veteran Member
 
Join Date: Dec 2002
Location: Gatorville, Florida
Posts: 4,334

Quote:
Originally posted by excreationist:
What I mean is that programmers programmed in Deep Blue's problem solving strategies. On the other hand, animals and people can develop their own problem solving strategies just by interacting with their environment. This means that Deep Blue is only good at playing chess games while we can learn totally new skills - e.g. that when you let go of an object, it usually falls, or if you see an object behind your image in a mirror, it is behind you in reality. So there are lots of things that aren't directly instinctual, that our parents don't teach us... we just work it out for ourselves.
Yes, Deep Blue was only good for playing chess. That was the only set of problem solving strategies which it had been "taught."

However, I think you overstate the case for humans when you assert that "animals and people can develop their own problem solving strategies just by interacting with their environment." I think that is true, but only to a very limited degree, and a very important element of the word "environment" is other (older, wiser) animals acting out problem solving strategies (by way of example). In the case of humans, we learn an additional skill of passing on problem solving strategies through communication (language). The real power of modern civilization is the ability to preserve the learning of others indefinitely (through writing; and more recently, through printed books). So, I hope you see that the amount of individual innovation is actually fairly low. The vast bulk of human learning comes from examining the actions of others, even vicariously through communication (story telling, etc.).

=====

We have not really tried to create a very general-purpose computer simulation. It's the old "crawl before you walk" approach towards incremental developmental advances. That, in and of itself, is yet another human-developed problem solving strategy.

However, I see the distinction as being primarily quantitative rather than qualitative. In other words, I don't see any really formidable barriers to developing true artificial intelligence. It doesn't require a qualitative increase in computer capabilities. It only requires current technology, plus a far better understanding of exactly what our own (human) brains actually do. The basic "processing cycle time" of the human brain is very slow (to the extent that such measurements have been obtained; see Dennett, etc.). What the human brain does have is tremendous parallel processing capabilities. Those slow-but-parallel processing capabilities can easily be emulated on a fast-but-serial computational device (a modern computer). The thing which is lacking is the model for the software. The hardware we have will do perfectly well, all things being equal.
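
A rough sketch of that emulation point (the unit count, random weights and sigmoid update rule below are invented, not a model of any real brain): a serial loop can reproduce a notionally parallel update step by computing every unit's new value from a frozen copy of the previous state before anything is overwritten.

Code:
import math
import random

random.seed(0)
N = 100                                                  # number of toy "neurons"
weights = [[random.uniform(-1, 1) for _ in range(N)] for _ in range(N)]
state = [random.uniform(0, 1) for _ in range(N)]

def step(state):
    # One "parallel" update, computed one unit at a time in serial.
    new_state = []
    for i in range(N):
        total = sum(weights[i][j] * state[j] for j in range(N))
        new_state.append(1.0 / (1.0 + math.exp(-total)))  # sigmoid activation
    return new_state

for tick in range(10):                                   # ten simulated "brain cycles"
    state = step(state)
print(state[:5])

Nothing here needs parallel hardware; the serial machine just spends more clock cycles per simulated step, which is the quantitative-rather-than-qualitative point above.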

=====

I think that we make a great mistake when we tend to overcomplicate what it takes to "act human." Yes, we have some wonderful capabilities (in the form of "front-end processors," like speech recognition and visual recognition signal processors). But our basic memory and thinking capabilities are no match for what a good computer can churn through. And the hardware and software requirements for good parallel processing are even getting cheap to obtain. At some point in the not-too-distant future, it will be possible to emulate the parallel processing aspects of the human brain for a cost that will not be viewed as large. Again, what we lack, in reality, is a good enough understanding of the operation of our own brains to be able to create the appropriate software to emulate our human brain operation.

Unfortunately, research into human brain functioning is quite difficult without the ability to do things that would be strongly opposed by medical ethicists. So, we plod along, limited to (mostly) non-invasive investigatory procedures. Even these limits, however, should not deter us from a solution in the not-too-distant future.

== Bill
Bill is offline  
Old 03-27-2002, 12:05 AM   #40
Kharakov
Veteran Member
 
Join Date: Aug 2000
Location: Fidel
Posts: 3,383

Excreationist and Bill-



I think we need to consider the isolation of computers from the environment when we say that they cannot learn. So far, I don't think anyone has developed a robot with heuristic algorithms that also has the same number or type of inputs from the environment that humans and animals have (or the terabytes of storage capacity humans have).

If a computer was programmed with algorithms that "learned" to associate different inputs from the environment, I think we would find that the computer "learned" language, etc. For what is learning, besides the observation and storage (memory) of causal connections and associations in the environment?

To further clarify what I mean by isolation of computers from the environment:

Deep Blue can learn to play better chess because Deep Blue has contact with the environment through chess - no other inputs are attached to Deep Blue.

For a computer to truly become independent of human input (become an artificial intelligence) - the computer needs to have inputs from the environment and learning algorithms that interpret and associate those inputs.
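
A minimal sketch of that last idea (a made-up example, not a real AI system): the program receives bundles of simultaneous "sensory" inputs, stores which inputs arrive together, and its strongest stored co-occurrences play the role of the learned associations described above.

Code:
from collections import Counter
from itertools import combinations

# Invented stream of simultaneous inputs from the environment.
observations = [
    {"lightning", "thunder", "rain"},
    {"lightning", "thunder"},
    {"rain", "wind"},
    {"lightning", "thunder", "wind"},
    {"rain"},
]

associations = Counter()
for inputs in observations:
    for pair in combinations(sorted(inputs), 2):
        associations[pair] += 1          # store each observed co-occurrence

# The most frequently co-occurring inputs become the "learned" associations.
print(associations.most_common(3))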

[ March 27, 2002: Message edited by: Kharakov ]
Kharakov is offline  
 
