Freethought & Rationalism Archive. The archives are read only.
03-24-2002, 04:49 PM | #31 | |
Veteran Member
Join Date: Dec 2002
Location: Gatorville, Florida
Posts: 4,334
Quote:
In particular, I believe that the functions of a human brain can (and eventually will) be reduced to a computer program. Once that occurs, we can have yet-another-argument over what "Free Will" actually means. If you can reduce the functioning of a human brain down to a computer program, then I believe the matter of Determinism will be settled, once and for all. But only time will tell, as we haven't (yet) reached the point where we can actually do that.

== Bill
03-24-2002, 04:55 PM | #32 | |
Veteran Member
Join Date: Dec 2002
Location: Gatorville, Florida
Posts: 4,334
|
Quote:
The idea of heuristic computer algorithms is not new, and it is my understanding that the "Deep Blue" computer program was a heuristic (learning) algorithm. Thus, there was no "*exactly*" in its programming. There was only a general heuristic algorithm and a series of goal-seeking algorithms employed to optimize the decision tree. Thus, "Deep Blue" didn't try to work its way through every one of the millions of possible counter-moves, but only sought to explore in some depth the few thousand most likely future positions. This is the real difference between a "brute force" approach (which seeks to try out every possible alternative) and a heuristic and goal-seeking approach (which seeks to rapidly locate an "optimal" solution). The latter is, of course, far closer to actual human thought.

== Bill
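The brute-force versus goal-seeking distinction Bill describes can be sketched in a few lines of Python. This is not Deep Blue's actual code; the toy game, the evaluation function, and the branching limit are all invented for illustration. The point is only that a search which prunes to the most promising moves visits far fewer positions than one which explores every line:

```python
# Toy game: each ply, the player to move picks one of three increments;
# after `depth` plies the position's score is the running total.
# brute_force explores every line; heuristic_search keeps only the
# `branch` most promising moves at each node (a goal-seeking search)
# and scores positions with a cheap evaluation function.

MOVES = [1, 2, 3]

def brute_force(total, depth, maximizing, counter):
    counter[0] += 1  # count every position visited
    if depth == 0:
        return total
    scores = [brute_force(total + m, depth - 1, not maximizing, counter)
              for m in MOVES]
    return max(scores) if maximizing else min(scores)

def heuristic_search(total, depth, maximizing, counter, branch=2):
    counter[0] += 1
    if depth == 0:
        return total  # evaluation function: here, just the running total
    # order moves by a cheap heuristic (bigger is better for the maximizer)
    # and explore only the best `branch` of them
    ordered = sorted(MOVES, reverse=maximizing)[:branch]
    scores = [heuristic_search(total + m, depth - 1, not maximizing,
                               counter, branch)
              for m in ordered]
    return max(scores) if maximizing else min(scores)

brute_nodes, heur_nodes = [0], [0]
brute_force(0, 8, True, brute_nodes)
heuristic_search(0, 8, True, heur_nodes)
print(brute_nodes[0], heur_nodes[0])  # → 9841 511
```

The gap grows exponentially with depth, which is why pruning (rather than raw speed) is what makes deep game-tree search feasible at all.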
03-24-2002, 04:59 PM | #33 | |
Veteran Member
Join Date: Dec 2002
Location: Gatorville, Florida
Posts: 4,334
|
Quote:
== Bill
03-25-2002, 09:12 AM | #34 | |
Regular Member
Join Date: Mar 2002
Location: Earth
Posts: 247
|
Quote:
If this thread has taught me anything, it has taught me that I have a lot of learning to do before I can effectively offer anything to this forum. Lesson learned. Thanks for your time, and the time of everyone else who has contributed to this topic and of those who may continue to.
03-25-2002, 04:32 PM | #35 |
Regular Member
Join Date: Feb 2002
Location: Home
Posts: 229
Hans...
Well, I hate to see you quit so soon, particularly from the position you reveal in your last response, where you define voluntary action so that it must be self-serving. I'd thought we were going to make some headway; instead, you have made no progress whatsoever.

You are trying to promote something by way of a mere posit, as if that were supposed to stimulate further discussion. That is, all you have been doing is supposing that voluntary actions are self-serving -- like an assumption that you wish to draw on to make some further point. If you are going to do that sort of thing, however, you should pick an assumption that is less controversial.

As an alternative, you could posit this controversial assumption for quite a different reason -- namely, to see whether or not it leads to some sort of contradiction, or to something which you would otherwise not find acceptable. For example, you could show that this assumption leads inexorably to the idea that there is no such thing as moral conduct, which you otherwise think there is. In that case, you would be using your posit wisely, as grounds to reject the assumption. However, to use a controversial assumption merely to support a conclusion that follows from it is to fall prey to a logical fallacy known as begging the question.

I'd be more careful, if I were you, in making these kinds of arguments.

owleye
03-25-2002, 09:05 PM | #36 | |
Veteran Member
Join Date: Aug 2000
Location: Australia
Posts: 4,886
|
John Page:
Quote:
Bill: What I mean is that programmers programmed in Deep Blue's problem solving strategies. On the other hand, animals and people can develop their own problem solving strategies just by interacting with their environment. This means that Deep Blue is only good at playing chess, while we can learn totally new skills - e.g. that when you let go of an object, it usually falls, or that if you see an object behind your image in a mirror, it is behind you in reality. So there are lots of things that aren't directly instinctual, that our parents don't teach us... we just work them out for ourselves.

[ March 25, 2002: Message edited by: excreationist ]
03-26-2002, 05:51 AM | #37 |
Regular Member
Join Date: Mar 2002
Location: Earth
Posts: 247
|
owleye,
I think my only mistake was beginning a discussion in an area in which I'm considerably lacking. Since I'm not literate in human behaviour, I'm incapable of countering arguments or supporting my own. So I've done the only thing I can do after making such a mistake: I've admitted as much and moved on.
03-26-2002, 08:48 AM | #38 | |
Regular Member
Join Date: Jan 2001
Location: not so required
Posts: 228
|
Quote:
[ March 26, 2002: Message edited by: Kip ]
03-26-2002, 05:41 PM | #39 | |
Veteran Member
Join Date: Dec 2002
Location: Gatorville, Florida
Posts: 4,334
|
Quote:
However, I think you overstate the case for humans when you assert that "animals and people can develop their own problem solving strategies just by interacting with their environment." I think that is true, but only to a very limited degree, and a very important element of the word "environment" is other (older, wiser) animals acting out problem solving strategies (by way of example). In the case of humans, we learn an additional skill of passing on problem solving strategies through communication (language). The real power of modern civilization is the ability to preserve the learning of others indefinitely (through writing, and more recently, through printed books). So, I hope you see that the amount of individual innovation is actually fairly low. The vast bulk of human learning comes from examining the actions of others, even vicariously through communication (storytelling, etc.).

=====

We have not really tried to create a very general purpose computer simulation. It's the old "crawl before you walk" approach toward incremental developmental advances; that, in and of itself, is yet another human-developed problem solving strategy. However, I see the distinction as being primarily quantitative rather than qualitative. In other words, I don't see any really formidable barriers to developing true artificial intelligence. It doesn't require a qualitative increase in computer capabilities. It only requires current technology, plus a far better understanding of exactly what our own (human) brains actually do. The basic "processing cycle time" of the human brain is very slow (to the extent that such measurements have been obtained; see Dennett, etc.). What the human brain does have is tremendous parallel processing capability. Those slow-but-parallel processing capabilities can readily be emulated on a fast-but-serial computational device (a modern computer). The thing which is lacking is the model for the software. The hardware we have will do perfectly well, all things being equal.

=====

I think that we make a great mistake when we over-complicate what it takes to "act human." Yes, we have some wonderful capabilities (in the form of "front-end processors," like speech recognition and visual recognition signal processors). But our basic memory and thinking capabilities are no match for what a good computer can churn through, and the hardware and software required for good parallel processing are even getting cheap to obtain. At some point in the not-too-distant future, it will be possible to emulate the parallel processing aspects of the human brain at a cost that will not be viewed as large. Again, what we lack, in reality, is a good enough understanding of the operation of our own brains to be able to create the appropriate software to emulate human brain operation. Unfortunately, research into human brain functioning is quite difficult without the ability to do things that would be strongly opposed by medical ethicists. So, we plod along, limited to (mostly) non-invasive investigatory procedures. Even these limits, however, should not deter us from a solution in the not-too-distant future.

== Bill
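Bill's claim that slow-but-parallel processing can be emulated on fast-but-serial hardware is a standard technique, and can be sketched briefly. The network below is a toy invented for illustration (the unit count, random weights, and threshold rule are all assumptions, not a brain model): many simple units conceptually fire "at once," and a serial machine achieves this by reading only the previous state vector while computing the next one.

```python
# Emulating one parallel layer of simple threshold units on a serial
# machine. Every unit is updated from the OLD state, so the serial loop
# is indistinguishable from a truly simultaneous (parallel) update.

import random

random.seed(0)
N = 50  # number of units (arbitrary toy size)
# random connection weights between every pair of units
weights = [[random.uniform(-1, 1) for _ in range(N)] for _ in range(N)]

def step(state):
    """One synchronous update of all N units, computed one at a time."""
    new_state = []
    for i in range(N):
        # each unit sums its weighted inputs from the *previous* state
        activation = sum(weights[i][j] * state[j] for j in range(N))
        new_state.append(1 if activation > 0 else 0)
    return new_state  # swapped in only after every unit is computed

state = [random.choice([0, 1]) for _ in range(N)]
for _ in range(10):
    state = step(state)
print(state[:10])
```

The trade-off is exactly the one Bill names: the serial machine does N-squared multiplications per "tick" one after another, so it must be correspondingly faster than the parallel system it emulates, but nothing qualitative is lost.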
03-27-2002, 12:05 AM | #40 |
Veteran Member
Join Date: Aug 2000
Location: Fidel
Posts: 3,383
|
Excreationist and Bill-
I think we need to consider the isolation of computers from the environment when we say that they cannot learn. So far, I don't think anyone has developed a robot with heuristic algorithms that also has the same number or type of inputs from the environment that humans and animals have (or the terabytes of storage capacity humans have). If a computer were programmed with algorithms that "learned" to associate different inputs from the environment, I think we would find that the computer "learned" language, etc. For what is learning, besides the observation and storage (memory) of causal connections and associations in the environment?

To further clarify what I mean by the isolation of computers from the environment: Deep Blue can learn to play better chess because Deep Blue has contact with the environment through chess; no other inputs are attached to Deep Blue. For a computer to truly become independent of human input (to become an artificial intelligence), the computer needs to have inputs from the environment and learning algorithms that interpret and associate those inputs.

[ March 27, 2002: Message edited by: Kharakov ]
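Kharakov's definition of learning as "the observation and storage of causal connections and associations" can be sketched as a minimal co-occurrence counter in Python. Everything here is a hypothetical illustration (the `Associator` class and the input labels are invented): the program is never told what any input "means"; it only records which inputs tend to arrive together, which is the kind of environment-driven association being described.

```python
from collections import Counter
from itertools import combinations

class Associator:
    """Learns associations purely from which inputs co-occur."""
    def __init__(self):
        self.pair_counts = Counter()  # how often each pair co-occurred

    def observe(self, inputs):
        # inputs: a set of simultaneously active input labels
        for a, b in combinations(sorted(inputs), 2):
            self.pair_counts[(a, b)] += 1

    def associated(self, x):
        """Inputs most often seen together with x, strongest first."""
        scores = Counter()
        for (a, b), n in self.pair_counts.items():
            if a == x:
                scores[b] = n
            elif b == x:
                scores[a] = n
        return [item for item, _ in scores.most_common()]

brain = Associator()
# simulated "environmental" episodes: release an object, watch it fall
for _ in range(20):
    brain.observe({"hand-opens", "object-falls"})
brain.observe({"hand-opens", "object-floats"})  # one odd balloon episode
print(brain.associated("hand-opens"))  # → ['object-falls', 'object-floats']
```

With only one input channel (chess positions), such a learner can only ever form chess associations, which is Kharakov's point about Deep Blue's isolation: the algorithm is general, but the associations it can form are limited by the inputs attached to it.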