Freethought & Rationalism Archive. The archives are read only.
03-17-2002, 11:29 PM | #81
Senior Member
Join Date: Feb 2001
Location: Toronto
Posts: 808
|
Laurentius, thanks for the reply. I didn't mean to be quite as bitter as I sounded in my reply before last. To offer an excuse, I was under the influence. I realize how many replies you have been getting from various people, so feel free to take your time.
03-18-2002, 03:51 AM | #82 |
Veteran Member
Join Date: Dec 2001
Location: Lucky Bucky, Oz
Posts: 5,645
|
Owleye
Well, this is indeed something that needs to be worked on. No matter how deep the thought of Deep Thought is, I suspect it is not deep enough, since it lacks self-reflectivity. This is what I was talking about after all: the power of a conscious system to objectively view itself and act according to the enhanced information and analysis thus generated. But I'll certainly come back later. Now I've just poked in during a lunch break. AVE |
03-18-2002, 09:34 AM | #83 |
Regular Member
Join Date: Feb 2002
Location: Home
Posts: 229
|
Laurentius...
"This is what I was talking about after all. The power of a conscious system to objectively view itself and act according to the enhanced information and analysis thus generated." Here's hoping you do. Consciousness, in the form in which most philosophers speak of it, implies self-consciousness. That is, when we are conscious of X, we can be, reflectively, conscious of being conscious of X. This way of analyzing the problem, of course, takes us down the path of phenomenology. I would recommend pursuing this and, if possible, becoming acquainted with Husserl, who has us able to carry out a transcendental reduction whereby we decouple consciousness, in the form he refers to as the natural attitude toward the world, from that which we are conscious of without such a reference. There is a distinction, then, between the object as intended and the intended object. Frege would refer to the former as the sense, the latter as the reference. Intension and extension as aspects of meaning are also in common use. Following in the footsteps of Immanuel Kant, Husserl realizes that in our natural mode of consciousness we take the world as it appears to us (not taking this to mean as it seems to be) to be the real world. In that sense, following Kant, our mind in its capacity to experience the world takes up the position of an empirical realist. But Husserl goes on to notice that we also have the ability to perceive objects without attributing to them what Kant would have referred to as substance. There are in fact examples of this occurring spontaneously, as for example when we believe the image of the person speaking to us from Afghanistan has been time-delayed, and at the time the image appears to us the person so speaking has moved on. That is, on this realization, we've decoupled the person from the image. Of course, because of the pressing needs of life, we don't ordinarily occupy this condition for long, slipping back into the natural attitude rather quickly.
(Another example is how the image of the sun can be thought of as decoupled from the sun itself, which clearly is not in that same position at the time the image is presented.) In any case, he goes on to describe a vast number of modes of consciousness toward what he believes to be a science of phenomenology. owleye |
03-19-2002, 01:42 AM | #84
Veteran Member
Join Date: Dec 2001
Location: Lucky Bucky, Oz
Posts: 5,645
|
Christopher Lord
AVE
Okay, I'm going to present a coherent perspective on the achievements of my theory quite soon. AVE [ March 19, 2002: Message edited by: Laurentius ]
03-19-2002, 05:46 AM | #85
Veteran Member
Join Date: Aug 2000
Location: Australia
Posts: 4,886
|
Laurentius:
I just read a little of your last post....
Anyway, I think cutting-edge AI could teach itself new skills but it wouldn't be much smarter than a kitten. It would need to have a craving for newness to motivate it to discover and explore things. Computers on their own aren't self-training though. They just do exactly as they are told, step by step. |
|
03-19-2002, 12:19 PM | #86
Senior Member
Join Date: Feb 2001
Location: Toronto
Posts: 808
|
My own projects typically aren't too interesting, mostly because I don't have time to train a net properly or crunch huge GAs. Usually I write a conventional app to train them, which can't train complex things. Game AI is typically 2-5 years behind the state of the art, and current game AI has achieved plenty of 'creativity'. Pick up 'Black & White', and you have a fairly sophisticated little AI package with which you can train a creature driven by an interesting reward/punishment (slap/pet) system. The game's creators never even imagined some of the things this AI is now being reported as achieving. It 'learns' and acts appropriately based on what the operator rewards it for. I've coaxed my own creature to eat only cows, and to heal people. Some people train the opposite, some train it to always lift weights, some train it to throw a certain person in the ocean. This is in a commercial game released a year ago, using AI methods developed long before. <a href="http://ai-depot.com/" target="_blank">ai-depot.com</a> is a site offering help in the practical implementation of AI in current systems. <a href="http://technetcast.ddj.com/tnc_play_stream.html?stream_id=526" target="_blank">This</a> is a transcript in which Marvin Minsky talks about why current nets can't 'reflect' on output. And remember, current hardware is only as complex as a small insect, so by that metric our AI software is actually ahead of nature in terms of the complexity-behavior ratio. [ March 19, 2002: Message edited by: Christopher Lord ]
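The slap/pet scheme described above is essentially reward-driven learning: actions the operator rewards become more likely, punished ones less so. A minimal sketch in Python of that idea (all names and numbers here are hypothetical illustrations, not the game's actual code):

```python
import random

random.seed(0)  # fixed seed so the toy run is repeatable

class Creature:
    def __init__(self, actions):
        # Start indifferent: every action has the same desire weight.
        self.weights = {a: 1.0 for a in actions}

    def act(self):
        # Choose an action with probability proportional to its weight.
        actions = list(self.weights)
        r = random.uniform(0, sum(self.weights.values()))
        for a in actions:
            r -= self.weights[a]
            if r <= 0:
                return a
        return actions[-1]

    def feedback(self, action, reward):
        # "Pet" (reward > 0) strengthens the action; "slap" (reward < 0)
        # weakens it. A small floor keeps every action barely possible.
        self.weights[action] = max(0.01, self.weights[action] * (1.0 + reward))

creature = Creature(["eat cow", "eat villager", "heal people"])
for _ in range(200):
    a = creature.act()
    # Operator policy: pet cow-eating and healing, slap villager-eating.
    creature.feedback(a, 0.5 if a != "eat villager" else -0.5)

# After training, "eat villager" should be the least-preferred action.
```

The multiplicative update is only one of many possible rules; the point is how a simple reward signal, applied consistently, shapes very different "personalities" from identical starting creatures.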
|
03-19-2002, 01:03 PM | #87
Guest
Posts: n/a
|
Laurentius,
My position is that, like gliders, our minds are amenable to being understood without direct reference to physical laws. Physical objects can indeed have things like intentionality (will) and can indeed interact with the world via a representation of it.
One can easily make a self-modifying computer program. What is far from easy, and what took evolution billions of years to do, is make a program modify itself in useful and flexible ways so as to cope with a diversity of situations and tasks. Christopher Lord,
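The claim above that a self-modifying program is easy to write can be shown with a toy Python sketch (illustrative only, not from the thread): the program holds its own source as a string, rewrites it, and re-executes it. The trivial part is the rewriting; the hard part, as noted, is making such rewrites useful.

```python
# A trivially self-modifying program: it rewrites one of its own
# functions at runtime. Doing this is easy; doing it *usefully* is not.

SOURCE = "def step(x):\n    return x + 1\n"

namespace = {}
exec(SOURCE, namespace)      # define step() from our own source text
step = namespace["step"]
print(step(10))              # prints 11

# The program now "mutates" its own code: change the increment.
SOURCE = SOURCE.replace("x + 1", "x + 2")
exec(SOURCE, namespace)      # redefine step() from the mutated source
step = namespace["step"]
print(step(10))              # prints 12
```

The mutation here is hard-coded and pointless; a useful self-modifier would need some criterion for which rewrites improve its behavior, which is exactly the part evolution spent billions of years on.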
excreationist,
Regards, Synaesthesia |
03-19-2002, 01:20 PM | #88
Guest
Posts: n/a
|
owleye
1. This post is 170 words long.
2. The words in it are not 170 words long.
Therefore
3. This post is not made of words.
Regards, Synaesthesia |
03-19-2002, 04:39 PM | #89
Veteran Member
Join Date: Aug 2000
Location: Australia
Posts: 4,886
|
Synaesthesia:
Actually, it looks like the <a href="http://www.genobyte.com/robokoneko.html" target="_blank">robotic kitten</a> hasn't been finished yet. And I'm not sure how good it would be at exploring its environment and teaching itself new behaviours. I think an even more advanced robot would be <a href="http://www.cyberlife-research.com/contents.htm" target="_blank">Lucy</a>, which is being created by the creator of the "Creatures" games, Steve Grand. This robot is also controlled by a neural net, but it will begin extremely simply, like a baby, and learn new behaviours. And obviously we're much smarter than kittens... but AI research keeps on moving forward... |
03-19-2002, 07:29 PM | #90 |
Veteran Member
Join Date: Dec 2001
Location: Lucky Bucky, Oz
Posts: 5,645
|
Owleye
Year 3333. Astronaut Joe Armstrong and his highly computerized shuttle are hovering over Jupiter's troubled gaseous surface in a mission during which the shuttle has learned how to deal with many of Jupiter's gravitational irregularities. Suddenly, while Joe Armstrong is busy observing the recent unusual behavior of the guinea pigs in the lab area, the computer displays on its screen an enormous gravitational storm in the eye of the planet and announces bluntly: "Shuttle destruction is imminent in 10 seconds." The guinea pigs are frantically running all over the shuttle now, and their crazed squeaks show how they cannot take their little senses away from the danger. In a moment astronaut Joe Armstrong mentally views his highly sophisticated but minuscule shuttle caught in the gravitational turbulence, with him and the guinea pigs trapped inside, and says to himself: "We're doomed." No matter how complex the AI of the shuttle is, in the case of the computer we can only speak of INTRANSITIVE behavior, the automatic execution of implanted or learned operations. By contrast, the guinea pigs show highly TRANSITIVE behavior toward the environment, with which they engage in an emotional rapport. Detaching himself from his emotions and mentally picturing the situation in which they are all caught, the astronaut's behavior is REFLECTIVE. I don't know how convincing this hypothetical situation is, but that's about the most rigorous "philosophy on the run" I am able to provide under these informal conditions. Besides, the great difficulty is to persuade computer-oriented people of the fallacy of considering high-intelligence-bearing structures to be relatively the same as the human brain. Since my major is language, I've inevitably studied various theories, from Saussure to Chomsky and beyond. I'm not working in this field though, and I've come to detach myself from all those abstractions.
I think I'm more attracted to a popular philosophy that specialists may scorn, but which can be accepted and assimilated by the common man. AVE |