Freethought & Rationalism Archive
The archives are read only.
06-22-2002, 03:12 PM | #161
Veteran Member
Join Date: May 2001
Location: US
Posts: 5,495
Quote:
There can be objects in our imagination that have no physical correlate. So, although we can conceive of the concept of "things that are unimaginable", by definition they cannot be imagined, so there is no actual thing to participate in reality (although the "nonsensical" concept participates in reality).

If I've caused any confusion, perhaps it's because I define imagination as a real phenomenon. If it were not, the planets would be inside our heads....

Cheers, John
06-22-2002, 03:16 PM | #162
Veteran Member
Join Date: May 2001
Location: US
Posts: 5,495
Quote:
Cheers, John
06-23-2002, 06:39 AM | #163 |
Regular Member
Join Date: Feb 2002
Location: Home
Posts: 229
excreationist...
"Anyway, the main point of this is that it shows how neural networks can infer (or predict) the outputs for unseen inputs... so it can work out some of the patterns without being explicitly taught it." You describe "it" as a java applet that in turn represents (by supposition) an actual neural network that is able to learn a pattern from samples given to it that presumably follow that pattern. You suggest (I think) that it can be said to learn a pattern if "it" can "infer" or correctly "predict" whether new data fits that pattern or not. There may be other ways that the applet could inform us that it has learned the pattern. In all cases, the applet demonstrates this by exhibiting output that someone having an understanding of what was intended to be learned would know whether it did so. May I assume this fairly represents what you intended to show by this? "Also, even in this very, very simple neural network, the information is smeared across many neurons. In a multiple layer neural network this problem gets much worse. So it makes it hard for people to understand exactly what each neuron is doing since they have such an interconnected relationship. If you get it to learn all the patterns, you'd basically see six pluses going along the main diagonal." I believe the above is intended to support your prior claim that neural networks do not information as "pictures" or as "descriptions" but in some other way. The problem with this is that when philosophers speak about this problem they aren't particularly interested in how information is stored, from a design standpoint, rather they are interested in whether or not the mapping (correspondence) is direct or indirect (somewhat like the difference between raster and vector graphic devices). Both, of course, could be used in our own neural network, each for a different purpose. One question the philosopher would ask is how concepts are stored -- as rules (descriptors) or as images. Do we recognize a cat because we have an image of a cat at our disposal for comparison (as John seems to think)? Or do we recognize a cat because perception involves a constructive process where data wind up fitting a rule-based pattern we know to be a cat. (Thus, a coiled rope can be observed to be a snake, because the data received from the coiled rope fits what we expect a snake to appear like in the same situation. Is this because the data are arrayed in our neural network topologically spatially (in the case of visual perception) or through rules of construction from smaller pieces?) Your applet seems to corroborate the view that concepts (information) are stored as rules and not images, though it is difficult to say since the example rule learned does not lend itself to discriminating between them. I'd be interested to know how your applet would deal with the concept of a triangle. owleye |
06-23-2002, 07:47 AM | #164
Guest
Posts: n/a
John,
Quote:
owleye,
Quote:
The brain could, in principle, be understood without first using what a brain has to tell us. But why would anyone want to waste time that way? Why develop a theory of planetary motion without first looking into the night sky? If we don't have the noteworthy phenomenon to explain, obviously we are not going to explain it any time soon! To understand the brain fruitfully we need to use theories to interpret its bewilderingly complex states. As it happens, we have theories that are to some degree right: we can understand language, we can anticipate perception. Why should the mind sciences ignore some of the best data we have?

Quote:
The neuronal-level physical description subsumes a virtually endless high-level parsing of brain function. There is very great potential for developing a pseudo-mental language for describing brain function. I say pseudo-mental because mental description is known to have systematic inaccuracies. Even now, first-person accounts are being supplanted by our knowledge of how the brain works!

One early example of this kind of thinking is the description of brain function as vector transformation. We interpret the position of our hand in terms of the visual feedback from it, in relation to the positions of our head, our shoulder, our elbow, and our wrist. A very large part of our understanding consists in finding analogous conceptions of a situation and mapping the relevant facets of it onto the metaphor. Many brain structures capable of this kind of operation have been found.

Regards, Synaesthesia
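A toy sketch of the vector-transformation idea, with made-up link lengths: the position of the hand follows from the shoulder and elbow angles by chained vector arithmetic, which is the kind of operation those brain structures are proposed to perform.

```python
import numpy as np

def hand_position(shoulder, elbow, upper_arm=0.30, forearm=0.25):
    # Each link contributes a vector; the hand is their (transformed) sum.
    elbow_pos = upper_arm * np.array([np.cos(shoulder), np.sin(shoulder)])
    return elbow_pos + forearm * np.array(
        [np.cos(shoulder + elbow), np.sin(shoulder + elbow)]
    )

print(hand_position(np.pi / 4, np.pi / 6))  # joint angles in radians
```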
06-23-2002, 08:41 AM | #165 |
Regular Member
Join Date: Feb 2002
Location: Home
Posts: 229
excreationist...
"Yes this is true but I think in the far future it may be possible to monitor all of the neurons in the brain quite regularly (e.g. 100+ Hz). The hard problem would be to decode the information inside our brain. It would be theoretically possible to decode visual-spatial thoughts and summarize them using 2D and 3D pictures, etc. So it would basically be about finding the function of neurons - and translating the information they contain into other information (like video images or sound) in real-time. ... but theoretically it may be possible in the future to decode the information in the brain more directly and show how conscious experiences can all be precisely accounted for. (i.e. they would be able to read people's minds.... :eek " Before that eventuality, there remain certain philosophical difficultes. Suppose a sophisticated machine is attached to my brain which (in real time) provides a stream of conscious interpretation of what I am experiencing, both inwardly and outwardly. Apparently for your outer experience you would provide some audio-visual (and perhaps tactile, or other sensual) representation. To make this work, you would have to produce a realistic simulation of what is actually being observed and do so without the benefit of knowing what it is that is being experienced. As far as inner experience goes (such as thoughts, feelings, and the like), I suspect this cannot be simulated, but speculatively, I can imagine that it can be prompted, much the same way that I can get you to understand the pain I feel if I inflict the same kind of pain I'm having on you. However, again it would have to be an exact simulation, one in which it would not be possible for you to think you are other than the person who is being simulated. Thus, the simulation would have to suspend everything about the person you were and implant a different brain into the person. But this too seems implausible since it requires being in the same physical position in space and time, oriented in the same way toward the world, with exactly the same physical features. Thus, not only is it a complete simulation of the brain but it requires a complete simulation of the entire universe around which and about which the brain is occupied. I do not foresee this future. However, I might be able to see a future in which an understanding of brains could help me correct deficiencies I have. If I might ask a question about the applet. Would the applet produce the same neural configuration when it has learned something each and every time it goes through the learning process? Would you expect each of our own neural networks to have learned what a cat is (or what counts as a cat) in the same way, such that each of us has exactly the same neural configuration. If not, this sophisticated machine would have to determine the general pattern from each of the individual patterns that each of neural connections generate when we've learned what a cat is. The same would be the case for all concepts. "You can just count without counting anything in particular... but you would have learnt earlier that the pattern of number words are associated with quantities of objects. If you had never learnt that number words are used to count objects then number words would be meaningless words." I suspect you are wrong that before we have the ability to count objects, we must have learned earlier that the "pattern of number words are associated with quantities of objects." 
Kids seem able to provide sequence of numbers (i.e., 'one', 'two' 'three', ...) before they understand that these numbers can be applied to quantities, such as "how old are you?" which is one of the first applications of quantity. Indeed, it is often a fascinating experience. We do this first by associating a configuration of fingers to represent one or two or three, which they can hold up when asked the question of how old they are. This is not thought to be a quantity until we first get them to recognize that they are two (or three) today but will be three (or four) on their upcoming birthday. And this might not be enough. When a number is a quantity, they will associate it with the results of a count. This occurs quite late, I think, about the time they are able to learn to read. In the meantime, however, they learn the pattern of numbers that go into the count. It is then fair to say that until then numbers are meaningless (they don't mean anything -- they are merely sounds). However, the pattern remains and represents information that can be put to use by associating it with something meaningful, like an aggregate of objects. (Note as well that your applet may be able to learn a pattern, but the pattern learned and its information content has no meaning beyond that. Indeed, this is Searle's objection generally, that computer models have no understanding -- they attach no meaning to what they are doing (if it can be said that it does anything at all). "Yes. The moon is made up of a huge number of sub-atomic particles. We are only sensing some of the photons that are reflected off of the Moon's surface which came from the Sun." I suspected this is how you would respond. First, I can't see how you can call this an "approximation." It sounds to me like we are not seeing the moon at all, according to your theory. I had the impression that sub-atomic particles were unobservable. Secondly, when asked whether we see the moon, we don't ordinarly respond by saying that we see only a crescent shaped object in a certain direction away from us, though certainly this may be more accurate from your viewpoint. There are an unbounded number of perspectives that have to be considered when sight is analyzed in this way. Husserl, through what he has determined to be a phenomenological reduction that suspends the object and seeks only its image (so to speak), recognizes that our mind has great powers to perceive things despite that we have only a limited perspectives. When taking the natural attitude (i.e., living in the world), sight is of objects seen, not images of the objects.) Indeed, when shown a picture of a group of folks and asked whether we can pick out a certain person from it, we don't reply "here is a picture of that person, pointing to the picture, rather we say "there he his" while pointing. it may not be literally correct, but it is in the nature of perception to perceive objects as objects, not as images. owleye |
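owleye's question about whether the applet would settle into the same configuration every time can be tried directly on a toy perceptron (my sketch, with an invented OR task, not the actual applet): two different random starts typically end in different weight vectors that nonetheless behave identically on every pattern.

```python
import numpy as np

def train(seed):
    # Perceptron learning OR of the first two inputs; the third input is a bias.
    rng = np.random.default_rng(seed)
    w = rng.normal(0.0, 1.0, 3)
    data = [((0, 0, 1), 0), ((0, 1, 1), 1), ((1, 0, 1), 1), ((1, 1, 1), 1)]
    for _ in range(100):
        for x, target in data:
            x = np.array(x, dtype=float)
            out = 1.0 if w @ x > 0 else 0.0
            w += 0.1 * (target - out) * x
    return w

w1, w2 = train(seed=1), train(seed=2)
print(w1, w2)  # the two final weight configurations generally differ...
for x in [(0, 0, 1), (0, 1, 1), (1, 0, 1), (1, 1, 1)]:
    x = np.array(x, dtype=float)
    print(int(w1 @ x > 0), int(w2 @ x > 0))  # ...yet their behavior is identical
```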
06-23-2002, 10:23 AM | #166
Guest
Posts: n/a
owleye,
Quote:
First, neural networks like the Java one are deterministic. However, they are quite unpredictable. Imagine a kind of landscape of possible initial network configurations. The landscape is composed of N+1 colors: one color for each of the N possible configurations that produce a correct answer, and one color (say black) for any incorrect answer. You'll notice a few things about almost all of the useful networks. One configuration that gets to the right answer might be able to tolerate quite a lot of change before producing a wrong answer, so there will be a large color area with only a few black spaces. In other realms of configuration space, there will be isolated color areas and large reaches of black.

The major disadvantage of simple neural networks (even large, simple ones) is that it's hard to tell in advance which initial configurations will produce the best result. You need a feedback mechanism which can usefully modify the initial conditions to improve the odds of producing useful results.

Quote:
At any rate, as you point out, even before they understand the measure of magnitude as related to a number line, children understand things like magnitude; they can often count, and they have sophisticated notions of comparison. In short, they have almost all the meaning; they just haven't put it all together.

Quote:
Regards, Synaesthesia
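A toy illustration of the "landscape" picture above (the loss surface here is invented for the purpose): plain gradient descent is fully deterministic, yet which minimum it settles into depends entirely on the starting configuration, so different starts can end in different basins.

```python
import numpy as np

def grad(w):
    # Derivative of the bumpy loss surface loss(w) = sin(w) + 0.1 * w**2,
    # which has more than one local minimum.
    return np.cos(w) + 0.2 * w

def descend(w0, lr=0.05, steps=2000):
    w = w0
    for _ in range(steps):
        w -= lr * grad(w)  # deterministic update, no randomness anywhere
    return w

for w0 in np.linspace(-10.0, 10.0, 9):
    print(f"start {w0:6.2f} -> settles near {descend(w0):6.2f}")
```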
06-23-2002, 01:08 PM | #167
Veteran Member
Join Date: May 2001
Location: US
Posts: 5,495
Quote:
As you say, in a huge number of ways.... I think we are concurring that the "token" or "symbol" being a language representation is arbitrary (9, nine, neuf, etc.). However, to be meaningful you have to know how to decode these tokens into an actual example (axiomatic concept) of the quantity nine. But the "quantity nine" is abstract (i.e., real but invisible), and I believe we will find instances of it in the brain when we understand its workings.

I believe the "normal" processing of color in the brain is already well established, and the detection of a specific color can be mapped to a complex of cells. It seems clear to me that the activity of these cells is carried out in context, with visual and aural analysis giving context to the color detected. "Red" on its own would be as meaningless as "nine" on its own. This being the case, both signs and the transmission of signs using signals are necessary for sensory processing.

Cheers, John
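A minimal sketch of the sign/quantity point, with a made-up token table: several arbitrary signs decode to one underlying quantity, and arithmetic operates on the quantities, never on the signs themselves.

```python
# Hypothetical table mapping arbitrary tokens to the quantity they denote.
TOKENS = {"9": 9, "nine": 9, "neuf": 9, "IX": 9}

def decode(token):
    """Decode a sign into the quantity it stands for."""
    return TOKENS[token]

assert decode("nine") == decode("neuf") == decode("IX") == 9
print(decode("9") + decode("nine"))  # 18: addition applies to quantities, not signs
```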
06-23-2002, 05:59 PM | #168
Veteran Member
Join Date: May 2001
Location: US
Posts: 5,495
Quote:
Um, so do you have a recommendation or not?

Cheers, John
06-23-2002, 06:55 PM | #169
Veteran Member
Join Date: May 2001
Location: US
Posts: 5,495
Quote:
Quote:
Quote:
Quote:
Quote:
Quote:
Quote:
Quote:
This is why I've developed an ontology that starts with the experience of the observer rather than a "claim". Quote:
Quote:
Quote:
Quote:
Quote:
Quote:
Taken from the summary provided on p. 277 of Chapter XIII, Volume VI, of Copleston's A History of Philosophy. I do have a copy of the "Critique of Pure Reason" but cannot locate it at this time.

Cheers, John
06-23-2002, 08:59 PM | #170 |
Regular Member
Join Date: Feb 2002
Location: Home
Posts: 229
Synaesthesia...
"First, neural networks like the java one are deterministic. However, they are quite unpredicable." May I infer that our own neural network would be produce unpredictable neural configurations on having learned what something is? That is, if we are unable to determine in advance what configuration of neurons would be developed on it having learned what a cat is, how would it ever be possible for a study of the brain alone to determine what it had learned? This was the philosophical problem I was trying to introduce for those who think the brain and mind are so closely related that a mapping can be done between them. "No, I wouldn’t say quite that late in life do they attach meaning to numbers." You could be right but my point was that at least part of the structural element of counting involves the ability to put elements in sequence (in this case sounds). ABCs and 123s are learned in preschool often through a musical rendering. If I understood excreationist's and John's view on this, they maintain that numbers exist only in the context of quantifiable objects. I can't say for sure, however, since one or both of them conceded that counting can be performed without the need to apply it to objects. Whoever admitted this, went on to claim that such counting had to be preceded by learning to apply it to objects. (I suppose this is because they are empiricists, who think that mathematical propositions are learned from experience. Perhaps you are as well.) "Computers have very narrow meaning networks these days. This is an objection to the limited architecture of today’s AI, it is not objection to AI in general." I take it then that you have resolved in your mind what it would require for a machine to know what it is doing? Is the Turing test your guide? owleye |