Freethought & Rationalism Archive. The archives are read only.
06-23-2002, 09:28 PM | #171 |
Regular Member
Join Date: Feb 2002
Location: Home
Posts: 229
John...
"However, to be meaningful you have to know how to decode these tokens into an actual example (axiomatic concept) of the quantity nine." What would be an "actual example" of the quantity nine? If instead of nine I'd chosen some huge number -- say with billions of digits in it, or even a transcendental number, or a square root of 2, would the decoding process be the same, albeit producing a different "actual example?" "But the "quantity nine" is abstract (i.e. real but invisible) of which I believe we will find instances in the brain when we understand its workings." "Real but invisible" sounds more like you believe in ghosts. In any case, I think your idea is incoherent. Nine, as a concept, is undoubtedly stored as a rule of a certain kind, and not an instance. That this is how it would be stored is especially true in consideration of very large numbers. Surely we don't store every number in the brain. "I believe the "normal" processing of color in the brain is already well established and the detection of a specific colors can be mapped to a complex of cells." As you probably are aware, every culture has its own color breakdown. I think it is the Hopi Indians who had at one time only four colors, two of which were white and black. In addition, certain eskimo cultures have numerous color categories for what many of us might call white. For the shopper today, the number of color categories is getting to be out of hand, what with all the names given to them by the paint producers. That we have a good understanding of how the brain processes colors, I have considerable doubt. Perhaps you could cite a source for me. "It seems clear to me that the activity of these cells is carried out in context with visual and aural analysis giving context to the color detected. "Red" on its own would be as meaningless as "Nine" on its own. This being the case, both signs and the transmission of signs using signals is necessary for sensory processing." Could you hazard a guess on what might be a sign (or signal) of red? I ask this only because it doesn't seem obvious that when I'm perceiving a sunset I'm seeing a sign of red. Rather I'm seeing a red sunset. owleye |
06-23-2002, 11:12 PM | #172 |
Regular Member
Join Date: Feb 2002
Location: Home
Posts: 229
John....
"If what is being counted is entries in a list of names, are you saying the planets comprise a list of names? The list is not even a set of sense data of the planets!!" No. The list is a list of planets. In enumerating the entries in the list by number, I'm counting the planets. This is how our mind works. Now, it's true that we can focus our attention on any aspect of this counting, but if our intent is to count the planets, a list of them will do quite well. If we didn't know the number of days in a week, but knew there names, we could count them by ordering the names in a sequence and counting them off one by one. "So? I think my explanation a lot closer description physical and phsioloogical fact than counting a list of names." This may be true, but it prevents such an explanation from explaining what a mind is capable of doing. "How about mirages and other optical illusions? How about number and geometry? How about color? In fact, how about an understanding of brain function being useful to knowing the limits of our "natural" perception?" Physiology is a useful science and so is mathematics. Presumably there is a science of color. Beyond that, however, I don't see any of these sciences helpful to determining what a mind can do. "How about x-ray images, for example, are they not a useful tool developed from an understanding that our "natural" senses do not detect all of the electromagnetic spectrum?" Undoubtedly. "Not according to some physicists who maintain that there are quanta and Xeno's paradoxes exist only in the mind, not in physical reality." My example stands unrefuted, however. If space-time is quantized, certain topological difficulties will have to be overcome, particularly the so-called Weyl tiling problem. With respect to Zeno's paradoxes, they would be resolved if they were "only in the mind." It is because Zeno's paradoxes referred to physical reality that their force was displayed. Motion must be rejected if there are gaps in space. Do you reject motion? "How about the word "mind" as the "token" that represents the processes of the brain? This would fulfil the criteria "something physical and something logical"." I think it is silly to call the mind an abstraction of the brain as you have called it. This misrepresents what the mind is in relation to the brain. Moreover, a "token" intends to reflect a physical representation of a type, and a type is what is abstract. Minds probably are the only things that can understand types (i.e.abstract entities). However, this does not mean that a mind is a type. "Get someone to check that out in a room with red lighting. Cats can see in four colors - I wonder what their mental equivalent of blue is?" Are you disputing that the color of my eyes are blue? "But how can you guarantee 100% your ontological claim if you were shown a very good hologram, for example?" Huh? I stipulated that it was a coiled rope. I didn't say I knew it was a coiled rope. I suspect the rationale for the above is that in your theory coiled ropes do not exist. "I think this comes back to our inability to know somthing directly, it can only occur through a mind/body border (or brain/body if you prefer that it is the brain that knows). "This is why I've developed an ontology that starts with the experience of the observer rather than a "claim"." What exists in your ontology? "Thanks, this was helpful. Was there a particular reason you mentioned the conscious, outside reality but not the subconscious?" Undoubtedly there is a vast realm of sub-conscious activity. 
I'm not happy about considering such activity mental activity, though I concede it probably plays a crucial role in determining what consciousness attends to. I tend to think, along with others, that sub-conscious activity serves the function of determining the relevance or significance of new data and discarding it if it lacks either. "But we went through the dictionary thing already and I still can't understand why you have difficulty with the way I'm using the word "abstract" so I was trying to get at your own definition of concrete. This harks back to an earlier point I was trying to make that "concrete" excludes some entities that are "physical" but clearly measurable and not abstract (such as quarks)." A quark is abstract only if we are not referring to a particular instance of one. The same is true of atoms and molecules. We know their properties generally, but unless we are referring to a concrete instance of one, we would consider it abstract. Abstraction derives from the principle of how we mentally eliminate from a set of particular instances all the properties that are peculiar to their individuality (e.g., their position in space, or their relative motion or their energy, or the exact number of hairs on their head, etc., etc.,) and retaining the properties that seem essential to all of them (or at least are common to all of them). The prepostion that follows the verb abstract in this use is "away." That is, we abstract away the unessential properties and relations, retaining the general or common properties. We keep the whiskers, but toss the color, of any given cat, in the process of learning what a cat is -- i.e., of developing its concept. "Glad to try. Some gods might (phenomenally) exist but to be known they'd have to move across the border from the top left to lower left quadrant." Huh? To exist phenomenally, I would need to be able to perceive it. To exist as a phenomenon is to have it sensibly appear to us in some way. "But, for example, if a god were defined as "the unknowable" I take this to be physically unknowable and thus a mere product of the imagination." Unfortunaely you didn't clarify what you mean by "real" which was what I was seeking. "But I think there are degrees of subjectivity which are diminished by repeatable observations, experimentation etc." I suspect this implies there are degrees of objectivity as well, which is increased by "repeatable observtions, experimentation, etc." Why pick on subjectivity rather than the more obvious objectivity for representing truth? "Now, if we could get "outside" our minds then perhaps we could be objective about how we perceive." Good luck. I think it is a terrible mistake to think this is the path to a sound theory of the mind. "The cognitive function of the categories lies in their application to objects as given in sense intuition, that is, to phenomena. Things-in-themsleves are not, and cannot be, phenomena. And we posses no faculty of intellectual intuition which could supply objects for a meta-phenomenal application of the categories. "Taken from the summary provided on P.277 of Chapter XIII Voloume VI of Coppleston's The History of Philospophy. I do have a copy of "Critique of Pure Reason" but cannot locate it at this time." Though I think Copelston's history is problematic, the above reference is undoubtedly a good one. However, I'm not sure you understand what is being said. Let me ask you this. 
Do you think we have an intellectual intuition that is able to bypass the senses and access things as they are in themselves? To do so, according to Kant, you would not be able to regard them as having magnitude, as being located in space or time, or being in motion, having causal properties, or be comprised of matter or energy. The above properties, according to Kant, are given to us only from sensible intuition, not from any sort of intellectual intuition. (Don't forget. Kant is an empirical realist, though it was only during the critical period that he finally shed Leibniz's monad (thing-in-itself) entirely, relegating it to an idea of reason.) owleye |
06-24-2002, 04:17 AM | #173 |
Veteran Member
Join Date: Aug 2000
Location: Australia
Posts: 4,886
owleye:
"Before that eventuality, there remain certain philosophical difficulties. Suppose a sophisticated machine is attached to my brain which (in real time) provides a stream of conscious interpretation of what I am experiencing, both inwardly and outwardly. Apparently for your outer experience you would provide some audio-visual (and perhaps tactile, or other sensual) representation. To make this work, you would have to produce a realistic simulation of what is actually being observed and do so without the benefit of knowing what it is that is being experienced."

Tactile sensations can be represented using multiple 3-D images of the body that are colour-coded to show different pressures, textures, temperatures, etc, pushing on the touch receptors. I'm talking about presenting the information in a fairly clinical, detached way.

"As far as inner experience goes (such as thoughts, feelings, and the like), I suspect this cannot be simulated, but speculatively, I can imagine that it can be prompted, much the same way that I can get you to understand the pain I feel if I inflict the same kind of pain I'm having on you."

There could be numerical readings which show how desirable or undesirable the signals are. These signals would be attached with goals, memories, external experiences, etc. The person would be limited to doing what had the most desirable signal at that point in time. Basically the contents of their brain would be summarized so that other people could see all the main things - their priorities/focus, etc.

"However, again it would have to be an exact simulation, one in which it would not be possible for you to think you are other than the person who is being simulated. Thus, the simulation would have to suspend everything about the person you were and implant a different brain into the person. But this too seems implausible since it requires being in the same physical position in space and time, oriented in the same way toward the world, with exactly the same physical features. Thus, not only is it a complete simulation of the brain but it requires a complete simulation of the entire universe around which and about which the brain is occupied."

I'm not talking about simulating the brain - just keeping track of what it's doing - the information content. Note that the information can be represented in different ways - how desirable/undesirable things are can be represented with numbers rather than physical pleasure or pain acting on the observer.

"I do not foresee this future. However, I might be able to see a future in which an understanding of brains could help me correct deficiencies I have."

Well I agree with what you were saying about not being able to fully experience someone else's experiences without forgetting about yourself. But I'm just talking about reading people's thoughts, etc, not fully reliving every part - and only every part - of their experiences.

"If I might ask a question about the applet. Would the applet produce the same neural configuration when it has learned something each and every time it goes through the learning process?"

Well firstly, it has random noise running through it, and that affects how it learns things - in minor ways. The +'s and -'s would still be in the same places though. If you train it under lots of noise the "weights" will become very strong in order to stand out from the noise. Without any noise the weights (+ and -) are only quite weak.
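To make that concrete, here is a rough Python sketch of the kind of network I'm describing - 8 on/off inputs feeding 8 threshold neurons through a grid of 64 weights, trained on noisy inputs. The learning rule, threshold and noise level here are just illustrative guesses on my part, not the applet's actual code:

import random

N = 8                                    # 8 on/off inputs and 8 threshold neurons -> 64 weights
weights = [[0.0] * N for _ in range(N)]  # weights[j][i] connects input i to neuron j
threshold = 0.5
noise_level = 0.2                        # chance of flipping each input bit during training

def fire(inputs):
    # a neuron fires (1) only if the weighted sum of the on/off inputs reaches its threshold
    return [1 if sum(w * x for w, x in zip(row, inputs)) >= threshold else 0
            for row in weights]

def train(pattern, target, rate=0.1):
    # flip some input bits at random (the "noise"), then nudge the weights
    # toward producing the target outputs (a perceptron-style update)
    noisy = [x ^ 1 if random.random() < noise_level else x for x in pattern]
    out = fire(noisy)
    for j in range(N):
        for i in range(N):
            weights[j][i] += rate * (target[j] - out[j]) * noisy[i]

pattern = [1, 0, 1, 1, 0, 0, 1, 0]
for _ in range(500):
    train(pattern, pattern)   # teach it to reproduce the pattern at its outputs
print(fire(pattern))

With noise_level at 0 the weights only need to creep just past the threshold, but with noise added the updates keep happening and the surviving weights end up much stronger - roughly the effect described above.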
"Would you expect each of our own neural networks to have learned what a cat is (or what counts as a cat) in the same way, such that each of us has exactly the same neural configuration?"

No... our neurons attach to each other in a pretty chaotic way. I think they each connect to about 10,000+ others. In that applet you can simulate neural networks that have a different structure... e.g. by disabling some of the weights along the main diagonal, you are decreasing the number of inputs to the 8 neurons and the network will have to learn things in a different way than if those weights were there - there will be +'s in some of the places where there were -'s. So if the neural networks have even a slightly different structure (one neuron has a different number of inputs, etc) then they would learn in different ways. And this is true of our brain - our brain cells die - and are born - while we learn new things.

"If not, this sophisticated machine would have to determine the general pattern from each of the individual patterns that each of our neural connections generates when we've learned what a cat is. The same would be the case for all concepts."

Yeah, they learn generalized patterns. The thing with the rule of 78 is that it is fairly understandable to humans what is going on. e.g. if there is a "1" as an input, there is always a "1" as an output, etc... so there is a line of +'s along the diagonal. The part where the 7's and 8's are is reversed along the other diagonal. Along the bottom 2 rows (neurons) it sees if it is a 147 and if so the output is a 7, rather than an 8.

Here is how neural networks are normally drawn (that's a 3 layer one, I think): http://www.geocities.com/SiliconValley/2548/arch.gif
My applet had 8 inputs, 8 neurons and 8 outputs. It would be hard showing 64 weights on a picture like that though.

BTW, here is a character recognition network, which is a good introduction to neural networks: http://www.geocities.com/SiliconValley/2548/ochre.html
By default there are 192 greyscale pixel inputs (the pictures) to 8 input neurons, so those input neurons would have 1536 inputs in total. There are 12 hidden neurons by default (in the second layer)... each of those has an input coming from each of the 8 input neurons = 96 inputs going to the second (hidden) layer. The output layer would have 10 neurons (one for each of the outputs) and 12 inputs (from the hidden layer) = 120 inputs. So by default there are 1536 + 96 + 120 = 1752 weights...! (Mine had 64 weights.)

That neural network seems to work in an analog way where there are non-discrete inputs. On the other hand, my inputs were either on or off and the neurons either fired or didn't fire depending on whether their threshold had been reached or not.

Here is some more introductory information about neural networks: http://blizzard.gis.uiuc.edu/htmldocs/Neural/neural.html
Apparently our brain uses both on/off and analog (non-discrete) type neurons.

I think that digit recognizer would work much better if it had more examples of each digit. Then it would be much better at guessing drawings that it hadn't seen before. (Although it treats all inputs the same whether it has seen them before or not - animals would react differently; they would try to learn more about unexpected inputs and so master that new domain.)
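By the way, the weight arithmetic above is easy to check mechanically. A quick Python sketch using the default layer sizes quoted above (this only counts the weights, not any thresholds or bias values the neurons might also have):

def weight_count(layer_sizes):
    # in a fully connected feed-forward net, every unit in one layer
    # sends a weight to every neuron in the next layer
    return sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

# 192 greyscale pixels -> 8 input neurons -> 12 hidden neurons -> 10 outputs
print(weight_count([192, 8, 12, 10]))  # 1536 + 96 + 120 = 1752

# my applet: 8 on/off inputs feeding straight into 8 neurons
print(weight_count([8, 8]))            # 64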
"...(Note as well that your applet may be able to learn a pattern, but the pattern learned and its information content has no meaning beyond that."

Well if you had a much more complex neural network and hooked it up to motors and a TV camera and touch sensors and gave it "desires" (like keeping its battery fairly charged) things would be different. By being self-motivated/self-directed, the contents of a neural network can have meaning (or at least a "personal" function).

"Indeed, this is Searle's objection generally, that computer models have no understanding --"

Well, tiny neural networks would only be comparable to things like bumblebees - they're pretty intelligent but obviously don't understand things at a human level. Remember that our brain has about 100,000,000,000 neurons that each have about 10,000 (or 50,000?) inputs.

"they attach no meaning to what they are doing (if it can be said that it does anything at all)."

Yeah, my applet doesn't really "do" anything on its own. On the other hand there are things like "a-life" that use the patterns they've learnt to seek their desires (e.g. satisfy hunger, avoid obstacles, etc).

"I suspected this is how you would respond. First, I can't see how you can call this an "approximation." It sounds to me like we are not seeing the moon at all, according to your theory. I had the impression that sub-atomic particles were unobservable..."

We can see the location of many of the sub-atomic particles that make up the Moon because photons collide off of them. So we see a "reflection"(?) of the Moon rather than the Moon itself... but since most things just reflect light like that, we say that we just see the object itself. In the case of silhouettes, like a tree in the sunset, hardly any light is reflecting off of it; it is blocking the light. We see an absence of light in the shape of the tree. Basically we can detect large scale objects using photons to a limited accuracy... an approximation of the complexity (zillions of particles, etc) that is out there.

[ June 25, 2002: Message edited by: excreationist ]
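Just to illustrate the "seek their desires" idea above, here is a toy Python sketch of an a-life-style agent that simply does whatever has the most desirable signal at that moment. The drives, sensors and numbers are all made up for illustration; real a-life programs are of course far more involved:

def choose_action(state):
    # desirability of each candidate action given the current "body" state;
    # negative values play the role of pain signals (things to avoid)
    signals = {
        "recharge battery": 100 if state["battery"] < 0.3 else 10,
        "avoid obstacle":    80 if state["obstacle_near"] else -5,
        "wander":            20,
    }
    # the agent is limited to doing whatever has the most desirable signal right now
    return max(signals, key=signals.get)

print(choose_action({"battery": 0.2, "obstacle_near": False}))   # recharge battery
print(choose_action({"battery": 0.9, "obstacle_near": True}))    # avoid obstacle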
06-25-2002, 05:37 PM | #174 |
Veteran Member
Join Date: May 2001
Location: US
Posts: 5,495
owleye:
Quote:
Quote:
Quote:
Quote:
Quote:
Quote:
In the above you state that you saw a coiled rope, but that you did not know it was a coiled rope. The sense data equivalent to a coiled rope exists in your mind/brain. It is possible for an "illusory" coiled rope to be perceived because our senses can be fooled in a number of ways, ranging from photographs through models to a dream that seems to be "real". The observer comes to knowledge of the coiled rope through a more thorough examination involving touch etc. to establish other properties of the object concerned. This is the concept of "triangulation" of the senses I referred to earlier in this thread.
Quote:
Quote:
Quote:
Quote:
Quote:
Quote:
Quote:
I acknowledge that a truth can be said to be objective within the bounds of the observations and conditions under which it is observed/deduced. Outside of these bounds that truth becomes subjective. Quote:
Quote:
Cheers, John
06-25-2002, 05:52 PM | #175 | |
Veteran Member
Join Date: May 2001
Location: US
Posts: 5,495
Owleye:
Quote:
1. An instance of a quantity may exist only in the context of a set of countable objects (or things).

2. We can discuss this by sharing the concept of quantity, which is at a greater level of abstraction than instances of quantity.

3. I did not state that an object had to be "learned" before it was counted. The object does, however, have to be experienced. Even if you are counting imaginary things in the sky, you are still counting imaginary things in the sky (and one can clearly experience one's own imagination).

Cheers, John
06-25-2002, 07:40 PM | #176 |
Veteran Member
Join Date: Aug 2000
Location: Australia
Posts: 4,886
owleye:
"[to Synaesthesia] ...my point was that at least part of the structural element of counting involves the ability to put elements in sequence (in this case sounds). ABCs and 123s are learned in preschool often through a musical rendering."

Yeah, ABC's and 123's are similar in that way.

"If I understood excreationist's and John's view on this, they maintain that numbers exist only in the context of quantifiable objects."

I'm saying that for someone to properly understand a number, they need to be aware that numbers can be associated with objects. e.g. you might be able to teach a kid these sounds... "eins-zwi-dri-fveer-foonf"... and these sounds are numbers (in German), but the kid won't be able to *use* these numbers unless they understand how those sounds can be associated with quantities of objects!

"I can't say for sure, however, since one or both of them conceded that counting can be performed without the need to apply it to objects."

Reciting sounds is different from being able to associate those sounds with quantities. The latter is required to properly understand what numbers are all about.

"Whoever admitted this, went on to claim that such counting had to be preceded by learning to apply it to objects."

No, you can learn to recite the sounds first... e.g. you could teach someone those German sounds without telling them what those sounds mean.

"(I suppose this is because they are empiricists, who think that mathematical propositions are learned from experience..."

Yeah, of course we learn maths from experience!
06-25-2002, 08:07 PM | #177 | |
Veteran Member
Join Date: Mar 2001
Posts: 2,322
Quote:
06-25-2002, 09:54 PM | #178 |
Regular Member
Join Date: Feb 2002
Location: Home
Posts: 229
"Tactile sensations can be represented using multiple 3-D images of the body that are colour-coded to show different pressures, textures, temperatures, etc, pushing on the touch receptors. I'm talking about presenting the information is a fairly clinical detached way."
The reason for suggesting the use of tactile sensors was as an alternative way of obtaining information about the world -- i.e., where things are sensed through the use of a cane or our hands while in the dark. It was not intended to provide some representation of how we felt from the touching experience. This would be covered under the inner experience aspect of consciousness. "There could be numerical readings which show how desirable or undesirable the signals are." How are signals determined to be desirable or undesirable? At first blush this would seem to require feedback from the subject telling the experimenter of his or her experience? "These signals would be attached with goals, memories, external experiences, etc." Similarly, these determinations would seem to require feedback from the subject telling the experimenter of his or her "goals, memories, or external experiences, etc." "The person would be limited to doing what had the most desirable signal at that point in time." I have the feeling that "the most desirable signal at that point in time" would be _defined_ to be what the person is doing at that time. It wouldn't be informative about what was desirable or undesirable. "Basically the contents of their brain would be summarized so that other people could see all the main things - their priorities/focus, etc." If you are merely going to summarize "the contents of a subject's brain", why not do so from the standpoint of examining their behavior? What is it that is gained by your brain scanner? "I'm not talking about simulating the brain - just keeping track of what it's doing - the information content. Note that the information can be represented in different ways - how desirable/undesirable things are can be represented with numbers rather than physical pleasure or pain acting on the observer." How would a thought be represented? If I represent a particular pain as a coordinate in pain space, how does this help us understand the experience of the pain that has this coordinate? That is, how would you be able to determine the mapping from the brain to the mind without any first hand experience of the mind in the first place? What, in an objective sense, does this coordinate mean? Is the pain coordinate felt by person A the same as person B? "But I'm just talking about reading people's thoughts, etc, not fully reliving every part - and only every part - of their experiences." Would the reading of a person's thoughts be in some language? If so, does that imply that thoughts do not exist except through language? In any case, how can a thought be decoded in the absence of a world in which the thought is about? I just don't see how it can be determined without the context in which the thought is given? Suppose I'm reading a book and at the same time having difficulty understanding what I'm reading? The brain scanner would somehow have to recreate the experience of my reading the book in order to determine the relevance of the thoughts I have about it? Similarly if I'm driving along the road, the brain scanner would have to recreate both the inner and outer experiences, connecting them in such a way that a story can be told of what the thoughts are about. This is what I was referring to when I brought in the simulation. 
With respect to your response to my question about the applet, I raise it because if you expect to analyze the brain itself without also interrogating the person having that brain, I suspect you would not be able to accomplish what you intend to accomplish simply on the basis that each person stores information about an object in different ways. One cannot merely look at neurons and their configuration and determine what they represent? Indeed, there are certain philosophical difficulties having to do with what the reference of a concept/term is. Quine has written, for example, of the indeterminacy of reference. (The classical example is what does 'gavagai' refer to.) "We can see the location of many of the sub-atomic particles that make up the Moon because photons collide off of them." I take it then that the Moon exists in addition to sub-atomic particles. "So we see a "reflection"(?) of the Moon rather than the Moon itself... but since most things just reflect light like that we say that we just see the object itself." My reading of this is that it does not support the view that we see an approximation to the moon? "In the case of silouhettes, like a tree in the sunset, hardly any light is reflecting off of it, it is blocking the light. We see an absence of light in the shape of the tree. Basically we can detect large scale objects using photons to a limited accuracy... an approximation of the complexity (zillions of particles, etc) that is out there." Note that we don't see the photons or zillions of particles. We see the moon. Or if were focussing on some perspective we have, we can say we see a crescent shaped object in the sky. However, in the normal mode of seeing -- what Husserl calls the natural attitude -- objects seen are seen as real (i.e. natural) objects. Occasionally, of course, we can be fooled and when this occurs we may report that we see what looks like the moon. In no case would we conclude that if what we are seeing is the moon would it be considered an approximation to the real thing. It might be one thing or another, but it wouldn't be approximately one or approximately the other. owleye |
06-25-2002, 11:38 PM | #179 |
Regular Member
Join Date: Feb 2002
Location: Home
Posts: 229
John...
"Again, what is doing the counting, "enumerating the entries"? In a previous post you supposed it was some kind of machine. If so, where is it?" I understand this to be a question about a subject doing the counting. Though I didn't suppose it was a machine, rather I indicated that it could be a machine and it could just as easily be a human that is doing the counting, I don't quite see the relevance of where the subject is that is doing the counting. The subject does have a location, certainly, (and so likewise does the subject's brain). However, this tells me nothing about the location of the subject's mind. It is similar to the problem of determining the location of software. Certainly hardware can be located. But what makes you think that software has a location? "Any suggestions as to how we might investigate the mind and its relationship to the body etc?" I think first we should try to understand the mind before determining its relationship to the brain. "Are you saying that Zeno's paradoxes only exist in the mind?" I think I said just the opposite. "How is my model a misrepresentation?" It is fair to say that you may have represented the brain's neural network through a process of abstracting its essential elements. However, this is not the same thing as saying the mind is that abstraction. The mind is capable of understanding the abstraction of a neural network that you are representing, but this is not what the mind is. An abstraction can be an object of thought. It is not a thought. "I have no idea what color your eyes are - at least we both seem to have empirical roots." What sort of evidence would be required for such a determination? "In the above you state that you saw a coiled rope, but that you did not know it was a coiled rope." Not so. I saw a coiled rope as a snake. I guess this is a tricky sentence for you. Perhaps I should have broken it down into. 1. The object before me is a coiled rope. 2. I saw this object as a snake. "The observer comes to knowledge of the coiled rope through a more thorough examination involving touch etc. to establish other properties of the object concerned. This is the concept of "triangulation" of the sense I refered to earlier in this thread." For probably the third time, it was not a question of my knowledge that it was a coiled rope. I stipulated that it was a coiled rope that I saw as a snake. Obviously I was fooled. However, if you insist on it being an epistemological quesiton about the coiled rope and you want to solve the problem of error using 'triangulation', then I suggest reading Donald Davidson who uses this term in a particular way. If I have it right, it basically comes down to the existence of a teacher-learner relationship with respect to objects of experience. One needs a standard bearer. "The first two things I introduce are the ontology and the symbolic representation of (our knowledge of) its existence. The first statement of the ontology is "This ontology exists"." Ontology is occupied with existence itself. As such, your opening statement is circular. It is analogous to responding to a question about how you account in your theory of gravity for the wide differences in high and low tides in some places in the planet compared with others with an opening statement that says that "This theory of gravity differs widely in some places in the planet compared with others." "This may explain some of our disagreements, I definitely consider this actvity mental (i.e. acts of the mind). 
To me this is more consistent with our body of knowledge and empirical experience - consider the case of children before (generally) the age of two that have no conscious memory yet are clearly capable of mental acts." That children age two have no conscious memory is an outrageous claim. Do you think they cannot recognize their parents? How would this be possible without memory of their parents? "I am considering the actual instance of one - how do you know its "concrete"? It can pass through other matter, including (literal) concrete, so doesn't the use of physical encompass concrete things but not the other way around?" I've heard enough. Obviously you aren't going to get it. Too bad. "In refering to an "unknowable god" I was giving an example of something that could not be real. That you can imagine something is a "real" experience but it does not follow that there is a physically "real" corrolary for everything you imagine." So. Not everything is real, just as not everything exists. These can be exemplified by "god is not real" and "god does not exist." In addition to the concept of god existing, you allow that the experience of god is/can be real. "IMO an absolutely objective truth is an imaginary concept that does not participate in reality." This doesn't make a lot of sense to me, but I won't try to persuade you otherwise. Perhaps its because we have completely different ideas of what being objective and subjective mean. Ordinarily speaking, objectivity relates to the objects of experience whereas subjectivity relates to the subjects of experience. If your theory holds that objects do not exist apart from a subject, I gather you would answer the question about trees in the quad not existing unless someone is observing them. They rather pop into existence at the instant they are observed. "I acknowledge that a truth can be said to be objective within the bounds of the observations and conditions under which it is observed/deduced." This rather confirms the view that objects do not exist apart from the subject. "Outside of these bounds that truth becomes subjective." I gather this means only that when we are not observing the tree we only believe the tree remains. Since it could have vanished or a different tree that looks like it substituted in its place, it is not possible to be sure of its continued existence. Of course, ordinarily we would say in this instance that the belief that the tree remained, if it turned out to have been switched, was a false belief. In your "subjective truth" thesis, however, I gather such a belief would be a subjective truth. And, moreover, since it can never be absolutely determined, it can never be objectively true that the tree remained when not bein observed. One may wonder what you think of the those theories which regard the universe as having some age longer ago than there were folks making observations. Did the sun really exist prior to my birth? "But why? Are you proposing that only subjective analysis of the mind is possible? If so, why, pray?" Essentially because no physical thing has intentionality. Physical things are objects and never subjects. Biological organisms may have the additional feature of functionality, though one needs an analysis of functionality to account for it. For example, it might be considered that the function of the heart is to pump blood throughout the body, delivering nutrients to the cells that need it. 
However, another interpretation is that the functionality of the heart is something imposed by the mind that needs to find meaning in the heart pumping. Mechanically, the heart may pump blood or not pump blood. It may be that if the heart fails to pump blood, the organism will die. From a biological viewpoint, it may be important for the heart to fail, just as it may be important for the heart to beat. Dying and living are both part of biology. It is we who make the leap that when the heart doesn't pump blood we would call it a mal-function. It is the way our mind works. Your negative response to my question about intellectual intuition makes me want to inquire what beef you have with Kant. owleye |
06-26-2002, 04:05 AM | #180 |
Veteran Member
Join Date: Aug 2000
Location: Australia
Posts: 4,886
owleye:
I think you're basically right with most of that. About the Moon - maybe my problem was that I thought "I can see the Moon" implies "I can see *all* of the Moon" rather than "I can see *some* of the Moon" (we don't see all of it - like the back of it and individual atoms, etc).

"...If I represent a particular pain as a coordinate in pain space, how does this help us understand the experience of the pain that has this coordinate?"

This is just used to see that the experimental model of their motivational system (desires, etc) is working. e.g. the pain signal from a pin might be -100 and you could tell them to keep on going. It might go down to -50 and you might see that the pain is endured because a pleasure (conforming) outweighs it or a greater possible pain (embarrassment/rejection) outweighs it. It is much more complex than that of course. I think that basically pain is about the compulsion to avoid something. That's it. It could also trigger some instincts like screaming (for help) and making your heart race. So by looking at the values you could see how much the person wants to avoid something... and see how close to the threshold they are so that they stop enduring the pain. The hard bit would be to translate the information that is associated with the pleasure/pain signals. e.g. It might look like this:

awdijoadijasido +20
asdjoasidjo -100
asdkjoqwdoasd +50

The last thing was the most desirable... it would seem like a random sequence of neurons firing. It would be hard to work out what that information is about since information can be very smeared when it is in a neural network. I guess it would be easiest to do the experiment on yourself and see if there are any correlations between the firing pattern and the desirable and undesirable things you had been thinking about. The problem is that you mightn't be aware of most of the things that go through your head during decision making. You'd only be "informed" when you get stuck or when there are some final results. Anyway, don't worry about this brain scanning stuff. It is too long and complicated and I get a headache thinking about it. I don't think it had much to do with the topic...

"That is, how would you be able to determine the mapping from the brain to the mind without any first-hand experience of the mind in the first place?"

No, it would involve first-hand experiments, but when it is good at predicting things, it would show that the model works. (And you wouldn't need to rely on asking the person first-hand.)

"What, in an objective sense, does this coordinate mean? Is the pain coordinate felt by person A the same as person B?"

Some people might be very sensitive to the signals... so they mightn't feel equally uncomfortable. And if one person is filled with pleasure from doing something they wouldn't take much notice of the pain.

"...With respect to your response to my question about the applet, I raise it because if you expect to analyze the brain itself without also interrogating the person having that brain, I suspect you would not be able to accomplish what you intend to accomplish, simply on the basis that each person stores information about an object in different ways. One cannot merely look at neurons and their configuration and determine what they represent."

Yeah, you're probably right. I had some other ideas about this but they seem to have dead-ends.

"Indeed, there are certain philosophical difficulties having to do with what the reference of a concept/term is. Quine has written, for example, of the indeterminacy of reference. (The classical example is what 'gavagai' refers to.)"

Well we know for sure that the neurons connected to green colour receptors take in information about green, and neurons attached to hairs in the inner(?) ear have pitch information, etc. I think in the future they could work further and further into the brain... maybe not very far, but a bit further than the eyes and ears at least.

""We can see the location of many of the sub-atomic particles that make up the Moon because photons collide off of them." I take it then that the Moon exists in addition to sub-atomic particles."

The Moon is the subatomic particles - we just can't see all of it.