FRDB Archives

Freethought & Rationalism Archive

The archives are read only.


Old 04-05-2002, 02:01 AM   #61
Regular Member
 
Join Date: Feb 2002
Location: Home
Posts: 229
Post

DRFseven...

"You find it odd because, to you, the differences account for the free will you think we possess."

And much more besides. In any case, your having a different view of things does not refute what I was offering.

"I find that, as great as the differences are, there is still no freedom from prior causation in our own behavioral mechanisms."

There being no freedom from prior causation is not sufficient to derail free will, imho.

"The thoughts that govern our behavior all come with strings attached; just because many of the strings are conceptual does not make them nonexistent."

As I see it, because thoughts govern our behavior, we are free, and not bound. Indeed, it is very difficult to have our thoughts controlled. We cannot even control them. It is a mark of freedom that this is the case, not of being bound.

"It's not strictly true that I think we don't have choice. It's that I see choices as an evolved mechanism whereby the individual's responses are determined according to previous experiential factors instead of being determined by hard-wired biological factors."

By this definition, cats have the same capacity for choice as humans, since they both learn. I see humans in a rather different light, however: they have reached a greater level of choice than cats, and this choice gives them a kind of freedom that we don't attribute to cats. Cats have no understanding of right and wrong, in any principled sense. They merely act in accordance with their prior conditioning. Humans, by contrast, do have an understanding of right and wrong, in a principled sense. They don't act entirely on the basis of prior conditioning. They act within guidelines that accord with moral conduct (at least they are supposed to once they reach a certain maturity).

"To many people, an individual's choice is not a choice, an individual's freedom is not free, if it is determined by antecedent causes, so what I call a choice may not mean choice to you."

That's right. Your position has no persuasive power if this is what you mean.

"I think a response is a choice if it reflects an evaluation based on the individual's conclusions. Is this what you think?"

I would translate this in the following way.

"A response reflects a choice made if it was based on evaluating or deliberating over a set of choices available."

"Certainly our mental schema belong to us; certainly we think our thoughts and make our decisions; indeed, we have no say in the matter, we are bound by our physiology to do this."

I take the latter clause to be irrelevant.

"I don't deny a bit of this. You are describing how we are bound to "come to" conclusions based on experience; not to freely design conclusions unfettered by bases."

I disagree. I'm describing how we are free to "come to" conclusions. Indeed, in no way are we bound to do so.

"Owleye, the firing of the synapses either are or are caused by (depending upon which view of the neurophysiology of cognition is assumed) the various conceptual schema held by an individual at any point in time."

I'm afraid I'm in total disagreement with either of these interpretations. Neurons are not cognitions, nor do they cause them in any meaningful way. I give my reasons below where you equate neural firings with thoughts.

"Either way, once a thought-provoking stimulus is received by the senses,"

I claim that thoughts are quite independent of sensory reception. We can be thinking of something entirely different from what we are receiving from our senses. Indeed, we can process data received from the senses that we aren't perceiving at all.

"the whole rigamarole that is sensitized to that specific associative coding is engaged. We have groupings of firings going off all over the place ("knowledge"; nested beliefs, conceptualizations) over which we have not the slightest control and many of which we are not even aware."

Again, irrelevant.

"You ask "Who cares?" about the sequence of firing. I can't identify who those are who care (though there are quite a few!), but I can tell you that the importance of the sequence of firing is that the sequence and the specific thought are two sides to a coin. Without that specific sequence, the specific thought would not exist to the thinker."

Quite so, but the question is not what mechanism produces the thought, or even the ingredients that go into it. The question is whether our thoughts are impeded in some way.

"What is ridiculous is the idea that we might somehow design a set of beliefs and thoughts before we believe and think them!"

Again, who cares how thoughts are structured or designed? It is not the design of the set of beliefs that is in question, but whether a set of beliefs are freely held, or instead are unalterable. Indeed, I would imagine that more than one design could produce the same set of beliefs.

"Now I shall choose to think education is good before I have any idea what it is, and I will also decide without evidence that impulsive behavior should be avoided."

Nonsense questions that no person would wonder about. They are irrelevant.

But let me cite Werner Heisenberg (who may have obtained it from Leibniz, but I'm not sure). I've also heard this quoted from other sources, so it is probably not necessary to credit anyone. Anyway, he says: "We can do what we wish, but we can't will what we wish."

From this, you would say this proves we don't have free will. I would say the opposite. It proves we do. That's because I take the first clause as sufficient for free will, whereas you take the second clause as evidence for our not having it.

If we can't get past this, we might as well close up the dialog.


"I think neural activity IS thought;"

I think you are wrong here. (1) neural activity is physical, thought is mental. (2) neural activity has no content, whereas thoughts have it. (3) neural activity has a highly random character, having a high degree of variability to a given pattern, whereas thoughts don't seem to exhibit this variability; (4) neural activity has no meaning, in and of itself, whereas thoughts do.

"And how do you think neural activity or thought is controlled?"

Neural activity is controlled using pharmaceuticals as well as meditation and biofeedback. Thoughts, however, are more difficult to control, even by us. However, this is a sign that we are free, and not bound.

"When, for instance, you perform some task and you don't achieve the results you wanted, can you alter that perception? Can you decide to think that the task was done appropriately,"

Again, irrelevant. Deciding to think is of little consequence. It is deciding itself that is important to free will.

owleye
owleye is offline  
Old 04-05-2002, 02:38 AM   #62
Junior Member
 
Join Date: Dec 2000
Location: streets of downtown Irreducible Good Sense in a hurricane
Posts: 41
Post

If we did not have a nature of any sort, then there would be no meaning to the problem of free will. Without a nature that favors, say, valid deduction over invalid deduction (so that we would not care which is which), we would be truly random. If we added up all of our natures and then simply put them all in a box, and closed the box so that we did not refer to them in making conscious or subconscious choices, we could not be said to have a will in the first place.

When we think of 'free will', what are we really thinking of? I don't think we should conclude whether or not we have free will until we find out just what sorts of free will we do have. Consider, for instance, that we can, and often do, have conflicting desires. One of them is for a short-term, but destructive, good, and the other is for a long-term, but constructive, good, and they are mutually exclusive. While sufficient fear of losing the long-term good would make us "will" against pursuing the short-term good, the absence of this sufficient fear in no way prevents us from exercising free will in the matter. We are not robots of the biochemistry of fear.

This brings us to two questions: what is the greatest imaginable, long-term, true good, and is it realizable? The fact that we do not know whether it is realizable or not is hardly reason to reject hoping in it.

[ April 05, 2002: Message edited by: Danpech ]
Danpech is offline  
Old 04-05-2002, 10:27 AM   #63
Veteran Member
 
Join Date: May 2001
Location: US
Posts: 5,495
Post

Owleye, thanks. Here are my observations:

Quote:
Originally posted by owleye:
<strong>....Indeed, all of our subsystems are control systems, and the brain is no different in this regard.
..... making it possible to know that data doesn't fit its category and can learn, perhaps from a third party, that the category is wrong. This is probably what you are thinking about by the term 'working hypothesis'. </strong>
I don't agree that all systems are control systems, I think some are purely analytical or play the role of tools manipulated by what we might call a control system. Indeed, one might argue that designating an entity a "control system" is anthropomorphism, or like saying that a gearbox has the same function as a car.

By "working hypothesis" I meant an interactive model. For example, where one might be able to think "What would JC have done in this situation?" and, knowing their attributes (morality, popularity etc.) produce a guess from what we call intuition.

JC stands for Jackie Chan, BTW

Quote:
Originally posted by owleye:
<strong>In what way is the mind 'abstract'? The usual meaning of abstraction is a feature of our mind that discards any concrete influences. ....It is the set of common conditions we have learned which belong to all cats. They amount to a set of rules or principles that can be applied to sense experience or to behavior or to discursive formulations, or to judgements, generally, or other.)
</strong>
Maybe we're working with slightly different definitions. I'm proposing to treat the mind as the abstract phenomenon of brain (and other components that support thought) - if you include some physical matter in "mind", what is your terminology for the purely abstract aspect? Maybe we should use that term.

I agree with your description of abstract, with the following rider:
" ... discards any direct concrete influence." This clarifies because I posit that there must be a connection of some kind; if not, we would end up with 'magical dualism'.

As for the cats, I offer the following change: Instead of "It is the set of common conditions we have learned which belong to all cats." say "It is the set of common conditions we use to identify instances of the pattern conforming to the concept 'cats' in reality." This makes clear that identity is in the mind.

Quote:
Originally posted by owleye:
<strong>Let me acknowledge, though, that our own ability to process abstract ideas is something that, if I'm not mistaken, is not yet possible by software. Perhaps, with your knowledge of software, you could convince me otherwise.
</strong>

Ideas are abstract. Computers can process ideas but they don't know what they are.

On the mind residing on the physical substrate of the body:
Quote:
Originally posted by owleye:
<strong>I think what you may be groping for is something similar to the software/hardware facet of computation. In philosophical circles, this is known as the type/token distinction. If this is not what you have in mind, let me know.
</strong>
No, it's not a token distinction alone; to be meaningful, information (in the mind) must have context. The mind contains multiple layers of abstraction that only "make sense" in context with adjacent layers. Maybe relocatable object code is an example: it's the same data codifying process operations but can be executed by loading at different memory locations. Another example might be pointer arithmetic (in the 'C' family of languages), where the pointer is used to identify the "sense data".

Quote:
Originally posted by owleye:
<strong>"I also suggest that the mind can be considered as separate parts with several layers of abstraction, in this sense parts of the mind evaluate/control/monitor control other parts of the mind."

Again, I have difficulty comprehending this, but if what you mean is that there is an element of layering (much as we might understand the OSI layers in network protocols), I think you may be onto something. I suspect not, though.....
</strong>

Think of an array of pointers. Instead of the data you find a pointer to that data, the indirection being an abstraction layer. Now consider an array of pointers pointing to an array of pointers. In the second example the data remains the same but there is a second layer of abstraction.

I have no idea whether the above analogy can be applied directly to brain function but consider a neuron. With up to 150,000 dendritic inputs and I forget how many axon outputs abstraction seems inevitable. If you think of axons and dendrites as directional pointers, you have my analogy.

Quote:
Originally posted by owleye:
<strong>However, as fascinating as this is, there remain significant problems with it if we wish, as Norretranders does, to regard it as a "User Illusion." His interpretation is that consciousness is situated in such a way that makes free will possible.
</strong>

I wouldn't go so far as to predicate consciousness as necessarily the cause or facilitator of free will. I think it likely a contributing factor, though.

Point to ponder on the "User Illusion", an illusion is real, otherwise we could not know it. What, then, distinguishes an illusion from the reality it participates in?

Quote:
Originally posted by owleye:
<strong>My question was based on your apparent need to tell us that a "fully determined system" could exhibit choice. To date, I have yet to understand what compelled you to suggest this.
</strong>

It was a response to your statement "In this causal way of explaining human behavior, there can be no alternatives. It is fully determined." at the beginning of this page.

Quote:
Originally posted by owleye:
<strong>"However, if the states it contains bear relation to states in the outside world and are stored in a manner that is contextual with other states (e.g. x happened at the same time as y but produced a different color) this data can be used by a simple process to handle a wide array of situations."

The use of 'handle' is a term from control theory. The wide variety represents the free variables that are processed by this system...
</strong>

I used the term "handle" in a non-technical sense. I'm not ducking the question, though, because your point is relevant. Consider: the brain comprises mortal cells, so it is likely structured to overcome single and multiple points of failure. The reasoning behind my mention of a "simple process" is an observation that nerve cells seem more uniform (in type) than the wide variety of thoughts they can think. In turn, this makes for a robust, scalable system for thinking.

Quote:
Originally posted by owleye:
<strong>The problem child here is what does it mean to be a "self." I have seen database systems (or so-called knowledge-base systems) attached to a natural language interface, in which the language used to communicate to human users made it seem like its database was part of its self. Is this all it takes?
</strong>

I don't think so and I wish I knew what it did take! Irrespective of what "self" actually is, awareness of self would clearly seem to need a feedback loop like a mirror or imagination. The key issue I would have with your model is a lack of experiential learning interaction with the outside world. Ultimately, meaning comes from the real world and not from language alone. I did have a dream once where I was swimming in a "sea of meaning" though....

Quote:
Originally posted by owleye:
<strong>....We are reduced to the status of any other creature. We would assume a dog who has maimed a child....
</strong>

Of course we're dogs with special powers! Did you look through the "are dogs conscious" thread under Science and Skepticism?

Quote:
Originally posted by owleye:
<strong>"I'm not sure I have the answer to Chalmers, but look at the advantages consciousness might confer. We have a real-time convergence of data from reality merged with experience of past situations codified to provide us an understanding of cause and effect."

I don't see consciousness as necessary for this. What part of the above requires consciousness? (Note, I think that it would be more accurate to say "near real-time", since it has been determined that there is a significant lag (about 1/2 second, according to Libet -- about .4 second in NASA studies) in what we are conscious of compared to what a real-time system says it is. This might impact your theory -- I can't say.)
</strong>

Maybe a misunderstanding: I am saying this is a significant part of conscious processes (not a reason for consciousness per se), conferring competitive benefits.

Yes, I'm aware of the fact that a real-time system is a bit of an oxymoron. If I remember correctly, the study measured the time between the cause (i.e. sense data not visually detectable, such as a pin stuck in you) and conscious perception. I'm not sure how they did it, but I guess they had to cancel out the reporting time. This is a different delay, obviously, than a reflexive action.

As a side point, I have looked at latency in voice communications systems and it seems that our auto-compensation buffers for sound only go out to 800 to 1200 milliseconds, which is the kind of delay you get with voice compression algorithms on top of a two-way satellite link propagation delay.

My understanding is that the nervous system is constructed to automatically compensate for this, and the effect is only humanly detectable with wide differences, such as between thunder and lightning. The best evidence I read for this is that without the timewise compensation we could not pick out language from a continuous stream of noise.

Quote:
Originally posted by owleye:
<strong>"I desired to indicate that free will is not a binary, on/off, property. Perhaps "apparent posession of absolute free will..." would have been clearer."

This muddies the water, I'm afraid. But let me ask you why you need to indicate 'apparent' in the first place. Second, what is the need to specify 'absolute' here? Is this the right word? Absolute is usually meant to be distinguished from 'relative'.
</strong>

Apparent, because if you don't know the trick you could draw the Cartesian theater conclusion. In the latter, the doer is assumed to act with a will absolutely independent of the theater they are playing in. I just wanted to make this distinction; I don't believe there is such a thing as "absolute freedom" of will.

Cheers!

[ April 05, 2002: Message edited by: John Page ]
John Page is offline  
Old 04-05-2002, 11:40 AM   #64
Veteran Member
 
Join Date: Mar 2001
Posts: 2,322
Post

Quote:
owleye: In any case, your having a different view of things does not refute what I was offering.
Thank you, but my having a different view doesn't refute what you said; what refutes it is that you have no argument and I do. Can't you present anything to back up your idea that even though our thoughts depend upon things, they are free?

Quote:
There being no freedom from prior causation is not sufficient to derail free will, imho.
Why not? How does "no freedom" from something equal "free"? In order for that to be true, you've got to qualify "free" to mean "bound by some, but not all, restraints." If that is the case, then I can agree with you that we have free will; our behavior is not as hard-wired as that of other animals.

Quote:
As I see it, because thoughts govern our behavior, we are free, and not bound.
How is something bound by a governor free?

Quote:
Indeed, it is very difficult to have our thoughts controlled. We cannot even control them. It is a mark of freedom that this is the case, not of being bound.
Again, you just assert it; you don't explain. How does not being able to control our thoughts equate to free thought?

Quote:
By this definition, cats have the same capacity for choice as humans, since they both learn.
But nothing learns like humans; we're the star learners of the planet, due to all those neural connections that you don't seem to appreciate.

Quote:
I see humans rather in a different light, however, they have reached a greater level of choice than cats and this choice gives them a kind of freedom that we don't attribute to cats.
Yes, and cats have a kind of freedom we don't attribute to ourselves; freedom from societal impositions, freedom from language, freedom from certain emotional states, etc., so it's not clear how their behavior totals up to be "less free" than ours. They are less free than we are from certain things and we are less free than they are from certain others.

Quote:
Cats have no understanding of right and wrong, in any principled sense.
That's because they're cats. Right and wrong are human constructs.

Quote:
They merely act in accordance with their prior conditioning. Humans, by contrast do have an understanding of right and wrong, in a principled sense. They don't act entirely on the basis of prior conditioning. They act within guidelines that accord with moral conduct (at least they are supposed to once they reach a certain maturity.)
And where do you think humans get these moral guidelines? They learn them and moral acquisition is a very good example of conditioning. Beginning from our infancies, we receive emotional "lessons" (experiences) from those we trust, thereby learning that things are good and bad. As we mature, we categorize according to those lessons that we don't even remember. Later, things just "seem" good and bad to us according to how we reason that they fit in with our guidelines.

Quote:
Me: "To many people, an individual's choice is not a choice, an individual's freedom is not free, if it is determined by antecedent causes, so what I call a choice may not mean choice to you."

You: That's right. Your position has no persuasive power if this is what you mean.
Since all conclusions are determined by what is known about something, all conclusions are determined by antecedent causes. Therefore, you must think people don't make choices.

Ex. Your overhead light goes out and you decide to replace the bulb so that you'll have light. How did you know it was burned out? How did you know you could install a new light bulb? Experience, that's how. Antecedent causes. Without experience, you wouldn't have known what to think.

Quote:
"A response reflects a choice made if it was based on evaluating or deliberating over a set of choices available."
And without experience, there would be no knowledge of what might be available to choose, and, what's more, no way to determine preference.

Quote:
Me: "Certainly our mental schema belong to us; certainly we think our thoughts and make our decisions; indeed, we have no say in the matter, we are bound by our physiology to do this.

You:I take the latter clause to be irrelevant.
Why? Because physiological ties contradict the idea of freedom? Physiology cannot be discounted; do away with it and you do away with all behavior, including thinking and choosing.

Quote:
i'm describing how we are free to "come to" conclusions. Indeed, in no way are we bound to do so.
You're bound to think of pink elephants if I say so, right? Can you conclude not to? Of course not; you are bound to react to the stimulus.

Quote:
Neurons are not cognitions, nor do they cause them in any meaningful way.
And do photons and neurons not cause sight in any meaningful way?

Quote:
I claim that thoughts are quite independent of sensory reception. We can be thinking of something entirely different from what we are receiving from our senses. Indeed, we can process data received from the senses that we aren't perceiving at all.
Doesn't matter. There was an original sensory perception that started the ball of thought rolling. Quite right that we needn't be aware of it, though.

Quote:
Quite so, but the question is not what mechanism produces the thought, or even the ingredients that go into it. The question is whether our thoughts are impeded in some way.
So far we have not mentioned impediment of thought. What does that have to do with the discussion?

Quote:
Again, who cares how thoughts are structured or designed? It is not the design of the set of beliefs that is in question, but whether a set of beliefs are freely held, or instead are unalterable.
We alter our beliefs all the time, but always due to some experience, some new association. Thus, the belief does not change freely but as a result of unavoidable change in structure of the mental schema that produce our conclusions.

Ex. You replace the bulb and the light still won't come on. Your husband or wife comes in and tells you the utility company has the electricity off for thirty more minutes. In thirty minutes, the light comes back on, along with the television and the heat pump. Voilà; now you know something of which you were previously unaware. From now on you are bound to think of lack of electricity as a possible reason for lights going out in specific instances. You have no control over this.

Quote:
Anyway he says: "We can do what we wish, but we can't will what we wish."

From this, you would say this proves we don't have free will. I would say the opposite. It proves we do. That's because I take the first clause as sufficient for free will, whereas you take the second clause as evidence for our not having it.
Then you must also grant the goldfish free will because it does what it wishes, too. It just doesn't wish much.

Quote:
I think you are wrong here. (1) neural activity is physical, thought is mental. (2) neural activity has no content, whereas thoughts have it. (3) neural activity has a highly random character, having a high degree of variability to a given pattern, whereas thoughts don't seem to exhibit this variability; (4) neural activity has no meaning, in and of itself, whereas thoughts do.
Mental content and variability do not necessarily describe the face of freedom any more than any other attribute a thing has. You may as well say that tomatoes and not blueberries are free because tomatoes turn red.

Quote:
Neural activity is controlled using pharmaceuticals as well as meditation and biofeedback.
These are merely different stimuli. Neural activity is controlled by any number of stimuli, including previous neural activity.

Quote:
Deciding to think is of little consequence. It is deciding itself that is important to free will.
And how is the decision to decide decided?
DRFseven is offline  
Old 04-06-2002, 11:51 AM   #65
Regular Member
 
Join Date: Feb 2002
Location: Home
Posts: 229
Post

John...

"I don't agree that all systems are control systems, I think some are purely analytical or play the role of tools manipulated by what we might call a control system."

What systems do you have in mind that are not control systems? The usual ones are control systems: the circulatory system, the limbic system, the respiratory system...

"Indeed, one might argue that designating an entity a "control system" is anthropomorphism, or like saying that a gearbox has the same function as a car."

Perhaps we have different understandings of what a control system is. A control system is a system that monitors its progress and corrects for differences between its implicit goal and its current status. Control systems are a subset of information systems. If what you mean by this is that biological systems do not actually behave purposely because this implies foresight and a number of other heady principles, I would agree with you. But, for all intents and purposes, biological organisms do behave purposely. A heart pumps blood for a purpose. If this bothers you, I'll substitute "A heart pumps blood as if it had a purpose."

"By "working hypothesis" I meant an interactive model."

This is close enough to what I was talking about that it isn't worth exploring any further.

"Maybe we're working with slightly different definitions. I'm proposing to treat the mind as the abstract phenomenon of brain (and other components that support thought) - if you include some physical matter in "mind", what is your terminology for the purely abstract aspect? Maybe we should use that term."

Well, abstractions are a possible capability of minds. The problem with associating them with brains is that the vocabulary gets in the way. "Abstract phenomena" is a vocabulary that we use with minds, not brains. It is no different with the hardware/software division. They have different languages. The connection between the two is the so-called type/token distinction, whereby we can have many physical instantiations, called tokens, that are all related to some type. For example, each instance of the physical representation of the letter 'e' in the above corresponds to the same type 'e'. There is no 'e' type unless there is at least one token 'e' to represent it. Plato, of course, thought differently, principally because he didn't conceive the idea that a brain was required to tokenize each instance of the type. Thus, types could exist in a kind of Platonic heaven.

"As for the cats, I offer the following change: Instead of "It is the set of common conditions we have learned which belong to all cats." say "It is the set of common conditions we use to identify instances of the pattern conforming to the concept 'cats' in reality." This makes clear that identity is in the mind."

This version opens up a big can of worms here about what is real. You seem to be moving (or have already moved) to the position that everyday reality is not real. I don't wish to get into this, but it may come up later. We'll see.

"Ideas are abstract. Computers can process ideas but they don't know what they are. "

No system that I'm aware of has knowledge at all, much less ideas or thoughts. Indeed, computers have no minds on which they would even be able to make type/token distinctions. It is we who interpret all this. However, I don't wish to exclude the possibility that a system could be constructed that would be able to achieve this status. According to Donald Davidson, assuming that it met certain other requirements, which I can introduce if the need arises, a computer system could have thoughts if it was endowed with perception. (This, I believe, would require consciousness, over and above that which is involved in sensory processing.)

"No, it's not a token distinction alone; to be meaningful, information (in the mind) must have context. The mind contains multiple layers of abstraction that only "make sense" in context with adjacent layers."

I agree that the type/token distinction is not adequate to characterize the entire mental sphere. Its purpose was to provide a characterization of the relationship between the mental and the physical (or the ideal vs. the real).

"Maybe relocatable object code is an example: it's the same data codifying process operations but can be executed by loading at different memory locations."

Relocatable object code doesn't seem to be relevant to the issue of contextual layering. How is this an example of what you have in mind?

"Another example might be pointer arithmetic (in the 'C' family of languages) where the pointer is used to identify the "sense data"."

The pointer is not used to identify the 'sense data', it merely points to it. Something else identifies it as 'sense data', and I'd think it more likely to be the data type, (or class or other structural identifier), with which the pointer is associated. And even then, it is not really identifying the data, so much as taking the data to be whatever type the pointer represents. To identify something requires a process of discrimination, which of course, makes use of types, and likely also a process of trial and error. But this also assumes that the world contains information, which could get us back to the issue of what's real.

"Think of an array of pointers. Instead of the data you find a pointer to that data, the indirection being an abstraction layer. Now consider an array of pointers pointing to an array of pointers. In the second example the data remains the same but there is a second layer of abstraction."

Though this may be important to layering, there remains an important difference between this idea and what minds do. Indeed, minds, on their own, have a great deal of difficulty with abstractions. Most folks deal far better staying with the concrete.

Even so, the layering you speak of doesn't quite capture the kind of abstraction that occurs in, for example, set theory or a theory of language. Pointers themselves never point to abstract entities -- they never deal with abstractions themselves. In fact, I don't really know how we would be able to code an abstraction.

"I have no idea whether the above analogy can be applied directly to brain function but consider a neuron. With up to 150,000 dendritic inputs and I forget how many axon outputs abstraction seems inevitable. If you think of axons and dendrites as directional pointers, you have my analogy."

I think the problem is that you're mixing the physical with the logical. You will have to tell me, in physical terms, what the tokens are, and then I might be able to tell you what they mean. The axons and dendrites, and all the rest, are the mechanisms by which the "data" are processed. It is the data that represent, or could represent, types, abstractly considered. Naturally, I don't have a clue how this would be achieved.

"I wouldn't go so far as to predicate consciousness as necessarily the cause or facilitator of free will. I think it likely a contribution factor, though."

No cause was assumed. I personally think it is the other way around. A "gap" emerged between stimulus and response in the course of evolution in which everything else we identify with human mental activity, including its sociality, "rushed in" to compensate for it.

"Point to ponder on the "User Illusion", an illusion is real, otherwise we could not know it. What, then, distinguishes an illusion from the reality it participates in?"

This is not difficult. Illusions have content. It is the content that is represented as real, but in fact isn't. Illusions are something we have. When we have an illusion, the illusion is always about something else. It is the something else towards which its content is directed, but in fact may be falsely directed, since it may not correspond to it.

Thus to say that consciousness is an illusion is actually a misnomer. Consciousness is required in order that there be illusions.

I used the term "handle" in a non-technical sense.

As was I.

"I don't think so and I wish I knew what it did take! Irrespective of what "self" actually is, awareness of self would clearly seem to need a feedback loop like a mirror or imagination."

The self emerges because we have inner experience as well as outer experience. Inner experience can also be considered as reflection. On reflection we notice that experience itself has a subject. That is, there is a subject that is experiencing, and this is contrasted with what is being experienced -- namely its object. When we are immersed in experience, however, there is no subject-object distinction. Heidegger makes much of this. Following Husserl, layers of experience arise due to the acquisition of scientific knowledge, which Heidegger in particular thinks gets in the way of authentic experience.

"Of course we're dogs with special powers! Did you look through the are dogs conscious thread under science and skepticism."

No. My initial reaction to this is that dogs may be conscious, but I suspect they are not self-conscious. Anyway, can you say that a dog really understands that it can in principle do wrong? Indeed, do dogs function on principles at all? This was my contention in trying to persuade you that humans are quite different from all (or almost all) other creatures.

"Yes, I'm aware of the fact that a real-time system is a bit of an oxymoron."

Not really; real-time systems are fairly common. All it takes is a (relatively) accurate clock and various compensations for the time delay associated with processing data within the smallest meaningful clock cycle (at least this is how clock-driven real-time systems work).

The reason for calling it "near-real time" is that what we observe consciously is a construction of our mind, and can't count as being "out there" and totally independent for the purposes of determining whether we function in real-time. This is not to say, however, that what we observe is not real -- rather it is what evolution has designed that makes it sufficiently real for us. It is an evolutionarily fit mechanism (and probably succeeded more than is necessary for mere biological success).

"If I remember correctly, the study measured the time between the cause (i.e. sense data not visually detectable such as seeing a pin stuck in you) and conscious perception. I'm not sure how they did it but I guess they had to cancel out the reporting time. This is a different delay, obviously, than a reflexive action."

Norretranders' book "The User Illusion" goes into this in some detail.

"Apparent because if you don't know the trick you could make the Cartesian theater conclusion."

The problem with this is that you are treating consciousness as if it were some sort of scientific object. In exactly the same way I indicated above about the question of illusion, I can say the same thing here. There can be no consideration of appearance at all without a prior supposition of consciousness.

"In the latter the doer is assumed to act with a will absolutely independent of the theater they are playing in."

I agree that this is the Cartesian view, which, for example, Sartre holds, but even if it isn't true, literally, there remains the need to explain how it makes sense to think of it this way. Calling it an illusion, or indicating that it represents a seeming-to-be, does not help in understanding it. It reminds me of trying to explain the terms left and right without making use of any spatial intuition, or explaining the notion of before and after without reference to temporality.

These, of course, have been done in modern quantificational logic, as developed by Frege in 1879. Since then we have learned a great deal about the formal structures involved in processing data, and have even invented astounding formal structures that represent computational models, distinct from mathematical models. But when it comes to characterizing the usual everyday notions, like left and right or before and after, we usually resort to things we take for granted, like space and time. One may hope that someday a breakthrough in logic will occur that opens up the possibility of a language of consciousness. What usually precedes such a breakthrough (and became the background of Frege's exposition) is a well-spring of advances in the formal sciences.

Husserl's phenomenology may give us hope, I think. Analytic philosophers in the U.S. and England, in particular, by contrast, seem to think we can get there through language alone.

owleye
owleye is offline  
Old 04-06-2002, 08:32 PM   #66
Veteran Member
 
Join Date: May 2001
Location: US
Posts: 5,495
Post

Re: owleye, many thanks for your stimulating questions on a topic I've not thought about for a while. Here's my response to your observations.

Quote:
Originally posted by owleye:
<strong>Perhaps we have different understandings of what a control system is. A control system is a system that monitors its progress and corrects for differences between its implicit goal and its current status.... </strong>
Yes, I think we do. I think of a control system as one that has a specific control function in relation to other processes or things. I guess I regard internal control as irrelevant when considering what the system does for the "outside world".

From the rest of your posting it seems we're looking at whether a human being is "controlled" in any way. To me, this is anathema. While we interact with our environment and some humans may be controlled by other humans, to suppose our actions are controlled to a specific purpose is unfounded. (On the other hand, maybe there is some unseen galactic intelligence.. but then I'd want to know what controlled that...)

Quote:
Originally posted by owleye:
<strong>Well, abstractions are a possible capability of minds. The problem with associating them with brains is that the vocabulary gets in the way. "Abstract phenomena" is a vocabulary that we use with minds, not brains. It is no different with the hardware/software division.
</strong>
Are you comfortable with the comparison brain == hardware, mind == software + memory contents (data)?

I think of the "token" as the "axiomatic concept" of the type. I believe it is very important to distinguish between a "symbol" as an arbitrary tag, token or name and a "representation" which is data that can be used for templating and hence reconstruction of the original entity.

I'm jumping forward here but say we have an external entity, we receive sense data about that entity (color, shape, smell, sound etc.) and match this with our axiomatic concept (template) for a cat. The word "cat" is the token, the type is the "axiomatic concept of cat", distinguishing features of the cat (time, location, bent ear etc.) can make that instance of cat unique in our memory.

So, while I'm not unhappy with the token/type idea, I think there's more to it. Everything above occurs in the mind (i.e. after receipt of sense data) and is abstract.

Quote:
Originally posted by owleye:
<strong>This version opens up a big can of worms here about what is real.</strong>
I hope the above makes clear what I consider as the boundary between external reality and abstraction in the mind. I regard them all as part of reality (because my thoughts are real). I would say abstract and concrete are opposites.

Quote:
Originally posted by owleye:
<strong>No system that I'm aware of has knowledge at all, much less ideas or thoughts.....According to Donald Davidson, assuming that it met certain other requirements, which I can introduce if the need arises, a computer system could have thoughts if it was endowed with perception. (This, I believe, would require consciousness, over and above that which is involved in sensory processing.) </strong>
I'm having a hard time with this one. First, I think perception is a prerequisite for knowledge, which in turn is a prerequisite for consciousness. I would agree that consciousness "perceives" self but I don't think that is needed for perception in general. My definition of perception is the process that occurs when sense data is abstracted.

Notes: 1. I believe one can view the mind as dealing with a number of layers of abstraction, perception being required to pass from one layer of abstraction to the next.
2. I think it is important to differentiate between abstraction and transmission. Transmission is the passing of data between different points in space/time with or without distortion. Abstraction occurs when two sets of "input" data are compared and are transformed into "output" data by a reliable repeated process.

Quote:
Originally posted by owleye:
<strong>Relocatable object code doesn't seem to be relevant to the issue of contextual layering. How is this an example of what you have in mind?
</strong>
Poor analogy on my part. I was trying to communicate a certain homogeneity of the brain processes so I likely misunderstood your original point.

Quote:
Originally posted by owleye:
<strong>The pointer is not used to identify the 'sense data', it merely points to it. Something else identifies it as 'sense data', and I'd think it more likely to be the data type, (or class or other structural identifier), with which the pointer is associated. And even then, it is not really identifying the data, so much as taking the data to be whatever type the pointer represents. To identify something requires a process of discrimination, which of course, makes use of types, and likely also a process of trial and error. But this also assumes that the world contains information, which could get us back to the issue of what's real.</strong>
I agree with you in the context of declaring the data types and structures in a computer program. Perhaps my poor analogy again but if you take the physical structure of a neuron you can say that the physical location of the dendrites (which are pointers) enables them to sense data at that location - like a memory read from a pointer address. So, what you would declare in a computer program is implicit in the physical brain structure. If all brain cells were uniform in type we might say:

struct cell {
    double *dendrites[DENDRITES]; /* each dendrite is a pointer to the input data for the cell process */
    void (*cell_process)(struct cell *); /* pointer to the cell's processing function */
    double *axons[AXONS]; /* outputs, pointing on to downstream inputs */
};

struct brain {
    struct cell cells[CELLS];
    double sense_data_layers[SENSORS];
};

The above is only a rough sketch - just meant to convey an impression!!

It seems to me that cells seek out relationships and the other cells that need to know them through migration of axons etc.

Quote:
Originally posted by owleye:
<strong>.....Indeed, minds, on their own, have a great deal of difficulty with abstractions. Most folks deal far better staying with the concrete....
Pointers themselves never point to abstract entities -- or, they never deal with abstractions themselves. In fact, I don't really know how we would be able to code an abstraction.
</strong>
Agreed, and I think we need to understand abstraction better....
A bitwise & operation performs an abstraction - you take two pieces of data and the result is only understandable if you know the abstraction process. That something has been abstracted doesn't mean that its representation doesn't exist in space/time. I can see the results of abstraction in a computer output register but it doesn't mean a thing unless I know the modus operandi of the computer.

Quote:
Originally posted by owleye:
<strong>I think the problem is your mixing the physical with the logical. You will have to tell me, in physical terms, what the tokens are, and then I might be able to tell you what they mean. </strong>
I don't think I am mixing them up. Hopefully, earlier in this post I have shown that abstracted data is still data that can be measured. I hope I have also shown how a token might comprise a literal name and one or more pointers to a type (or the templates for its key features) etc. This is consistent with my suggestion that the mind is the abstract context of the brain (and other physical parts that contribute to thinking).

I know this may be incomplete or hazy, that's because I don't really know how the mind works - this is just theory!

Quote:
Originally posted by owleye:
<strong>A "gap" emerged between stimulus and response in the course of evolution in which everything else we identify with human mental activity, including its sociality, "rushed in" to compensate for it.
</strong>
I'm confused as to what the gap is and the benefits of it being filled in.

Quote:
Originally posted by owleye:
<strong>Illusions have content. It is the content that is represented as real, but in fact isn't.

Thus to say that consciousness is an illusion is actually a misnomer. Consciousness is required in order that there be illusions.
</strong>
Taking the last point first, I think senses can be deceived without the need for any consciousness.

On the first point, perhaps this is just semantics. I'm saying illusions exist and are therefore real. I agree they're only imaginary (kinda by definition). I guess you're saying they represent something that doesn't exist in external reality, which I agree with.

Quote:
Originally posted by owleye:
<strong>...When we are immersed in experience, however, there is no subject-object distinction.....</strong>
Agreed, and furthermore I think it likely that there are undiscovered hard-wired aspects of our system of perception that deceive us; we have optical illusions, so why not logical illusions? We need to build working models to find out exactly why we think what we think.

Quote:
Originally posted by owleye:
<strong>No. My initial reaction to this is that dogs may be conscious, but I suspect they are not self-conscious. Anyway, can you say that a dog really understands that they can in principle do wrong? </strong>
I think most dog owners would say their dog knows when it's been bad. (My cat does, without any cues from me, I believe!)

Quote:
Originally posted by owleye:
<strong>"Yes, I'm aware of the fact that a real-time system is a bit of an oxymoron."

Not really, real-time systems are fairly common..... </strong>
I think we're in agreement. Expanding on the thought, sensible real-time systems design requires an architecture that deals gracefully with information overload, rather than just queueing input data till we get to it (not a good idea with a reactor melt-down in process).

The two main ways I'm familiar with are interrupt-driven systems, where you discard the interrupt depending on how busy you are with "more important" stuff, and secondly polling systems, which regularize the number of events you process. I'm wondering if the experiential side of the brain is interrupt driven with a buffer for daily experiences - do an all-nighter and you lose some of the buffer contents (hence short-term memory loss). Sleep allows the brain to process the day's experiences, finding the best way to knit them into our longer term memory.

Quote:
Originally posted by owleye:
<strong>Norretranders book "The User Illusion" goes into this in some detail.
</strong>
Thanks, I should take a look at it.

Quote:
Originally posted by owleye:
<strong>The problem with this is that you are treating consciousness as if it were some sort of scientific object. In exactly the same way I indicated above about the question of illusion, I can say the same thing here. The can be no consideration of appearance at all without a prior supposition of consciousness.
</strong>
Disagree! I think a lack of understanding arises simply because people refuse to consider the mind and its operation as something that can be explained. I think your second statement is hard to defend - a tree can fall in the forest even if no-one is there to see it. Things don't "appear" in that sense, they are merely observed.

It may prove fruitful to move away from a "silver bullet" kind of thinking with regard to consciousness. I think there are a number of building blocks that go into consciousness as we know it. Why do I believe so? Observe a child growing up - as their mind develops so does their consciousness, from (arguably) lack thereof through self awareness to self consciousness (of physical self) through to the "I think therefore I am" stage.

Quote:
Originally posted by owleye:
<strong>....It reminds me of the notion of trying to explain the terms left and right, without giving making use of any spatial intuition....Husserl's phenomenology may give us hope, I think. </strong>
Yes there is lots to be curious about. Why, for example, when we look in a mirror, do we appear left-right reversed but not upside down...

I shall immediately go look at some Husserl, thanks.

[ April 06, 2002: Message edited by: John Page ]
John Page is offline  
Old 04-07-2002, 01:47 PM   #67
Regular Member
 
Join Date: Feb 2002
Location: Home
Posts: 229
Post

DRFseven...

"owleye: In any case, your having a different view of things does refute what I was offering."

There was a typo here. I meant: your having a different view of things does not refute what I was offering.

"Can't you present anything to back up your idea that even though our thoughts depend upon things, they are free?"

Yes. It has to do with much of what I've said, which you probably haven't been paying attention to. But I'm hopeful we can find a precise area of disagreement. See below.

"Why not? How does "no freedom" from something equal "free"? In order for that to be true, you've got to qualify "free" to mean "bound by some, but not all, restraints.""

Freedom and necessity are two opposing ideas only if the basic constituents of matter obey fully deterministic laws and matter cannot be formed into entities that operate under laws that are not fully determinate. That is, in supposing that there are such things as indivisible atoms (which in the standard model are bosons and fermions), then in order to meet this requirement (for full determinism), nothing else exists. Indeed, in such a universe there is no information.

It is because I think there are such things as molecules, and planets, and butterflies, that I believe control system models help us understand their existence far better than deterministic models do. Control systems are understood as those systems which have a structural element to them, maintained through some controlling mechanism, describable through wave mechanics rather than full determinism (not excluding reduction, merely the practicality of it). Then, since dynamic equilibrium is what would be responsible for the existence of things generally, there is always a provisional aspect to existence.

This is the background I accept, though much more is needed to complete this picture, to get to the level of human free will.

"If that is the case, then I can agree with you that we have free will; our behavior is not as hard-wired as that of other animals."

It is my contention that the significance of freedom is that it comes in degrees, and I can say the same thing for necessity. A collection of water molecules has a degree of freedom that depends on their proximity to each other. Taken in isolation, they have very few degrees of freedom. En masse, they have considerable freedom. Of course, if there is an external force on the mass, to which all the molecules are equally subject, then the degree of freedom of each molecule is lessened in accord with that external force. Indeed, if the water mass is contained, we can describe the water as existing over and above all the individual water molecules; we can even take the smallest unit of mass as a drop and treat its degrees of freedom as if it were a unit, subject to control laws.

"How is something bound by a governor free?"

Because the governor is free.

"Again, you just assert it; you don't explain. How does not being able to control our thoughts equate to free thought?"

It is the actions which are controlled by thoughts that makes it possible to say our actions were a result of free will. If our thoughts could be controlled (say by implanting thoughts into us), we would regard this as an infringement on our freedom.

"But nothing learns like humans; we're the star learners of the planet, due to all those neural connections that you don't seem to appreciate."

What disallows my saying that humans have achieved a different level or kind of learning than cats have? I say this principally because you seem to think that it is all based on behavioral conditioning.

"Yes, and cats have a kind of freedom we don't attribute to ourselves; freedom from societal impositions, freedom from language, freedom from certain emotional states, etc., so it's not clear how their behavior totals up to be "less free" than ours. They are less free than we are from certain things and we are less free than they are from certain others."

This is probably too subtle for my pea-brain. We attribute free-will to humans, not because the bible says we have it, but because it is needed to account for moral conduct. We have moral choices that cats don't have.

"That's because they're cats. Right and wrong are human constructs."

This is what the debate is all about. What are the implications of your theory, which denies free will, for this debate? What is the basis for right and wrong? If it is as you claim, merely behavioral conditioning, then there is no right and wrong, per se, merely what we have learned through conditioning about how to behave. Some behavior is reinforcing, while other behavior is punishing. There are no principles on which we act.

"And where do you think humans get these moral guidelines? They learn them and moral acquisition is a very good example of conditioning."

In your view, yes. In humans, though, it is not so easy to represent the way we learn through a model of behavioral conditioning. We don't learn in the same way because our mind is not constructed in the same way. I do think that there is a large element of behavioral modification in youngsters, but as part of adolescence there seems to be a need to transform what we've learned into something that is our own, and this requires a certain rejection of what we've learned, in order, perhaps, to reformulate it. In any case, the end result is quite different from a simple model of rewards and punishments.

"Beginning from our infancies, we receive emotional "lessons" (experiences) from those we trust, thereby learning that things are good and bad. As we mature, we categorize according to those lessons that we don't even remember. Later, things just "seem" good and bad to us according to how we reason that they fit in with our guidelines."

I suspect this is not entirely true. Is lying merely a guideline developed from hazy memory of the conditioning during your "formative" years? And from this being the case, does an instance of lying "seem" bad because it violates that guideline? What about self-deception? How is this conditioned?

"Since all conclusions are determined by what is known about something, all conclusions are determined by antecedent causes."

Again, even if this made sense, it is not an argument against my compatibilist position.

"Ex. Your overhead light goes out and you decide to replace the bulb so that you'll have light. How did you know it was burned out? How did you know you could install a new light bulb? Experience, that's how. Antecedent causes. Without experience, you wouldn't have known what to think."

This does not deny the freedom to replace the bulb, or to determine the cause of its being burned out. Experience and knowledge give us more options, not less. More options give us greater freedom.

"And without experience, there would be no knowledge of what might be available to choose,"

That's right. Experience and knowledge give us more options, not less. More options give us greater freedom.

"and, what's more, no way to determine preference."

It is reasonable to say that whatever is decided is decided on preference, but I'm not particularly satisfied with this, unless you define 'preference' to mean what was decided upon. What I'm not happy about, particularly in moral actions and judgments, is that we can make decisions to act or to judge despite our preferences.

"Why; because physiological ties contradict the idea of freedom? Physiology cannot be discounted; do away with it and you do away with all behavior, including thinking and choosing."

It is irrelevant not because it doesn't have a bearing on our thoughts. Indeed, our being alive has a bearing on our thoughts. What makes it irrelevant is that it fails to respond to the question of whether or not our actions are a result of our having free will.

Consider this simple example: A ball is dropped from a precipice. Let me ignore for the moment that there was a precipitating cause of this dropping. In disregarding its precipitating cause, I'm not thereby rejecting that there was one; I'm merely trying to shed light on what happens as a result of the release of the ball. You would argue, of course, that the flight of the ball is fully determined -- gravitation, perhaps modified by air currents, dictates that the ball's flight follows a precise trajectory.

So, why do we say the ball is in "free flight" (or, to use Wheeler's nomenclature, "free float")?

We say this because nothing is impeding this flight, nothing that would redirect it and take it off course. We can understand its free motion only if we entertain the possibility of constraining it.

Indeed, if we catch the ball, and confine it so that it doesn't move relative to its container, we would regard it then as not free. Freedom, in this construal, is the lack of containment. It is a control-theoretic understanding of freedom. Despite the ball's being under the influence of gravity, we have the ability to stop its flight by catching it.

This model of action then regards free will in such a way that we can "catch ourselves." That is, we may find ourselves doing something which is wrong and thereby "veto" our actions. This is possible, presumably, because of the way our mind is constructed. There are two modes of experience -- one inner, and one outer. Thinking occurs in (and percolates up to) inner experience, and through a reflective power can evaluate our own outer experience -- evaluating our intended actions, if you like. Apparently there is enough of a time differential here that our actions are not too late to be changed mid-stream, so to speak. As such, we attribute this capability to our having real choices. If we have real choices, we therefore have free will.

"You're bound to think of pink elephants if I say so, right?"

I'm not sure what you mean here. Do you mean that it is likely that my imagination will provide for me an image of a pink elephant? Do you think this sort of thing occurs in all experience? Do my remarks bind you to think in a certain way? Are you under my spell? (and vice-versa)?

"Can you conclude not to? Of course not; you are bound to react to the stimulus."

So you say.

"And do photons and neurons not cause sight in any meaningful way?"

No. They are merely physical conditions of the possibility of sight.

"Doesn't matter. There was an original sensory perception that started the ball of thought rolling. Quite right that we needn't be aware of it, though."

Of course, the word you use is "react". Note that I distinguish sensory reception from sensory perception. It was the whole point of making this distinction that it would make sense to say that we receive something without being aware of it. To be aware of it means we are perceiving it.

In any case, I don't buy your theory. Something may trigger our thoughts, but its content may be completely incidental to the train of thought that it precipitated. But more than this, I find it difficult at best to tie every thought to a reaction stimulated by some sensory stimulus.


"We alter our beliefs all the time, but always due to some experience, some new association."

I have my doubts about this. I think we alter our beliefs on the basis of experience, but it is not experience alone that makes for the change. Beliefs have to make sense. They have to be interpreted into a coherent picture. Indeed, the very idea of experience itself requires a prior conceptual scheme in which to organize it. New observations may not jibe with prior experience, but this doesn't automatically mean we accept the observation, at least on the surface. We may discover good reasons for rejecting the observation as meaningful in the light of our current scheme.

"Ex. You replace the bulb and the light still won't come on. Your husband or wife comes in and tells you the utility company has the electricity off for thirty more minutes. In thirty minutes, the light comes back on, along with the television and the heat pump. Voila'; now you know something of which you were previously unaware. From now on you are bound to think of lack of electricity as a possible reason for lights going out in specific instances. You have no control over this."

The "voila" requires something in us to make it possible. We are not necessarily bound by this. It is our ability to construct this hypothesis from the data give that makes this leap possible. We might not even make this leap. We may not be that smart. Associations may be involved in structuring the schema, but they are not so rigid that we are bound by them. Of course, this is not what you said. You only indicated they become a possible reason for lights going out. But this gives us a choice of reasons, and with the added choice comes added freedom.

"Then you must also grant the goldfish free will because it does what it wishes, too. It just doesn't wish much."

There is a certain type of choice that a goldfish has, but it doesn't reach the level of human choice. The problem between you and me, however, is much deeper than this. Your claim is that because causes can be found for all behavior, there can be no freedom. This is where we part ways.

"Mental content and variability do not necessarily describe the face of freedom any more than any other attribute a thing has.
You may as well say that tomatoes and not blueberries are free because tomatoes turn red."

If I read this correctly, it is your contention that mental content is an attribute of something physical (presumably something having to do with neurons). Let's suppose that I'm thinking of my garden. My garden is the content of that thought. In your theory, then, my garden is an attribute of the neurons. Instead of wondering whether I need to water the flowers in my garden by going out and testing the soil, I should be able to do the same thing by interrogating this attribute of the neurons. This would help me decide whether there is a need to water them.

Variability has to do with the randomness of things. There may be a randomness to thoughts that is measured by the variability within neural firings; I have no idea. But this relationship does not extend to the content of the thought, which seems quite specific, though fleeting, at least at times. In effect, the contrast I was highlighting had to do with the difference between the mental and the physical. Your inference to freedom in the examples doesn't appear to be relevant.

"These are merely different stimuli. Neural activity is controlled by any number of stimuli, including previous neural activity."

So, we do have control over them. We can take drugs, we can perform certain biofeedback procedures. This is not evidence of your contention that we are bound.


"And how is the decision to decide decided?"

Again, irrelevant. Since this is rather a drumbeat argument of yours, one that remains unchanged regardless of my response, I fear it will not benefit me to proceed further.

owleye
owleye is offline  
Old 04-07-2002, 05:04 PM   #68
Veteran Member
 
Join Date: Mar 2001
Posts: 2,322
Post

Me: And how is the decision to decide decided?

You: Again, irrelevant. Since this is rather a drumbeat argument for yours, one that remains unchanged regardless of my response, I fear it will not benefit me to proceed further.

Translation: You don't know. My response remains unchanged because you can't give me a single reason to call our thoughts free, other than that it's a convenient figure of speech to distinguish the abstract and conceptual kind of thinking humans do from the kind that other animals do.

It's easy to call our kind of decision-making "free will" by compartmentalizing the way we think of freedom. For everything else, free means one thing; for decision-making, it means another. I am not free if I am bound by chains; I am free if I am bound by experience. I have no quarrel with the statement "I did it of my own free will" as long as there is tacit understanding that it only means *I* chose to do it, instead of someone else choosing. As long as people understand that we are not free to choose how our concepts are formed and that it is useless to talk about decisions without implicating concepts.

If it really is your opinion that our concepts are not formed by experience, what do you think forms them? You say it's not a god, but if it's not experience, there has to be some kind of ghost in the machine.

Of course this is silly; it's easy for anyone who thinks about it to see that our thoughts ARE formed by the transmutation of experience into memories and associations. This, of course, is why every person is unique and why people have all kinds of different opinions about things.
DRFseven is offline  
Old 04-08-2002, 01:06 PM   #69
Regular Member
 
Join Date: Feb 2002
Location: Home
Posts: 229
Post

John...

"From the rest of your posting it seems we're looking at whether a human being is "controlled" in any way."

We can be, not only in a political/legal way, but also if we choose to be. This is to say that we are rule followers. Rules, by their nature, are regulatory. When we operate merely by rules, we are often said to act in a mechanical way. When we've learned how to add a column of figures, we can do it mechanically, or we can do it intuitively, depending on whether we want a precise answer or not. Computers, it seems to me, are only rule followers in this sense. Notwithstanding this, rules, generally speaking, at least when we're not completely rule bound, as a computer would be, regulate our behavior -- i.e., govern, manage, control, etc., as opposed to determine it.

This isn't the only way we allow ourselves to be controlled, of course; we can also allow ourselves to be controlled by the forces of nature.

None of this, however, dismisses outright the possibility that we ourselves can decide whether we are followers or leaders (acknowledging that we often find our choices limited).

"Are you comfortable with the comparison brain == hardware, mind == software + memory contents (data)?"

I'm sure this is not the whole story. Humans have consciousness such that data are meaningful in ways that data cannot be with respect to software. Nevertheless, it is helpful. Historically speaking, the concept of software is not all that different from what has been given the name of logic. Our rationality has been the seat of what constitutes our special human quality. That we have now formalized these processes is what is new about it. Up to about the 19th century, processes required the insertion of time in order to express them.

"I think of the "token" as the "axiomatic concept" of the type."

You are substituting a more complex terminology for a lesser one. I'm not sure of its value. Besides, I have no idea what an 'axiomatic concept' is. Perhaps I will learn this in the sequel.

"I believe it is very important to distinguish between a "symbol" as a arbitrary tag, token or name and a "representation" which is data that can be used for templating and hence reconstruction of the original entity."

Could be. Representation is a kind of structure that captures all the information intended by that which is presented (given) to it. Symbols, on the other hand, don't by themselves capture any information. They act more like the pointers you've previously used.

Data (or information), as represented in a representation, correspond to the traces left behind during the process of "impression" making. These are metaphorical descriptions of how data are captured by a representation. Symbols are also representations, but their data are rather pointers to other representations. Encoding produces a different kind of representation than the one involved in "impressions" because, and this may be the point you are getting at, we tokenize, following certain discretizing procedures, so that a symbolic reference is created where there was none before. This undoubtedly complex task makes human intelligence linguistic, at least from a structural point of view.

"I'm jumping forward here but say we have an external entity, we receive sense data about that entity (color, shape, smell, sound etc.) and match this with our axiomatic concept (template) for a cat."

Humans probably have two kinds of representation -- one conceptual, and the other spatio-temporal. Kant explains this.

The word "cat" is the token, the type is the "axiomatic concept of cat", distinguishing features of the cat (time, location, bent ear etc.) can make that instance of cat unique in our memory.

From your knowledge of the modern OS, I suspect you understand the layering of software associated with encoding. What is taken as a token in one layer is a type in a higher layer. The kind of encoding that produces the token I had in mind is performed by electronics (or whatever) that transform voltages (or other discriminating physical media) into meaningful units. It is assumed, of course, that such a transformation captures information and represents it in a new form. The information (actually only data at this point) will relate somehow to the source that emitted it, and we can infer that it is about that source (at least at the lowest layer). Computer communication, having no interest in space-time as the means of representing objects, typically provides information within the data stream that identifies the source, though because of this it can be readily fooled (as humans can be by mirrors and other distortions).

"So, while I'm not unhappy with the token/type idea, I think there's more to it. Everything above occurs in the mind (i.e. after receipt of sense data) and is abstract."

There are layers, I will agree. My point was only that it is known to be abstract only because a mind like ours understands it in this way. The computational process may be able to exhibit this, but it doesn't recognize that it is dealing with abstractions -- principally because it cannot recognize one. One might say it processes blindly. It can have no meaning to the process.

"I hope the above makes clear what I consider as the boundary between external reality and abstraction in the mind. I regard them all as part of reality (because my thoughts are real). I would say abstract and concrete are opposites."

Of course, abstract and concrete are opposing ideas, but what makes something abstract is that it can be dealt with independent of the concrete representation. Alternatively, we can think conceptually, or theoretically. Computers cannot do this, as far as I know. We do not need to substitute specific values for the variables in order to make sense of them. To the extent that computers do this, I suspect they do it only formally. It is purely a formal language that is being considered here. It wasn't derived from an actual abstraction, from concrete instances. It is we who have defined the formal language that is used by the process which it, rather blindly -- i.e., mechanically -- executes.

"I'm having a hard time with this one. First I think perception is a pre-requisite for knowledge is a pre-requisite for consciousness."

I make the distinction between sensory perception and sensory reception. The former includes consciousness, the latter doesn't.


"I would agree that consciousness "perceives" self but I don't think that is needed for perception in general."

Consciousness has two modes, direct and reflected. In reflected mode, it notices the "I." Otherwise it doesn't. As Heidegger says, we are "thrown into" a world.

"My definition of perception is the process that occurs when sense data is abstracted."

I think your point is that we throw away most of the data that reaches us. However, I don't think perception is sufficient for this. We need a mechanism that organizes perceptions and this is what our conceptual scheme does.

"Notes: 1. I believe one can view the mind as dealing with a number of layers of abstraction, perception being required to pass from one layer of abstraction to the next."

I think you are making perception do too much work. I do respect that we tend to say "I see your point" when we really mean we understand it. This is because, historically, the terms used in ordinary language draw from a primitive understanding of how our mind is constructed. To develop a theory of mind, one needs to develop a more sophisticated vocabulary, I think.

"I agree with you in the context of declaring the data types and structures in a computer progam. Perhaps my poor analogy again but if you take the physical structure of a neuron you can say that the physical location of the dendrites (which are pointers) enables them to sense data at that location - like a memory read from a pointer address. So, what you would declare in a computer program is implicit in the physical brain structure. If all brain cells were uniform in type we might say:"

I think it would be better to use the distinction you earlier made between representation and symbol. Does a neuron both represent data and act as a symbol of some other data?

struct dendrite { const void *input; };  /* each dendrite is a pointer to the input data for the cell process */
struct axon;                             /* declared elsewhere */
struct cell {
    struct dendrite dendrites[DENDRITES];
    void (*cell_process)(struct cell *);
    struct axon *axons[AXONS];
};
struct brain {
    struct cell cells[CELLS];
    struct sense_layer *sense_data_layers[SENSORS];
};

Because of the generality here, there is no way to determine whether a neuron can actually do anything. That is, there is a need to consider a synchronizing capability in which more than one input is received, but not necessarily simultaneously, which in computation is performed by a clock. I understand neurons are state machines that operate on the principle of tolerance: they can absorb a fair amount of "input" without otherwise producing any "output." More significantly, though, I don't think neurons themselves deal with sophisticated data structures. The complexity is rather in their global nature. As such, neurons themselves are blind.

"It seems to me that cells seek out relationships and the other cells that need to know them through migration of axons etc."

This you would have to demonstrate, I think.

"Agreed, and I think we need to understand abstraction better....
A bitwise & operation performs an abstraction - you take two pieces of data and the result is only understandable if you know the abstraction process. That something has been abstracted doesn't mean that its representation doesn't exist in space/time. I can see the results of abstraction in a computer output register but it doesn't mean a thing unless I know the modus operandi of the computer."

I don't know. Maybe you have something here, but it seems a bit obscure to me.

"I'm confused as to what the gap is and the benefits of it being filled in."

The idea here is that there is a gap between reality and our response to it. Given this, we don't stand much of a chance to deal with it unless we have compensating mechanisms. In the absence of these compensating mechanisms it is unlikely that we would survive for very long, thinking of evolutionary pressures here.

The gap is a time lag. I'm not just reacting to the environment, I'm responding to it. And it takes time to develop the response. If the response is poor because I don't have the capacity to deal with it in real time, then I will probably not survive. That's the argument anyway.

"Taking the last point first, I think senses can be deceived without the need for any consciousness."

Perhaps you can explain this. You may be aware that we have the capability of receiving information through one eye but not the other, where we are conscious of one but not the other. If so, can you tell me how the eye that receives data without our being aware of it is (or can be) deceived by what it receives?

"On the first point, perhaps this is just semantics. I'm saying illusions exist and are therefore real. I agree they're only imaginary (kinda by definition). I guess you're saying they represent something that doesn't exist in external reality, which I agree with."

What is the distinction between external reality and reality? An illusion is that which our mind represents as unreal because of the way it works with reality. In order to represent an illusion as real, you are thinking of the mind in some "inner sense" which isn't about reality, but about its own representations. Is the representation real? It is really a representation. But is it a representation of what's real? This is the key point about our mind. We are interested in our minds because of the peculiar relation they have with the world. The mind itself is a subject that we can't get much of a handle on because of its first-person basis. You have a great desire to think of it from a third-person basis, i.e., scientifically. But if you do this, such words as "illusion" and even "reality" lose all meaning.

"Agreed, and furthermore I think it likely that there are undiscovered hard wired aspects of our system of perception that deceive us, we have optical illusions so why not logical illusions. We need to build working models to find out exactly why we think what we think."

What then is an illusion, if it can be a "logical illusion" or any other kind than a phenomenological one?

"I think most dog owners would say their dog knows when its been bad. (My cat does, without any cues from me, I believe!)"

Though I doubt they know it; I rather suspect they are principally behaving on the basis of rewards and punishments.

"Disagree! I think a lack of understanding arises simply because people refuse to consider the mind and its operation as something that can't be explained. I think your second statement hard to defend - a tree can fall in the forest even if no-one is there to see it. Things don't "appear" in that sense, they are merely observed."

In what way is the tree that is not seen, heard, etc., observed to fall? What is it you have in mind here?

owleye
owleye is offline  
Old 04-08-2002, 07:22 PM   #70
Regular Member
 
Join Date: Feb 2002
Location: Home
Posts: 229
Post

DRFseven...

"Translation: You don't know. My response remains unchanged because you can't give me a single reason to call our thoughts free, other than that it's a convenient figure of speech to distinguish the abstract and conceptual kind of thinking humans do with the kind that other animals do."

Assuming this is true, why isn't it acceptable?

"It's easy to call our kind of decision-making "free will" by compartmentalizing the way we think of freedom."

What's wrong with this?

"For eveything else, free means one thing; for decision-making, it means another. I am not free if I am bound by chains; I am free if I am bound by experience. I have no quarrel with the statement "I did it of my own free will" as long as there is tacit understanding that it only means *I* chose to do it, instead of someone else choosing. As long as people understand that we are not free to choose how our concepts are formed and that it is useless to talk about decisions without implicating concepts."

Have I suggested anything to the contrary?

"If it really is your opinion that our concepts are not formed by experience, what do you think forms them? You say it's not a god, but if it's not experience, there has to be some kind of ghost in the machine."

I've never indicated that concepts are not formed by experience. However, I'm not an empiricist of the Humean kind; I am instead impressed with Kant's ideas. We need a conceptual basis for the possibility of our having experience in the first place. Kant called this conceptual basis (or, alternatively, these epistemic conditions) for experience the pure categories of the understanding (not to be confused with innate knowledge).

owleye
owleye is offline  
 
