FRDB Archives

Freethought & Rationalism Archive

The archives are read only.


Old 01-27-2003, 11:14 AM   #31
jj
Senior Member
 
Join Date: Feb 2002
Location: Redmond, Wa
Posts: 937
Default

Quote:
Originally posted by AnthonyAdams45
Nope, not free will, just not pre-QM view of determinism. Still determinism, just not the blocked and locked version of the past.

There ya go
Hm, again, I guess we'll have to disagree. I think that excludes determinism, because nothing is determined until after it's done.

I do not necessarily assert, however, that this represents free will. You raise some other issues in the thread about randomness that I will address separately. That subject is long and involved; I do have some experience with it, in the form of sophisticated optimization algorithms and tree-pruning (simple learning) programs, and I most certainly do not agree that we learn DESPITE randomness.

But that takes some discussion, which I will try to get into when I have the time to do it.
jj is offline  
Old 01-27-2003, 11:43 AM   #32
jj
Senior Member
 
Join Date: Feb 2002
Location: Redmond, Wa
Posts: 937
Default

This will be somewhat long, so please bear with me!

Quote:
Originally posted by excreationist
I think it is clearer to say that neurons fire once their inputs reach a certain threshold... (and some inputs can inhibit the neuron's firing).
Indeed, and the question of "do we reach this threshold" is directly affected, at the physical-chemistry level, by the quantum states of the electrons and ions functioning as the bearers of the information that sets the threshold, so in fact neurons fire with a random component.

This is not the only random component; there is also a thermodynamic component, but I haven't brought this up in the determinism thread, because that is subject to "determinism" if we accept the no-random-QM, infinite-precision-and-accuracy take on mechanics.

The thermodynamic component also requires randomness, though, if we discard the idea of infinite resolution. This is due not to QM directly, but due to the fact that charges are quantized. This quantization itself introduces unavoidable noise.


The input levels can be affected by "noise" though... which would be random quantum fluctuations. If the inputs aren't near the threshold level and the noise is fairly minor (as it would be compared to large-scale molecules) then the noise would hardly ever affect the workings of the neurons. They'd be mostly deterministic.

Oh, absolutely, there would be a large "predictable" component. This is not, strictly speaking, deterministic; rather, it's something more like a Markov process, where the previous state(s) influence the probabilities of the next state.
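A Markov process of this kind can be sketched in a few lines of Python; the two states and their transition probabilities below are invented purely for illustration, not taken from any neural model.

```python
import random

# A two-state Markov process: the probability of firing next depends
# on the current state (the states and numbers are made up).
P_FIRE_NEXT = {
    "fire": 0.2,   # after firing, the unit is likely to rest
    "rest": 0.6,   # after resting, it is likely to fire
}

def step(state, rng):
    """Draw the next state, conditioned on the previous one."""
    return "fire" if rng.random() < P_FIRE_NEXT[state] else "rest"

rng = random.Random(0)
state = "rest"
history = []
for _ in range(10):
    state = step(state, rng)
    history.append(state)

# Statistically predictable, but no individual run is determined.
print(history)
```

The statistics of long runs are fixed by the table, yet any single trajectory is only probabilistic, which is the distinction being drawn above.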

Something to remember here is that once ONE state is changed, the entire outcome from that point can diverge over time to cover effectively almost any other possible outcome. This seems like an extreme statement, but both neural-net research and basic mathematics make it a fairly clean one.


I think it is kind of like a TV set... there might be occasional interference with the signal - mostly due to deterministic things, like other radio waves - but the picture gets through pretty well.

The place where this analogy fails is that after the "interference" there is no guarantee in a system with MEMORY (in this sense a TV program has no memory, it is a predetermined (that word!) sequence) that the system will ever return to the same state, in fact nearly all of the time it will NOT ever return to something like the same state. The TV, however, will return to something undetectably near the same state very quickly, within frames.

Or it could be like a computer that has RAM that doesn't work 100%.... the computer might work for minutes at a time seemingly perfectly, but then programs might crash, due to a crucial piece of information being in the faulty part of the RAM. The faults in the RAM would mostly come from problems during manufacturing... or at least that would have been the case when RAM had larger transistors. (They are approaching an atomic scale now)

This is not germane, because the average computer system is not arranged to be self-correcting, and neural nets (for one thing) are intensely self-correcting, and have a set of properties that can "keep almost the same idea around". This is unlike a RAM bit error (in a non-ECC RAM), where either it's right or it can be arbitrarily wrong.

So I'm saying quantum noise just adds a tiny little bit of extra noise, that can of course accumulate over time (the butterfly effect) but it wouldn't affect things much during short periods of time.

No, that's not right. This is true for a TV, but for a decision network, neural net, Markov process, or whatever, ONE divergence can create an entirely new outcome that is not small, but that rather is huge and not even recognizably like the "other" path at all. The evidence for this is much too complex and huge to put here, but you can find lots of references on the net. Chaos theory is only part of this.
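The divergence claim can be illustrated with the logistic map, a standard example from chaos theory; the choice of map and parameter is mine, purely for illustration of the point, not a model of any neural system.

```python
# Two trajectories of the chaotic logistic map (r = 4.0), started one
# part in a billion apart, to illustrate how a single tiny divergence
# can grow until the paths are unrecognizably different.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.4, 0.4 + 1e-9
gap = 0.0
for _ in range(60):
    a, b = logistic(a), logistic(b)
    gap = max(gap, abs(a - b))

# The tiny initial difference has been amplified by many orders of
# magnitude relative to where it began.
print(gap)
```

The average doubling of the separation per step means that within a few dozen iterations the two histories share essentially nothing, which is the "not even recognizably like the other path" behavior described above.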

At least one of the interpretations involves hidden variables. So it is conceivable that there are hidden variables that we can't measure.

Agreed. That is certainly possible. There could be hidden variables that can NEVER be measured inside this universe. How much this differs, in the view of this universe, from randomness is debatable, and the assertion may be unfalsifiable, which makes the science of it a wee bit tough.

I don't think so... it's kind of like a billiard ball with some dust on it. It won't act like a perfect sphere, but it is *approximately* deterministic, which means that good players can do amazing pool shots.

Agreed, in some sense. You'll find that this is what I'm saying with my references to Markov chains, for instance, but in a sense where the divergence, once created, can grow very rapidly.
There is quite a lot of evidence for natural processes of that sort, too.

No, learning begins with memory, and then some inputs and a mechanism that tries to get the system's predictions to match or make sense of the inputs - to find the patterns. Something like that.

Given exactly the same inputs, I think neurons would behave the same way, assuming that the sum of the inputs isn't right near a threshold and there isn't a lot of noise/interference.

Well, we disagree for several reasons. The first is that very, very often the sum of inputs IS close to a threshold, especially when you don't know the answer. (yes, really!) In such cases, you can see two very different results, i.e. "yes" and "no", from exactly the same inputs.
The second is that while we try to find the patterns, we must experiment with finding patterns, and as we learn, we must make the patterns more complex. If we have no randomness at all, we are limited in the kinds of patterns we can try. This may seem odd, but I'll get to the performance of optimization algorithms in a bit.

BTW, it appears that neurons in our brains are more complex than what I was saying... there are gasnets that simulate this - basically it involves neurons sending messages through gases as well, rather than only relying on signals between directly connected neurons. I guess that gas would be much more influenced by quantum randomness - since it is only a small molecule - NO (nitric oxide).

Whoa! Our nerves interconnect, but not "directly". All interconnections are effected by things like ions that create electromagnetic fields, small molecules that affect the cell membrane, larger molecules that affect the sensitivity of the membrane, and so on. There is no direct connect. Electrons and single-atom ions are some of the primary "connections". They are about as small as we get in the normal-energy world, and good thing that!

So I guess after all the brain is influenced by quantum fluctuations to *some* extent. And sometimes this interference builds up and gets bad - e.g. they have a mental illness. This would usually be blamed on genetics and things like stress, etc, rather than quantum fluctuations since those other sources of interference would probably have a much greater effect....

I don't think mental illness is much related at all, except when random processes get the neural net started off in the wrong direction at the start. I think it's pretty clear that large-molecule ratio or sensitivity imbalances, etc, that affect the regulatory methods that determine how we learn, on the other hand, are very involved.



I'd say that our brains learn *despite* random processes... well I guess randomness is good if you want a lot of creativity (and the risk of insanity)... but I don't think neural nets really need to be initialized to random values and there doesn't need to be lots of extra noise for neural nets to learn.

It's not a case of initialization.

Now, to randomness.

Some of the best optimization methods around use randomness: things like 'genetic' or 'evolutionary' algorithms, the various thermodynamic (freezing, etc) algorithms, and even some of the older algorithms that would add random probes into a deterministic method at times, are shown quite conclusively to be BEST at some of the most obnoxious optimization problems around. It is in fact the randomness, which effectively allows the algorithm to try new hypotheses, that makes such algorithms effective; basically, in a mathematical sense, they can span more of the solution space, and more quickly, of course with the ever-present risk of a less-than-perfect outcome.
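A minimal sketch of one of these thermodynamic ("freezing") methods, simulated annealing, is below; the objective function, step size, and cooling schedule are all invented toy choices, not any particular published algorithm.

```python
import math
import random

# A toy simulated-annealing run. The objective, step size, and cooling
# schedule are made-up illustrations of the general technique.
def objective(x):
    # A bumpy 1-D function with many local minima.
    return x * x + 10.0 * math.sin(3.0 * x)

def anneal(rng, steps=5000, temp=5.0, cooling=0.999):
    x = rng.uniform(-10.0, 10.0)
    best = x
    for _ in range(steps):
        candidate = x + rng.gauss(0.0, 0.5)   # a random probe nearby
        delta = objective(candidate) - objective(x)
        # Always accept improvements; sometimes accept worse moves at
        # random - this is what lets the search escape local minima.
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            x = candidate
        if objective(x) < objective(best):
            best = x
        temp *= cooling   # "cool" so random excursions shrink over time
    return best

best = anneal(random.Random(42))
print(best, objective(best))
```

The random acceptance of occasionally-worse moves is exactly the "trying new hypotheses" role described above: a purely greedy, deterministic descent would stall in whichever local minimum it first reached.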


Having noise makes the strengths of the inputs/outputs (inhibitory/excitatory(?)) signals stronger in the neural network, and it can make it "guess" the right outputs sometimes... but it also makes it make more mistakes... though after learning it makes fewer and fewer mistakes. (Assuming noise/interference is present; but in this applet at least, noise isn't essential)

I removed a lot here that I don't have much trouble with. The fact that it makes fewer and fewer mistakes (even with noise) is simple learning. In fact, the "fuzzy" nature of neural nets, in the presence of uncertainty or noise, is one of their great strengths.

Your point about removing one neuron making large differences sometimes is interesting, too, and I'm sorry I've elided it. This shows, basically, what can happen when one neuron temporarily doesn't work the way it might, due to probabilistic behaviors. You can get new behaviors that you never saw before. If such behaviors are advantageous, well, now you have a "leap" in learning.

This is why I think randomness is absolutely essential. It means we get much farther at the risk of making worse mistakes.




No, I think the main reason we think different things is because we have had different past experiences (traumas, etc), different current experiences and different brain chemistry, which I think we use to work out what we desire and our thoughts are a reflection of our desires (and experiences).
I'm not sure of that. I've used a lot of optimization algorithms. I haven't used neural nets myself for research purposes, but I have studied a lot of decision methods, decision theories, optimization methods and theory, Markov processes, and the like, and I'm convinced both of the need for, and the presence of, noise.

I think that things like twins show that there is some preordained part to behavior, but I also think that how they diverge is a good example of how random events as well as random processes cause divergence.

The key being that outcomes can change radically when something is on the "threshold".
jj is offline  
Old 01-27-2003, 11:57 AM   #33
AnthonyAdams45
Regular Member
 
Join Date: Jul 2000
Location: Florida
Posts: 156
Default

Quote:
The key being that outcomes can change radically when something is on the "threshold".
I agree that you're presenting a wonderful argument against old school determinism/fatalism/ etc. But your learning net filled with randomness appears to me to have no more free will than a storm cloud in a weather front; not predictable, but still deterministic.

Again, maybe it's a definition thing. For me, determinism is the simple-minded name for a group of theories which claim that human actions are caused. Or, restated, that the causal chain traces back before the decision event. "Free will," as it is usually defended, states that my "choice" is the only causal force of any importance. As your theory does not seem to embrace the latter definition, but rather presents a far more sophisticated and scientifically defensible case for the former, I call it "determinism." Really good, really encompassing determinism that understands the contemporary problems associated with causality, but still determinism.

What I still want to know is, why would you want to call it something else?
AnthonyAdams45 is offline  
Old 01-27-2003, 02:04 PM   #34
jj
Senior Member
 
Join Date: Feb 2002
Location: Redmond, Wa
Posts: 937
Default

Quote:
Originally posted by AnthonyAdams45
I agree that you're presenting a wonderful argument against old school determinism/fatalism/ etc. But your learning net filled with randomness appears to me to have no more free will than a storm cloud in a weather front; not predictable, but still deterministic.

Again, maybe it's a definition thing. For me, determinism is the simple-minded name for a group of theories which claim that human actions are caused. Or, restated, that the causal chain traces back before the decision event. "Free will," as it is usually defended, states that my "choice" is the only causal force of any importance. As your theory does not seem to embrace the latter definition, but rather presents a far more sophisticated and scientifically defensible case for the former, I call it "determinism." Really good, really encompassing determinism that understands the contemporary problems associated with causality, but still determinism.

What I still want to know is, why would you want to call it something else?
I'm not sure we entirely disagree, but perhaps we are down to semantics.

I take "determinism" in the strong sense, that everything was set from "the word go".

I take "free will" to mean much what you take it to mean, I think.

This means that, as I see it, the situation is neither free will nor determinism, because I see "will" arising out of randomness, and this randomness is not "determined from the word go".

I don't, by the way, regard the system as deterministic, but again perhaps we are having a semantic difference, because to me deterministic would imply the strong sense of determinism and would be incompatible with randomness of any sort.

A Markov process is not deterministic, for instance, in the sense I would normally use the word. The probabilities that define the system can be deterministic, the system itself can be time-invariant, but the OUTCOME is still probabilistic, and therefore not determined (ergo not deterministic).
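This distinction can be sketched directly: below, the transition rule is a fixed, deterministic table, and a seeded run is perfectly repeatable, yet the outcome of any run is still only a sample from the probabilities. The states and numbers are invented for illustration.

```python
import random

# The rule itself is deterministic and time-invariant: from either
# state, the probability that the next state is "A" is fixed forever.
P_NEXT_IS_A = {"A": 0.3, "B": 0.7}

def run(seed, steps=20):
    """Sample one trajectory of the chain using a seeded generator."""
    rng = random.Random(seed)
    state, path = "A", []
    for _ in range(steps):
        state = "A" if rng.random() < P_NEXT_IS_A[state] else "B"
        path.append(state)
    return path

# Same fixed rule, different random draws: the OUTCOME is a sample,
# not something the rule alone determines.
print(run(seed=1))
print(run(seed=2))
```

Repeating `run(1)` reproduces the same path (the probabilities are deterministic), but nothing in the table fixes which path a fresh sequence of draws will produce.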

Words!
jj is offline  
Old 01-27-2003, 03:17 PM   #35
AnthonyAdams45
Regular Member
 
Join Date: Jul 2000
Location: Florida
Posts: 156
Default

I will keep reading, but you and I are now on the same page.

If that's any consolation.
AnthonyAdams45 is offline  
Old 01-27-2003, 06:10 PM   #36
excreationist
Veteran Member
 
Join Date: Aug 2000
Location: Australia
Posts: 4,886
Default

jj
Indeed, and the question of "do we reach this threshold" is directly affected, at the physical-chemistry level, by the quantum states of the electrons and ions functioning as the bearers of the information that sets the threshold, so in fact neurons fire with a random component.

This is not the only random component, there is also a thermodynamic component, but I haven't brought this up in the determinism thread, because that is subject to "determinism" if we accept the no-random-QM, infinite precision and accuracy take on mechanics.

The thermodynamic component also requires randomness, though, if we discard the idea of infinite resolution. This is due not to QM directly, but due to the fact that charges are quantized. This quantization itself introduces unavoidable noise.

Yeah, thermodynamic things would be yet another source of interference... though they are somewhat deterministic on a larger scale.

Something to remember here is that once ONE state is changed, the entire outcome from that point can diverge over time to cover effectively almost any other possible outcome. This seems like an extreme statement, but both neural-net research and basic mathematics make it a fairly clean one.
Actually I think in neural net learning, they converge on a "solution" (it extracts patterns) once it is repeatedly trained, and these would have a limited number of equivalent possibilities... perhaps like how x squared = 4 has two solutions, 2 and -2. On the other hand, for things that don't learn an identical thing and hence converge to some degree, tiny differences in the initial conditions would lead to huge differences later. (e.g. the "butterfly effect", where the flapping wings of a butterfly could eventually lead to a tornado on the other side of the globe - which wouldn't have occurred in an alternate history without the butterfly)

This is not germane [relevant?], because the average computer system is not arranged to be self-correcting, and neural nets (for one thing) are intensely self-correcting, and have a set of properties that can "keep almost the same idea around". This is unlike a RAM bit error (in a non-ECC RAM), where either it's right or it can be arbitrarily wrong.
My point was that a computer that has memory that is partly faulty can work in a deterministic way most of the time... much of the time the faulty memory might be used to store non-critical information and so not be noticeable by the user... I was talking about semi-determinism.... I guess you could think about it in terms of probabilities, but in fact it is just one part of the memory that is faulty, the rest would work perfectly. (Or relatively perfectly - there might be extremely rare instances of faults in the supposedly perfectly working RAM)

[So I'm saying quantum noise just adds a tiny little bit of extra noise, that can of course accumulate over time (the butterfly effect) but it wouldn't affect things much during short periods of time.]
No, that's not right. This is true for a TV, but for a decision network, neural net, Markov process, or whatever, ONE divergence can create an entirely new outcome that is not small, but that rather is huge and not even recognizably like the "other" path at all. The evidence for this is much too complex and huge to put here, but you can find lots of references on the net. Chaos theory is only part of this.

Quantum noise would only alter the outcome of neurons if the input sum is right at the threshold. And because of other noise (non-quantum fluctuation noise) the "weights" of the inputs would be quite strong so it would be rarer that it would be right near the threshold. (in my applet you can see how extra noise makes the "weights" of the neurons stronger)
I think the noise of other things (like thermodynamic stuff or whatever) would be stronger than the immediate effect of quantum fluctuations. An analogy could be a dusty billiard ball with a little block of wood glued to it. The dust would affect the path of the billiard ball a bit, but the effect of the wood would be more obvious.

Well, we disagree for several reasons. The first is that very, very often the sum of inputs IS close to a threshold, especially when you don't know the answer. (yes, really!) In such cases, you can see two very different results, i.e. "yes" and "no", from exactly the same inputs.
I think that it is easier to see the effects of noise that isn't directly from quantum fluctuations on the result...
e.g. say the threshold is +1.000000, non-quantum-fluctuation noise (thermodynamics or whatever) is +/- 0.01, and quantum-fluctuation noise usually has an effect of +/- 0.00000001 or less.
Now say the inputs added up to 1.000001..... the non-quantum-fluctuation noise would have the most say in whether it stays above the threshold or goes below it. But there could be rare times when instantaneous quantum fluctuations have an effect... I think they would usually average each other out though... like how particles in the air are actually moving around at ridiculous speeds, but overall there isn't movement in one direction (well, there's air pressure though...)
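These made-up numbers can be checked with a quick simulation; modeling both noise sources as uniform over the stated ranges is my own simplifying assumption.

```python
import random

# Simulate the example above: input sum 1.000001 against threshold
# 1.000000, with "classical" noise of +/- 0.01 and quantum noise of
# +/- 0.00000001 (uniform distributions are an assumption here).
THRESHOLD = 1.000000
INPUT_SUM = 1.000001

rng = random.Random(1)
classical_flips = 0   # trials where classical noise changes the decision
quantum_flips = 0     # trials where quantum noise alone changes it
TRIALS = 100_000
for _ in range(TRIALS):
    c = rng.uniform(-0.01, 0.01)
    q = rng.uniform(-1e-8, 1e-8)
    baseline = INPUT_SUM >= THRESHOLD
    if (INPUT_SUM + c >= THRESHOLD) != baseline:
        classical_flips += 1
    if (INPUT_SUM + q >= THRESHOLD) != baseline:
        quantum_flips += 1

# The classical noise flips the decision roughly half the time; at
# this 1e-6 margin, the quantum term alone never can.
print(classical_flips / TRIALS, quantum_flips)
```

With a margin of 1e-6 above threshold, the +/- 1e-8 quantum term can never reach back across the threshold on its own, while the +/- 0.01 classical term dwarfs the margin, which is exactly the dominance being described.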

The second is that while we try to find the patterns, we must experiment with finding patterns, and as we learn, we must make the patterns more complex.
We refine our patterns... there is no need to make them any more complex than they actually are though.

If we have no randomness at all, we are limited in the kinds of patterns we can try. This may seem odd, but I'll get to the performance of optimization algorithms in a bit.
I think we try different things because we have a fundamental desire for a certain amount of "newness"... as well as "connectedness", where we want to understand how the world works... I think the different ideas we have begin from ideas that have the strongest associations with recently accessed information... we'd then see how well that fits our goals (e.g. to think up a new idea) and if it needs work we'd automatically fire off some more associations to see if we can find a better solution. I think the main reason our thoughts seem random is because we have a lot of varied stimulus/experiences which trigger lots of different ideas and create lots of different associations.

I think that whenever we have experiences, all of the elements are associated together. So if you were having a Christmas dinner that made you feel happy, you'd associate all of those elements together. So if you had to "brainstorm" ideas based on the idea "Christmas", you would think of the Christmas dinner first, assuming that was right at the "front" of your mind. (It was associated with the very near past) But if this was months later, there would probably be stronger associations... like Santa and Christmas trees. Santa was my first association, and then Christmas trees was a result of "Christmas" and "not Santa".

Emotions are also associated with elements of the experience... that's what ads do when they show an ad to try and trigger your emotions... then those triggered emotions are associated to some degree with their product. Your annoyance at the ad might also be triggered and perhaps outweigh any positive emotions triggered by the ad. But the association between a positive emotion and their product would still be there, underneath the dominating negative association... so I think that our thoughts aren't as random as you seem to think.

Whoa! Our nerves interconnect, but not "directly".....
I mean that the neurons in our brains mostly communicate with other specific neurons... gas on the other hand is chaotic.

Some of the best optimization methods around use randomness: things like 'genetic' or 'evolutionary' algorithms, the various thermodynamic (freezing, etc) algorithms, and even some of the older algorithms that would add random probes into a deterministic method at times, are shown quite conclusively to be BEST at some of the most obnoxious optimization problems around. It is in fact the randomness, which effectively allows the algorithm to try new hypotheses, that makes such algorithms effective; basically, in a mathematical sense, they can span more of the solution space, and more quickly, of course with the ever-present risk of a less-than-perfect outcome.
A little while earlier I was explaining why I think that our thoughts aren't very random... they are influenced by our minute-to-minute experiences, which can be very varied... and these experiences can have a huge amount of variety (or involve mindless routine...) - this variety is quite chaotic... I'm saying our creativity is more dependent on our minute-to-minute experiences (and in turn our memories) and our associations, rather than it mostly being a result of neurons firing randomly. There is still a lot of randomness, but I think our trains of thought have a large amount of structure...

Your point about removing one neuron making large differences sometimes is interesting, too, and I'm sorry I've elided it. This shows, basically, what can happen when one neuron temporarily doesn't work the way it might, due to probabilistic behaviors. You can get new behaviors that you never saw before. If such behaviors are advantageous, well, now you have a "leap" in learning.
Well, I put that in my applet to show how people can have brain damage without any memory loss, and how others can lose all or most of their memories during brain damage, and how they can relearn it, even though they might only have half of the neurons they had before. But sometimes you can't teach the neural network all the patterns if crucial neurons aren't working... though perhaps this would be much less of an issue in multiple-layer neural networks. I think neurons dying would mostly be bad, since the network would have previously learnt a lot and suddenly that part of it would be lost... but maybe it learnt something wrong and now it no longer has that wrong information... a much more likely thing would be that the misguided knowledge results in some kind of pain, or it frustrates some desires, and so the animal/person tries some different strategies to see if it can overcome that pain somehow...

[No, I think the main reason we think different things is because we have had different past experiences (traumas, etc), different current experiences and different brain chemistry, which I think we use to work out what we desire and our thoughts are a reflection of our desires (and experiences).]
I'm not sure of that. I've used a lot of optimization algorithms. I haven't used neural nets myself for research purposes, but I have studied a lot of decision methods, decision theories, optimization methods and theory, Markov processes, and the like, and I'm convinced both of the need for, and the presence of, noise.

Those decision theories would probably rely on a few simple predicates(?) - on the other hand, our brain has about 100 billion neurons, each connected to thousands of others, on which the information we use is stored. As I said earlier, I think we continuously associate all of the elements of our experiences together - and this gets stored in our brains (by adjusting the weights of neurons). When we come up with ideas we would usually navigate quite deeply through chains of associations (which can include learnt problem-solving strategies and templates) in order to find our solutions... and as I said earlier, I think the place we begin our search at is based on the current contents of our working memory (it is our "stimulus"), and this changes, and that's why our solutions to problems can change. (And we also learn from previous searches)

I think that things like twins show that there is some preordained part to behavior, but I also think that how they diverge is a good example of how random events as well as random processes cause divergence.
Well they would have different upbringings and experiences too... I think this has a very great effect on the formation of their memories/personalities, etc. I think we mostly *learn* things (as far as thoughts go) - rather than it being explicitly encoded in our DNA. (Which is one of the main similarities between twins)
excreationist is offline  
Old 01-27-2003, 08:15 PM   #37
jj
Senior Member
 
Join Date: Feb 2002
Location: Redmond, Wa
Posts: 937
Default To excreationist...

I think you misunderstand the amount by which I think thoughts are random. I think there is a random component, a SMALL random component. It only takes a SMALL random component. We're going to think 2+2=4, nearly all the time, yeah. No problem.

The random component will affect things near threshold in a very disproportionate fashion, because near threshold the variance of the various "noises" is able to push an outcome one way or the other at random.

This is how the optimization algorithms work, by the way, things that create large changes in the wrong direction very rarely get changed, but the (often large) number of nearer-neutral variables get swapped around more.

Now, quantum noise isn't as small as you think it is in the brain, because all chemical reactions progress via quantum interactions. It's similar (though not identical) to an electrical circuit, where the charge on the electron is significant in circuit design.

There are lots of electrons involved, but it only takes ONE to finally push things over threshold.
jj is offline  
Old 01-27-2003, 08:57 PM   #38
excreationist
Veteran Member
 
Join Date: Aug 2000
Location: Australia
Posts: 4,886
Default Re: To excreationist...

Quote:
Originally posted by jj
I think you misunderstand the amount by which I think thoughts are random. I think there is a random component, a SMALL random component. It only takes a SMALL random component. We're going to think 2+2=4, nearly all the time, yeah. No problem.
Yeah, our memory of 2+2=4 is quite systematic, and I'm saying that our creativity is quite systematic too... though it is far more complex (it can involve a whole lifetime of associations from highly varied experiences)... (I guess you agree though)

Quote:
...This is how the optimization algorithms work, by the way, things that create large changes in the wrong direction very rarely get changed, but the (often large) number of nearer-neutral variables get swapped around more.
Converging neural nets work like that too.

Quote:
Now, quantum noise isn't as small as you think it is in the brain, though, because all chemical reactions progress due to the quantum interactions. It's similar (but not the same) to charge in an electrical circuit, where the charge on the electron is significant in circuit design.

There are lots of electrons involved, but it only takes ONE to finally push things over threshold.
Here is some information about neurons... you may know most of it already.
Quote:
It says:
Neural Membrane -
....Single sodium pump maximum transport rate = 200 Na ions/sec; 130 K ions/sec; typical number of sodium pumps = 1000 pumps/square micron of membrane surface; total number of sodium pumps for a small neuron = 1 million; density of sodium channels (squid giant axon) = 300 per sq. micron.
So there would be thousands (or millions?) of ions going in and out of neurons when it is receiving input or firing. Perhaps quantum fluctuations could stop one or two of the ions... but that would only make a difference at the threshold (as you were also saying)... and that would happen when a person is unsure or indecisive anyway. Perhaps later on they would find the real answer and correct that indecisiveness...

So anyway, I agree that our brains aren't completely deterministic: they are directly affected by quantum fluctuations (genuine randomness) from time to time, and indirectly affected by quantum fluctuations very often... radioactive decay, etc, might involve genuine randomness... (I don't know much about quantum physics really)
excreationist is offline  
Old 01-27-2003, 10:30 PM   #39
Veteran Member
 
Join Date: Dec 2002
Location: Gatorville, Florida
Posts: 4,334
Thumbs down Wrong By Orders Of Magnitude...

Quote:
Originally posted by jj
It's safe to say that QM DOES in fact make our brains random at some level. It also does seem that QM is in fact describing a fundamentally random process, not a process that can not be observed.
Will you please post a copy of your paper proving this assertion? If you do, your Nobel prize is waiting.....

The human body is a classically causal instrumentality. External causes (e.g., a flame placed under a finger) will affect various sensors that our bodies possess; those sensors will transmit information to our brains, and, whether we even consciously recognize it or not, our brains will react appropriately to the stimuli that are applied. I can't see any room for randomness in this process. In fact, randomness would be a counter-survival trait, and thus it would benefit us to breed randomness out at virtually all costs. Thus, the evolutionary end of cognitive development should be a fully causal brain. You can argue over how far we are along the path to that desired result, but I don't think that you can really argue that:
Quote:
Ergo, there exists randomness in decisions. Now, one can argue if that represents free will or not, but it most surely refutes determinism completely.
You could, if your premise were true. But I would argue that your premise is an almost-naked hypothesis, with nothing but a few parlor-trick physics lab experiments to back it up.

Let's approach this from a slightly different direction. Let's consider the case of the Intel Pentium 4 Processor Chip. Does anybody here believe that the Pentium 4 Processor Chip is really making random decisions as it runs through its programming? I surely don't (and I've been programming since about 1963, so I think that I do know how computers work by now.....)

Yes, the Pentium 4 uses electrons traveling through a molecular substrate in order to "make decisions," but it uses a heck of a lot of electrons to represent either a one or a zero, and because so-called "quantum randomness" can only affect one electron at a (random) time, even if quantum mechanical theory is correct, the random arrival or departure of a single electron will not make any significant difference in the question of whether or not the electrons (or lack of electrons) represent a zero or a one. In other words, with the quantity of electrons flowing through the quantity of molecules in the semiconductor material, a few random quantum fluctuations here and there (even if they do happen) will not affect the outcome in any conceivable way. The "random noise" is quite simply and completely "drowned out" by the signal level applied (this is the epitome of a "high signal-to-noise ratio").
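The majority-of-carriers argument can be sketched as a toy simulation. Every number here is an illustrative assumption (the per-carrier disturbance probability, carrier count, and read threshold are made up for the sketch, not device physics):

```python
import random

# Toy sketch of the signal-to-noise argument: a logic "1" is carried by
# many electrons, each of which a rare random event might disturb.
# P_FLIP, N_CARRIERS, and THRESHOLD are assumed illustrative values.
P_FLIP = 1e-6        # assumed chance any one carrier is disturbed
N_CARRIERS = 10_000  # carriers representing one logic "1"
THRESHOLD = 0.5      # fraction of carriers needed to still read a "1"

def read_bit(rng):
    """Read the bit after each carrier is independently disturbed with P_FLIP."""
    surviving = sum(1 for _ in range(N_CARRIERS) if rng.random() > P_FLIP)
    return surviving / N_CARRIERS >= THRESHOLD

rng = random.Random(42)
reads = [read_bit(rng) for _ in range(50)]
print(all(reads))  # True: per-carrier noise never flips the aggregate bit
```

With these assumptions, flipping the bit would require thousands of simultaneous single-carrier events, which is why averaging over many carriers buries the quantum noise.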

Now, do our brains function any differently from that? Again, while we are perhaps less certain of how the human brain works than of how the Pentium 4 works, I think it is reasonable to presume that the electron flows in our nerves and in our brain itself are of sufficient quantity to ensure that the "signal" (thought) drowns out the "noise" (any random variations in our thoughts), keeping our brains just as deterministic in their computed outputs as the Pentium 4 computer chip.
Quote:
Of course, once a mechanism to deal with randomness evolves, and our brains do have such mechanisms, this means that randomness allows multiple paths to the same thing. Since this mechanism is also governed by randomness, this means that eventually one can arrive at some different thing, as a function of random walk plus learning.

I would suggest that once learning can be shown to possibly arise from random events, that free will of some sort is effectively guaranteed. This comes from the chaotic behavior of neural networks, basically it takes a small change to come up with some new decision or "creation".
I think that you have no idea of the number of orders of magnitude which separate measurements of brain function from measurements of any possible randomness due to quantum effects. If you ever do come to understand that difference, then you must also realize what an erroneous post you have given to this board.

== Bill
Bill is offline  
Old 01-27-2003, 10:48 PM   #40
Veteran Member
 
Join Date: Dec 2002
Location: Gatorville, Florida
Posts: 4,334
Default Thanks excreationist...

From the paper linked to by excreationist comes this gem of a paragraph:
Quote:
In the soma and dendrites, point-by-point (spatial) and moment-by-moment (temporal) fluctuations in the transmembrane potential (called graded potentials) are "summed up", or integrated, and something like a running average is computed. This integrative action is referred to as spatial and temporal summation. In this manner individual neurons "consider" and "evaluate" the input they constantly receive from as many as 10000 synapses.
If that isn't a mechanism that would negate any random quantum fluctuations, then I don't understand the English language.....
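The averaging mechanism described in that paragraph can be sketched numerically. The per-synapse signal and jitter sizes below are assumed, illustrative values; the point is only the scaling, that summing ~10000 independent inputs shrinks the relative fluctuation enormously:

```python
import random

# Toy sketch of spatial/temporal summation: integrating many synaptic
# inputs averages away small independent fluctuations.
# SIGNAL_MV and NOISE_MV are assumed illustrative values.
rng = random.Random(0)
N_SYNAPSES = 10_000
SIGNAL_MV = 0.002   # assumed mean contribution of one synapse
NOISE_MV = 0.001    # assumed random jitter on one synapse

one_input = SIGNAL_MV + rng.uniform(-NOISE_MV, NOISE_MV)
integrated = sum(SIGNAL_MV + rng.uniform(-NOISE_MV, NOISE_MV)
                 for _ in range(N_SYNAPSES))

rel_err_single = abs(one_input / SIGNAL_MV - 1)                  # up to 0.5
rel_err_summed = abs(integrated / (N_SYNAPSES * SIGNAL_MV) - 1)  # ~1/sqrt(N) of that
print(rel_err_single, rel_err_summed)
```

Under these assumptions a single input can be off by up to 50%, while the integrated total is off by well under 1%, which is the "running average" doing exactly the noise-negating work described above.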

== Bill
Bill is offline  
 
