Freethought & Rationalism Archive. The archives are read only.
01-27-2003, 11:14 AM | #31
Senior Member
Join Date: Feb 2002
Location: Redmond, Wa
Posts: 937
Quote:
I do not necessarily assert, however, that this represents free will. You raise some other issues in the thread about randomness that I will address separately. That subject is long and involved; I do have some experience with it in the form of sophisticated optimization algorithms and tree-pruning (simple learning) programs, and I most certainly do not agree that we learn DESPITE randomness. But that takes some discussion, which I will try to remember to get into when I have the time.
01-27-2003, 11:43 AM | #32
Senior Member
Join Date: Feb 2002
Location: Redmond, Wa
Posts: 937
This will be somewhat long, so please bear with me!
Quote:
I think that things like twins show that there is some preordained part to behavior, but I also think that how they diverge is a good example of how random events as well as random processes cause divergence. The key is that outcomes can change radically when something is on the "threshold".
01-27-2003, 11:57 AM | #33
Regular Member
Join Date: Jul 2000
Location: Florida
Posts: 156
Quote:
Again, maybe it's a definition thing. For me, "determinism" is the simple-minded name for a group of theories which claim that human actions are caused - or, restated, that the causal chain traces back before the decision event. "Free will," as it is usually defended, states that my "choice" is the only causal force of any importance. As your theory does not seem to embrace the latter definition, but rather presents a far more sophisticated and scientifically defensible case for the former, I call it "determinism". Really good, really encompassing determinism that understands the contemporary problems associated with causality, but still determinism. What I still want to know is: why would you want to call it something else?
01-27-2003, 02:04 PM | #34
Senior Member
Join Date: Feb 2002
Location: Redmond, Wa
Posts: 937
Quote:
I take "determinism" in the strong sense, that everything was set from "the word go". I take "free will" to mean much what you take it to mean, I think. This means that what I see the situation to be is neither free will nor determinism, because I see "will" arising out of randomness, this randomness not being "determined from the word go". I don't, by the way, regard the system as deterministic, but again perhaps we are having a semantic difference, because to me "deterministic" would imply the strong sense of determinism and would be incompatible with randomness of any sort. A Markov process is not deterministic, for instance, in the sense I would normally use the word. The probabilities that define the system can be deterministic, the system itself can be time invariant, but the OUTCOME is still probabilistic, and therefore not determined (ergo not deterministic). Words!
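To make the Markov point concrete, here is a minimal sketch in Python (a made-up two-state chain; the numbers are arbitrary). The transition rule is fixed and time-invariant, yet repeated runs from the same start state produce different trajectories: deterministic probabilities, probabilistic outcomes.

Code:
import random

# Fixed, time-invariant transition rule: the probabilities themselves
# are "deterministic". P[s] is the chance of moving to state 1 from s.
P = {0: 0.3, 1: 0.8}

def run_chain(start, steps):
    # Simulate one trajectory of the two-state chain.
    state, path = start, [start]
    for _ in range(steps):
        state = 1 if random.random() < P[state] else 0
        path.append(state)
    return path

# Same rule, same start state; the OUTCOME still differs run to run.
print(run_chain(0, 10))
print(run_chain(0, 10))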
01-27-2003, 03:17 PM | #35 |
Regular Member
Join Date: Jul 2000
Location: Florida
Posts: 156
I will keep reading, but you and I are now on the same page.
If that's any consolation.
01-27-2003, 06:10 PM | #36 |
Veteran Member
Join Date: Aug 2000
Location: Australia
Posts: 4,886
jj
Quote:
Indeed, and the question of "do we reach this threshold" is directly, at the physical chemistry level, affected by the quantum states of the electrons and ions functioning as the bearers of the information that sets the threshold, so in fact neurons fire with a random component. This is not the only random component; there is also a thermodynamic component, but I haven't brought this up in the determinism thread, because that is subject to "determinism" if we accept the no-random-QM, infinite-precision-and-accuracy take on mechanics. The thermodynamic component also requires randomness, though, if we discard the idea of infinite resolution. This is due not to QM directly, but to the fact that charges are quantized. This quantization itself introduces unavoidable noise.

Yeah, thermodynamic things would be yet another source of interference... though it is somewhat deterministic on a larger scale.

Quote:
Something to remember here is that once ONE state is changed, the entire outcome from that point can diverge, over time, to effectively cover most any other possible outcome. This seems like an extreme statement, but both neural net research and basic mathematics make this a fairly clean statement.

Actually I think in neural net learning, they converge on a "solution" (it extracts patterns) once they are repeatedly trained, and these would have a limited number of equivalent possibilities... perhaps like how x squared = 4 has two solutions, 2 and -2. On the other hand, for things that don't learn an identical thing and hence converge to some degree, tiny differences in the initial conditions would lead to huge differences later (e.g. the "butterfly effect", where the flapping wings of a butterfly could eventually lead to a tornado on the other side of the globe - which wouldn't have occurred in an alternate history without the butterfly).
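A minimal numeric sketch of that sensitivity (the logistic map is a standard chaos-theory toy example, not anything from this thread): the rule is fully deterministic, yet two starting points differing by one part in a billion become completely uncorrelated within a few dozen iterations.

Code:
# Logistic map x' = r * x * (1 - x); at r = 4.0 it is chaotic.
def iterate(x, steps, r=4.0):
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

a = 0.400000000
b = 0.400000001   # differs from a by one part in a billion
for steps in (10, 30, 50):
    print(steps, iterate(a, steps), iterate(b, steps))
# By about 30-50 steps the two trajectories bear no resemblance:
# ONE tiny divergence eventually yields an entirely different outcome.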
Quote:
This is not germane, because the average computer system is not arranged to be self-correcting, and neural nets (for one thing) are intensely self-correcting, and have a set of properties that can "keep almost the same idea around". This is unlike a RAM bit error (in non-ECC RAM), where either it's right or it can be arbitrarily wrong.

My point was that a computer that has memory that is partly faulty can work in a deterministic way most of the time... much of the time the faulty memory might be used to store non-critical information and so not be noticeable by the user... I was talking about semi-determinism... I guess you could think about it in terms of probabilities, but in fact it is just one part of the memory that is faulty; the rest would work perfectly. (Or relatively perfectly - there might be extremely rare instances of faults in the supposedly perfectly working RAM.) [So I'm saying quantum noise just adds a tiny little bit of extra noise, which can of course accumulate over time (the butterfly effect) but wouldn't affect things much over short periods of time.]

Quote:
No, that's not right. This is true for a TV, but for a decision network, neural net, Markov process, or whatever, ONE divergence can create an entirely new outcome that is not small, but rather is huge and not even recognizably like the "other" path at all. The evidence for this is much too complex and huge to put here, but you can find lots of references on the net. Chaos theory is only part of this.

Quantum noise would only alter the outcome of neurons if the input sum is right at the threshold. And because of other noise (non-quantum-fluctuation noise) the "weights" of the inputs would be quite strong, so it would be rarer that it would be right near the threshold. (In my applet you can see how extra noise makes the "weights" of the neurons stronger.) I think the noise of other things (like thermodynamic stuff or whatever) would be stronger than the immediate effect of quantum fluctuations. An analogy could be a dusty billiard ball with a little block of wood glued to it. The dust would affect the path of the billiard ball a bit, but the effect of the wood would be more obvious.

Quote:
Well, we disagree for several reasons. The first is that very, very often, the sum of inputs IS close to a threshold, especially when you don't know the answer. (Yes, really!) In such cases, you can see two very different results, i.e. "yes" and "no", from exactly the same inputs.

I think that it is easier to see the effects of noise that isn't directly from quantum fluctuations on the result... e.g. say the threshold is +1.000000, non-quantum-fluctuation noise (thermodynamics or whatever) is +/- 0.01, and quantum fluctuation noise usually has an effect of +/- 0.00000001 or less. Now say the inputs added up to 1.000001... the non-quantum-fluctuation noise would have the most say in whether it stays above the threshold or goes below it. But there could be rare times when instantaneous quantum fluctuations have an effect... I think they would usually average each other out though... like how particles in the air are actually moving around at ridiculous speeds, but overall there isn't movement in one direction (well, there's air pressure though...).
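That back-of-the-envelope example is easy to simulate (the numbers below are the made-up ones from the paragraph above, not measured values): when the input sum sits just over threshold, the larger noise term decides the outcome almost every time.

Code:
import random

THRESHOLD = 1.000000

def firing_rate(input_sum, trials=100000):
    # Fraction of trials in which the noisy sum stays above threshold.
    fired = 0
    for _ in range(trials):
        thermal = random.uniform(-0.01, 0.01)    # larger, "classical" noise
        quantum = random.uniform(-1e-8, 1e-8)    # far smaller quantum term
        if input_sum + thermal + quantum > THRESHOLD:
            fired += 1
    return fired / trials

print(firing_rate(1.000001))  # ~0.5: near threshold, the big noise term decides
print(firing_rate(1.02))      # ~1.0: far from threshold, noise is irrelevant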
Quote:
The second is that while we try to find the patterns, we must experiment with finding patterns, and as we learn, we must make the patterns more complex.

We refine our patterns... there is no need to make them any more complex than they actually are, though.

Quote:
If we have no randomness at all, we are limited in the kinds of patterns we can try. This may seem odd, but I'll get to the performance of optimization algorithms in a bit.

I think we try different things because we have a fundamental desire for a certain amount of "newness"... as well as "connectedness", where we want to understand how the world works... I think the different ideas we have begin from ideas that have the strongest associations with recently accessed information... we'd then see how well that fits our goals (e.g. to think up a new idea) and if it needs work we'd automatically fire off some more associations to see if we can find a better solution. I think the main reason our thoughts seem random is because we have a lot of varied stimulus/experiences which trigger lots of different ideas and create lots of different associations. I think that whenever we have experiences, all of the elements are associated together. So if you were having a Christmas dinner that made you feel happy, you'd associate all of those elements together. So if you had to "brainstorm" ideas based on the idea "Christmas" you would think of the Christmas dinner first, assuming that was right at the "front" of your mind. (It was associated with the very near past.) But if this was months later, there would probably be stronger associations... like Santa and Christmas trees. Santa was my first association, and then Christmas trees was a result of "Christmas" and "not Santa". Emotions are also associated with elements of the experience... that's what ads do when they show an ad to try and trigger your emotions... then those triggered emotions are associated to some degree with their product. Your annoyance at the ad might also be triggered and perhaps outweigh any positive emotions triggered by the ad. But the association between a positive emotion and their product would still be there, underneath the dominating negative association... so I think that our thoughts aren't as random as you seem to think.

Quote:
Whoa! Our nerves interconnect, but not "directly".....

I mean that the neurons in our brains mostly communicate with other specific neurons... gas, on the other hand, is chaotic.

Quote:
Some of the best optimization methods around use randomness; things like "genetic" or "evolutionary" algorithms, the various thermodynamic (freezing, etc.) algorithms, and even some of the older algorithms that would add random probes into a deterministic method at times, are shown quite conclusively to be BEST at some of the most obnoxious optimization problems around. It is in fact the randomness, which effectively allows the algorithm to try new hypotheses, that makes such algorithms effective; basically, in a mathematical sense, they can span more of the solution space, and more quickly, of course with the ever-present risk of a less-than-perfect outcome.
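A minimal sketch of how that randomness helps (a toy mutation-and-selection hill-climber on a made-up bumpy function; real genetic or annealing algorithms are more elaborate): the occasional large random jump is what lets the search escape a local peak and span more of the solution space.

Code:
import math
import random

def fitness(x):
    # A deliberately bumpy landscape with many local peaks.
    return -(x - 3) ** 2 + 2 * math.sin(5 * x)

best = random.uniform(-10, 10)
for _ in range(10000):
    # Random mutation = trying a new hypothesis. Mostly small steps,
    # occasionally a large jump that can escape a local peak.
    if random.random() < 0.9:
        step = random.gauss(0, 0.1)
    else:
        step = random.gauss(0, 3.0)
    candidate = best + step
    if fitness(candidate) > fitness(best):
        best = candidate   # keep only improvements
print(best, fitness(best))  # usually ends near the global peak around x = 3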
A little while earlier I was explaining why I think that our thoughts aren't very random... they are influenced by our minute-to-minute experiences, which can be very varied... and these experiences can have a huge amount of variety (or involve mindless routine...) - this variety is quite chaotic... I'm saying our creativity is more dependent on our minute-to-minute experiences (and in turn our memories) and our associations rather than it mostly being a result of neurons firing randomly. There is still a lot of randomness, but I think our trains of thought have a large amount of structure...

Quote:
Your point about removing one neuron making large differences sometimes is interesting, too, and I'm sorry I've elided it. This shows, basically, what can happen when one neuron temporarily doesn't work the way it might due to probabilistic behaviors. You can get new behaviors that you never saw before. If such behaviors are advantageous, well, now you have a "leap" in learning.

Well, I put that in my applet to show how people can have brain damage without any memory loss, and how others can lose all or most of their memories during brain damage, and how they can relearn it, even though they might only have half of the neurons they had before. But sometimes you can't teach the neural network all the patterns if crucial neurons aren't working... though perhaps this would be much less of an issue in multiple-layer neural networks. I think neurons dying would mostly be bad, since the network would have previously learnt a lot and suddenly that part of the network would be lost... but maybe it learnt something wrong and now it no longer has that wrong information... a much more likely thing would be that the misguided knowledge results in some kind of pain or it frustrates some desires, and so the animal/person tries some different strategies to see if it can overcome that pain somehow... [No, I think the main reason we think different things is because we have had different past experiences (traumas, etc.), different current experiences and different brain chemistry, which I think we use to work out what we desire - and our thoughts are a reflection of our desires (and experiences).]

Quote:
I'm not sure of that. I've used a lot of optimization algorithms. I haven't used neural nets myself for research purposes, but I have studied a lot of decision methods, decision theories, optimization methods and theory, Markov processes, and the like, and I'm convinced both of the need for, and the presence of, noise.

Those decision theories would probably rely on a few simple predicates(?) - on the other hand, our brain has about 100 billion neurons, each connected to thousands of others, on which the information we use is stored. As I said earlier, I think we continuously associate all of the elements of our experiences together - and this gets stored in our brains (by adjusting the weights of neurons). When we come up with ideas we would usually navigate quite deeply through chains of associations (which can include learnt problem-solving strategies and templates) in order to find our solutions... and as I said earlier, I think the place we begin our search is based on the current contents of our working memory (it is our "stimulus"), and this changes, and that's why our solutions to problems can change. (And we also learn from previous searches.)

Quote:
I think that things like twins show that there is some preordained part to behavior, but I also think that how they diverge is a good example of how random events as well as random processes cause divergence.

Well, they would have different upbringings and experiences too... I think this has a very great effect on the formation of their memories/personalities, etc. I think we mostly *learn* things (as far as thoughts go) - rather than it being explicitly encoded in our DNA (which is one of the main similarities between twins).
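Going back to the associate-everything-together idea above (experiences stored by adjusting weights): a minimal sketch of that kind of storage, as a toy Hebbian co-occurrence table. This is purely illustrative; it is not the applet mentioned earlier, and the "christmas" data is just the example from this post.

Code:
from collections import defaultdict
from itertools import combinations

weights = defaultdict(float)   # association strength between element pairs

def experience(elements, strength=1.0):
    # Hebbian idea: elements experienced together get wired together.
    for a, b in combinations(sorted(elements), 2):
        weights[(a, b)] += strength

def recall(cue):
    # Return the cue's associations, strongest first.
    hits = [(b if a == cue else a, w)
            for (a, b), w in weights.items() if cue in (a, b)]
    return sorted(hits, key=lambda t: -t[1])

experience({"christmas", "dinner", "happy"})
experience({"christmas", "santa"}, strength=3.0)  # repeated exposure
experience({"christmas", "tree"}, strength=2.0)
print(recall("christmas"))  # santa first, then tree, then dinner/happy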
01-27-2003, 08:15 PM | #37 |
Senior Member
Join Date: Feb 2002
Location: Redmond, Wa
Posts: 937
To excreationist...
I think you misunderstand the amount by which I think thoughts are random. I think there is a random component, a SMALL random component. It only takes a SMALL random component. We're going to think 2+2=4, nearly all the time, yeah. No problem.
The random component will affect things near threshold in a very disproportionate fashion, because near threshold the variance of the various "noises" will be able to push one thing or another over threshold randomly. This is how the optimization algorithms work, by the way: changes that move things far in the wrong direction very rarely get kept, but the (often large) number of nearer-neutral variables get swapped around more. Now, quantum noise isn't as small as you think it is in the brain, though, because all chemical reactions progress due to quantum interactions. It's similar (but not identical) to charge in an electrical circuit, where the charge on the electron is significant in circuit design. There are lots of electrons involved, but it only takes ONE to finally push things over threshold.
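That acceptance asymmetry can be written down directly. A minimal sketch using the standard Metropolis rule (shown generically; jj doesn't name a specific algorithm): worsening moves survive with probability exp(-delta/T), so near-neutral changes get swapped around freely while large wrong-direction changes are almost never kept.

Code:
import math
import random

def accept(delta, temperature=1.0):
    # Metropolis rule: always keep improvements (delta <= 0); keep a
    # worsening move only with probability exp(-delta / T).
    return delta <= 0 or random.random() < math.exp(-delta / temperature)

for delta in (0.01, 0.5, 5.0):
    kept = sum(accept(delta) for _ in range(100000)) / 100000
    print(delta, kept)
# Roughly 0.99, 0.61 and 0.007 at T = 1: small backward steps churn
# freely, big ones rarely survive.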
01-27-2003, 08:57 PM | #38
Veteran Member
Join Date: Aug 2000
Location: Australia
Posts: 4,886
Re: To excreationist...
So anyway, I agree that our brains aren't completely deterministic, that they are directly affected by quantum fluctuations (genuine randomness) from time to time, and that they are indirectly affected by quantum fluctuations very often... radioactive decay, etc., might involve genuine randomness... (I don't know much about quantum physics really)
01-27-2003, 10:30 PM | #39
Veteran Member
Join Date: Dec 2002
Location: Gatorville, Florida
Posts: 4,334
Wrong By Orders Of Magnitude...
Quote:
The human body is a classically-causal instrumentality. External causes (e.g., a flame placed under a finger) will affect various sensors that our bodies possess, those sensors will transmit information to our brains and, whether we consciously recognize it or not, our brains will react appropriately to the stimuli that are applied. I can't see any room for randomness in this process.

In fact, randomness would be a counter-survival trait, and thus it would benefit us to breed randomness out at virtually all costs. Thus, the evolutionary end of cognitive development should be a fully-causal brain. You can argue over how far we are along the path to that desired result, but I don't think that you can really argue that:

Quote:
Let's approach this from a slightly different direction. Let's consider the case of the Intel Pentium 4 processor chip. Does anybody here believe that the Pentium 4 is really making random decisions as it runs through its programming? I surely don't (and I've been programming since about 1963, so I think that I do know how computers work by now...).

Yes, the Pentium 4 uses electrons traveling through a molecular substrate in order to "make decisions," but it uses a heck of a lot of electrons to represent either a one or a zero, and because so-called "quantum randomness" can only affect one electron at a (random) time, even if quantum mechanical theory is correct, the random arrival or departure of a single electron will not make any significant difference in the question of whether or not the electrons (or lack of electrons) represent a zero or a one. In other words, with the quantity of electrons flowing through the quantity of molecules in the semiconductor material, a few random quantum fluctuations here and there (even if they do happen) will not affect the outcome in any conceivable way. The "random noise" is quite simply and completely drowned out by the signal level applied (this is the epitome of a high signal-to-noise ratio).

Now, do our brains function any differently than that? Again, while we are perhaps less certain of how the human brain works than we are of how the Pentium 4 works, I think it is reasonable to presume that the electron flows in our nerves and in our brain itself will be of sufficient quantity to ensure that the "signal" (thought) drowns out the "noise" (any random variations in our thoughts), so as to keep our brains just as deterministic in their computed outputs as the Pentium 4 computer chip.
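A back-of-the-envelope sketch of that signal-to-noise argument (assuming idealized shot-noise statistics, which is a simplification): the signal grows in proportion to the number N of charge carriers, while random fluctuations grow only like the square root of N, so the ratio improves as sqrt(N) and a single stray electron is swamped.

Code:
import math

# Idealized shot-noise model: signal ~ N, fluctuation ~ sqrt(N).
for n in (1e2, 1e6, 1e12):
    snr = n / math.sqrt(n)   # = sqrt(n)
    print("N = %.0e  SNR ~ %.0e" % (n, snr))
# SNR ~ 1e1, 1e3, 1e6: with enough carriers per bit, one random
# electron cannot flip a one to a zero.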
Quote:
== Bill
01-27-2003, 10:48 PM | #40
Veteran Member
Join Date: Dec 2002
Location: Gatorville, Florida
Posts: 4,334
Thanks excreationist...
From the paper linked to by excreationist comes this gem of a paragraph:
Quote:
== Bill