Freethought & Rationalism Archive
09-07-2002, 10:20 AM | #11 |
Senior Member
Join Date: Feb 2001
Location: Toronto
Posts: 808
Another factor is randomness.
We 'choose' because our brain has a rich source of random input, so we are never just riding the rails of reason. That is to say, we are not 'merely' responding to input, because there is significant randomness in our thoughts. Add randomness, and the process falls off the rails.

Computers have a single convenient source of randomness: the rand() call. It chooses a pseudo-random number by pumping a 'seed' value through a complex formula; the same seed value will output the same 'random' number every time. Typically (by default), rand() gets its seed from a formula like (system clock in seconds) + (the number of times rand() has been called so far). This is NOT purely random, since if the same conditions (time and number of calls) are ever repeated, the program will output the same results. There may also be subtle patterns, because the seed is always incrementing in a predictable way.

Many game programmers, however, prefer an even more deterministic seeding model, so that save-games and game recorders work properly. They keep the current value of the seed around so that they can pause the game system at any moment and persist that information, seed included. With a log of the seed and a recording of user input over time, the recorded game behaves exactly as it did when a player was behind the controls. Remember: if the seed is the same, the output is the same, so any AI depending on rand() will output the same move given the same seed. That is great for playing back games, not so great for truly nondeterministic play. This would make randomness, or rather the lack of the real thing, the primary limitation of computers when making choices.

Some systems provide seed generators based on mouse movement history, key logs, network traffic, webcams, etc. This extra randomness is of great benefit when nondeterministic behavior is needed, such as in encryption, genetic-algorithm-based evolution, statistics, and more.

If we were to build a human brain on top of a software program that delivered all of its randomness from the rand() call, we could 'reset' the brain to a previous state by setting the seed back and replaying all input, and resume from there. Whatever thoughts it had would be had again, and so on and so forth.

In summary: does a tic-tac-toe game make 'choices'? No. It is responding to all of its inputs, including a rand() function, and the programmer may even have weakened the seeding process so that the game could be replayed afterwards. If the rand() function were as robust as our own, if all game moves were made with floating-point values carrying a slight random factor (instead of raw integer math), and if the game had as many inputs as we do (body language, sounds, smells, the feel of the paper, the facial expression of its opponent), then it could make true choices just like us.
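To make the replay point concrete, here is a minimal C sketch (my own toy example; the seed value and move logic are made up) showing how srand()/rand() behave: restore the saved seed and the 'random' AI moves come back in exactly the same order.

```c
#include <stdio.h>
#include <stdlib.h>

/* Pick a pseudo-random tic-tac-toe square (0..8) the way a simple game AI might. */
static int pick_move(void)
{
    return rand() % 9;
}

static void play_three_moves(const char *label)
{
    printf("%s:", label);
    for (int i = 0; i < 3; i++)
        printf(" %d", pick_move());
    printf("\n");
}

int main(void)
{
    unsigned int seed = 12345u;  /* imagine this was saved in a replay file */

    srand(seed);                 /* first "live" playthrough */
    play_three_moves("live  ");

    srand(seed);                 /* restore the seed to replay the game */
    play_three_moves("replay");

    /* Both lines print the same three moves: same seed, same output. */
    return 0;
}
```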
09-07-2002, 10:41 AM | #12 |
Senior Member
Join Date: Jul 2000
Location: CA, USA
Posts: 543
Just a quick comment. Humans are terrible random number generators; we are better pattern matchers. We have an unconscious way of finding patterns, and when asked to generate "random" numbers we actually filter out the repetitious patterns that would occur in true random sequences, producing a flatter distribution than the Gaussian "bell-shaped" distribution a true random generator would produce.
Also, I think random number generation is more useful for simulating the environment around a decision than for the decision itself. For example, you might have a program that simulates bugs. The program might use random numbers to place food around the simulated environment and to decide random conditions, but the bug's decision-making for finding food, etc. would be based on more deterministic computations from simulated sensory inputs rather than on random calls.

[ September 07, 2002: Message edited by: Vibr8gKiwi ]
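A quick C sketch of that split (the world size, food positions, and decision rule are made up for illustration): random calls only shape the environment, while the bug's "decision" is a plain deterministic comparison of its sensory inputs.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define WORLD_SIZE 20

/* Deterministic decision rule: step toward the nearer food item.
 * Same inputs always give the same answer; no rand() in here.   */
static int choose_step(int bug_pos, int food_a, int food_b)
{
    int nearer = (abs(food_a - bug_pos) <= abs(food_b - bug_pos)) ? food_a : food_b;
    if (nearer > bug_pos) return +1;
    if (nearer < bug_pos) return -1;
    return 0;                         /* already on the food */
}

int main(void)
{
    srand((unsigned)time(NULL));      /* randomness only for the environment */

    int bug   = WORLD_SIZE / 2;
    int foodA = rand() % WORLD_SIZE;  /* random food placement */
    int foodB = rand() % WORLD_SIZE;

    int step = choose_step(bug, foodA, foodB);
    printf("bug at %d, food at %d and %d -> step %+d\n", bug, foodA, foodB, step);
    return 0;
}
```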
09-07-2002, 10:47 AM | #13 |
Senior Member
Join Date: Feb 2001
Location: Toronto
Posts: 808
Vibr8gKiwi, at the meta-level of mind I agree completely. But at the lower levels of abstraction our brain is surely rooted in a good source of randomness. It might even come from variation in input, or from other 'external' sources, but we certainly have some decent source of nondeterministic 'stuff' (data, input, whatever).
09-07-2002, 10:59 AM | #14 | |
Senior Member
Join Date: Feb 2001
Location: Toronto
Posts: 808
Quote:
Pure random behavior is of course a misunderstanding of what I stated. In order to make a choice, we must choose from alternatives which are 'fuzzy', not crisp integers like those a typical tic-tac-toe-playing AI uses. Fuzziness is another way of saying 'a degree of randomness in an input'. This is important because the closer the fuzz stays to the actual integer, the more deterministic a choice is. Adding 2 and 2 is entirely deterministic; adding 'about 2' and 'about 2' is very uncertain, and can be considered 'fuzzy math'. The purer the fuzz (the 'about' in 'about 2'), the more unpredictable the output will be. On a computer, fuzz can only be generated by the rand() call, and so the system is still deterministic even when using the concept of fuzzy logic. I was referring to the degree of randomness present in the fuzziness, and not the overall randomness of behavior. Sorry if that was unclear.

[ September 07, 2002: Message edited by: Christopher Lord ]
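A toy C sketch of the 'about 2' idea (the fuzz range of 0.25 is an arbitrary number I picked, not anything standard): each operand carries a small random offset, so 'about 2' plus 'about 2' comes out slightly different on every run, while 2 + 2 never does.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* "about x": x plus a small random fuzz drawn from [-spread, +spread]. */
static double about(double x, double spread)
{
    double r = (double)rand() / RAND_MAX;      /* 0.0 .. 1.0 */
    return x + (2.0 * r - 1.0) * spread;
}

int main(void)
{
    srand((unsigned)time(NULL));

    int crisp = 2 + 2;                                    /* always 4 */
    double fuzzy = about(2.0, 0.25) + about(2.0, 0.25);   /* roughly 4, different each run */

    printf("crisp: %d   fuzzy: %f\n", crisp, fuzzy);
    return 0;
}
```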
09-07-2002, 11:09 AM | #15 | |
Veteran Member
Join Date: Dec 2001
Location: Tallahassee
Posts: 1,301
Quote:
You can't be sure of this. There have been experiments that seem to show that even in humans, choices are made by us before we are aware of the choices. The fuzziness you imply might then simply be the alternative choices that weren't taken being mulled over by our consciousness.

Plus, at what level does your fuzziness disappear? If I wrote a case statement containing 100 cases, each with its own iterative logic that includes other cases or logical statements, that might suffice to qualify as having fuzziness, but only if I let someone see nothing more than that top-level case, which is just an abstraction layer (a small sketch of the idea follows below). A CPU makes a choice based upon the instructions it has been given. If this is so different for people, then make a choice about something that is itself outside of your power to choose. The best case you could make for humans being vastly different is to hide behind complexity.

[ September 07, 2002: Message edited by: Liquidrage ]
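Here is a cut-down C sketch of that nested-case point (the moods, stimuli, and actions are invented, and 100 cases are trimmed to a handful): from the outside the behavior might look 'fuzzy', but the same inputs always produce the same output.

```c
#include <stdio.h>

/* Looks "fuzzy" from the outside, but it is a fixed table of nested cases:
 * the same (mood, stimulus) pair always produces the same action.        */
static const char *decide(int mood, int stimulus)
{
    switch (mood) {
    case 0:                         /* calm */
        switch (stimulus) {
        case 0:  return "ignore";
        case 1:  return "observe";
        default: return "approach";
        }
    case 1:                         /* agitated */
        switch (stimulus) {
        case 0:  return "pace";
        default: return "flee";
        }
    default:
        return "sleep";
    }
}

int main(void)
{
    /* Calling it twice with the same inputs always gives the same "choice". */
    printf("%s\n", decide(0, 1));   /* observe */
    printf("%s\n", decide(0, 1));   /* observe again */
    return 0;
}
```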
09-07-2002, 11:51 AM | #16 |
Senior Member
Join Date: Feb 2001
Location: Toronto
Posts: 808
Neurons are fundamentally fuzzy devices, being analog. They are never certainly at one value, but somewhere within a pair of error bars. They are simply not like integer math.

A giant switch{} block containing, in turn, a bunch of switch{} blocks might exhibit complex behavior, but if you put in the exact same input, the output will also be the same. Basically, this is what rand() does anyway. With fuzzy logic, you can't lock down a single exact value; it is always within a pair of error bars somewhere, and the system responds based on approximations. This basic fuzziness means a neural net can handle approximate values much better. The closer together the error bars get, the more deterministic the net behaves, until you get all the way to integer-math-like systems such as a computer. A choice is made between alternatives of approximate value, and the inputs the chooser receives can't be locked down without extraordinary effort. This is why I think we can have 'free will', yet still live in a mostly deterministic universe.
09-07-2002, 12:36 PM | #17 | |
Veteran Member
Join Date: Dec 2001
Location: Tallahassee
Posts: 1,301
|
Quote:
But what is analog in its purest form? Digital with a threshold. There is a smallest chunk anything can be, just as there is a smallest chunk of time and a smallest temperature. Analog is only analog because of defined thresholds. Using analog in your example and claiming that "the inputs the chooser receives can't be locked down without extraordinary effort" is just an appeal to complexity.

Let us also not forget that there are analog CPUs that can take inputs from non-machine sources and make decisions as well. The inputs can be exactly as you state above.

edited: I need to stop typing while football is on. My typo count is even higher than normal atm.

[ September 07, 2002: Message edited by: Liquidrage ]
09-07-2002, 02:19 PM | #18 |
Senior Member
Join Date: Feb 2001
Location: Toronto
Posts: 808
Analog is not just a digital system with a threshold.
A digital system is ultimately an analog device, but a digital device can only approximate an analog one (by sampling at a high rate, for instance). Everything is ultimately quantized, which is what I think you actually meant in place of 'digital'.

In a digital system, the number 1.00000 is completely different from 1.00001. If these two were in variables X and Y, the expression X == Y would return false, indicating the computer sees them as two different values with no relationship. A neural system, however, can see that they are both within the error bars of what can be considered 1, and the error bars are dynamic, not hard-set rounding rules.

A digital system could eventually abstract away this difference and emulate an analog computer, and vice versa. That is a property of any processor worth its salt; it is why we can understand integer math and computers can predict weather. But I am not referring to the possible, I am referring to the practical. Games do not use complex giga-neural simulations for opponent AI. They simply use integer math and the rand() function. That process is entirely deterministic by design, with no aspect of choice in the mix. Therefore, computers do not make choices, but merely act out their program.

We make choices because our software is based on neural nets, which are self-calibrating input processors: they calibrate themselves by learning which outputs are required for which inputs. Thresholds play a part, but they are not the primary element of an analog processor. An analog processor mainly bends an input waveform or frequency depending on another input waveform or frequency, like a transistor, only without the thresholds a transistor keys on.
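A small C illustration of that X == Y point (the 0.001 tolerance is an arbitrary value I chose): exact comparison treats 1.00000 and 1.00001 as unrelated values, while an 'error bar' comparison treats them as the same thing.

```c
#include <stdio.h>
#include <math.h>

/* "Error bar" comparison: equal if the values lie within +/- tolerance. */
static int about_equal(double a, double b, double tolerance)
{
    return fabs(a - b) <= tolerance;
}

int main(void)
{
    double x = 1.00000;
    double y = 1.00001;

    printf("x == y           : %s\n", (x == y) ? "true" : "false");                 /* false */
    printf("about_equal(x, y): %s\n", about_equal(x, y, 0.001) ? "true" : "false"); /* true  */
    return 0;
}
```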
09-07-2002, 02:28 PM | #19 |
Veteran Member
Join Date: Jan 2002
Location: The Netherlands
Posts: 1,047
Another angle: regardless of whether computers choose... what would they have to choose between?
One reason why computers work so fast is that they don't hesitate. (Okay, let me put it differently: IF a computer drops into 'slug' mode, it's not out of hesitation.) It's zeroes and ones, yes or no. WE also know such a thing as MAYBE.

Translate yes/no to certainly (!) is (1) / isn't (0). Translate maybe to possibly (?) is (1) / isn't (0). I consider certainty (!) and possibility (?) to be the primary criteria. If you were to divide everything into two main categories, THAT would be the ONLY way to do it. Each and every other criterion, attribute, quality, etc. you could come up with either certainly (!) or possibly (?) applies. So you can translate everything into question marks and exclamation points, the same as you can translate a computer file into zeroes and ones. INCLUDING what happens between our ears. Freaky, but I also think very true.

Computers are BINARY calculators. We are BI-O'-LOGICAL contemplators.

Marcel (-?/+!)

[ September 07, 2002: Message edited by: Infinity Lover ]
09-07-2002, 03:00 PM | #20 | |
Senior Member
Join Date: Jul 2000
Location: CA, USA
Posts: 543
Quote:
I'm quite aware of how fuzzy logic works. I've read several books on the topic and I've written many programs that use neural networks. None of the fuzzy logic algorithms I've ever worked with use calls to a randomization function in calculating their results. There might be random noise added to test data to help test nets, but the actual calculation from inputs to outputs is deterministic (you'll get the same set of outputs for the same set of inputs every time). I could point you to some neural network calculation samples, but none of them will have random number calls in them.

Adding 'about 2' and 'about 2' is certain and is done all the time--it's called floating-point math. Inputs to neural networks are typically floating-point values in the range of 0 to 1, as are the output values. The net layer calculations that generate output from inputs do not involve any calls to random number generators. One of the advantages of neural nets is that they allow fuzziness in the inputs and yet can still generate good outputs. Fuzzy inputs (like those from indeterminate situations) lead to clear outputs (the best decision for the situation). If you had a random number call in the middle of such a calculation, you'd get strange non-deterministic behavior. You don't normally want that, as it means you'd get one decision in a situation and then a different decision later, even in the exact same situation. Remember: the inputs might be fuzzy, but the decisions on those inputs are deterministic.

There are cases when you want non-deterministic outputs from a set of inputs (say, if you were generating many blades of grass and you wanted them to look similar to each other but with slight differences), but those are special cases and not really related to fuzzy logic in my opinion. Another case is evolutionary algorithms (which I've also used), which use random number calls to simulate evolution, which has a random component. The results of such algorithms are often non-deterministic by design (as evolution is). They will evolve an 'answer' to a more open sort of question (one which often has many different "answers"), and each time a "fit enough answer" is evolved, it will likely be a different "fit enough answer". But again, that is the nature of evolutionary algorithms, not fuzzy logic.

Let me know if this clears anything up, or if I'm misunderstanding your point somehow.

[ September 07, 2002: Message edited by: Vibr8gKiwi ]
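To illustrate the determinism I'm describing, here is a tiny C sketch of one neuron's forward calculation (the weights, bias, and inputs are made-up numbers, not from any trained net): a weighted sum pushed through a sigmoid, with no random calls anywhere, so the same inputs always give the same output.

```c
#include <stdio.h>
#include <math.h>

/* One sigmoid neuron: weighted sum of inputs, squashed into the range 0..1.
 * Purely deterministic: no rand() anywhere in the forward calculation.     */
static double neuron(const double *in, const double *w, int n, double bias)
{
    double sum = bias;
    for (int i = 0; i < n; i++)
        sum += in[i] * w[i];
    return 1.0 / (1.0 + exp(-sum));            /* sigmoid activation */
}

int main(void)
{
    double weights[3] = { 0.8, -0.4, 0.3 };    /* made-up "trained" weights */
    double inputs[3]  = { 0.20, 0.95, 0.51 };  /* fuzzy-ish inputs in 0..1  */

    /* Run the same inputs twice: the outputs are identical every time. */
    printf("%f\n", neuron(inputs, weights, 3, 0.1));
    printf("%f\n", neuron(inputs, weights, 3, 0.1));
    return 0;
}
```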