Freethought & Rationalism Archive. The archives are read only.
06-11-2003, 09:47 AM | #271 |
Senior Member
Join Date: May 2003
Location: Canada
Posts: 639
|
Quote:
I'll read that article, but I hardly think that showing me neurological disorders is adequate refutation of free will. |
06-11-2003, 02:48 PM | #272 |
Veteran Member
Join Date: Mar 2003
Location: Edinburgh
Posts: 1,211
|
It is notoriously hard to prove the non-existence of anything, let alone something as intangible as free will. Looking at neurological disorders that involve derangements of the experience of free will does serve to highlight the fact that you can experience something and attribute it incorrectly to an outside influence, or conversely attribute your own intentions to something outside of your influence.
Given these factors, it is just as reasonable, if not more so, to consider free will an illusion fostered by the brain attributing volition to our actions, rather than some mysterious spiritual process. If, on the other hand, you are attributing free will to quantum uncertainty, then presumably 'the soul' is nothing more than a probabilistic eigenstate collapse, and the outcome is stochastic rather than directed. |
06-11-2003, 03:02 PM | #273 |
Moderator - Science Discussions
Join Date: Feb 2001
Location: Providence, RI, USA
Posts: 9,908
|
The best way to demonstrate the nonexistence of metaphysical "free will" (as opposed to practical free will, which almost everyone believes in) would be something like "mind uploading"--map out a person's brain exhaustively and then create a simulation on a computer (giving it a robotic or simulated body so it's not just an isolated brain), and then see if the "simulation" passes the Turing Test, if it seems to display all the attributes of a biological human (creativity, understanding, emotions, curiosity, perhaps even 'spirituality'). If so, and if the computer running the simulation is itself totally deterministic, that would show that whatever random elements are present in our brains do not play any essential role in our own "human-ness." Of course there might still be a few doubters who said that even if uploaded minds acted human they might really still be unconscious automatons, but I think this would pretty quickly become an extreme fringe position, like claiming the Jews or blacks are really unconscious automatons despite all evidence from their behavior.
The technology and scientific knowledge necessary to do something like this are probably not that far off--I expect that such an experiment will most likely be possible within the next few decades. If various trends in computing (like Moore's Law) and brain-scan resolution hold up, it has been predicted that this will be doable by 2030 or so. |
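To give a sense of the arithmetic behind that kind of guess, here is a quick back-of-the-envelope sketch in Python. The figures in it are illustrative assumptions only (a brain estimate of roughly 10^16 synaptic operations per second, a 2003 desktop at roughly 10^10 operations per second, and one doubling every 18 months), not established numbers.
Code:
import math

# Illustrative assumptions only (not figures from this thread):
#   brain_ops       - rough brain capacity, ~1e16 synaptic events per second
#   pc_ops_2003     - a 2003 desktop, ~1e10 operations per second
#   doubling_years  - one Moore's-law doubling every 1.5 years
brain_ops = 1e16
pc_ops_2003 = 1e10
doubling_years = 1.5

doublings_needed = math.log2(brain_ops / pc_ops_2003)   # about 20 doublings
years_needed = doublings_needed * doubling_years        # about 30 years

print(f"doublings needed: {doublings_needed:.1f}")
print(f"estimated year:   {2003 + years_needed:.0f}")
Under those assumptions the estimate comes out around 2033, which is why the 2030 figure keeps coming up; change either assumption and the date moves accordingly.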
06-11-2003, 03:12 PM | #274 |
Veteran Member
Join Date: Apr 2003
Posts: 2,199
|
Quote:
That doesn't qualify as a positive assertion, I suppose.
06-11-2003, 03:25 PM | #275 |
Veteran Member
Join Date: Mar 2003
Location: Edinburgh
Posts: 1,211
|
Jesse,
Goober already made an argument along similar lines, although bringing the Turing test into it makes things rather broader than in Goober's scenario.

TTFN,
Wounded

P.S. I know a few humans who might not pass the Turing Test. |
06-11-2003, 05:44 PM | #276 |
Senior Member
Join Date: May 2003
Location: Canada
Posts: 639
|
Quote:
Except the people who attribute it incorrectly to an outside influence have a disorder. They still have control over their actions from what I can tell by that study; being able to claim that their actions are dependent on an outside source is exhibiting control.
06-11-2003, 05:53 PM | #277 |
Senior Member
Join Date: May 2003
Location: Canada
Posts: 639
|
Quote:
Even then, the Turing test is not exactly a test for the soul; the robot would merely have to mimic a human's behavior to the point where we could be fooled. That's not hard to do, now is it? |
06-11-2003, 08:08 PM | #278 |
Moderator - Science Discussions
Join Date: Feb 2001
Location: Providence, RI, USA
Posts: 9,908
|
Normal:
If you look at the "progress" we've made as far as the sight/touch/smell/hearing/tasting cognitive processes for robots, 2030 seems awfully close. I think the most we can do now is make them haphazardly move around a building, forget about reasonably distinguishing objects.

But what you're talking about is trying to build an A.I. with various cognitive abilities completely from scratch--to do that you need a lot of high-level understanding of how these functions actually work. Uploading is based on the idea that you just need the ability to map out an existing human brain at the synaptic level, and knowledge of the way individual neurons influence their nearest neighbors, so you can accurately simulate neurons on a computer. Assuming the reductionist idea is true and that high-level intelligence emerges out of some arrangement of lots of neurons interacting according to much simpler rules, a high-resolution simulation of a brain should behave pretty much like the original. Slavishly copying an existing brain would require much less insight into how the mind works than building a mind from the ground up.

Normal:
And as everyone related to the computer industry knows, Moore's law will not last forever.

Fundamental physics says it can't last forever (the ultimate limit is provided by the Bekenstein bound), but I suspect you're talking about the more immediate limit on our ability to shrink transistors indefinitely. It is possible that this will lead to the end of Moore's law, but there's reason for optimism, as discussed in this article by Ray Kurzweil.

Normal:
Even then, the Turing test is not exactly a test for the soul; the robot would merely have to mimic a human's behavior to the point where we could be fooled. That's not hard to do, now is it?

A proper Turing test would not be based on a few minutes exchanging text messages with someone, but on spending years in close relationship with them. Do you think it would be easy to fool people in that version of the test? Can you imagine that any of your close friends could be unconscious automatons with no understanding of anything they say or do? Also, remember that in the case of an upload we are not creating a generic intelligence but a specific individual with a long history before being uploaded. I think people who already knew the person would be in a good position to judge whether the upload was really the "same person" or not. Again, although we could never totally rule out the possibility that an upload wasn't really conscious, we can never totally rule out the possibility that people of other races are unconscious (or 'soulless') either. But for any group of people that you have a good amount of first-person experience interacting with, it's going to seem pretty unthinkable that they're just zombies, and the same would be true of uploads if they did indeed act just like regular people. |
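To make the 'simple local rules' point concrete, here is a toy sketch in Python of the sort of thing I mean: a tiny leaky integrate-and-fire network where each unit only leaks, receives some background drive, and listens to spikes from its neighbours. The model, the parameters, and the random wiring are illustrative assumptions only, nowhere near the fidelity an actual upload would need.
Code:
import random

# Toy leaky integrate-and-fire network: each neuron knows only its own
# membrane potential, a little background drive, and the spikes arriving
# from the neurons wired to it. All numbers are arbitrary.
NUM_NEURONS = 100
THRESHOLD = 1.0   # potential at which a neuron fires
LEAK = 0.95       # per-step decay of the membrane potential
WEIGHT = 0.12     # contribution of one spiking neighbour

random.seed(0)
# Sparse random wiring: each neuron listens to 10 others.
inputs = {i: random.sample(range(NUM_NEURONS), 10) for i in range(NUM_NEURONS)}
potential = [random.random() for _ in range(NUM_NEURONS)]
fired = [False] * NUM_NEURONS

for step in range(50):
    new_fired = []
    for i in range(NUM_NEURONS):
        potential[i] = potential[i] * LEAK + random.uniform(0.0, 0.1)
        potential[i] += WEIGHT * sum(fired[j] for j in inputs[i])
        if potential[i] >= THRESHOLD:
            potential[i] = 0.0        # reset after a spike
            new_fired.append(True)
        else:
            new_fired.append(False)
    fired = new_fired
    print(f"step {step:2d}: {sum(new_fired):3d} neurons fired")
The point of the toy is only that nothing in the update rule mentions intelligence or intention; whatever large-scale behaviour appears comes out of many simple, local interactions, which is the reductionist bet that uploading relies on.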
06-11-2003, 08:29 PM | #279 |
Veteran Member
Join Date: May 2003
Location: Adelaide, Australia
Posts: 1,202
|
Normal,
Quote:
The point is you cannot falsify the existence of multiple choices at the end; all we can ever observe is that a single outcome occurs. If we cannot falsify it, it is just an illogical belief, as much as fairies and invisible monkeys running the world.

You have already said that free will is not a product of structure; it is a product of the soul. Complexity is a matter of structure, so complexity has nothing to do with free will according to you, so there is no reason why an electron does not have a soul. These are just ad hoc rationalisations to try and avoid the holes in your faulty definition. |
06-11-2003, 08:53 PM | #280 |
Veteran Member
Join Date: May 2003
Location: Adelaide, Australia
Posts: 1,202
|
Wounded King,
Quote:
As I understand it, the probability that a photon will end up in a particular place is dependent on its wave function. Its wave function passes through both slits and collides with the screen. But the photon ends up in one place only, even though its wave function extends over the screen in an interference pattern, so it could appear in a number of places. That's why I say it has a choice and makes a choice. It ends up in one place, when a number of places could be possible.

Correct me if I'm wrong here; I can't say I'm that knowledgeable about quantum physics. |
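For what it's worth, that picture can be put into a toy calculation: the two-slit interference pattern fixes a probability distribution over the screen, and each photon is detected at a single position drawn from it. The slit spacing, wavelength, and screen geometry below are made-up numbers purely for illustration, and the single-slit envelope is ignored.
Code:
import math
import random

# Made-up geometry, purely for illustration (metres):
d = 1e-5      # slit separation
lam = 5e-7    # wavelength of the light
L = 1.0       # distance from the slits to the screen

def intensity(x):
    # Ideal two-slit interference (single-slit envelope ignored):
    # I(x) is proportional to cos^2(pi * d * x / (lam * L))
    return math.cos(math.pi * d * x / (lam * L)) ** 2

# The wave function covers the whole screen; build the probability
# distribution it implies over a strip from -1 cm to +1 cm...
positions = [i * 1e-4 - 0.01 for i in range(201)]
weights = [intensity(x) for x in positions]

# ...but each individual photon is detected at just ONE position,
# drawn at random from that distribution.
random.seed(1)
hit = random.choices(positions, weights=weights, k=1)[0]
print(f"this photon landed at x = {hit * 1000:+.2f} mm")
Run it many times and the detections pile up into the interference fringes; run it once and you get a single spot, which is the 'one place when a number of places could be possible' that I was describing.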