Freethought & Rationalism Archive. The archives are read only.
09-10-2002, 11:12 AM | #101 |
Veteran Member
Join Date: Aug 2002
Location: Chicago
Posts: 1,485
|
Keith:
That's exactly what I mean. I believe that if two absolutely identical brains were given the exact same inputs for the entirety of their existence, they would be in exactly the same state at any given time. If you don't believe this to be the case, what do you think would make the states different between the two? |
09-10-2002, 11:18 AM | #102 |
Veteran Member
Join Date: Jul 2002
Location: Overland Park, Kansas
Posts: 1,336
|
K:
I don't believe that there could be two 'absolutely identical' brains: a thing is only 'absolutely identical' to itself. Keith. |
09-10-2002, 11:38 AM | #103 |
Veteran Member
Join Date: Aug 2002
Location: Chicago
Posts: 1,485
|
Keith:
It was a thought experiment. I don't believe there could be two identical brains either. I also believe it's even less likely that they could be given the same exact inputs for their entire existences. The big question is how the brain operates. Does it behave like other physical machines in the universe where the input + the current state yields the output? Or is it somehow tuned to translate the desires of a non-naturalistic will? I'm not trying to limit the options to two choices. I'd be perfectly willing to entertain other ideas of how a brain functions. I realize that we don't know entirely how brains work, but do we have any reason to believe that they don't follow the causality rules of the rest of the macroscopic universe? |
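A rough way to picture the "input + the current state yields the output" claim in the post above is a deterministic state machine. Below is a minimal Python sketch, assuming an arbitrary toy transition rule (the particular numbers mean nothing): two runs started from the same state and fed the same input history always end in the same state.
Code:
def step(state: int, inp: int) -> int:
    # One deterministic transition: the next state depends only on the
    # current state and the input (an arbitrary toy rule, for illustration).
    return (state * 31 + inp) % 1_000_003

def run(initial_state: int, inputs: list[int]) -> int:
    # Feed a sequence of inputs through the machine and return the final state.
    state = initial_state
    for inp in inputs:
        state = step(state, inp)
    return state

history = [3, 1, 4, 1, 5, 9, 2, 6]
# Identical starting states plus identical input histories leave nothing over
# to make the two runs differ.
assert run(42, history) == run(42, history)
print("same initial state + same inputs ->", run(42, history))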
09-10-2002, 01:05 PM | #104 |
Banned
Join Date: Sep 2001
Location: Eastern Massachusetts
Posts: 1,677
|
This is one of those questions that bring up the issues I have with certain practices of both philosophy and law.
Being moral does not depend upon some theoretically seamless, perfect conception. As proof, I am simply a moral person, despite having little interest in this debate or any particular opinion about its outcome. Just as science, unlike faith, does not require an answer to everything in order to work, so, it seems to me, being morally good and ethical and principled does not require a meticulous philosophical answer to matters of free will, determinism, etc., etc. Not trying to dampen the enthusiasm for the debate, just making a point. |
09-10-2002, 01:19 PM | #105 |
Veteran Member
Join Date: Jul 2000
Location: King George, VA
Posts: 1,400
|
Quote:
Typically everyone agrees that there is such a thing as free will, because everyone (more or less) agrees that people are sometimes morally responsible for their actions. The problem is that the LFW advocates say that one is morally responsible for an act only when one acted of one’s own free will in the LFW sense, whereas compatibilists typically reply (as I do) that this must be wrong because it would imply that no one is ever morally responsible for their actions, since LFW does not exist. LFW advocates often reply by saying that the claim that LFW does not exist is a wicked doctrine because it implies that morality itself does not exist. It’s hardly surprising that such arguments often produce more heat than light! Quote:
Let's say for the sake of argument that there is "something else" to the "self" (i.e., the entity who is said to act, and about which we are asking whether he/she/it is "morally responsible" for the act) besides a "brain state" - consciousness, soul, whatever. The question is still: did the "self" choose to do X rather than Y? If so, by virtue of what it means to "choose", the state of the "self" at that moment caused act X rather than act Y. If something else caused X to be done rather than Y (or if there was no cause), the "self" did not choose X. But to say that the self caused act X is to say that the self could not have done otherwise at that moment. And according to Kip, that means that the self is not morally responsible for doing X.
[I developed this argument more fully in my Sept. 6 post in Section 3: On libertarian free will. I would recommend that anyone who wants to reply to it read this more complete exposition first.]
This is the unresolvable paradox of the concept of libertarian free will: if A chose to do X, then by virtue of what it means to say that A chose, A could not (in Kip's sense of "could") have done otherwise, and hence is not morally responsible for doing X. There's no getting around this. The PAP, as Kip (and advocates of LFW generally) interprets it, implies that no one is ever morally responsible for an action. Either one didn't choose to do it, and hence is not responsible (one is not responsible for what one did not choose), or one did choose to do it, and hence is not responsible (one is not responsible for an act if one could not have done otherwise). |
09-10-2002, 01:31 PM | #106 |
Veteran Member
Join Date: Aug 2002
Location: Chicago
Posts: 1,485
|
bd-from-kg:
I believe the problem stems from the first premise - not the state machine nature of the brain. We get bogged down trying not to hold people responsible for acts they had no control over. I think a better way to look at it is that we hold people morally responsible when doing so will help prevent detrimental behavior in the future. Therefore, if someone slips and accidentally knocks someone over, they are not morally responsible unless their slip was caused by their own negligence. Holding them morally responsible won't reduce the odds of a similar act happening in the future. However, if someone's brain state causes them to knock somebody over, we do hold them morally responsible. Doing so has the possibility of curbing such actions in the future. |
09-10-2002, 03:51 PM | #107 |
Veteran Member
Join Date: Jul 2002
Location: Overland Park, Kansas
Posts: 1,336
|
Greetings:
People who are mentally ill are generally conceded not to be 'responsible' (legally, if not morally) for their actions. They are viewed as being unable to participate in their own defense, should they be accused of crimes. If convicted of a crime, they are generally not given as harsh a sentence as people who are judged 'sane' at the time they committed their crime(s). But, we still don't allow people who are 'not responsible for their actions' to run around loose, if strong evidence exists to support the claim that these folks are a danger to themselves and/or to others. Whether a person is removed from society and permanently placed in a mental hospital, given a life sentence with no possibility of parole, or summarily executed, there really isn't any difference to me. I am, indefinitely, protected from this person. So, are there any real ramifications to me if a person does something out of choice, or because 'the voices' said to, or because 'their brain was wired that way'? Maybe there are some people who claim that we cannot, should not, punish (or even incarcerate) people, if human beings don't have 'free will'. I haven't met them yet... Keith. |
09-10-2002, 04:39 PM | #108 |
Veteran Member
Join Date: Aug 2002
Location: Chicago
Posts: 1,485
|
Keith:
I agree totally. Free will is a non-issue in the crime / punishment / societal protection scheme. |
09-11-2002, 08:46 AM | #109 |
Regular Member
Join Date: Jan 2001
Location: not so required
Posts: 228
|
bd:
I am glad that you have the patience to entertain such a slow learner. I am reading An Introduction To Western Philosophy by Antony Flew and he summarizes the dispute quite well (from page 269 of my hardcover): Quote:
Now I will post some ideas that I have. I will reply to (both of) your posts later.
Moral Responsibility
When I reflect upon the idea of moral responsibility, I wonder whether the word "moral" is redundant. "Moral" concerns notions of "right" and "wrong"; but what does "responsible" signify? I immediately appeal to the dictionary, which says:
1. Liable to be required to give account, as of one's actions or of the discharge of a duty or trust.
2. Involving personal accountability or ability to act without guidance or superior authority: a responsible position within the firm.
3. Being a source or cause.
4. Able to make moral or rational decisions on one's own and therefore answerable for one's behavior.
And further that "account for" means: "To constitute the governing or primary factor in."
The essential items to notice, I think, are the emphases placed upon autonomy and "source". Thus a thing is responsible for actions if the actions are done autonomously, as a source, without "guidance" and "on one's own". The fundamental question (summarized by Flew above) is whether a person's natural inclinations and character constitute "guidance" in a sense that would remove responsibility. Is a person who acts according to the character he is given acting "on his own"? Thus, the question of responsibility is also a question of personal identity: are preferences external "guides" branded upon a metaphysical "person", or does personal identity already include preferences?
I must say that I am inclined toward the latter view: a person is only the sum of his preferences, and there is no metaphysical person that is given preferences. And yet, we do not hold artificial machines (as opposed to biological machines), which are also the sum of their preferences and do not choose their character, responsible. Indeed, I submit that we do not hold them responsible precisely because they are robotic (and I invite suggestions of alternative distinctions) and determined by characters that the robot did not, and could not, choose. So, I suspect that this definition of responsibility may be somehow lacking and not sufficiently "metaphysical", and I could possibly agree that determinism is compatible with this dictionary "responsibility". Indeed, you write that: Quote:
"The power of making free choices that are unconstrained by external circumstances or by an agency such as fate or divine will." So, upon reflection, I am inclined to say that an agent is responsible for whatever action he or she freely wills. This is to distinguish the human and divine "metaphysical" domain from the domain of puppies, ill people, and robots (which we do not grant responsibility). Perfect, Autonomous Correlation The phenomenon that we frequently allude to is that of "constant conjuction" as Hume would say, or (with less ugliness), perfect correlation. I feel that you repeatedly fail to distinguish between cause and correlation, and then utilize this failure to label as determined that which could be entirely causeless and thus to define determinism is a weaker sense which is compatible with LFW. This motive is also found in your mentioning probabilities and statistics, which, by definion, only measure actual correlations and make no claims about causality whatsoever. For example, if I threw a dice twice and "2" came up both times, there was a 2 / 2 = 100% chance that the dice would be a 2 and thus the result of 2 was "determined". Obviously, however, the result was completely random. The problem with your statistical definition is that "determined" occurances can be produced not only by causation but also by perfect, autonomous correlation. You may object that you were not referring to such a small sample, but that difference is one of only number and not kind. For I may just as well extend the length of time to ETERNITY and extend the possible dice results to every possible state of the universe and the principle still holds. The problem of perfect, autonomous correlation creeps back into your definition each time. You may say that this is a preposterous and trivial objection. I would agree with you that in reality no such events actually occur, but hypothetically, such demands are being made of LFW especially when we consider "rolling the tape back" and playing the same event many times. You wish to accuse LFW with a failure to meet these demands, but the phenomenon of perfect correlation is exactly how the libertanian can do so. Indeed, perfect correlation is fundamental and necessary to any concept of LFW because, although the libertarian may wish to have the power to freely choose, he also needs the power to make the same choice eternally, or else he must admit intolerable inconsistency. You threaten the libertarian with this inconsistency but perfect correlation is precisely what allows him to make the same choice eternally. Leibniz gave this classic illustration of perfect correlation: Quote:
In particular, I do not want to be met with this reply again: Quote:
More later. |
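The dice example above can be checked with a short simulation, assuming a fair, memoryless die (the trial count is arbitrary): within any single agreeing pair the repeated face has an observed frequency of "2 / 2 = 100%", yet across many repetitions two independent rolls agree only about one time in six, so perfect correlation inside a tiny sample is no evidence that the outcome was causally determined.
Code:
import random

def two_rolls_match() -> bool:
    # Roll a fair six-sided die twice; report whether both rolls agree.
    return random.randint(1, 6) == random.randint(1, 6)

trials = 100_000  # arbitrary sample size, large enough to make the rate visible
matches = sum(two_rolls_match() for _ in range(trials))
print(f"matching pairs: {matches / trials:.3f} (true probability ~ {1/6:.3f})")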
09-11-2002, 04:16 PM | #110 |
Veteran Member
Join Date: Jul 2000
Location: King George, VA
Posts: 1,400
|
Kip:
1. Ultimate responsibility
The idea that you've been appealing to more and more in your campaign to absolve everyone of all moral responsibility is not PAP, but the principle of "Ultimate Responsibility". Robert Kane defines it as follows in his paper "Reflections on Free Will, Determinism and Indeterminism" (http://www.ucl.ac.uk/~uctytho/dfwVariousKane.html): Quote:
Quote:
You comment: Quote:
Certainly it is impossible to produce a demonstrative argument showing that A or B is "correct," and I have made no attempt to do so. But, like Flew, I have tried to demonstrate that these are indeed the only alternatives. As for your apparent rejection of B (thereby accepting A by default), it seems to me that your thinking is confused. So far as I can see, your rejection of B is based solely on a supposed moral intuition that UR is a condition of moral responsibility. I don't share this intuition; I don't even see why you think it plausible, much less why you put so much stock in it. But in any case there is another, more fundamental moral intuition: that some people are sometimes morally responsible for their actions. It appears that you are prepared to abandon this moral intuition on the grounds that it conflicts with the former one. It would seem to me to be far more reasonable to abandon the former on the grounds that it conflicts with the latter. It seems to me to be downright perverse to conclude on the basis of a moral intuition that there is no such thing as morality. Of course, if you think of morality as a set of "objective truths" independent of all human desires and preferences, that exist in some mystical realm beyond the reach of either logic or experience, then you should conclude that it doesn't exist, because it doesn't.
2. Moral responsibility
Although, as Flew notes, your acceptance of the UR condition makes all talk about moral responsibility nonsensical, you continue to talk about it anyway. Your next move is to (yet again) cite a dictionary definition. You really need to abandon this obsession with dictionary definitions; they are often completely useless in philosophy. Here your reliance on the dictionary leads you to the absurd conclusion that: Quote:
Next, you actually propose as a serious question: Quote:
And this: Quote:
Finally, you bring up yet again the subject of robots: Quote:
Anyway, this approach to the question of moral responsibility is perverse. Rather than saying "We hold humans responsible, but not (existing) machines; what does this tell us about the conditions required for moral responsibility?" you say "We don't hold machines responsible; since there is obviously no relevant difference between machines and humans, isn't it obvious that we shouldn't hold humans responsible either?" You don't seem to be willing to even entertain the possibility that machines that were sufficiently like humans in the relevant ways should be held responsible. But why is this so obviously unreasonable?
3. "Perfect, Autonomous Correlation"
You say: Quote:
Later, along the same lines, you say: Quote:
Quote:
Quote:
Quote:
Second, as I argued at some length in my Sept. 6 post, if (as you put it) no cause forces the agent to choose (or more properly, if the choice has no cause) he cannot properly be said to choose at all. In order for it to be a choice, it must have a cause, and this cause must reside in the agent. [Please don’t respond to this statement; respond to the argument to this effect given earlier.] Now let’s go back to what seems to be the critical point here: Quote:
So what events is this "perfect correlation" a correlation between? What choices are these that, if different, would involve an "intolerable inconsistency"? What occasions (plural) are these in which the agent "must have the power to freely choose", yet must also have "the power to make the same choice" in every case? Isn't it obvious that you're talking about different possible worlds here? Worlds that are exactly like this one up to some critical point preceding the act in question? It appears that you're saying that in all of the relevant possible worlds, the agent makes the same choice as in this one, yet that it is possible that he could make some other one. But what could this mean? What sense does it make to say that it is possible that a certain event will occur under certain conditions, but that in all possible worlds which satisfy these conditions the event does not occur? Perhaps you have some satisfactory answer to this. Perhaps you really have some intelligible meaning of "Perfect, Autonomous Correlation" in mind that I haven't thought of. If so, perhaps you would like to share it with us. But my experience has been that, although advocates of LFW always claim to have a coherent concept in mind, they never seem to be able to communicate it to the rest of us. [ September 12, 2002: Message edited by: bd-from-kg ] |