Freethought & Rationalism Archive. The archives are read only.
08-25-2002, 10:59 AM | #71 |
Regular Member
Join Date: Feb 2002
Location: Home
Posts: 229
|
Kip...
"My argument is founded upon a contradiction. This contradiction is the contradiction between our attitudes towards humans and machines." I believe I had assumed this (at least in a broad brush way). "According to determinism, humans *are* mechanical. And yet, if a computer does something "wrong" we do not blame the computer (although sometimes we may instinctly blame the computer we recognize that such blame is misplaced)." One might say that humans are self-determining, whereas computers are other determined. This, it seems to me, makes a big difference. "We do, however, blame the person. And yet the difference between a robot and a human appears to be one of quantity (number) and not quality (kind), and if we could only "turn the dials" of complexity robots would become quite human without losing or gaining anything that would compromise the argument." The question really has to do with what constitutes self-determination. Robots may be able to be programmed to be self-determining if in addition to the cognitive capabilities being programmed in, it also had the capability of perception (in a human sense, which includes self-consciousness). If all this is possible I should expect that we would think the computer had free-will and was responsible for what it determined for itself. For the time being, though, we believe that robots are human constructs that, like tools and technology, generally, serve us, rather than themselves. "When discussing moral responsibility, I keep asking for a list of attributes necessary for moral responsibility because I suspect that any list of attributes will eventually either be shown to too broad, and apply to systems that, under scrutiny, it should not apply to, or too narrow, and not apply to systems that it should." I missed this I'm afraid. However, I would hope that your position is not based on the inability of others to come up with something. 
Are you entirely happy with your position, or are you floundering (not unlike many of us) over these deep philosophical problems? Perhaps you think the problem is an easy one.

"1. My foundation of argument is the assumption that we only hold a person morally responsible for his action if he or she had the ability to possibly commit an action other than the action committed."

Ok.

"Moral responsibility appears to be largely misplaced or even illusory, and yet the illusion is quite convincing, much like the sensation of being free. Are "free will" and "blame" human constructs or "memes" that have been successful because they are adaptive? Does free will have an evolutionary explanation?"

I should think it does. I think you will find many folks attracted to compatibilism, despite your reservations about it. Compatibilists believe that at the present time we just don't know enough about how our actions are (self-)determined. According to them, someday we will, and it will then be easier to reconcile determinism with freedom (or, as Kant might say, the laws of freedom with the laws of nature).

owleye |
08-25-2002, 11:11 AM | #72 |
Contributor
Join Date: Jun 2000
Location: Buggered if I know
Posts: 12,410
|
Quote:
Quote:
Free-will choices are not at all held to be necessarily perfectly determined, and in fact, as far as I can see, the majority of compatibilists would deny that. You also display much confusion in your posts. We're talking about two completely different things:

a) an otherwise determinist world (ignoring randomicity for the moment)
b) psychological determinism

These are two very different things. Do you see why your question above is predicated on your confusion and conflation of the two?

The questions are:

1) Can a determinist world (ignoring randomicity) produce free will?
2) Can free will exist? (And to get rid of a common rhetorical strawman immediately: I'm not talking about perfect free will, I'm talking about imperfect free will as recognized by the great majority of adult humans.)
3) If free will can exist, is it then (theoretically) a system that can be computationally produced?

BTW, my own stance is "Yes" to all 3 questions.

[ August 25, 2002: Message edited by: Gurdur ] |
|
08-27-2002, 10:36 PM | #73 |
Junior Member
Join Date: Jan 2001
Location: North of Los Angeles
Posts: 29
|
Quote:
No robot built to date can comprehend right and wrong, or comprehend and be deterred from committing wrong by the knowledge that such an act might lead to punishment. Nor is there any sense in which a robot could be punished. This, I think, accounts perfectly well for the contradiction. It would be quite pointless to convict any state-of-the-art robot of, say, armed robbery and lock it up in jail. On the other hand, given the way human brains are constructed and work, we can't "reprogram" humans not to do wrong other than by moral and legal sanctions. We simply do not know how to "reprogram" a human being. Also, it would be pointless to try to blame a human's "programmer," whatever that may be. Humans can be said to be robotic only in a very vague sense. Hence, I think you are making a fallacy of vagueness here.

Traditional law may consider coercion or insanity as relieving one of moral culpability. But philosophical determinism? I don't think so. I doubt very much one could plead "having been determined to steal by the laws of nature" in a court of law. Don't think it would fly. Sorry.

As for common sense, well, it seems to me that sometimes one person's common sense is another's lunacy. To me it seems common-sensical that our choices are determined by our past experience, our current moods and thoughts, and what we perceive of our current situation (also a coupla billion years of evolution). I wouldn't have it any other way. I have a hard time understanding why so many people place such importance on their choices being uncaused or undetermined, or draw such dreadful conclusions from the idea that they are caused and determined. [Bang Head]

-Toad Master |
|
08-28-2002, 12:02 AM | #74 |
Junior Member
Join Date: Jan 2001
Location: North of Los Angeles
Posts: 29
|
Quote:
The first premise of Kip's argument is this:

1. We only hold a person morally responsible if that person could have possibly not committed the immoral action.

His conclusion, though, is: "we cannot hold people morally responsible for any action." I think the conclusion requires a somewhat stronger premise:

1'. We *can* only hold a person morally responsible if that person could have possibly not committed the immoral action.

With the above amended premise, I believe Kip's argument is perfectly valid (if the premises are true, the conclusion must be true). But is it sound (valid, with all premises true)? Now, the discussion assumes determinism, so premise 2 is granted. Therefore, if the argument is unsound, it must be because premise 1' is false.

I would suggest there are good reasons to believe premise 1' is false, because the conclusion appears very suspect in the light of other considerations. For example, why should we allow a cold-blooded serial killer to go free simply because he could not possibly have done other than he did, given the precise state of the universe prior to the commencement of his career as a cold-blooded killer? Do we not have a right to protect ourselves from harm from others? If so, the conclusion must be false and hence 1' must be false.

-Toad Master |
|
08-28-2002, 07:34 AM | #75 |
Regular Member
Join Date: Jan 2001
Location: not so required
Posts: 228
|
Quote:
If a robot (such as Data from Star Trek) possessed a sense of morality and committed an immoral action, should people blame the robot or the robot's creator? And whom would people, in fact, blame? Quote:
Quote:
Quote:
This is part of the summation given by defense attorney and determinist Clarence Darrow: Quote:
Quote:
My suspicion is that you claim that morality and determinism are not mutually exclusive but rather mutually necessary, when what you truly mean is that the world is amoral but determinism is necessary for deterrents. Blame and deterrents, however, are two different things. I entirely agree that we need deterrents, but I do not understand how we could blame anyone for committing an action that they have no power to stop and are "destined" to commit. Quote:
First, let me say that the first premise is not argued. I do not know how one could *establish* the maxims of a moral system (without an infinite regression of appeals); this premise is simply assumed. I think the premise is quite popular, but if you disagree, perhaps you can establish your own system of morality?

Second, and most importantly, your "other considerations" are entirely beside the point. Contrary to your "example", I am not arguing that we should allow a murderer to go free if caught. Nor do I claim that we should not protect ourselves from murderers. A prison system would be a necessary deterrent to murderers, and people should protect themselves to prevent murders. I am only arguing that we abolish *blame*, not *deterrents*. We would lock murderers in prison, but we would not say "you are evil, you have sinned, you should have done otherwise". |
|
08-28-2002, 07:22 PM | #76 |
Veteran Member
Join Date: Oct 2000
Location: Alberta, Canada
Posts: 5,658
|
Kip:
Quote:
Quote:
Quote:
That is why I asked you "what is sufficient for moral responsibility?" because who cares what most people believe is sufficient? Most people also believe in God! Finally, you cite Bill's answer to the question. Quote:
2. I prefer "human motivation and decision making" rather than "no excuse." If a tiger chooses to eat someone, we do not hold them morally responsible because their motivations and decision making processes differ so much from those of a human that it is unrealistic to expect them to conform to human moral expectations (the same could potentially be true of an intelligent alien race). 3. It is not clear that determinism does exclude people from moral responsibility according to either traditional law or common sense. Only in cases where there is only one option (or the available options are equally terrible) in your "trivial" sense does this appear to be the case. [ August 29, 2002: Message edited by: tronvillain ]</p> |
|
08-28-2002, 07:45 PM | #77 |
Veteran Member
Join Date: Oct 2000
Location: Alberta, Canada
Posts: 5,658
|
Toad Master:
Quote:
[ August 28, 2002: Message edited by: tronvillain ] |
|
08-28-2002, 09:53 PM | #78 |
Veteran Member
Join Date: Jul 2000
Location: King George, VA
Posts: 1,400
|
Kip:
Your argument hinges on the premises: Quote:
Clearly in (2) "possible" is used in the sense of "having nonzero probability". So for the argument to be valid, this must be the sense in which it is used in (1). But if this is the sense intended in (1), it is plainly false, and practically everyone recognizes that it is false. Imagine the following situation: a man finds a wallet with a lot of money in it, but also with documents giving the owner's name, address, etc. The circumstances are such that it's clear that there is practically no possibility that anyone else will know about it if he just keeps the money. So he has two options: take the money and run, or return the wallet, money and all, to its owner. Now Smith is a man of the utmost virtue. If he should find himself in this situation, there is absolutely no possibility that he will keep the money; the thought doesn't even cross his mind. In other words, there is zero probability that he will take the money and run. Jones, on the other hand, is a man of no virtue whatsoever. His motto, which he invariably acts on, is "look out for number one". If he should find himself in this situation, there is absolutely no possibility that he will return the wallet; the thought doesn't even cross his mind. In other words, there is zero probability that he will "do the right thing". According to (1), Smith should not be praised for returning the money. Why? Because his virtue is too perfect! If only he had a drop or two of corruption in his soul, we would of course praise him to the skies, because then there would be a chance - perhaps only one in a million, but a chance - that he would keep the money. But sadly, there isn't. And so his perfect virtue makes him unworthy of praise. By the same token, (1) tells us that Jones is not to be blamed. Why? Because his corruption is so complete! If only he had just the slightest touch of virtue - enough so that there was a tiny chance, say one in a million, that he would return the money - that would be a different matter. 
But luckily for him there isn't. And so the utter depravity of his character makes him undeserving of blame.

I submit that these conclusions, which are logically entailed by (1), are so absurd and counterintuitive as to make it completely untenable. Once one understands the logical implications of this principle, it loses the slightest shred of plausibility. And in fact, I submit that virtually no one does accept it. Most people accept a principle that can be stated in the same words, but they mean something quite different by "possible" than "having nonzero probability" in this context.

Time for bed. I hope to have more to say tomorrow.

[ August 29, 2002: Message edited by: bd-from-kg ] |
|
08-29-2002, 04:18 PM | #79 |
Regular Member
Join Date: Jan 2001
Location: not so required
Posts: 228
|
Quote:
Let me formalize our arguments a bit. Mine:

p1. We only hold a person morally responsible if that person possessed the power to not commit the condemned action.
p2. According to determinism, a person does not possess the power to not commit any action and is physically destined to commit it.
p3. Determinism.
----------------
c1. We can never hold anyone morally responsible for their actions.

Yours attempts to undermine p1:

p1b. If p1 is not popular, p1 is false.
p2b. The idea that we should not hold people responsible for actions because these people are perfectly moral or immoral, and therefore have no real choice (according to p1), is not popular.
----------------
c1b. p1 is false (and therefore c1 is unproven).

First, I deny your p1b. Human convention is no authority. My argument assumes p1. If you disagree with that moral system, I cannot imagine proving that my moral system is true and others are false. What would I appeal to? So unless you assume p1, my argument is lost on you because of moral relativism. However, if we were to appeal to human convention, I still think that human convention (and the history of moral philosophy) agrees that the requirement of "real" (as opposed to trivial) possibility for moral responsibility is quite popular.

To demonstrate that, I have a further objection to your argument (besides denying p1b): I also deny p2b. I do not think it is at all obvious that most people would agree that the "people" in your example should be morally responsible. The reason I think this is that your argument is subtle but, upon inspection, quite incoherent. The truth is that your example assumes not only my first premise p1 but also my third premise, p3 (determinism). Most people, however, deny determinism, and therefore most people would find your argument incoherent. Your people who have no choice to be moral or immoral are simply impossible. The masses believe that everyone has a choice.
There is no such thing as a perfectly moral or immoral robotic person, and your argument only works to the extent that it deceives the reader into regarding your impossible moral robots as real people with free will, the only people most people claim to know. I maintain that most people would respond to your example with confusion, not agreement, and that those who agree have been deceived.

[ August 29, 2002: Message edited by: Kip ] |
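[Editor's note: the deductive skeleton of Kip's stated syllogism (p1 through p3 entailing c1) can be checked mechanically. Below is one possible encoding in Lean 4; the names `Person`, `R`, `P`, and `D` are illustrative labels introduced here, not anything from the thread.]

```lean
-- A sketch, not part of the original post: encoding the stated syllogism.
-- R x : we hold x morally responsible
-- P x : x possessed the power to not commit the action
-- D   : determinism is true
variable (Person : Type) (R P : Person → Prop) (D : Prop)

-- p1: responsibility requires the power to have done otherwise
-- p2: under determinism, no one has that power
-- p3: determinism holds
-- c1: no one is held morally responsible
theorem kip_syllogism
    (p1 : ∀ x, R x → P x)
    (p2 : D → ∀ x, ¬ P x)
    (p3 : D) :
    ∀ x, ¬ R x :=
  fun x hR => (p2 p3 x) (p1 x hR)
```

Whatever one thinks of p1, the derivation itself is valid; the dispute in the thread is over the premises, not the logic.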
|
08-29-2002, 04:50 PM | #80 |
Regular Member
Join Date: Jan 2001
Location: not so required
Posts: 228
|
tron:
Quote:
DOMAIN A: conceivable, trivial, apparent
DOMAIN B: possible, non-trivial, real

Forgive me if that language is loaded, but let's not argue semantics if we both recognize what these two domains signify. So, my argument:

p1. We only hold a person morally responsible if that person possessed the power to not commit the condemned action.
p2. According to determinism, a person does not possess the power to not commit any action and is physically destined to commit it.
p3. Determinism.
----------------
c1. We can never hold anyone morally responsible for their actions.

Your objection, as you clarified, is not that I am equivocating on "possible" (as I assured you I was not) but rather that you simply deny p1. I had thought that p1 was a very popular notion, and dissent is a surprise to me. Upon inspection, however, I am unable to articulate exactly why p1 is necessary (or unnecessary). Indeed, I do not know how to establish any moral maxim whatsoever; at some point the maxim is simply "assumed". However, efforts have been made, particularly efforts to demonstrate that the maxim in question entails consequences with which human moral convention disagrees (and that the maxim is therefore false).

So, I suppose all that is left between you and me is to demonstrate that denying p1 leads to disagreeable consequences (as I would do) or that accepting p1 leads to disagreeable consequences (as you would do if you continue). I think you have already made some small efforts (these sounded like paraphrases of Dennett) to demonstrate that accepting p1 is absurd. I wish you would repeat and elaborate upon that argument, because I did not fully understand it the first time.

As for my demonstration, I allude to the contradiction I have already mentioned (have you addressed this yet?) between human attitudes toward robots and other humans. How do you justify this contradiction?

Do you distinguish between the robots of today and the robots of the future (or would we blame the creator of all robots and never the robot)? Such sophisticated robots may have distinguishing features such as knowledge of moral codes and expectations, meta-desires, and world modelling. If you cite any of these distinctions, however, the question is why that distinction is relevant (does such a feature complete the "three requirements" for moral responsibility that you borrowed from Bill?).

As a side note, I think the ultimate conclusion to be reached from your argument is that people who commit "immoral actions" are defective, bad, broken, and that you use the word "immoral" to signify that. Indeed, that is all the word "immoral" can signify according to your logic. To me, this is an abuse of language, but if you admit that this is all your "immorality" entails, then we are arguing semantics.

[ August 29, 2002: Message edited by: Kip ] |
|