FRDB Archives

Freethought & Rationalism Archive

The archives are read only.



 
 
Old 08-25-2002, 10:59 AM   #71
Regular Member
 
Join Date: Feb 2002
Location: Home
Posts: 229

Kip...

"My argument is founded upon a contradiction. This contradiction is the contradiction between our attitudes towards humans and machines."

I believe I had assumed this (at least in a broad brush way).

"According to determinism, humans *are* mechanical. And yet, if a computer does something "wrong" we do not blame the computer (although sometimes we may instinctively blame the computer, we recognize that such blame is misplaced)."

One might say that humans are self-determining, whereas computers are other-determined. This, it seems to me, makes a big difference.

"We do, however, blame the person. And yet the difference between a robot and a human appears to be one of quantity (number) and not quality (kind), and if we could only "turn the dials" of complexity robots would become quite human without losing or gaining anything that would compromise the argument."

The question really has to do with what constitutes self-determination. Robots might be made self-determining if, in addition to programmed cognitive capabilities, they also had the capability of perception (in a human sense, which includes self-consciousness). If all this is possible, I should expect that we would think the computer had free will and was responsible for what it determined for itself. For the time being, though, we believe that robots are human constructs that, like tools and technology generally, serve us rather than themselves.

"When discussing moral responsibility, I keep asking for a list of attributes necessary for moral responsibility because I suspect that any list of attributes will eventually be shown to be either too broad, and apply to systems that, under scrutiny, it should not apply to, or too narrow, and not apply to systems that it should."

I missed this, I'm afraid. However, I would hope that your position is not based on the inability of others to come up with something. Are you entirely happy with your position, or are you floundering (not unlike many of us) over these deep philosophical problems? Perhaps you think the problem is an easy one.

"1. My foundation of argument is the assumption that we only hold a person morally responsible for his action if he or she had the ability to possibly commit an action other than the action committed."

Ok.

"Moral responsibility appears to be largely misplaced or even illusory and yet the illusion is quite convincing, much like the sensation of being free. Are "free will" and "blame" human constructs or "memes" that have been successful because they are adaptive? Does free will have an evolutionary explanation?"

I should think it does. I think you will find many folks attracted to compatibilism, though you seem to have reservations about it. Compatibilists believe that at the present time we just don't know enough about how our actions are (self-)determined. According to them, someday we will, and it will be easier to reconcile determinism with freedom (or as Kant might say, the laws of freedom with the laws of nature).

owleye
owleye is offline  
Old 08-25-2002, 11:11 AM   #72
Contributor
 
Join Date: Jun 2000
Location: Buggered if I know
Posts: 12,410

Quote:
Originally posted by Steven Carr:

GUDRUN
I hate to disillusion you, but I'm still "Gurdur".
Quote:
CARR
It appears to me to be the very essence of compatibilism that choices of free will are perfectly determined. Isn't that what compatibilism means - that free will is compatible with determinism?

It doesn't seem odd to me at all. Perhaps you can tell me what 'compatibilism' means if proponents of compatibilism don't think free will and determinism are compatible. What do compatibilists think are compatible?
For a start, there are many, many different flavours of compatibilism.
Free-will choices are not at all held to be necessarily perfectly determined, and in fact, as far as I can see, the majority of compatibilists would deny that.

You also display much confusion in your posts.

We're talking about two completely different things:

a) An otherwise determinist world (ignoring randomicity for the moment)

b) psychological determinism

These are two very different things - do you see why your question above is predicated on your confusion and conflation between the two things ?

The questions are:

1) Can a determinist world (ignoring randomicity) produce free will ?

2) Can free will exist ?

(and to get rid of a common rhetorical strawman immediately, I'm not talking about perfect free will, I'm talking imperfect free will as recognized by the great majority of adult humans)

3) If free will can exist, is it then (theoretically) a system that can be computationally produced ?


BTW, my own stance is "Yes" to all 3 questions

[ August 25, 2002: Message edited by: Gurdur ]
Gurdur is offline  
Old 08-27-2002, 10:36 PM   #73
Junior Member
 
Join Date: Jan 2001
Location: North of Los Angeles
Posts: 29

Quote:

Kip:
"3. Determinism, according to traditional law and common sense, *excludes* people from moral responsibility rather than includes them. Once again I mention the contradiction between our behavior toward people and robots. We do not blame bad robots, we blame their programmers. We do blame people, however, we both agree that people are essentially robotic. What distinction between people and robots justifies this contradiction? Is this distinction included within your conveniently vague "no excuse" condition?"
Kip:

No robot built to date can comprehend right and wrong, or comprehend and be deterred from committing wrong by the knowledge that such an act might lead to punishment. Nor is there any sense in which a robot could be punished. This, I think, accounts perfectly well for the contradiction. It would be quite pointless to convict any state-of-the-art robot of, say, armed robbery, and lock it up in jail.

On the other hand, given the way human brains are constructed and work, we can't "reprogram" humans to not do wrong other than by moral and legal sanctions. We simply do not know how to "reprogram" a human being. Also, it would be pointless to try to blame a human's "programmer", whatever that may be.

Humans can be said to be robotic only in a very vague sense. Hence, I think you are making a fallacy of vagueness here.

Traditional law may consider coercion or insanity as relieving one of moral culpability. But philosophical determinism? I don't think so. I doubt very much one could plead "having been determined to steal by the laws of nature" in a court of law. Don't think it would fly. Sorry.

As for common sense, well, seems to me that sometimes one person's common sense is another's lunacy. To me it seems common-sensical that our choices are determined, by our past experience, our current moods and thoughts and what we perceive of our current situation (also a coupla billion years of evolution). I wouldn't have it any other way. I have a hard time understanding why so many people place such importance on their choices being uncaused, undetermined or, draw such dreadful conclusions from the idea that they are caused and determined.


-Toad Master
Toad Master is offline  
Old 08-28-2002, 12:02 AM   #74
Junior Member
 
Join Date: Jan 2001
Location: North of Los Angeles
Posts: 29

Quote:

tronvillian quoting Kip:
"In summary, according to my logic:
1. We only hold a person morally responsible if that person could have possibly not committed the immoral action.
2. According to determinism, a person only has one possible (although many conceivable) responses to any situation.

Conclusion: we cannot hold people morally responsible for any action."

tronvillain:
I deny the conclusion because the premises do not use the word "possible" in the same sense.
Kip's argument cannot be valid unless "possible" is used in the same sense in both premises. I think the principle of charity demands we grant Kip that.

The first premise of Kip's argument is this:

1. We only hold a person morally responsible if that person could have possibly not committed the immoral action.

His conclusion though is: "we cannot hold people morally responsible for any action."

I think the conclusion requires a somewhat stronger premise:

1'. We *can* only hold a person morally responsible if that person could have possibly not committed the immoral action.

With the above amended premise, I believe Kip's argument is perfectly valid. (If both premises are true, the conclusion must be true.) But is it sound? (Valid and both premises are true.)

Now, the discussion assumes determinism, so premise 2 is granted. Therefore, if the argument is unsound, it must be because premise 1' is false.

I would suggest there are good reasons to believe premise 1' is false, because the conclusion appears to be very suspect in the light of other considerations.

For example, why should we allow a cold blooded serial killer to go free simply because he could not possibly have done other than he did given the precise state of the universe prior to commencement of his career as a cold blooded killer? Do we not have a right to protect ourselves from harm from others? If so, the conclusion must be false and hence 1' must be false.

-Toad Master
Toad Master is offline  
Old 08-28-2002, 07:34 AM   #75
Kip
Regular Member
 
Join Date: Jan 2001
Location: not so required
Posts: 228

Quote:
Originally posted by Toad Master:

Kip:

No robot built to date can comprehend right and wrong, or comprehend and be deterred from committing wrong by the knowledge that such an act might lead to punishment. Nor is there any sense in which a robot could be punished. This, I think, accounts perfectly well for the contradiction. It would be quite pointless to convict any state-of-the-art robot of, say, armed robbery, and lock it up in jail.
Okay - you offer the distinction of possessing a concept of morality. That is easy to understand, because we would only blame someone who knew what he or she was doing. So, continuing:

If a robot (such as Data from Star Trek) possessed a sense of morality and committed an immoral action, should people blame the robot, or the robot's creator (would people blame the robot or the creator?)?

Quote:
On the other hand, given the way human brains are constructed and work, we can't "reprogram" humans to not do wrong other than by moral and legal sanctions. We simply do not know how to "reprogram" a human being. Also, it would be pointless to try to blame a human's "programmer", whatever that may be.
My argument is about moral principles that should apply now as well as the future, in this world as well as hypothetical worlds, so this objection need not compromise my argument.

Quote:
Humans can be said to be robotic only in a very vague sense. Hence, I think you are making a fallacy of vagueness here.
No, we only have a vague *understanding* of humans' determined nature. But our ignorance does not diminish the degree to which we are robotic in the least. We may not fully understand how a car engine operates, but we know that the engine must be mechanical.

Quote:
Traditional law may consider coercion or insanity as relieving one of moral culpability. But philosophical determinism? I don't think so. I doubt very much one could plead "having been determined to steal by the laws of nature" in a court of law. Don't think it would fly. Sorry.
Have you never heard of the Leopold and Loeb trial?

This is part of the summation given by defense attorney and determinist Clarence Darrow:

Quote:
Is Dickey Loeb to blame because out of the infinite forces that conspired to form him, the infinite forces that were at work producing him ages before he was born, that because out of these infinite combinations he was born without it? If he is, then there should be a new definition for justice. Is he to blame for what he did not have and never had? Is he to blame that his machine is imperfect? Who is to blame? I do not know. I have never in my life been interested so much in fixing blame as I have in relieving people from blame. I am not wise enough to fix it. I know that somewhere in the past that entered into him something missed. It may be defective nerves. It may be a defective heart or liver. It may be defective endocrine glands. I know it is something. I know that nothing happens in this world without a cause.
The defendants were saved from the death penalty.

Quote:
As for common sense, well, seems to me that sometimes one person's common sense is another's lunacy. To me it seems common-sensical that our choices are determined, by our past experience, our current moods and thoughts and what we perceive of our current situation (also a coupla billion years of evolution). I wouldn't have it any other way. I have a hard time understanding why so many people place such importance on their choices being uncaused, undetermined or, draw such dreadful conclusions from the idea that they are caused and determined.
The foundation of my argument is that the power to commit the immoral action is necessary for moral blame. This is a very popular idea and that is why people such as Darrow argue that determinism should exempt people from blame. If you agree, we can accept this standard as granted. If you disagree, I ask that you establish your own system of morality. That may be difficult because I have no idea how we could do that.

My suspicion is that you claim that morality and determinism are not mutually exclusive but rather mutually necessary, but what you truly mean is that the world is amoral but determinism is necessary for deterrents. Blame and deterrents, however, are two different things. I entirely agree that we need deterrents, but I do not understand how we could blame anyone for committing an action that they possess no power to stop and are "destined" to commit.

Quote:
1. We only hold a person morally responsible if that person could have possibly not committed the immoral action.

His conclusion though is: "we cannot hold people morally responsible for any action."

I think the conclusion requires a somewhat stronger premise:

1'. We *can* only hold a person morally responsible if that person could have possibly not committed the immoral action.

With the above amended premise, I believe Kip's argument is perfectly valid. (If both premises are true, the conclusion must be true.) But is it sound? (Valid and both premises are true.)

Now, the discussion assumes determinism, so premise 2 is granted. Therefore, if the argument is unsound, it must be because premise 1' is false.

I would suggest there are good reasons to believe premise 1' is false, because the conclusion appears to be very suspect in the light of other considerations.

For example, why should we allow a cold blooded serial killer to go free simply because he could not possibly have done other than he did given the precise state of the universe prior to commencement of his career as a cold blooded killer? Do we not have a right to protect ourselves from harm from others? If so, the conclusion must be false and hence 1' must be false.
I have no problem with your edit to my premise. The distinction, however, seems to me to be quite trivial (the distinction between what we do and what we can do). Further, your objection to my argument that the premise "appears to be very suspect in the light of other considerations" does not appear to be related to this distinction at all.

First, let me say that the first premise is not argued. I do not know how one could *establish* the maxims of a moral system (without an infinite regression of appeals). This premise is simply assumed. I think the premise is quite popular, but if you disagree, perhaps you can establish your own system of morality?

Second, and most importantly, your "other considerations" are entirely beside the point. Contrary to your "example", I am not arguing that we should allow a murderer to go free if caught. Nor do I claim that we should not protect ourselves from murderers. A prison system would be a necessary deterrent to murderers and people should protect themselves to prevent murders. I am only arguing that we abolish *blame* not *deterrents*. We would lock murderers in prison but we would not say "you are evil, you have sinned, you should have done otherwise".
Kip is offline  
Old 08-28-2002, 07:22 PM   #76
Veteran Member
 
Join Date: Oct 2000
Location: Alberta, Canada
Posts: 5,658

Kip:
Quote:
This was your original objection. I replied that moral responsibility requires that "people could have done otherwise at that very moment" and not only in the future. You replied that "all THAT means is that they had other options available at that moment". But, as I have already shown, you are equivocating words such as "options" and "choice". According to determinism, there is only one choice or option in the non-trivial sense, therefore there is no choice or options and consequently no moral responsibility.
No, according to determinism, which of the options will be chosen is set, but there may be many options from which the choice is made. If you wish to say that these other options exist only in the trivial sense under determinism, then I must reply that the trivial sense is all that is required for moral responsibility.

Quote:
Your accusation that I am equivocating the word "possible" is a naked assertion for which you provide no argument.
Well, either the words are not used equivalently or I reject the first premise. That is, I only accept the first premise as true when the two premises do not use the word "possible" in the same sense.

Quote:
Now I will repeat *my* argument that I am using the words equivalently and that this non-trivial sense of "possible" is the sense required for moral responsibility:

If you were to ask the average person whether or not he or she "has the power" to choose a different option at the same point in time, if that situation were "turned back" and met again, most people would be extremely reluctant to deny that power. Why? Simply because most people do not believe they are robots! You and I are in the minority! You cannot apply your compatibilist definitions to the majority who disagree with you.

This, however, is an appeal to human convention, which is NOT an authority. If you agree with this appeal, as I do, I will not dispute the claim. I would not blame someone for something that the person is destined to do. However, if you dispute that "real possibility" is necessary for moral responsibility, I demand that you establish your competing definition as the true requirements for moral responsibility.
As I have repeatedly pointed out, it is not clear that the appeal to human convention supports your position at all! Unless people think that if the tape was wound back over and over, they would sometimes make a different choice, then human convention is completely compatible with determinism.

That is why I asked you "what is sufficient for moral responsibility?" - because who cares what most people believe is sufficient? Most people also believe in God! Finally, you cite Bill's answer to the question.

Quote:
There are many problems with this formulation.

1. This is Bill's "personal" formulation.

2. "No excuse" is a poorly defined and vague "catch-all" phrase for including or excluding behavior at your "whim".

3. Determinism, according to traditional law and common sense, *excludes* people from moral responsibility rather than includes them. Once again I mention the contradiction between our behavior toward people and robots. We do not blame bad robots, we blame their programmers. We do blame people, however, we both agree that people are essentially robotic. What distinction between people and robots justifies this contradiction? Is this distinction included within your conveniently vague "no excuse" condition?
1. It may be Bill's "personal" formulation, but it describes my position rather nicely.

2. I prefer "human motivation and decision making" rather than "no excuse." If a tiger chooses to eat someone, we do not hold it morally responsible, because its motivations and decision-making processes differ so much from those of a human that it is unrealistic to expect it to conform to human moral expectations (the same could potentially be true of an intelligent alien race).

3. It is not clear that determinism does exclude people from moral responsibility according to either traditional law or common sense. Only in cases where there is only one option (or the available options are equally terrible) in your "trivial" sense does this appear to be the case.

[ August 29, 2002: Message edited by: tronvillain ]
tronvillain is offline  
Old 08-28-2002, 07:45 PM   #77
Veteran Member
 
Join Date: Oct 2000
Location: Alberta, Canada
Posts: 5,658

Toad Master:
Quote:
Kip's argument cannot be valid unless "possible" is used in the same sense in both premises. I think the principle of charity demands we grant Kip that.
Well, I grant that he is using them in the same sense, and in that case I reject the first premise as false; however, I usually accept the first premise as true but use "possible" in a different sense, and so the conclusion does not follow. It is simply a matter of perspective.

[ August 28, 2002: Message edited by: tronvillain ]
tronvillain is offline  
Old 08-28-2002, 09:53 PM   #78
Veteran Member
 
Join Date: Jul 2000
Location: King George, VA
Posts: 1,400

Kip:

Your argument hinges on the premises:

Quote:
(1) We only hold a person morally responsible if that person could have possibly not committed the immoral action.

(2) According to determinism, a person only has one possible ... response to any situation.
Now as several people have noted, your argument goes through only if "possible" is being used in the same sense in (1) and (2).

Clearly in (2) "possible" is used in the sense of "having nonzero probability". So for the argument to be valid, this must be the sense in which it is used in (1). But if this is the sense intended in (1), it is plainly false, and practically everyone recognizes that it is false.

Imagine the following situation: a man finds a wallet with a lot of money in it, but also with documents giving the owner's name, address, etc. The circumstances are such that it's clear that there is practically no possibility that anyone else will know about it if he just keeps the money. So he has two options: take the money and run, or return the wallet, money and all, to its owner.

Now Smith is a man of the utmost virtue. If he should find himself in this situation, there is absolutely no possibility that he will keep the money; the thought doesn't even cross his mind. In other words, there is zero probability that he will take the money and run.

Jones, on the other hand, is a man of no virtue whatsoever. His motto, which he invariably acts on, is "look out for number one". If he should find himself in this situation, there is absolutely no possibility that he will return the wallet; the thought doesn't even cross his mind. In other words, there is zero probability that he will "do the right thing".

According to (1), Smith should not be praised for returning the money. Why? Because his virtue is too perfect! If only he had a drop or two of corruption in his soul, we would of course praise him to the skies, because then there would be a chance - perhaps only one in a million, but a chance - that he would keep the money. But sadly, there isn't. And so his perfect virtue makes him unworthy of praise.

By the same token, (1) tells us that Jones is not to be blamed. Why? Because his corruption is so complete! If only he had just the slightest touch of virtue - enough so that there was a tiny chance, say one in a million, that he would return the money - that would be a different matter. But luckily for him there isn't. And so the utter depravity of his character makes him undeserving of blame.

I submit that these conclusions, which are logically entailed by (1), are so absurd and counterintuitive as to make it completely untenable. Once one understands the logical implications of this principle it loses the slightest shred of plausibility. And in fact, I submit that virtually no one does accept it. Most people accept a principle that can be stated in the same words, but they mean something quite different by "possible" than "having nonzero probability" in this context.

Time for bed. I hope to have more to say tomorrow.

[ August 29, 2002: Message edited by: bd-from-kg ]
bd-from-kg is offline  
Old 08-29-2002, 04:18 PM   #79
Kip
Regular Member
 
Join Date: Jan 2001
Location: not so required
Posts: 228

Quote:
Originally posted by bd-from-kg:
Kip:

Your argument hinges on the premises:

Now as several people have noted, your argument goes through only if "possible" is being used in the same sense in (1) and (2).

Clearly in (2) "possible" is used in the sense of "having nonzero probability". So for the argument to be valid, this must be the sense in which it is used in (1). But if this is the sense intended in (1), it is plainly false, and practically everyone recognizes that it is false.

Imagine the following situation: a man finds a wallet with a lot of money in it, but also with documents giving the owner's name, address, etc. The circumstances are such that it's clear that there is practically no possibility that anyone else will know about it if he just keeps the money. So he has two options: take the money and run, or return the wallet, money and all, to its owner.

Now Smith is a man of the utmost virtue. If he should find himself in this situation, there is absolutely no possibility that he will keep the money; the thought doesn't even cross his mind. In other words, there is zero probability that he will take the money and run.

Jones, on the other hand, is a man of no virtue whatsoever. His motto, which he invariably acts on, is "look out for number one". If he should find himself in this situation, there is absolutely no possibility that he will return the wallet; the thought doesn't even cross his mind. In other words, there is zero probability that he will "do the right thing".

According to (1), Smith should not be praised for returning the money. Why? Because his virtue is too perfect! If only he had a drop or two of corruption in his soul, we would of course praise him to the skies, because then there would be a chance - perhaps only one in a million, but a chance - that he would keep the money. But sadly, there isn't. And so his perfect virtue makes him unworthy of praise.

By the same token, (1) tells us that Jones is not to be blamed. Why? Because his corruption is so complete! If only he had just the slightest touch of virtue - enough so that there was a tiny chance, say one in a million, that he would return the money - that would be a different matter. But luckily for him there isn't. And so the utter depravity of his character makes him undeserving of blame.

I submit that these conclusions, which are logically entailed by (1), are so absurd and counterintuitive as to make it completely untenable. Once one understands the logical implications of this principle it loses the slightest shred of plausibility. And in fact, I submit that virtually no one does accept it. Most people accept a principle that can be stated in the same words, but they mean something quite different by "possible" than "having nonzero probability" in this context.
bd:

Let me formalize our arguments a bit. Mine:

p1. We only hold a person morally responsible if that person possessed the power to not commit the condemned action.
p2. According to determinism a person does not possess the power to not commit any action and is physically destined to do so.
p3. Determinism.
----------------
c1. We can never hold anyone morally responsible for their actions.
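
Kip's schema is, on its face, a valid piece of propositional logic, whatever one makes of the premises. As an illustrative sketch (the symbols MR, P, and D are my own shorthand, not from the post), the derivation is simple modus tollens and can be machine-checked:

```lean
-- Propositional sketch of the argument above (symbols are illustrative):
--   MR : we may hold the agent morally responsible
--   P  : the agent possessed the power to do otherwise
--   D  : determinism holds
variable (MR P D : Prop)

-- p1 : MR → P,  p2 : D → ¬P,  p3 : D  ⊢  ¬MR
theorem no_moral_responsibility
    (p1 : MR → P) (p2 : D → ¬P) (p3 : D) : ¬MR :=
  fun hMR => (p2 p3) (p1 hMR)
```

So any dispute must target the premises, which is exactly where the thread goes: bd-from-kg and tronvillain attack p1, while the discussion stipulates p3.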

Yours attempts to undermine p1:

p1b. If p1 is not popular, p1 is false.
p2b. The idea that we should not hold people responsible for actions, because these people are perfectly moral or immoral and therefore have no real choice (according to p1), is not popular.
----------------
c1b. p1 is false (and therefore c1 is unproven)

First, I deny your p1b. Human convention is no authority. My argument assumes p1. If you disagree with that moral system I cannot imagine proving that my moral system is true and others are false. What would I appeal to? So unless you assume p1 my argument is lost on you because of moral relativism.

However, if we were to appeal to human convention I still think that human convention (and the history of moral philosophy) agrees that the requirement of "real" (as opposed to trivial) possibility for moral responsibility is quite popular.

To demonstrate that, I have a further objection to your argument (besides denying p1b). I also deny p2b. I do not think that it is at all obvious that most people would agree that the "people" in your example should be morally responsible. The reason I think this is because your argument is subtle but, upon inspection, quite incoherent.

The truth is that your example does not only assume my first premise p1, but also my third premise, p3 (determinism). Most people, however, deny determinism and therefore most people would find your argument incoherent. Your people who have no choice to be moral or immoral are simply impossible. The masses believe that everyone has a choice. There is no such thing as a perfectly moral or immoral robotic person and your argument only works to the extent that it deceives the reader into regarding your impossible moral robots as real people with free will, the only people most people claim to know. I maintain that most people would respond to your example with confusion, not agreement, and that those who agree have been deceived.

[ August 29, 2002: Message edited by: Kip ]
Kip is offline  
Old 08-29-2002, 04:50 PM   #80
Kip
Regular Member
 
Join Date: Jan 2001
Location: not so required
Posts: 228

tron:

Quote:
Originally posted by tronvillain:
No, according to determinism, which of the options will be chosen is set, but there may be many options from which the choice is made. If you wish to say that these other options exist only in the trivial sense under determinism, then I must reply that the trivial sense is all that is required for moral responsibility.
Okay, we are distinguishing between these two domains:

DOMAIN A: conceivable, trivial, apparent
DOMAIN B: possible, non-trivial, real

Forgive me if that language is loaded but let's not argue semantics if we both recognize what these two domains signify.

So, my argument:

p1. We only hold a person morally responsible if that person possessed the power to not commit the condemned action.
p2. According to determinism a person does not possess the power to not commit any action and is physically destined to do so.
p3. Determinism.
----------------
c1. We can never hold anyone morally responsible for their actions.

Your objection, as you clarified, is not that I am equivocating "possible" (as I assured you I was not) but rather that you simply deny p1.

I had thought that p1 was a very popular notion, and dissent is a surprise to me. Upon inspection, however, I am unable to articulate exactly why p1 is necessary (or unnecessary). Indeed, I do not know how to establish any moral maxim whatsoever; at some point the maxim is simply "assumed".

However, efforts have been made, particularly efforts to demonstrate that the maxim in question entails consequences with which human moral convention disagrees (and therefore that the maxim is false). So, I suppose all that is left between you and me is to demonstrate that denying p1 leads to disagreeable consequences (as I would do) or that accepting p1 leads to disagreeable consequences (as you would do if you continue). I think you have already made some small efforts (these sounded like paraphrases of Dennett) to demonstrate that accepting p1 is absurd. I wish you would repeat and elaborate upon that argument, because I did not fully understand it the first time.

As for my demonstration, I allude to the contradiction I have already mentioned (have you addressed this yet?) between human attitudes toward robots and other humans. How do you justify this contradiction?

Do you distinguish between the robots of today and the robots of the future (or would we blame the creator of all robots and never the robot)? Such sophisticated robots may have distinguishing features such as knowledge of moral codes and expectations, meta-desires, and world modelling. If you cite any of these distinctions, however, the question is why that distinction is relevant (does such a feature complete your "three requirements" for moral responsibility that you borrowed from Bill?).

As a side note, I think the ultimate conclusion to be reached from your argument is that people who commit "immoral actions" are defective, bad, broken, and that you use the word "immoral" to signify that. Indeed, that is all the word can signify according to your logic. To me, this is an abuse of language, but if you admit that this is all your "immorality" entails, then we are arguing semantics.

[ August 29, 2002: Message edited by: Kip ]
Kip is offline  
 


This custom BB emulates vBulletin® Version 3.8.2
Copyright ©2000 - 2015, Jelsoft Enterprises Ltd.