FRDB Archives

Freethought & Rationalism Archive

The archives are read only.


Old 08-19-2002, 06:13 PM   #21
Veteran Member
 
Join Date: Dec 2002
Location: Gatorville, Florida
Posts: 4,334

I just posted this in one of the other threads, but it looks like it belongs here...

The moral facts of "good" and "bad" are human-invented constructs. They have no values (no MEANING, even) outside of whatever values (meanings) humans apply to them.

Thus, "right" and "wrong" never exist on a per se basis, but only on some basis set forth by humans. If there are universally "right" or "wrong" ("good" or "evil") moral facts, then those must arise out of universally shared human characteristics. Among the candidates for such characteristics are: <ol type="1">[*]Valuing children over adults;[*]Valuing human life over animal life, plant life, and inanimate objects;[*]Valuing a peaceful and bountiful existence over a sparse and hunted existence;[*]Etc.[/list=a]Anything that we now see as a common human moral value arises from the sorts of characteristics that we share with the rest of humanity. We might disagree about methodology (particularly on something like the question of peace), but we all do agree that at least we would like to live our own lives in peace rather than in a constant state of danger.

From the non-theistic perspective, morality cannot arise from any other source but human experience. As humanity grows and matures, so too do our shared moral values. Genocide was an entirely acceptable form of warfare until very recently in human history. Genocide is even commanded by God in the Old Testament. But civilized humanity has almost universally renounced genocide as a method of gaining victory over our enemies. (As the old Tom Lehrer song goes, "we'd rather kill them off by peaceful means....")

==========

Adding to the above, for the purposes of this thread, I would also say:

Morality exists as a human paradigm because it is part of human evolution. Somehow, during our evolutionary past, humans found that instituting a moral code made it easier to survive. Morality became a survival characteristic. It isn't hard to understand how it can operate that way.

And from those early beginnings, it's all been about the bigger-stronger group having the better chance of survival. How do you build a bigger-stronger group? You band together under some sort of a common moral code (or "religion"). While the greatest of civilizations (the Roman Empire and Western Civilization being the two main examples) found it to be in their own best interest to promote religious tolerance (at least, within some sort of limits), you cannot deny the strength exhibited by a cohesive group of religious fanatics in totally-unified pursuit of their declared common goal.

But all of this is "cause and effect" in action. All of evolution is "cause and effect" in action! So, we should not be at all surprised to discover that whatever we think about moral values, having some sort of shared moral framework is all part of the "cause and effect" of human survival and evolution into our shared future.

== Bill
Bill is offline  
Old 08-19-2002, 06:21 PM   #22
Veteran Member
 
Join Date: Dec 2002
Location: Gatorville, Florida
Posts: 4,334

Quote:
Originally posted by MadMordigan:
Let's all kill each other because we cannot do otherwise.

Let's be clear, the deterministic position is exactly that. Those who kill other people could not have done otherwise.
It is true that the determinist believes that the person in question could not have done otherwise at that particular instant in time. But the compatibilist position is that this does not relieve the person of moral responsibility unless the rest of society agrees (according to our shared moral standards) that the killing was a "good" one rather than a "bad" one.

People who engage in "bad" killings are better off dead, at least from the perspective of the rest of us who would like to live our lives in peace without the fear of becoming a victim of such a killer.

By the way, I really resent the comment in the first line of your post, above, which implies that humans have no option other than to kill each other. Yes, we do have such options; but we must CAUSE our fellow humans to come to believe in the existence of said options.

== Bill
Bill is offline  
Old 08-19-2002, 06:38 PM   #23
Veteran Member
 
Join Date: Dec 2002
Location: Gatorville, Florida
Posts: 4,334

Quote:
Originally posted by Kip:
I will demonstrate my argument with an example:

Replace humans with robots - not humanoid robots, either, but simple, mechanical, clunky machines. If we both agree that humans are essentially robotic, this is only a difference of number and not kind and should not compromise the argument. Now we are rid of our human prejudice towards complex, organic robots (ourselves). These robots are programmed to examine the sky, and according to whether or not the moon is showing, either kill a human child or seek food. These robots satisfy both the requirements you have mentioned: they are determined and choose from available choices. Is the robot morally responsible for the killing of the child?

If you say "yes", your position is absurd and I have nothing more to say. If you say "no", I must ask what relevant distinction exists between these hypothetical robots and humans?
Human morality is an evolved concept. As such, it has all of the rationality of the human genome, which contains twists and turns and mountains of "junk" with a few golden nuggets in a few key places.

You can't apply human morality to robots because robots are merely artifacts, and part of our human moral code is that only humans are participants in it. If your hypothetical robots existed, we would:
  • Destroy them all; and
  • Hunt down every human responsible for setting them into motion in order to determine their relative culpability for the atrocity that you describe.
Human morality exists for the benefit of humans, not robots. Thus, we ought to expect something like Asimov's Three Laws of Robotics to be one of the prerequisites to the construction of any robots capable of independent action of the sort you envision. Any morally responsible designer will naturally build this in, out of the inherent fear that the robots could otherwise turn on humanity and destroy us all. That fear is very close to the surface of human consciousness.

What is missing from your hypothetical is any sort of set-up that demonstrates that these robots are fully and completely the moral equivalent of humans. If (or when?) that ever occurs, then humans will need to re-evaluate our shared moral code vis-a-vis our robotic brethren.

However, since the moral code evolved for the sake of the protection and advancement of Homo sapiens, it is very unlikely that any mechanism would ever be accepted as morally equivalent to a human being. (In fact, Asimov's robot novels attempted to deal with exactly that issue. Asimov was quite the humanist, by the way, just in case you didn't know.)

In any case, the "relevant distinction" between your hypothetical robots and humans is the plain fact that the robots are not human! Morality exists only for humans, and robots don't qualify.

== Bill
Bill is offline  
Old 08-19-2002, 08:20 PM   #24
Beloved Deceased
 
Join Date: Jul 2000
Location: Vancouver BC Canada
Posts: 2,704

Quote:
Originally posted by Bill:
By the way, I really resent the comment in the first line of your post, above, which implies that humans have no option other than to kill each other.

You only feel that way, Bill, because of your genetics, your environment and what you ate for breakfast.

Actually, if it was unclear that the first line in my response referred ONLY to the person who had, in fact, killed, that's my fault, and I apologize. I only did so because of my genetics, my environment, and what I ate for breakfast.



Determinism always breaks down into a form of motivational nihilism. While each of us admits that, to ourselves, we have a running commentary and analysis of the different choices we could make in our own lives, determinism refuses to extend that experience to other actors. It's not much different from the problem of 'zombies': the world may be populated by automatons who neither 'feel' consciousness nor are aware of their own decision-making capability.

But I know what I experience, and that is an ongoing conscious evaluation of my life and the choices that make it up. Chasing my own tail to find the cause of the cause of the cause of the cause of the cause of my responding to this thread doesn't do anything to help me live a good life. But evaluating the words I choose to use, and being conscious of this evaluation, does serve a purpose (if only in vanity).

As a poster said above, the adoption of determinism makes arguing about it meaningless. I'll leave it to the determinists; in the meantime, I choose to go get a beer. Too bad you guys can't join me, but then, you had no choice.
MadMordigan is offline  
Old 08-20-2002, 12:27 AM   #25
Veteran Member
 
Join Date: Jul 2001
Location: England
Posts: 5,629

Quote:
Originally posted by Kip:
1. We only hold a person morally responsible if he or she could possibly have done otherwise
2. Robots have only one possible response to any given situation
......

Does anyone agree with this strong determinist position?

[ August 19, 2002: Message edited by: Kip ]
You are equivocating on 'could possibly have done otherwise'.

Most people are happy, for example, to say that two hydrogen atoms can combine with an oxygen atom to make water, or they can combine with a sulphur atom to make hydrogen sulphide.

If we take a large collection of hydrogen, oxygen and sulphur atoms, most people will say that the hydrogen atoms which combined with the oxygen atoms could easily have happened to have combined with the sulphur atoms instead.

Most people will also agree that chemistry and Newtonian mechanics are deterministic, and if you looked at all the paths of the hydrogen atoms which just happened to hit the oxygen atoms, then it was impossible for them to have combined with the sulphur atoms.


So people are quite happy to say that different things could have happened and also say that nothing else could have happened.

There is no contradiction between your statements 1 and 2.

Try it for yourself.

Try telling yourself that chemistry is deterministic and that it is therefore wrong to say that any two hydrogen atoms can combine with an oxygen atom, because it was impossible for them to do anything other than combine with the sulphur atom. So if chemistry is deterministic, your logic means that there are no laws of chemistry!

People are quite happy to say 'could have possibly done otherwise' about totally deterministic systems.

Another analogy. Most lotteries produce numbers by deterministic methods. Does this mean that you can sue if you lose, because you could not possibly have won?
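To make the lottery point concrete, here is a minimal sketch in Python (the details are hypothetical, of course): a draw driven by a seeded pseudo-random generator is completely determined by its seed, and yet we happily say that any of the numbers 'could have' come up.

Code:

import random

# A toy lottery draw, completely determined by its seed: run it
# twice with the same seed and the same "winning" number comes up.
def draw_lottery(seed):
    rng = random.Random(seed)
    return rng.randint(1, 49)  # any of 1..49 "could possibly" be drawn

print(draw_lottery(2002))  # deterministic: always the same number for this seed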
Steven Carr is offline  
Old 08-20-2002, 04:49 AM   #26
Veteran Member
 
Join Date: Dec 2002
Location: Gatorville, Florida
Posts: 4,334

Quote:
Originally posted by Steven Carr:
Another analogy. Most lotteries produce numbers by deterministic methods. Does this mean that you can sue if you lose, because you could not possibly have won?
OK, now, where are the class action attorneys who are willing to pursue this one? I've got a box full of losing lottery tickets (held against the day that I might "hit it big" and the IRS will then allow me to write off all of my losing tickets....).

== Bill
Bill is offline  
Old 08-20-2002, 08:30 AM   #27
Kip
Regular Member
 
Join Date: Jan 2001
Location: not so required
Posts: 228

Quote:
Originally posted by Doubting Didymus:
Wait a minute, are we really arguing here that humans have only one single option in every situation? If that were the case, then wouldn't it be true that none of us would have a choice in whether we passed a moral judgement? Wouldn't it be pointless to argue about whether we should morally judge people, if we have only one option in each situation?

If we can choose to pass judgement, then a murderer can choose not to murder. If we cannot choose, why do we argue about whether we should?
You raise an excellent point.

If my position, strong determinism, is correct, then arguing for it is entirely futile. However, just as my actions will never change another's opinion from what that opinion was physically determined to be, so your asking "why do we argue about whether we should?" will never change *my* behavior (whether I stop or continue arguing for strong determinism) from what that behavior was always determined to be. We must not assume that people are entirely rational.

Personally, I have an unchosen desire to further the discussion on this message board and so I am going to continue. I would leave the question of "fatalism" for another thread. The consequences of determinism are becoming more and more devastating.
Kip is offline  
Old 08-20-2002, 08:39 AM   #28
Kip
Regular Member
 
Join Date: Jan 2001
Location: not so required
Posts: 228

Quote:
Originally posted by Bill:
...In any case, the "relevant distinction" between your hypothetical robots and humans is the plain fact that the robots are not human! Morality exists only for humans, and robots don't qualify.
Bill, you posit that a relevant distinction between these robots and humans is that the robots are not human. My immediate reply is to ask why that distinction is relevant. Your statement feels anthropocentric.

Would your distinction also apply to:

1. Intelligent Extra Terrestrials (E.T.)
2. Intelligent Robots (Data from Star Trek?)
3. Neandertals
4. Homo erectus

How exactly do you define human? Defining humanity can be very difficult because people have unique bodies and unique genetic codes. So what exactly is "human" and why is this distinction of being human relevant?
Kip is offline  
Old 08-20-2002, 08:53 AM   #29
Kip
Regular Member
 
Join Date: Jan 2001
Location: not so required
Posts: 228

Quote:
Originally posted by Steven Carr:
You are equivocating on 'could possibly have done otherwise'.

...

People are quite happy to say 'could have possibly done otherwise' about totally deterministic systems.
Steven Carr, thank you for a very thoughtful reply.

I fully agree that there is a distinction to be made here. That distinction, I think, is the distinction between:

1. possibility
2. conceivability

You refute my proof by accusing me of equivocating on the phrase "could possibly have done otherwise". Note, however, that I qualified the phrase with the word "possibly" and did not write "could have done otherwise". I am quite aware of the distinction that you are raising, and I assure you that I was not equivocating.

According to your refutation, I use the same phrase twice: first I only mean that a person could conceive of doing otherwise, just as we can conceive of carbon combining with various particles; but second I mean that a person could possibly have only one choice, just as a carbon particle can only combine with one particle in a specific way at any given moment.

Steven Carr, I assure you that I did not mean "conceivably done otherwise" where I wrote "possibly done otherwise". Here is the same response I wrote to Tronvillain:

Quote:
"When, on the subject of free will, we say "I could have chosen otherwise" we simply mean that there were other choices available, and if we encounter the same set of choices again we very well may choose otherwise."

Who is this we? For surely you are not referring to the majority of human beings. When a person says "I could have chosen otherwise" he does not mean, as you assert, that "I could not have done otherwise then (because that would somehow "diminish responsibility"), but I can do otherwise in the future". Rather, the person means "I could have done otherwise at that very moment". Your definition admits that people are more or less robotic and the majority of human beings deny that claim.
So, no, I was not equivocating. By "could possibly have done otherwise" I mean "possibly", not "conceivably", and this is the meaning that people use every day, not the more convenient definition compatibilists use, which admits that humans are biological robots.

[ August 20, 2002: Message edited by: Kip ]
Kip is offline  
Old 08-20-2002, 02:19 PM   #30
Regular Member
 
Join Date: Feb 2002
Location: Wellington, New Zealand
Posts: 484

If people are using the term "could have" in regard to human behaviour, then to be consistent they should also use this phrase with regard to other systems as well.

When we are sentencing a murderer we say that they did not have to kill someone, that they could have done something different. However, if a person was forced to kill somebody else in self-defence it would not be murder. If someone is threatening to kill you and your family with a gun, you may not have a reasonable alternative.

The use of "could have" also applies to the lottery, as it does to most systems. The lottery did not have to produce that winning number; it could have produced another number. If, say, the lottery machine was rigged so that it could only produce your ticket number, then we would say there was no alternative in terms of that number being produced.

The use of "could have" also applies to the weather. It did not have to be fine or wet today. It could have been fine today, or it could have been wet.

It is only in retrospect that things must have happened. In retrospect it must have been fine or wet yesterday, because of various causes, as it is a historical fact that it was fine or wet yesterday. In retrospect you must have got out of bed today due to various causes, as it is a historical fact that you got out of bed today.
Kent Stevens is offline  
 
