FRDB Archives

Freethought & Rationalism Archive

Old 09-18-2002, 01:11 PM   #121
Senior Member
 
Join Date: Sep 2002
Location: San Marcos
Posts: 551
Post

Why can't holding individuals responsible be a purely emotional reaction? In this case moral values can be fundamental pleasures or pains, not derivatives.

I like utilitarianism btw but have some problems with the theory: what does it mean to say that something has "utility"?

If it means that the object brings pleasure, well then there are different types of pleasure and each is fundamental.

I like sex because it brings a certain type of pleasure, one can then say that one likes the death penalty for criminals because that brings to him a different type of pleasure.

Also, utilitarianism seems to ignore why an individual should concern him- or herself with society's state. It seems to imply that the greatest good for society = the greatest good for the individual, but obviously the individual may see things as pleasurable that are harmful to society. I don't see how that gap is bridged; if organisms are expected to go by the pleasure principle, then one should expect them to go against the social good whenever convenient. The only way around this seems to be to make the promotion of the social good a pleasure for the organism in itself.

[ September 18, 2002: Message edited by: Primal ]
Primal is offline  
Old 09-18-2002, 01:27 PM   #122
K
Veteran Member
 
Join Date: Aug 2002
Location: Chicago
Posts: 1,485
Post

Primal:

Good questions. I would argue that there is an underlying drive to punish those who don't behave morally. I also believe that drive is evolutionary in nature. Individuals with traits that allowed them to organize into groups had a clear survival advantage over those with antisocial behavior. Groups that removed or altered the behavior of antisocial individuals also had an advantage. Therefore, I believe we have a drive to act in a morally (socially) acceptable manner that sometimes conflicts with our own self-preservation drives. I also believe that as members of society, we have a drive to preserve the social groups we're in. Therefore, the utility you asked about would lie in the survival of the species. Individuals would not necessarily be aware of this species-level utility. For them, it would simply satisfy the natural desire to punish antisocial behavior.
K is offline  
Old 09-18-2002, 01:38 PM   #123
Senior Member
 
Join Date: Sep 2002
Location: San Marcos
Posts: 551
Post

Good points, K. I'm thinking that promoting social norms would likewise become a drive, as would punishing wrongdoers and behaving in ways we consider "moral". Morals themselves could hence become drives, just as real as the sex drive. In which case morals may not be a means to an end but an end in themselves. Is it possible, then, that as social animals evolved, certain morals would become genetically or epigenetically ingrained? I think so.
Primal is offline  
Old 09-18-2002, 01:43 PM   #124
K
Veteran Member
 
Join Date: Aug 2002
Location: Chicago
Posts: 1,485
Post

Primal:

I think your post hits the nail on the head. Morals would be an end in themselves in terms of natural drives.
K is offline  
Old 09-19-2002, 12:11 PM   #125
Veteran Member
 
Join Date: Jul 2000
Location: King George, VA
Posts: 1,400
Post

Primal:

I agree with you entirely that morality should be based on consequences, and that libertarian free will would undermine true moral responsibility. More importantly, I agree that a person is morally responsible for an act just insofar as it flows from persistent psychological characteristics such as character and personality, or at least was made possible by them.

I don’t agree that a robot would be morally responsible if it could feel pain. This would be a relevant, but hardly a sufficient, condition. However, I agree that robots would be morally responsible if they satisfied certain conditions that are certainly satisfiable in principle by robots. I suspect that you were just being a bit sloppy here.

As for holding a person responsible being a purely emotional reaction, this is clearly not correct. Normal people do not base their judgments of moral responsibility on emotional reactions. This is demonstrated by the fact that they are almost always open to moral reasoning showing that their initial reaction was mistaken, and even that the way that they have reacted consistently to such cases is mistaken. One can argue that the term “mistaken” is meaningless here, but even if this is true the fact that someone can be led to change his mind by patient moral reasoning shows clearly that much more is involved than emotions.

I agree with you that the fact that utilitarianism per se is unable to give a reason for a person to take other people’s interests into account is a problem, but it may show only that the theory is incomplete. There are two ways to interpret utilitarianism:

(1) To say that an act is right means that it conduces to the “greatest good for the greatest number”. (We won’t bother to try to make this more precise just now.)

(2) The choice that conduces to the “greatest good for the greatest number” is, as a matter of fact, always “right”.

Your criticism is (IMO) pretty much decisive against the first version, but it is always open to the advocate of the second to give an account of what it means to say that an act is right that does show why a rational person would take other people’s interests into account.

In other words, the first version purports to give a complete account of morality, whereas the second purports only to offer a criterion of rightness without committing itself to a specific theory of what “rightness” is.

As to your latest post, trying to “derive” morality by showing how “moral” behavior might have “evolved” seems to me to be a dead end. It is easy to show how certain behaviors that no one would call “moral” (such as killing all the men and raping all the virgins of a defeated tribe) might have been produced by natural selection. But as soon as you adopt some criterion for deciding which “evolved” behaviors “count” as moral, you have already defined what you mean by “moral” – i.e., you have already decided what sorts of behaviors are “right” and “wrong”. At that point your evolutionary explanation is purely psychological. That is, if true it is part of the explanation of why we behave as we do, but it has no implications regarding what actions are “right” and “wrong”.
bd-from-kg is offline  
Old 09-19-2002, 12:18 PM   #126
Veteran Member
 
Join Date: Jul 2000
Location: King George, VA
Posts: 1,400
Post

K:

You say:

Quote:
We get bogged down trying not to hold people responsible for acts they had no control over. I think a better way to look at it is that we hold people morally responsible when doing so will help prevent detrimental behavior in the future.
But as Kip points out, we routinely punish dogs and small children to “prevent detrimental behavior in the future” without holding them morally responsible. Something’s missing in this account. I’ll come back to this eventually in a reply to Kip.

Your first Sept. 18 post has the same problem. We can and do routinely punish people (and animals) for all of these reasons without holding them morally responsible. What Kip is arguing is that we should continue to punish people for essentially these reasons, but should never hold anyone morally responsible.

In a later post you say:

Quote:
Free will is a non-issue in the crime / punishment / societal protection scheme.
Certainly libertarian free will is a non-issue, since it is a logically incoherent concept. But the question of whether someone acted “freely” or did something “of his own free will” clearly is relevant to the question of moral responsibility. LFW is simply a mistaken or confused analysis of the sense in which a person must have acted freely in order to be morally responsible.

Finally, your last post is a good illustration of the problem I pointed out in my reply to Primal. For example:

Those individuals that had traits that allowed them to organize into groups had a clear survival over those with anti social behavior. Groups that removed or altered the behavior of antisocial individuals also had an advantage.

But many other traits, such as being big, fast, and strong, also conferred a survival advantage. Why aren’t these considered “moral” traits? Your answer, in effect, is that the traits that “count” are the ones that allowed the individuals who had them to organize into groups and to act in ways that were advantageous to those groups. Well then, this is essentially your definition of moral behavior: behavior that allows one to organize into a group (or “fit into” an existing one), and behavior that benefits one’s group.

But some behavior that is clearly advantageous to the individual, or to propagating his genes, may be detrimental to the group as a whole. For example, a strong urge to impregnate as many women as possible will clearly help a man propagate his genes, yet if he does so by peaceful means it is unlikely that the group will take strong measures to restrain him. Also, a certain percentage of the population has always consisted of psychopaths (not insane killers, but simply people who don’t care in the least about anyone else and pursue their own interests single-mindedly). (This strategy seems to work well as long as it is not followed by more than about 5% of the population.) Such behaviors are not considered immoral because they represent unsound evolutionary strategies – they don’t – but simply because they are, well, immoral: they are detrimental to the interests of the society as a whole.

Finally, some traits that would be advantageous to the group as a whole, such as a total willingness to sacrifice one’s life for the group, have not evolved for obvious reasons. Shall we therefore say that such behavior is plainly immoral?

But at this point we have to ask what exactly is the relationship between evolved traits and moral behavior. It appears that the best that we can say is that certain kinds of behavior that we would call “moral” have evolved because they are favored by natural selection, but that some other traits that we would not call moral have also evolved for the same reason, and some traits that we call moral when they appear (usually as a result of social conditioning) did not evolve since they are “selected against”. Interesting, but not terribly relevant to moral philosophy.
bd-from-kg is offline  
Old 09-19-2002, 12:31 PM   #127
Veteran Member
 
Join Date: Jul 2000
Location: King George, VA
Posts: 1,400
Post

Kip:

1. “Free will” yet again.

On this subject I just want to clean up some loose ends.

Quote:
bd:
It seems clear to me that many people believe that... some people are certain (at least some of the time) to commit a crime such as stealing an old lady’s purse if it seems clear that they will benefit significantly with no risk.

Kip:
Sorry, but I challenge this ... claim. I do not think the notion that anyone is ever certain to do anything is popular... But certainty itself is a strong word. People are sufficiently unpredictable...
I think that you’re confusing two different meanings of “It is certain that Y will happen”. The first meaning is that the available information makes it certain that Y will happen. The second is that, given the exact state of affairs at this moment, it is inevitable that Y will happen. Of course no one ever knows the exact state of affairs at any given time. And since human beings are incredibly complicated, it may well be true that the available information is never enough to determine with certainty what choice a person will make. But few thoughtful people, at any rate, really believe that it is never the case that the exact state of affairs (including the state of the agent’s brain and mind) makes it inevitable that he will choose to do Y rather than Z. Even advocates of LFW almost always claim only that it is sometimes the case that the current state of affairs does not determine the agent’s choice. The claim is that these undetermined choices are responsible for the character and motives that determine the other choices – rather like your take on my example of a man who is certain to return a lost wallet.

Quote:
I was not questioning your logic so much as raising an eyebrow at the liberties you take in defining the term "free will". You appear to be "picking and choosing" whichever definition pleases you most and satisfies your need for moral responsibility.
2. Morality and moral responsibility

Now let’s get back to morality proper, and specifically the question of moral responsibility. I listed seven propositions that I suggested were logically equivalent. Actually, on reconsideration I agree that they’re not logically equivalent (for reasons that will be explained later). But since we’re trying to get to the conclusion that people are sometimes morally responsible, it’s more natural to list them in the opposite order. So here they are in reverse (but with the original numbering):

7. The effects of punishing Smith for killing Jones are preferable to those of any alternative.
6. Smith should be punished for killing Jones.
5. Smith deserves to be punished for killing Jones.
4. It is just to punish Smith for killing Jones.
3. It is just to make Smith answer for killing Jones.
2. Smith is answerable for killing Jones.
1. Smith is morally responsible for killing Jones.

Now the only steps that you have questioned are the ones from 7 to 6 and from 6 to 5. In this post (because of limited time) I’m going to look only at the first one.

Obviously the transition from 7 to 6 is based on consequentialism, the idea that the “rightness” of an act depends solely on its consequences.

Oddly enough, you seemed to agree with this idea at one point:

Quote:
Now that I understand you meant the preference of an “objective moral system”, obviously your use of “should” and “preference” (in 6 and 7) are equivalent (almost by definition).
But they are “equivalent” only if consequentialism is true – i.e., if whether one “should” do something depends on whether its consequences are preferable to those of other choices. But in that case, clearly 7 entails 6, which you deny. So I’m puzzled as to what you could have meant here.

But elsewhere you attack consequentialism on several grounds:

Quote:
The morality of an action cannot be a function of only its consequences, because, how do you measure the consequences?
But how do you measure anything relating to human affairs? What does it mean to say that democracy is preferable to tyranny? Do you have a government-o-meter handy to read out the exact quantitative difference between the desirability of the two?

Deontological theories have the same problem. If the “rightness” or “wrongness” of an act is an intrinsic property of the act itself, how exactly do we measure this supposed property? Surely some kinds of actions are more “wrong” than others: murder, for example, is more wrong than failing to call your mother on Mother’s Day. But how do we compare the “wrongness” of these things? Is murder twice as wrong as failing to call your mother? Ten times as wrong? A hundred? A thousand? How do you measure such things?

Quote:
Would you measure the consequences in terms of their consequences and so on ad infinitum?
Of course not. This is dealt with in any decent elementary textbook on moral philosophy. The standard analysis goes as follows: As Hume pointed out in the passage you quoted, many purposes relate to “instrumental goods” – that is, things that are good because they lead to or produce other things. These other things are often themselves instrumental goods. But eventually such chains must terminate in things that are desired for their own sake. Such things are referred to as “intrinsic goods”. Of course, there are also “intrinsically bad” things – that is, things it is considered desirable to avoid.

Note that in referring to some things as “intrinsically good” we are not necessarily saying that they have an objective, intrinsic property of “goodness”. A subjectivist can happily accept the distinction between instrumental and intrinsic goods by interpreting “intrinsically good” as meaning simply that the thing is in fact desired for its own sake (by himself or the agent, or whomever, depending on the brand of subjectivism). An objectivist will of course say that it is objectively desirable for its own sake, which is indeed an intrinsic property.

In any case, an intrinsic good can always be interpreted as a state of affairs, and this is the standard interpretation. Thus if you say that your happiness is an “intrinsic good,” you are saying that your being happy is an intrinsically good state of affairs. The same goes for “intrinsically bad”.

Now we can say more precisely what (according to consequentialism) it means to say that an act is good or right. It means that, if we look at all of the states of affairs produced by the action and consider only those that are intrinsically good (or bad), the balance of intrinsic goodness over intrinsic badness is more favorable than for any alternative. (Again, all of this can be interpreted objectively or subjectively.)

So the consequences of an act are not judged by further consequences ad infinitum at all. In fact, the consequences aren’t judged by their consequences at all. “Being a consequence” is a transitive property: if B is a consequence of A and C is a consequence of B, then C is a consequence of A. So if we are already looking at all of the consequences of an act (as I specified earlier) there are no further consequences in terms of which they could be judged.

Finally, it might be argued that an act itself can be “intrinsically good”. For example, perhaps dancing with my honey is an “intrinsically good” state of affairs. If so, a consequentialist theory would have no problem in principle with including this “intrinsic goodness” as among the consequences to be taken into account. That is, the fact that the act automatically brings about this “intrinsically good” state of affairs would be a legitimate consideration in judging whether the act is “right”. But most consequentialists (including me) would deny that an act is ever “intrinsically good” in itself; they would say (for example) that it is the happiness produced by the act of dancing with my honey that is intrinsically good. In any case, any consequentialist will deny that the “intrinsic goodness” of an act can ever consist of the fact that it has a mysterious property of “ought-to-be-doneness”. A consequentialist will say that, while it is possible that an act ought to be done because it is “intrinsically good”, it is never the case that an act is “intrinsically good” because it ought to be done. This is putting the cart before the horse.

Quote:
7 says that, considering all available options, punishing Smith is the best. But if all of the options are bad, that is not saying much. Indeed, if there are only two options, "killing all human beings" and "punishing Smith", the latter is preferable. But it does not follow, (or only very awkwardly) that Smith "should" be punished for killing Jones or that any possibility "should" happen, only that that event is the least undesirable.
To say that someone should do something is quite different from saying that a certain state of affairs “should” obtain. So far as I can see, the latter could only mean that the state of affairs in question is intrinsically good. But to say that one “should” do something clearly does not imply that the consequences will be intrinsically good, only that they will be better than the consequences of the alternatives. Thus, to say that a general should order a retreat does not mean that retreating will have “good” consequences, only that they will be less bad than the consequences of standing and fighting and having the entire army destroyed or captured. Or suppose that a power grid is getting overloaded. Shutting it down will deprive a lot of people of power for a time, but refusing to shut it down will result in the entire system being destroyed. Surely you would agree that it should be shut down?

Now of course in these cases (and in any others) you could represent the worse consequences of the alternatives as a “good consequence” of the better option. Thus, the general saves the army by ordering a retreat; shutting down the power grid system saves it from destruction. Sure, but these statements are just saying that the consequences are better than the alternative. And this is the very criterion that should be applied, according to consequentialism.

So I simply do not see how the fact that all options are “bad” (if you don’t compare them to the alternatives) entails that one should not select the one with the least bad consequences. Even in your example I don’t see your point. How can you say that Smith should not be punished if failing to punish him will result in the annihilation of the human race? This looks to me like a no-brainer. Even if the option were to kill an innocent man or see the human race wiped out, I don’t see what you’re getting at. Obviously in such a case one should kill the innocent man.

Quote:
The truth, I suspect, is some mix [consequentialist/deontological] .
But the truth can only be a “mix” if there is such a thing as an intrinsic property of “ought-to-be-doneness”. As I said before, it is difficult if not impossible for a non-theist to give any reasonable account of the nature or source of such a property. This is why deontological theories have been abandoned by many (I would say most) moral philosophers who are not theists nowadays.

The problem may not be apparent if you do not understand what an “intrinsic” property is, so let me give a couple of examples of non-intrinsic properties. Say that I have invented a machine that can produce an exact duplicate of an inanimate object. I put in a painting by Renoir, out pops a perfect copy, and I hang the two side by side. Even the most sophisticated tests cannot determine which is the original, for the obvious reason that they are identical. Now let’s say that the one on the right is the original. It is not an intrinsic property of this painting that it is the original; it is simply a fact about its history. The intrinsic properties of the two are exactly the same. In fact, that’s what it means to say that they are identical.

Or, say that a child is given the name “Sam” by his parents. Unfortunately, at the age of two months he gets separated from them by some freak circumstance, and is raised by another couple who name him “Joe”. Eventually the original parents locate him and take him back. But they want to call him “Sam” while he insists that his name is “really” Joe. Who’s right? No one. The child’s name is not an intrinsic property, so there is no “objective truth” as to what it is.

Now deontologists maintain that, unlike the property of being the “original” or the property of being named “Joe”, the “rightness” of an act is an intrinsic property of the act itself. Most moral philosophers nowadays find this claim unintelligible - in fact, rather bizarre. This supposed property is neither physical (it can’t be detected by any scientific test) nor logical; in fact, it doesn’t fit into any of the categories that all other intrinsic properties fit into. And if it is sui generis – unlike any other property – where does it “come from”? How do some acts, and not others, come to have this property? How do you tell which are which? How did you come to know about this distinguishing criterion?

This is what I mean by saying that no one has been able to give a plausible account of the nature of this supposed intrinsic property of “ought-to-be-doneness”. It is the apparent unanswerability of questions like these that has led many moral philosophers to reject the idea of such a property, and hence deontological theories – or even theories that are partly deontological.

That’s all I have time for at the moment.
bd-from-kg is offline  
Old 09-19-2002, 01:40 PM   #128
Senior Member
 
Join Date: Sep 2002
Location: San Marcos
Posts: 551
Post

Quote:
I don’t agree that a robot would be morally responsible if it could feel pain. This would be a relevant, but hardly a sufficient, condition. However, I agree that robots would be morally responsible if they satisfied certain conditions that are certainly satisfiable in principle by robots. I suspect that you were just being a bit sloppy here.
Yes, I agree, but I was merely saying that a robot capable of such acts – which would require a certain degree of reasoning and intellect – could very much be held morally responsible if it could experience pain.

Quote:
As for holding a person responsible being a purely emotional reaction, this is clearly not correct. Normal people do not base their judgments of moral responsibility on emotional reactions.
This is the very thing under debate, and I would disagree on the basis of not seeing what else moral judgments could be reduced to. There is also personal experience: when I feel something is morally wrong, I see it as wrong due to a feeling I experience when such action is taken. Even utilitarianism is reduced to an emotional standard – oftentimes the standards of pleasurable and painful emotions.


Quote:
This is demonstrated by the fact that they are almost always open to moral reasoning showing that their initial reaction was mistaken, and even that the way that they have reacted consistently to such cases is mistaken. One can argue that the term “mistaken” is meaningless here, but even if this is true the fact that someone can be led to change his mind by patient moral reasoning shows clearly that much more is involved than emotions.
Yes, but mistaken in what sense? In the sense that they got their facts wrong, or were ignorant, and hence experienced a different emotional reaction than they otherwise would have had their facts been right or they been more informed? Or in the sense that the emotion was fundamentally inappropriate? I do not see how the second could be disproven by anyone, but the first could be, as new evidence is revealed; which is often the case.

In this case moral reasoning may stem from emotional evaluations of given facts and may change as knowledge increases. However emotion would still provide the underlying basis.

Quote:
I agree with you that the fact that utilitarianism per se is unable to give a reason for a person to take other people’s interests into account is a problem, but it may show only that the theory is incomplete. There are two ways to interpret utilitarianism:

(1) To say that an act is right means that it conduces to the “greatest good for the greatest number”. (We won’t bother to try to make this more precise just now.)

(2) The choice that conduces to the “greatest good for the greatest number” is, as a matter of fact, always “right”.
Well, utilitarianism actually starts by saying that to an organism the greatest good is what is most conducive to that organism's happiness and the greatest evil is what is detrimental to that happiness. From this an unwarranted leap is made from the greatest good of the organism to the greatest good of society.

Quote:
Your criticism is (IMO) pretty much decisive against the first version, but it is always open to the advocate of the second to give an account of what it means to say that an act is right that does show why a rational person would take other people’s interests into account.
Well, that depends: if one is to define "moral" as the greatest good for society, then our disagreement would be fundamental. In which case all I can do is ask: what does "greatest good" mean? And how does one measure whether something is of the greatest good for society?

If one says it is because that type of organism likes X, and generalizes, one is starting from the organism and then extending to society, and there is a gap here. I suppose one can then take the "majority's likes" as the greatest good, and define that as moral. However, there will still be a problem, as the standard of "likes" was already accepted as a valid condition for establishing morality, in which case one has to ask why the majority's likes count and the minority's do not. The only way through this is to say the majority's likes count as moral by definition. However, in this case the world must be given to bacteria, as they are the most numerous and would certainly like to feast on us; hence medical science could then be seen as "selfish" and "immoral". I find such a conclusion somewhat unacceptable and fundamentally inaccurate, and hence would try for a different definition of morality, one more organism-centered and based on a certain type of "likes". I see such a "majority rules" view as needing qualification and as somewhat at odds with basic moral sense if taken at face value.


Quote:
As to your latest post, trying to “derive” morality by showing how “moral” behavior might have “evolved” seems to me to be a dead end.
I'd disagree, as I think science can very much offer information relevant to moral discussion. This is because any discussion of morality must take human nature into account, and the sciences, evolutionary and such, are the most powerful ways of understanding human nature. In the above case I would not be trying to establish a given moral via explaining its origins, but explaining where that moral may have come from. I suppose in the realm of morals the line between justification and explanation becomes blurry, but I think analyzing the source of a given moral norm we emotionally attach to is relevant – mainly because we can make mistakes in moral reasoning, like the type mentioned above, based on ignorance and misinformation.


Quote:
It is easy to show how certain behaviors that no one would call “moral” (such as killing all the men and raping all the virgins of a defeated tribe) might have been produced by natural selection. But as soon as you adopt some criterion for deciding which “evolved” behaviors “count” as moral, you have already defined what you mean by “moral” – i.e., you have already decided what sorts of behaviors are “right” and “wrong”. At that point your evolutionary explanation is purely psychological.
Well, it is mostly epigenetic, which is psychological, but many psychological drives arise from biology.

In response to your comment about the tribe I would like to make two points:

1) Even if we evolved to see rape after a conquest as moral, that would not refute my post.

2) Morals would, in my system, reflect a certain type of value, not any value in itself, in which case people may have evolved rape drives as a value but seen rape at the same time as immoral. How could this be possible? One might ask this, and the reasons could be:

A) To avoid inner-tribal rape/conflict. If a man tries to take every women he sees within his own tribe he invites organized revolt from other men.

B) Women don't like to be raped, and such women may not make as good wives and/or mothers. Women may also retaliate. Having too many kids also means you spread your resources thinly among each, and it also means you invite a lot of demands on yourself. In this case it could be easier to see why a man would rather have a woman willingly than by force.

In both the above cases rape may be seen as more situational: acceptable if practiced only on other tribes and undesirable within the tribe. Also, rape will likely only occur if the male feels he has the resources to take care of future offspring and/or can somehow police a woman who may escape or retaliate in some manner.

C) Human beings may have a natural sense of empathy and a willingness not to harm others. Such an emotion would be essential in maintaining cooperation within a group and probably evolved in a deep sense. If this is the case, people inclined to rape may feel somewhat conflicted, as part of them would see it as causing pain to another, and this would bring emotions of empathy. Such emotions can be strengthened more or less by a given society, to the point where they totally outweigh the urge to rape. Mutual respect would also be needed for cooperation, and this emotion can likewise be enhanced or retarded in a given society to the point where it outweighs the emotions that promote rape. I doubt either you or I would rape a woman even if there were no punishment for such an act, mainly because we have been conditioned by a society that has reinforced certain biological predispositions. Could a nonsocial animal be conditioned in the same manner? Doubtful.

Quote:
That is, if true it is part of the explanation of why we behave as we do, but it has no implications regarding what actions are “right” and “wrong”.
I do not adhere to the is/ought dichotomy. I define morals as certain types of emotional reactions to given information. Thus the whole idea of how morals may have evolved is more explanatory than defining. In the case that humans evolved to judge certain actions as moral, the action would be moral by definition, that is, if it is still present in human beings today. Noting biological origins could only be useful for making certain generalizations and conclusions about an already established moral, such as how deeply rooted it is, how general it is, and how constant it will be within a given human's or group's lifetime.
Primal is offline  
Old 09-19-2002, 01:50 PM   #129
Senior Member
 
Join Date: Sep 2002
Location: San Marcos
Posts: 551
Post

Quote:
Those individuals that had traits that allowed them to organize into groups had a clear survival over those with anti social behavior. Groups that removed or altered the behavior of antisocial individuals also had an advantage.

But many other traits, such as being big, fast, and strong, also conferred a survival advantage. Why aren’t these considered “moral” traits? Your answer, in effect, is that the traits that “count” are the ones that allowed the individuals who have them to organize into groups and to act in ways that were advantageous to those groups. Well then, this is essentially your definition of moral behavior: behavior that allows one to organize into a group (or “fit into” an existing one), and behavior that benefits one’s group.
Once we speak of how morals would have evolved as certain emotions used to keep groups together, I don't think either K or I am saying that a moral is a rule that helps groups organize. K and I, I think, are saying that certain emotions may have evolved while people were evolving into groups. These would be emotional drives of a certain type by themselves, the drives we call morals. Now, a given moral may at some point become detrimental to a group; however, this does not mean the emotion isn't there, or that the emotion didn't come about as a result of evolving within a group.

To use an analogy, certain drives and instincts evolve within organisms because they help such organisms survive and pass on their genes. These instinctive behaviors may become detrimental to survival and reproduction, though, in which case the organism is SOL.

Does this mean, though, that the innate behavior no longer counts as instinctive because it arose from past success and is no longer successful? Nope. It just means that the instinct no longer does what it evolved to do.

Morals, I likewise believe, are types of emotions that evolved; whether or not they help groups organize now is irrelevant, as they will remain strong drives. This can be compared to birth control: sex evolved as a way to help us reproduce and has become pleasurable for that reason. But sex is still a strong motivating and pleasurable force even with birth control in place. Its purpose in the past holds no relevance to how it motivates the organism at the moment; only the feeling does. The same goes for morality: it can still motivate even if it has lost the purpose originally intended by natural selection.
Primal is offline  
Old 09-19-2002, 02:41 PM   #130
Veteran Member
 
Join Date: Jul 2002
Location: Overland Park, Kansas
Posts: 1,336
Post

Greetings:

I think morality, for me, is a personal thing. I don't really worry too much about what most other people do.

But, I do often think about what I should do. As an artist, I have a nearly infinite number of ideas that I could represent, but a limited amount of time. I thus have to determine what the best use of my time is.

Though these aren't often moral choices, they are choices nonetheless.

How do determinists view this kind of thought? When one agonizes over which choice to make, and makes a decision by weighing the benefits vs. the costs of the outcome of each choice, and chooses the most benefits with the least cost, is that decision determined?

Keith.
Keith Russell is offline  
 

