FRDB Archives

Freethought & Rationalism Archive

The archives are read only.


Old 07-21-2003, 12:23 PM   #11
Veteran Member
 
Join Date: Mar 2002
Location: 920B Milo Circle Lafayette, CO
Posts: 3,515
Default

Quote:
Originally posted by Nowhere357
I agree it's a putative sense only. However, your conclusion that this would reduce morality to mere opinion does not follow.
I would agree that it does not follow -- though this is due in large part to the fact that likes and dislikes have to do with desires, while opinions have to do with beliefs. To shift from a discussion of desire to a discussion of belief is a category mistake.

My statement is that we have no 'moral sense'. Instead, we have an ability to sense our own likes and dislikes. People who claim to be using a 'moral sense' are doing nothing more than imposing an obligation on others: 'If I like X, then you must do X; and if I do not like Y, you may not do Y.'

Which hardly seems like a suitable foundation for a moral theory.


Quote:
Originally posted by Nowhere357
I think of it as an heritable instinct that guides our behavior as we relate to groups. . .
I believe that there are heritable factors influencing our likes and dislikes.

Furthermore, I hold that likes and dislikes are relevant to right and wrong -- in my case, moral statements are statements about whether certain likes and dislikes are "good for us" in the sense of being generally compatible with fulfilling our other likes and dislikes.

To this extent, there is an evolutionary influence. But this is wholly inconsistent with the thesis that "I have evolved a disposition to treat you in a particular way, therefore it is morally permissible that I do so."

This type of direct link is unjustified.


Quote:
Originally posted by Nowhere357
Morality codifies how we deal with others, and rises above personal likes and dislikes, evidenced by the fact that we can take action based on our morality which may be detrimental to the individual. Such as taking risks to help people in need.
I agree with this. However, it is compatible with a wide variety of possible theories. We are still justified in asking: Of all of the theories compatible with these statements, which of those theories work and which do not? The "aesthetic" theory still does not work for the reasons provided. Namely, because it makes moral statements nothing more than reports of individual sentiments. Morality, conceived in such a way, has none of the elements of mutually agreed upon rules that promote our well-being generally.

Now, we may well have evolved a disposition to prefer life according to rules that benefit us generally. Yet, the question still arises, what should we do when our evolved dispositions deviate from what benefits us generally? Should we go with the evolved dispositions, or should we go with what benefits us generally? This is the type of test that determines which is the true foundation of moral claims. And, on this test, I hold that the evolved dispositions come out lacking.

Whether evolved or learned, a disposition is good in virtue of its capacity to make us better off. It is not good in virtue of the fact that it has a particular evolutionary pedigree. Because if something with an evolutionary pedigree ends up not making us better off, we keep what makes us better off, and abandon that with the evolutionary pedigree.


Against my statement -- "The main issue is that you don't need any type of special 'super-natural' value property to sense to put the 'good' before the 'like'" -- you write:

Quote:
Originally posted by Nowhere357
I imagine that's why no-one has postulated a supernatural value property.
Not explicitly, but this is often the next claim that people make. It is argued, "We must put the 'likes' of the person making the moral claim before the 'good' because putting the 'good' before the 'like' requires postulating some sort of intrinsic value property. No such property exists. Therefore, we must put the 'like' before the 'good'."

It was not explicitly stated, yet previous experience gives me reason to believe it would be in the mind of most readers, and the argument would be incomplete if this issue were not addressed.

The first premise is false. Putting the good before the like does not require intrinsic value properties. It only requires admitting that the person making the moral claim has only a fraction of the total set of likes and dislikes that exist.


Quote:
Originally posted by Nowhere357
So I see no problem with thinking of morality as codification of our putative moral sense. We may actually be sensing genetic information, or we may actually be sensing group mind.
In this, it seems that the 'putative moral sense' is unnecessary -- in the same way that our 'sense of direction' is unnecessary with respect to North and South. It might be useful to fall back on in a pinch, but it should yield to more reliable methods when possible. 'North' ceases to be 'that direction that an individual senses to be north' and is, instead, what turns out to be North in fact.

Accordingly, morality becomes what benefits the group in fact, rather than what a particular individual may sense as beneficial to the group. The 'sense' does not determine the right answer. It is, instead, an instrument of last resort for making rough approximations.
Alonzo Fyfe is offline  
Old 07-22-2003, 08:44 AM   #12
Veteran Member
 
Join Date: Mar 2003
Location: The South.
Posts: 2,122
Default

Quote:
yguy: Why would you think antipathy to the idea of infanticide is culturally imparted? Does it not make at least as much sense to say the tolerance of infanticide is culturally imparted?
I don't necessarily think that. It certainly seems to me that either situation is possible.

But it does make one at least stop and wonder when saying (as in the OP):

Quote:
we all have a general feeling (I hope) that killing is wrong, that causing others pain is wrong.
whether that feeling is truly inherent to us versus culturally instilled. That was my point.


Michelle
Bad Kitty is offline  
Old 07-22-2003, 12:16 PM   #13
Veteran Member
 
Join Date: Mar 2003
Location: Grand Junction CO
Posts: 2,231
Default

Quote:
Alonzo Fyfe
My statement is that we have no 'moral sense'. Instead, we have an ability to sense our own likes and dislikes. People who claim to be using a 'moral sense' are doing nothing more than imposing an obligation on others: 'If I like X, then you must do X; and if I do not like Y, you may not do Y.'
I understand the point, I'll respond again.

The moral sense expands our awareness BEYOND our personal likes and dislikes. For example, we'll feel compelled to help a child in danger, even though this carries risk to ourselves. So it does not follow that using the moral sense is nothing more than imposing our personal likes and dislikes.

Quote:
Which hardly seems like a suitable foundation for a moral theory.
I disagree. We develop a moral theory because we have the moral sense. We don't help the child because, upon reflection, we think this would benefit society. The rationalizations come after the fact - and that fact is that we feel like we should help.

Quote:
I believe that there are heritable factors influencing our likes and dislikes.
Furthermore, I hold that likes and dislikes are relevant to right and wrong -- in my case, moral statements are statements about whether certain likes and dislikes are "good for us" in the sense of being generally compatible with fulfilling our other likes and dislikes.
Yes. Notice that your moral sense expands your awareness from individual to group.

Quote:
To this extent, there is an evolutionary influence. But this is wholly inconsistent with the thesis that "I have evolved a disposition to treat you in a particular way, therefore it is morally permissible that I do so."
Why is that, exactly? You've said "It is true that morality is concerned with likes and dislikes -- but it is concerned not with what we DO like, but with what we SHOULD like." So "I have evolved a disposition to understand how I SHOULD treat you in a particular way, therefore it is morally permissible that I do so."

Quote:
I agree with this. However, it is compatible with a wide variety of possible theories. We are still justified in asking: Of all of the theories compatible with these statements, which of those theories work and which do not? The "aesthetic" theory still does not work for the reasons provided. Namely, because it makes moral statements nothing more than reports of individual sentiments. Morality, conceived in such a way, has none of the elements of mutually agreed upon rules that promote our well-being generally.
But you haven't actually addressed my objections, so I'm not convinced the "aesthetic" system does not work. My understanding is that our "moral statements" are codified, therefore involving knowledge, reason, communication. So moral statements derived from the "moral sense" ARE more than "individual sentiments". AND the moral sense is more than "individual sentiments" in the first place!

Quote:
Now, we may well have evolved a disposition to prefer life according to rules that benefit us generally. Yet, the question still arises, what should we do when our evolved dispositions deviate from what benefits us generally? Should we go with the evolved dispositions, or should we go with what benefits us generally? This is the type of test that determines which is the true foundation of moral claims. And, on this test, I hold that the evolved dispositions come out lacking.
Nothing in my position precludes using reason to reach moral conclusions, in fact it demands it, so I pass your test.

Quote:
Whether evolved or learned, a disposition is good in virtue of its capacity to make us better off. It is not good in virtue of the fact that it has a particular evolutionary pedigree. Because if something with an evolutionary pedigree ends up not making us better off, we keep what makes us better off, and abandon that with the evolutionary pedigree.
All this is fine, and doesn't contradict my position. We have an innate "putative moral sense" which guides us to determine right from wrong, which is above our normal ability to determine our likes/dislikes.

Quote:
Not explicitly, but this is often the next claim that people make. It is argued, "We must put the 'likes' of the person making the moral claim before the 'good' because putting the 'good' before the 'like' requires postulating some sort of intrinsic value property. No such property exists. Therefore, we must put the 'like' before the 'good'."
Well, I have no interest in "supernatural". And I'm not sure "intrinsic value property" has meaning. "Value" requires a "valuer" and is subjective, not intrinsic to the object or idea being valued.

Putting the "good" before the "like" is simply individual perspective expanding to group perspective.

Quote:
In this, it seems that the 'putative moral sense' is unnecessary -- in the same way that our 'sense of direction' is unnecessary with respect to North and South. It might be useful to fall back on in a pinch, but it should yield to more reliable methods when possible. 'North' ceases to be 'that direction that an individual senses to be north' and is, instead, what turns out to be North in fact.
From my view, the "moral sense" can equate with "empathy". Does your system for codifying morality include the concept of empathy?

Quote:
Accordingly, morality becomes what benefits the group in fact, rather than what a particular individual may sense as beneficial to the group. The 'sense' does not determine the right answer. It is, instead, an instrument of last resort for making rough approximations.
I can't object to that - we do use reason to temper our empathy. Note, however, that people don't try to behave morally because there are codified moral laws; we codify moral laws to describe how we try to behave morally. Just like things don't fall down because there is a codified law of gravity, rather we codified the law of gravity because things fall down.
Nowhere357 is offline  
Old 07-22-2003, 01:14 PM   #14
Veteran Member
 
Join Date: Mar 2002
Location: 920B Milo Circle Lafayette, CO
Posts: 3,515
Default

Quote:
Originally posted by Nowhere357
The moral sense expands our awareness BEYOND our personal likes and dislikes. For example, we'll feel compelled to help a child in danger, even though this carries risk to ourselves. So it does not follow that using the moral sense is nothing more than imposing our personal likes and dislikes.
Let's try it this way.

The reason that I object to a 'moral sense' is, actually, an Occam's razor argument that it does not do any work. It is ontologically impotent.

I can fully account for a person compelled to help a child in danger simply by stating that the agent desires that the child be safe. This is just one desire among many, which the agent weighs against his other desires, such as the desire to experience pleasure, the aversion to pain, the desire to have sex, the desire to eat. We simply add to it a desire that the child is well.

An agent might, just as easily, acquire a desire that the child be in great pain (as a sadist might have). There is no more reason to believe that the desire that the child be well involves a 'moral sense' than that the desire that the child be in pain involves an 'immoral sense'.

So, there is no reason to believe that it exists.


Quote:
Originally posted by Nowhere357
I disagree. We develop a moral theory because we have the moral sense. We don't help the child because, upon reflection, we think this would benefit society. The rationalizations come after the fact - and that fact is that we feel like we should help.
We all have different desires. We figure out that we can better fulfill our desires if we cooperate, so we come up with the rules for cooperation.

Desires are learned -- we can 'train' people to acquire certain desires just as we can 'train' them to acquire certain beliefs. We (or, at least, those of us who are moral) learn good desires when we are children, and have bad desires beaten out of us. We come to like those things that are 'good for us all' and dislike things that are 'bad for us all.'

There is no 'moral sense' telling us what is good for us all. But there is an objective fact of the matter that can be learned.

So, you are right, we do not do the right thing because we reflect on it and discover it is good for society. We do the right thing because we were taught to value these things when we were children -- we value the 'right thing' for its own sake.

But, again, it does not involve a 'moral sense'. It involves nothing more complex or mysterious than learned desires.


Quote:
Originally posted by Nowhere357
Why is that, exactly? You've said "It is true that morality is concerned with like and dislikes -- but it is concerned not with what we DO like, but with what we SHOULD like." So "I have evolved a disposition to understand how I SHOULD treat you in a particular way, therefore it is morally permissble that I do so."
Again, the distinction lies in the difference between the sense, and the thing sensed. The 'should' refers to the thing that the sense 'should' pick out, not necessarily the thing that the sense 'does' pick out. It involves an unwarranted leap of logic to say that the sense DOES pick out X therefore it SHOULD pick out X.


Quote:
Originally posted by Nowhere357

Nothing in my position precludes using reason to reach moral conclusions, in fact it demands it, so I pass your test.
Not really.

Let's look at it this way. There are two options: "Picked out by moral sense" and "Good for the group."

What happens when "picked out by moral sense" differs from "good for the group"?

In these types of cases, "good for the group" implies "not that which is picked out by the moral sense." That is to say, if, in cases where the two tests diverge, you go with "good for the group", then there is no reason not to also go for "good for the group" in those instances where they coincidentally agree. Why argue for one standard when they agree and a different standard when they diverge, when one standard in both instances yields the same results?

Note: Moral Aesthetic theory actually tends to be interpreted as "if the individual's preferences diverge from what is good for the group -- if the individual finds aesthetic/moral value, for example, in torturing small children -- then this overrules any group concerns." This is often given as a reason for rejecting this type of theory.

It is like saying that "North" is whatever direction one's sense of direction picks out as north. If this deviates from true north, then we go with perceived north rather than true north.

This goes back to the original claim that a 'moral sense' is ontologically impotent. Why use two explanatory mechanisms when you can do everything you need to do with one?


Quote:
Originally posted by Nowhere357
From my view, the "moral sense" can equate with "empathy". Does your system for codifying morality include the concept of empathy?
It is a strange equation, given that many of the things people define as "right", "just" and whatever seem quite removed from empathy. Concepts of punishment, for example, seem difficult to explain in terms of empathy -- even empathy for the victim, because nothing about the suffering of the victim entails punishment.

Be that as it may, I guess one can say that the theory I define makes the 'right thing to do' that thing which the perfectly empathetic person would do in fact, without requiring that the agent actually be empathetic. The perfectly empathetic person would consider all desires. But since none of us are perfectly empathetic, there is a discontinuity between doing what our moral sense tells us and what a perfectly empathetic person would do.

Yet still another problem that I have with moral sense theories is the countless cases where people have followed their moral senses into evil. The slave owner, crusader, inquisitor, death squad member, torturer, rapist, murderer, drug smuggler, all have a 'moral sense' that tells them that there is nothing wrong with what they do. This also, I think, should give one pause in attempting to argue that right and wrong is picked out by some type of moral sense.
Alonzo Fyfe is offline  
Old 07-22-2003, 04:59 PM   #15
Veteran Member
 
Join Date: Mar 2003
Location: Grand Junction CO
Posts: 2,231
Default

Your post makes your same points, again without directly responding to my arguments. That's rather frustrating - how can I detect an error in my position, if your rebuttal does not actually address my position?

I'm not saying your system is invalid or inconsistent. Any system which involves reason and empathy has a chance to be valid and consistent.

Quote:
It is a strange equation, given that many of the things people define as "right", "just" and whatever seem quite removed from empathy. Concepts of punishment, for example, seem difficult to explain in terms of empathy -- even empathy for the victim, because nothing about the suffering of the victim entails punishment.
The suffering of the victim "entails" that we keep the perp from creating more victims, for example. Empathy also allows us to recognize unjust punishment.

I insist the equation is not strange. Empathy and reason are sufficient and necessary to develop a sound moral system.

Quote:
Yet, still another problem that I have with moral sense theories, are the countless cases where people have followed their moral senses into evil.
I have the same problem. It's not merely that some people don't see with moral clarity - they actually seem to see things in a way that opposes what I would call moral clarity. Instead of being attracted to the light, they seem attracted to the dark. I don't understand this, although I have several ideas.

For example, their behavior reminds me of the behavior of cancerous cells - killing neighbors even though this would eventually kill the body.
But then the behavior sometimes seems more similar to the behavior of white blood cells - going berserk on a particular "invader", for the good of the body.
-----------------

Under your system, why help the child if this may injure or kill us or even endanger our families?
Nowhere357 is offline  
Old 07-22-2003, 06:31 PM   #16
Veteran Member
 
Join Date: Mar 2002
Location: 920B Milo Circle Lafayette, CO
Posts: 3,515
Default

Quote:
Originally posted by Nowhere357
Your post makes your same points, again without directly responding to my arguments.
In the standard course of logic, responding to an "Occam's razor" rebuttal requires showing something that can be explained with the hypothetical entity that cannot be explained in any other way.

The point that I have raised is that everything you describe can be accounted for more easily simply by talking about beliefs and desires. The response to this would be something of the form "beliefs and desires cannot handle X. For X you need a 'moral sense'."

No 'X' means no reason to believe that there is a 'moral sense'. That's my rebuttal.

If you are suggesting that there is an X that beliefs and desires alone cannot handle, then I am still at a loss as to what that X might be.


Quote:
Originally posted by Nowhere357
Empathy and reason are sufficient and necessary to develop a sound moral system.
I am unable to clearly see where we are talking about epistemology, ontology, or axiology.

On the question of what IS right and wrong (ontology), I do not think that empathy or reason are either sufficient or necessary. Right and wrong is simply a description of relationships between desires and other desires. A good desire is one that tends to fulfill other desires; an evil desire is one that tends to thwart other desires.

On the question of how we know what is right and wrong, reason is certainly required -- this is almost axiomatic. Empathy may be useful (since empathy reveals information about how certain desires relate to the desires of others). But it is not necessary. We can understand the relationship without empathy.

On the question of how we get people to do that which is right and not do that which is wrong, reason is again required, and empathy is extremely useful. It is far easier not to thwart the desires of others if you feel their frustration, share their pain. It is still not necessary -- a person with good desires will do what the empathetic person will do.


Quote:
Originally posted by Nowhere357
I have the same problem. It's not merely that some people don't see with moral clarity - they actually seem to see things in a way that opposes what I would call moral clarity. Instead of being attracted to the light, they seem attracted to the dark. I don't understand this, although I have several ideas.
Perhaps THEY are the ones with the correctly tuned moral sense, and WE are the ones with a defective moral sense? You must not only explain the difference, but explain the higher value that one option has over the other. Yet, I sense a problem in explaining this value in terms of a 'moral sense' -- because this would be begging the question in picking which 'moral sense' is correct.


Quote:
Originally posted by Nowhere357
Under your system, why help the child if this may injure or kill us or even endanger our families?
Because one wants to. It is the only reason people do anything.

Mathematically:

C > (I * Ri) + (D * Rd) + (F * Rf)

where:

C = the strength of the desire that the child is safe,
(I * Ri) = the strength of the aversion to injury times the risk of injury
(D * Rd) = the strength of the aversion to death times the risk of death
(F * Rf) = the strength of the aversion to harm to one's family times the risk of harm

Morally, a desire that the child be safe counts as a good desire, because it tends to fulfill other desires. It may thwart desires in certain specific instances, but this does not change the fact that it fulfills desires generally. Because it fulfills desires generally, it is a desire that we ought to have.
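As a rough illustration of the weighing described above (an editorial sketch only -- the function name and all the numbers are invented for the example), the inequality can be read as a simple decision procedure:

```python
# Illustrative sketch of the desire-weighing inequality C > (I*Ri) + (D*Rd) + (F*Rf).
# The model says the agent acts to save the child exactly when the strength of
# that desire outweighs the risk-weighted strengths of the competing aversions.
# All numbers below are invented for the example.

def helps_child(C, I, Ri, D, Rd, F, Rf):
    """Return True if the desire that the child be safe (C) outweighs the
    aversions to injury (I), death (D), and harm to one's family (F), each
    weighted by its estimated probability of occurring (Ri, Rd, Rf)."""
    return C > (I * Ri) + (D * Rd) + (F * Rf)

# A strong desire for the child's safety against modest risks:
print(helps_child(C=80, I=50, Ri=0.3, D=100, Rd=0.05, F=90, Rf=0.1))  # True
# The same desire against near-certain injury and likely death:
print(helps_child(C=80, I=50, Ri=0.9, D=100, Rd=0.8, F=90, Rf=0.5))   # False
```

Nothing in the model requires a separate 'moral sense': the outcome depends only on the relative strengths of ordinary desires and aversions, weighted by risk.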
Alonzo Fyfe is offline  
Old 07-22-2003, 10:57 PM   #17
Veteran Member
 
Join Date: Mar 2003
Location: Grand Junction CO
Posts: 2,231
Default

Quote:
Alonzo Fyfe
In the standard course of logic, responding to an "Occam's razor" rebuttal requires showing something that can be explained with the hypothetical entity that cannot be explained in any other way.
The point that I have raised is that everything you describe can be accounted for more easily simply by talking about beliefs and desires. The response to this would be something of the form "beliefs and desires cannot handle X. For X you need a 'moral sense'."
No 'X' means no reason to believe that there is a 'moral sense'. That's my rebuttal.
If you are suggesting that there is an X that beliefs and desires alone cannot handle, then I am still at a loss as to what that X might be.
I've tried to show something which is unaccounted for by talking only about beliefs and desires, and again I feel that you haven't addressed it.

One of our most basic desires is to avoid risk to life and limb.
Your system does not explain why we then desire to risk life and limb in the name of morality. Clearly, something about moral awareness can suppress ordinary - personal perspective - desires.

Quote:
On the question of what IS right and wrong (ontology), I do not think that empathy or reason are either sufficient or necessary. Right and wrong is simply a description of relationships between desires and other desires. A good desire is one that tends to fulfill other desires; an evil desire is one that tends to thwart other desires.
I agree. You are talking about the individual.

Quote:
On the question of how we know what is right and wrong, reason is certainly required -- this is almost axiomatic. Empathy may be useful (since empathy reveals information about how certain desires relate to the desires of others). But it is not necessary. We can understand the relationship without empathy.
You've lost me. You said right and wrong is based on desire, and does not require reason or empathy. Now you say it needs reason, and maybe empathy. But now you also include interrelatedness with others, maybe that explains it.

And it is necessary - "understanding" a relationship doesn't indicate any motivation to risk life and limb. It takes emotion to do that - feeling.

Quote:
On the question of how we get people to do that which is right and not do that which is wrong, reason is again required, and empathy is extremely useful. It is far easier not to thwart the desires of others if you feel their frustration, share their pain. It is still not necessary -- a person with good desires will do what the empathatic person will do.
I fail to see how a person could have "good desires" which lead to empathic behavior, unless empathy was involved in developing those "good desires". Again, people are motivated by emotion, not logic.

Quote:
Perhaps THEY are the ones with the correctly tuned moral sense, and WE are the ones with a defective moral sense? You must not only explain the difference, but explain the higher value that one option has over the other. Yet, I sense a problem in explaining this value in terms of a 'moral sense' -- because this would be begging the question in picking which 'moral sense' is correct.
Sure, all good people may be insane. But there is a profound difference which should be easy to see: if everyone goes around killing everything all the time, then everything would tend to be killed. If everything is killed, there is no life. No life means we fail to meet our most basic desire. Life wants to live -- I want to live -- so a morality system which opposes life is inferior, self-destructive, cancerous.

Quote:
Because one wants to. It is the only reason people do anything.
But they also want to approach pleasure and avoid pain. We know why we want to do that. Why should that desire be overridden or suppressed? What possible reason is there for C to outweigh the immediate factors of personal danger?

This is the element missing from your system, which the moral sense provides.

Quote:
Morally, a desire that the child be safe counts as a good desire, because it tends to fulfill other desires.
But your formula C>(I*Ri)+(D*Rd)+(F*Rf) shows strong, immediate value to each of the right hand terms. Where does the value for the left hand term come from? When would C ever be greater?

Without moral awareness, we have only the stick and carrot to try and get people to behave morally.
Nowhere357 is offline  
Old 07-23-2003, 07:20 AM   #18
Veteran Member
 
Join Date: Mar 2002
Location: 920B Milo Circle Lafayette, CO
Posts: 3,515
Default

I would like to focus on the core point.


Quote:
Originally posted by Nowhere357
But your formula C>(I*Ri)+(D*Rd)+(F*Rf) shows strong, immediate value to each of the right hand terms. Where does the value for the left hand term come from? When would C ever be greater?
The statement 'Agent desires that his child is well' is just a statement about how the brain is wired, and is qualitatively no different than the statements 'Agent desires that he not be in pain' and 'Agent desires that he is having sex with Jenny.'

As to how a brain can be wired that way -- an argument can be made that evolution favored a particular type of wiring. Whatever genetic influences that might exist on brain development caused it to be wired in a particular way such that people have a desire that their children are well.

Brain structure can also be modified by experience -- and through this new desires can be acquired and existing desires can be modified. This is no different from the way that experience modifies our brain structure causing new beliefs to form and existing beliefs to be modified.

Our 'moral education' consists in parents and others in society giving us experiences that aim at causing (what are believed to be) good desires to form and strengthen, and bad desires to weaken -- in the same way that general education aims at altering our brain structures in such a way that we acquire true beliefs and to be rid of false beliefs.

If I were to make a guess, I would say that the desire that a child be well is an inherited desire (and, like all inherited traits, not necessarily universal) augmented by social conditioning (which, itself, has different levels of effectiveness).

And, still, what makes a particular desire 'good' or 'bad' is its capacity to fulfill or thwart other desires directly or indirectly.
Alonzo Fyfe is offline  
Old 07-23-2003, 08:38 AM   #19
Junior Member
 
Join Date: Feb 2003
Location: Chicago
Posts: 28
Default

Quote:
Without moral awareness, we have only the stick and carrot to try and get people to behave morally.
Yup. What's your point? We might not like the idea, but this doesn't mean that the idea is false.
tudal is offline  
Old 07-23-2003, 09:48 AM   #20
Veteran Member
 
Join Date: Mar 2002
Location: 920B Milo Circle Lafayette, CO
Posts: 3,515
Default

We actually have two methods at our disposal for causing people to do the right thing, neither of which involves a 'moral sense' per se.

And, the distinction between these two methods matches the distinction between morality and law.

The first method is to cause people to have good desires and aversions -- those desires and aversions that are compatible with the fulfillment of the desires and aversions of others. Such as, an aversion to taking property without permission, an aversion to killing, and a desire to help those in need.

If we cause a person to have good desires, then that person will do the right thing "because I want to," and will do the right thing even under circumstances where there is nobody to watch over him and there are no poor consequences to suffer.

We have a reason to cause people to acquire good desires, and to encourage our neighbors to participate, because we are the 'others' with whose desires the individual's good desires will be compatible.

Furthermore, when that person grows up, his 'good desires' will be among the desires with which he teaches others to be compatible.

However, the process of teaching people to acquire 'good desires' is not perfect, and some individuals acquire bad desires anyway. For these people, we have a second method available -- the criminal law. "You may not want to do the right thing, but I am certain you do not want what we will do to you if we catch you doing the wrong thing."

There is a link between morality and law in that 'the right thing' is the thing that a person with good desires would do. And "what we will do to you if we catch you doing the wrong thing" is ultimately to be understood as "what a person with good desires will do to you if he catches you doing the wrong thing."

Those 'good desires' would include an aversion to harming the innocent, an aversion to inflicting either too little or too much harm on the guilty (i.e., a desire for proportionality), and the like. These are the moral limits of the criminal law, and they distinguish just law from unjust law.

Morality, then, is concerned with identifying and creating the proper internal constraints (good desires), while criminal law is concerned with identifying and creating the proper external constraints (sticks and carrots). Both aim at creating people who will do what a good person (a person with good desires -- a person with desires that tend to fulfill other desires either directly or indirectly) would do.

There is a great deal of room here for debate over what counts as a good desire and what a person with a good desire would do under particular circumstances. This is no different than saying that there is a great deal of room for moral debate. But there is something real to debate about. This 'something real' does not require a moral sense, or any type of value other than 'tendency to fulfill (other) desires either directly or indirectly.'

Then, it comes as no surprise that there would be a great deal of moral debate. The above account not only correctly predicts that a great deal of debate exists, but also predicts the nature of that debate (e.g., the types of claims made, the nature of relevant evidence, and the validity of the implications drawn from moral claims).
Alonzo Fyfe is offline  
 


This custom BB emulates vBulletin® Version 3.8.2
Copyright ©2000 - 2015, Jelsoft Enterprises Ltd.