FRDB Archives

Freethought & Rationalism Archive

The archives are read only.


Old 02-26-2002, 09:37 PM   #71
Pomp
Veteran Member
 
Join Date: Aug 2000
Location: Indianapolis area
Posts: 3,468
Post

bd-from-kg,

Sorry about the delay in responding.

First, your comment on my response to Dr. Retard:

I don’t understand the distinction you’re trying to make here. For consequentialist theories at least (including subjective theories) “right” means something like “producing the most good” or “producing the best state of affairs”. In this case defining “good” is tantamount to defining “right”.

Essentially, I am using the terms “good” and “right” as follows:

A state of affairs is “good,” from X’s point of view, if it leads to greater happiness for X (or: if X values it) (or: if the relationship between that thing and X has a positive CH, to use the term we jointly coined in this thread). “Good” is subjective, in my view.

An act is “right” if it is in accordance with rational normative principles (however we may arrive at these principles…I favor contractarianism, but that’s me). As you note, accordance with normative principles usually means that a “right” action will, indeed, lead to the greatest amount of “good,” taking the points of view of all agents into account. “Right” is objective in my view, in the sense that it represents compliance with what we might call an objective strategy. Whether or not this allows for “objective moral facts” is beyond me at this point.

The distinction, I think, between my view and most other consequentialist theories is that, under my system, it is sometimes, if rarely, true that an agent should not do the “right” thing.

Back to our discussion:

In the first place, when we set out to discuss something like “moral philosophy” (especially with a bunch of strangers) it is implicit in the project that we intend to use terms like “moral”, and by extension moral terms, in a more or less “standard” way, meaning the way most people use them.

Agreed. My point is that most people simply do not know what they mean when they use terms like “good” and “right.” Of course, I don’t have any formal polling data to back this assertion up, but I think it’s fairly obvious that most people will have great difficulty spelling out exactly what they mean when they use these terms. Hell, look at all the contention you and I have, and we obviously spend quite a bit of time thinking about these things.

If, for example, Jabberwocky were to make an appearance and make some seemingly nonsensical remarks, only to reveal that what he meant by saying “X ought to do Y” was that Y would enrich Joe Blow from Boise more than any other available choice, we would rightly conclude that he wasn’t “getting with the program”, but was just being disruptive.

Actually, wouldn’t that be a general egoist theory (which is very similar to my view), considered from Joe Blow’s point of view?

At any rate, I was probably out of line with my ranting post, so I’ll shut up about it now.

Next, you responded to my “proof” (not what I’d call it…), which went as follows:

Quote:
P1) Johnny is hungry.
P2) Johnny desires not to be hungry (or: Johnny values satiation of hunger) (or: Johnny’s relationship with his hunger possesses negative CH)
P3) Eating satiates hunger.
P4) All else being equal, agents will act so as to further their own values.
C1) All else being equal, Johnny would eat. (From P1-4)
P5) To say that an agent should do something is to say that that agent would do that something, given adequate information.
C2) All else being equal, Johnny should eat. (From C1 and P5)
"Ought" from "is" in seven easy steps.
I can see two problems here:

(1) C1 does not follow from P1-P4. What follows is “All else being equal, Johnny will eat.”


You’re right. I already caught this, and fixed it in my next usage of the “proof,” but I forgot to come back and fix it here.

I left out the “all else being equal” clause because it seems problematic. It’s possible that all things could look equal to the agent in his current state of K&U, but wouldn’t if he had enough K&U.

I see what you’re saying. That’s not quite what I meant, though. By C1, I simply meant that Johnny would eat if he had no conflicting values which outweighed the value he placed upon having his hunger sated.

And of course, with increased K&U the agent’s values might change. (Both of these possibilities seem most unlikely in this case, but we’re looking at logical gaps in the “proof”. These gaps will look pretty important if you try to use this form of argument to derive a conclusion that isn’t so trivially obvious.)

I agree. I simply threw that together to demonstrate that, in my view, you certainly can derive “ought” from “is,” at least in trivial cases. The form would have to be further refined to be of any use in a less obvious situation.
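For what it’s worth, here is a rough sketch of what a refined form might look like, with the hidden “no outweighing conflicting values” premise made explicit. The notation is ad hoc and purely illustrative, not anything from the original “proof”:

\[
\begin{array}{ll}
P_1: & H(j) \quad \text{(Johnny is hungry)}\\
P_2: & D_j(\lnot H(j)) \quad \text{(Johnny desires not to be hungry)}\\
P_3: & E(j) \rightarrow \lnot H(j) \quad \text{(eating satiates hunger)}\\
P_4: & D_x(\phi) \wedge (A \rightarrow \phi) \wedge \lnot C_x(A) \rightarrow W_x(A) \quad \text{(an agent does } A \text{ if } A \text{ furthers a desire and no outweighing desire conflicts)}\\
C_1: & \lnot C_j(E(j)) \rightarrow W_j(E(j)) \quad \text{(from } P_1\text{–}P_4\text{: absent outweighing conflicts, Johnny will eat)}\\
P_5: & S_x(A) \equiv W^{*}_x(A) \quad \text{(definition: “should do” = “would do, given adequate K\&U”)}\\
C_2: & \lnot C_j(E(j)) \rightarrow S_j(E(j)) \quad \text{(from } C_1, P_5\text{; valid only if what Johnny } would \text{ do survives the move to adequate K\&U, the gap bd notes)}
\end{array}
\]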

(2) P5 is not really an “is” statement; it’s a definition of what you mean by “should”.

Yes. I wasn’t sure how else to work that clearly into the “proof,” though. It might be fair to say that we can, indeed, derive “ought” from “is,” provided we are willing to use a somewhat specialized definition of “ought” and leave it at that.

As to why I don’t consider “All things being equal, Johnny should eat if he’s hungry” to be prescriptive if it is interpreted as meaning “Eating will satisfy Johnny’s value of not wanting to be hungry”, this should be obvious. A “prescriptive” statement “prescribes” some action. This sentence really doesn’t seem to prescribe anything.

As I understand it, it prescribes the most efficient means for Johnny to achieve his goals. I’ve held the position that prescriptive statements are statements about strategies for the achievement of goals since we started this discussion in the How Can… thread some months ago.

However, in less trivial cases conclusions based on the same definition of “should” would look more “prescriptive”: for example, “Johnny should stay out of trouble and stay in school.” So this is really not an important point.

I agree. We’re into semantics again.

But it isn’t really worth arguing about whether such a statement is “factual”, or whether it’s possible in principle to derive an “ought” from an “is”. My point, you may recall, was not even about whether this is possible. It was that principles of action like the Principle of Induction, Occam’s Razor, and the Golden Rule yield prescriptive statements because they are prescriptive statements, which is to say that they are not all that different from one another in their fundamental nature.

Bd, in all honesty, life has me slightly insane right now, and I’m not quite up to exploring this concept. I don’t even remember my original point. The weekend has wiped my brain clean. Sorry.

But everyone by definition prefers the satisfaction of his own values to the satisfaction of someone else’s; the question is the content of those values. If the content of someone’s values consists entirely of things conducive to his leading a happy life, he is so emotionally and spiritually impoverished as to be pitied.

Well, I disagree. I am concerned with my own happiness, not yours or anyone else’s, except indirectly. I know that sounds cold, but there you have it. It so happens that I derive happiness from the happiness of others, for a variety of reasons, not the least of which is empathy. If, for whatever reason, the happiness of others made me miserable (and the long-term prospects for learning to derive happiness from the happiness of others were dim), I would seek to prevent others from being happy.

The primary difference between us, I think, is that I view empathy as the result of our gregarious evolutionary background, and you view it as a central component of rationality. This leads to the practical effect that I consider an empathic concern for others to be one value among many (although it does seem to be one of the more important ones, for many people) to be fulfilled when crafting life strategies, while you seem to assert that it is, or should be, the primary motivating factor.

But if we are going to talk meaningfully (i.e., non-tautologically) about values, we have to distinguish between the values that involve achieving happiness for ourselves and values that involve achieving happiness for others.

I’m not sure what you’re trying to say here. I personally, when talking about values, refer to the values that will make a particular agent happy, unless I specify otherwise.

This hasn’t been my most coherent post. I hope it makes some degree of sense.
Pomp is offline  
Old 03-04-2002, 02:43 PM   #72
bd-from-kg
Veteran Member
 
Join Date: Jul 2000
Location: King George, VA
Posts: 1,400
Post

Pompous Bastard:

Sorry for the delay. Family responsibilities limited my internet time severely the last few days.

1. On the distinction between “right” and “good”

Quote:
A state of affairs is “good,” from X’s point of view, if it leads to greater happiness for X (or: if X values it) (or: if the relationship between that thing and X has a positive CH, to use the term we jointly coined in this thread).
OK, this is consistent with your subjectivist moral philosophy.

Quote:
An act is “right” if it is in accordance with rational normative principles (however we may arrive at these principles…). As you note, accordance with normative principles usually means that a “right” action will, indeed, lead to the greatest amount of “good,” taking the points of view of all agents into account. “Right” is objective in my view, in the sense that it represents compliance with what we might call an objective strategy.
Here you lose me. Surely to say that an act is “right” implies that the agent “ought” to do it? But if it results in a state of affairs that’s not “good” (i.e., one that the agent does not “value”) in what sense (in terms of your philosophy) “ought” he to do it?

Let’s take the classic test case. Say that, if X does Y, he will immediately be subjected to relentless torture resulting in prolonged agony and death. On the other hand, this is clearly the action that will lead to the greatest amount of “’good’, taking the points of view of all agents into account”. Moreover, it is unambiguously mandated by the “social contract” that he has committed himself to. (Of course, at the time it seemed very unlikely that he would find himself in his current situation.) Unless you are prepared to say that subjecting himself to prolonged agony and death will make him “happy”, in what sense “ought” he do the “right” thing?

Or are you going to say that he ought not do the “right” thing? And if so, what is the practical, operational significance of calling it the “right” thing to do? To avoid confusing and misleading people, perhaps you should call it the “compliant” or “mandated” act (on a contractarian view) or the “utilitarian” or “altruistic” choice rather than the “right” choice.

2. On “what most people mean” and moral philosophy

Quote:
My point is that most people simply do not know what they mean when they use terms like “good” and “right.”
True enough. But as I’ve argued before, the first task of the moral philosopher is to try to construe moral language in a way that does make sense and is as consistent as possible with the “logic of moral discourse”. Since this logic clearly implies an objective morality, objective theories are presumptively correct if any of them can be shown to be reasonable and self-consistent and if it can be plausibly argued that most people would decide that one of them is what they meant by moral terms like “right”, “wrong”, “ought”, “good”, etc. if they had enough time, understanding, and intelligence to think it through.

Here’s an analogy. If you ask most people what they mean by saying that 2 + 2 = 4, you will probably get a confused, incoherent response. But just because a person can’t give a clear account of what he means by saying that 2 + 2 = 4, it would be wrong to conclude that his statement is meaningless or incoherent. The job of the philosopher of mathematics is to find a reasonable, coherent account of what most people probably would agree that they mean by “2 + 2 = 4” if they had enough time and intelligence to think it through.

It seems to me that what advocates of subjective moral theories are doing is to say that, since most people can’t give a clear account of what they mean by saying that an act is “right” or that someone “ought” to do something, they don’t mean anything at all, and then to redefine such terms to mean something that has no relationship at all to the way most people use them. For example, they often define “X ought to do Y” as meaning “Y would be in X’s interest” in spite of the fact that people very often say things like “X is probably going to do Z because it’s in his best interests, but he ought to do Y”. This is the “in your face” style of moral “philosophy”, not a serious attempt to construe what people really mean when they say such things. You might just as well argue that, since most people can’t give a clear account of what they mean by “2 + 2 = 4”, it’s reasonable to say that it means that two 2-year-olds are the same as one 4-year-old.

3. Deriving an “ought” from an “is”

Quote:
bd:
P5 is not really an “is” statement; it’s a definition of what you mean by “should”.

PB:
Yes. I wasn’t sure how else to work that clearly into the “proof,” though. It might be fair to say that we can, indeed, derive “ought” from “is,” provided we are willing to use a somewhat specialized definition of “ought” and leave it at that.
Actually you can derive an “ought” from an “is” using any naturalistic definition of “ought”, if you’re willing to call your definition of “ought” a “factual” statement. For example, you can go through the same steps using the utilitarian definition. But this is generally considered illegitimate, because it’s not a “fact” that “ought” means such-and-such. Indeed, you can do the same for many non-naturalistic definitions. For example, if “X should do Y” is defined to mean “God approves of X doing Y”, the “ought” statement “X should do Y” follows immediately from this definition plus the factual premise “God approves of X doing Y”. The bottom line is that the oft-repeated statement that an “ought” cannot be derived from an “is” must be understood as disallowing a definition of “ought” as a “factual premise”; otherwise it’s trivially false, at least for many widely-held moral theories.
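Schematically (illustrative notation only, with G standing in for God in the example just given): once the definition itself is admitted as a premise, the derivation is a single substitution.

\[
\text{Def:}\quad \mathrm{Should}(X,Y) \;\equiv\; \mathrm{Approves}(G,X,Y)
\qquad\qquad
\frac{\mathrm{Approves}(G,X,Y)}{\therefore\ \mathrm{Should}(X,Y)}
\]

The same template works for the utilitarian definition, or any other naturalistic one: only the right-hand side of the definition changes.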

4. On self-interest as a “value”

Quote:
I am concerned with my own happiness, not yours or anyone else’s, except indirectly.

First off, I doubt that your “happiness” is the only thing that you care about. For example, would you be untroubled by the prospect of becoming a perfectly happy drooling idiot? Or what about being joyful, even ecstatic, but stark raving mad?

Since I think it’s almost certain that you would agree on reflection that your happiness is not the only thing you’re concerned about, let’s say that the only thing you’re concerned about is your self-interest.

But what does this really mean? What do we mean in general when we say that something is in X’s self-interest? Well, we observe X’s actions, draw reasonable inferences about what goals he’s pursuing, and then say that anything that tends to further those goals is in X’s self-interest. (If you can think of some other operational – which is to say, meaningful – definition of what it means to say that something is in a person’s self-interest, let me know.)

But of course, since a person’s self-interest is defined by the goals that he in fact pursues, it is a tautology to say that everyone always pursues his self-interest. Jeffrey Dahmer pursued his self-interest; Albert Einstein pursued his self-interest; Albert Schweitzer pursued his self-interest; you pursue your self-interest. Such statements have no factual content; they tell us nothing about the person involved.

But it’s often the case that one of the goals that a person pursues is the well-being of other people. (Not necessarily all other people; perhaps just some specific ones.) And it seems odd and pointless to say that in pursuing the goal of making someone else happy a person is “really” pursuing his own self-interest, especially once we realize that this statement is just an empty tautology. So it seems more productive (and more conducive to clear thinking) to talk about a person’s values, with the understanding that by a person’s values we mean the goals that he pursues, where we do not include the “goal” of pursuing his “self-interest” in the tautological sense, but rather the goals that constitute his self-interest.

With “values” defined in this reasonable, non-tautological sense, we can distinguish between the Jeffrey Dahmers and the Albert Schweitzers. Dahmer pursued his happiness at the expense of the happiness of others, while Schweitzer found his happiness in making others happy.

Now let’s look at your case. You say that you derive happiness from the happiness of others. What is this saying if not that the happiness of these others is one of your values? Aren’t you saying that some of your values, at least, are directed toward the welfare of other people?

So I conclude that when you say “I am concerned with my own happiness, not yours or anyone else’s, except indirectly”, the first part of your statement is just an unthinking repetition of the commonplace tautology that everyone always pursues his own self-interest, while that final “except indirectly” is a clear indication that your values really include the well-being of other people. Why are you so intent on trying to deny this? Why do you keep insisting that, even though you help other people out of empathy, you’re “really” a completely selfish person?

Quote:
If, for whatever reason, the happiness of others made me miserable ..., I would seek to prevent others from being happy.
Perhaps so, but it doesn’t and you don’t. Unlike you, I don’t think this is just a lucky accident. It’s a natural, predictable result of K&U.

Quote:
The primary difference between us, I think, is that ...you view [empathy] as a central component of rationality.
Not so. I consider the desire to do what one would choose to do if one had enough K&U to be a central component of rationality. I also think that there are rational reasons to prefer the welfare of everyone equally, and that, while empathy is not in itself a rational reason to do so, one of its important roles is to remind us of these rational reasons.
bd-from-kg is offline  
Old 03-04-2002, 05:47 PM   #73
Pomp
Veteran Member
 
Join Date: Aug 2000
Location: Indianapolis area
Posts: 3,468
Post

bd-from-kg,

Sorry for the delay. Family responsibilities limited my internet time severely the last few days.

No problem.

On my usage of “right:”

Surely to say that an act is “right” implies that the agent “ought” to do it?

Not necessarily. In my view, to say that an act is “right” merely means that it is in accordance with ethical principles that would be agreed upon in a contractarian model. I don’t think that an agent ought to perform an act for the sole reason that it is the “right” act, although that can be a contributing factor.

But if it results in a state of affairs that’s not “good” (i.e., one that the agent does not “value”) in what sense (in terms of your philosophy) “ought” he to do it?

He ought not. As I explicitly stated in my last post, I think that there are certain cases where an agent ought not do the “right” thing. Roughly, an agent ought to “cheat” if the benefit of cheating outweighs the penalty of being caught multiplied by the probability of being caught.

Note that this rough model does not take into account the “penalty” of the stress caused by the necessity of hiding the cheating, guilt, etc. A complete model would take these things into account, but I’m not going to bother in this thread. Note, also, that the difficulty of determining the exact probability of being caught cheating makes doing the “right” thing a safer bet in most cases.
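For concreteness, a minimal sketch of this rough model in code. The function name, numbers, and the optional stress/guilt term are illustrative assumptions, not anything from the discussion:

```python
# A toy sketch of the rough "cheating" decision rule described above:
# cheat if the benefit outweighs the penalty of being caught multiplied
# by the probability of being caught. The hidden_costs term stands in
# for the stress/guilt penalties a complete model would include.
# (Illustrative names and numbers only.)

def ought_to_cheat(benefit, penalty_if_caught, p_caught, hidden_costs=0.0):
    """Return True if the expected gain from cheating exceeds its expected cost."""
    return benefit > penalty_if_caught * p_caught + hidden_costs

# The torture-and-death test case: the benefit of cheating (avoiding
# torture and death) dwarfs the penalty (being reviled as unethical),
# even with certain detection (p_caught = 1.0).
print(ought_to_cheat(benefit=1000.0, penalty_if_caught=10.0, p_caught=1.0))  # True

# A mundane case: modest gain, stiff penalty, uncertain detection makes
# doing the "right" thing the safer bet.
print(ought_to_cheat(benefit=5.0, penalty_if_caught=50.0, p_caught=0.3))     # False
```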

Let’s take the classic test case. Say that, if X does Y, he will immediately be subjected to relentless torture resulting in prolonged agony and death. On the other hand, this is clearly the action that will lead to the greatest amount of “’good’, taking the points of view of all agents into account”…Unless you are prepared to say that subjecting himself to prolonged agony and death will make him “happy”, in what sense “ought” he do the “right” thing?

In no sense. To use the rough model outlined above, the benefit of “cheating” (the avoidance of torture and death) is sufficiently high to outweigh the penalty of being caught (being reviled as an unethical person) in most cases, even assuming that the probability of being caught is P=1.

It should be noted that the rough model, not to mention the test case, does not take specific details into account. For example, if “cheating” would involve betraying some person or ideal that X holds dear, X may well find that death (CH = 0) is preferable to life after betrayal (CH < 0).

Moreover, it is unambiguously mandated by the “social contract” that he has committed himself to. (Of course, at the time it seemed very unlikely that he would find himself in his current situation.)

Bear in mind, of course, that the “social contract” is not something that X has ever personally agreed to. It is simply a theoretical model for discovering what ethical principles work well.

And if so, what is the practical, operational significance of calling it the “right” thing to do? To avoid confusing and misleading people, perhaps you should call it the “compliant” or “mandated” act (on a contractarian view) or the “utilitarian” or “altruistic” choice rather than the “right” choice.

Perhaps. I’m not too concerned with what we call it.

On “what most people mean” and moral philosophy

{snipped some non-controversial stuff}

It seems to me that what advocates of subjective moral theories are doing is to say that, since most people can’t give a clear account of what they mean by saying that an act is “right” or that someone “ought” to do something, they don’t mean anything at all, and then to redefine such terms to mean something that has no relationship at all to the way most people use them.


I see what you’re saying, but disagree. What we subjectivists are trying to do, IMO, is reinterpret what “most people” mean by such language so that it does not suggest queer ontology. Most people seem to subscribe to some sort of deontological morality (some things are just right!), which we agree is intellectually bankrupt. I think our task, as armchair moral philosophers, is to salvage some meaning from such language.

As I’ve said before, I lump you in with “us” on this issue because, objective, subjective, or whatever, your theory is based on what agents do, in fact, happen to value or, at least, what they would value in your hypothetical situation as they approach perfect K&U.

For example, they often define “X ought to do Y” as meaning “Y would be in X’s interest” in spite of the fact that people very often say things like “X is probably going to do Z because it’s in his best interests, but he ought to do Y”.

I would take the “proper” interpretation of the statement “X is probably going to do Z because it’s in his best interests, but he ought to do Y” to be: “X is probably going to do Z because he thinks it is in his best interests, but it is really in his best interests to do Y.” Under your model, I think, the “proper” interpretation would be “X is probably going to do Z because it serves the interests he has now, but Y would better serve the interests he would have, given greater K&U.” I don’t think there is any great difference between the two interpretations.

Deriving an “ought” from an “is”

The bottom line is that the oft-repeated statement that an “ought” cannot be derived from an “is” must be understood as disallowing a definition of “ought” as a “factual premise”; otherwise it’s trivially false, at least for many widely-held moral theories.


We don’t have to include the definition as a factual premise, provided it’s understood that we are using the specialized definition, thus making the proof valid to all who accept the definition.

This topic is almost as semantically confusing as the objective-subjective distinction. Much as it’s possible to understand subjective statements as objective statements about a particular subject, it’s possible to understand prescriptive statements as descriptive statements about strategies. The subjective “Ice cream is good!” can be interpreted as the objective “Pompous Bastard likes ice cream.” The prescriptive “X ought to do Y” can be interpreted as the descriptive “Y is the most efficient means for X to attain his or her goals.”

On self-interest as a “value”

First off, I doubt that your “happiness” is the only thing that you care about. For example, would you be untroubled by the prospect of becoming a perfectly happy drooling idiot? Or what about being joyful, even ecstatic, but stark raving mad?


That depends…would I ever be aware that I was deluded? If I had some sort of guarantee that I would never suspect that my happiness was not “genuine,” for lack of a better word, then I might willingly accept either of the scenarios you describe.

Digressions aside, this is a good point. The use of “self-interest” instead of “happiness,” as you suggest, is probably preferable.

But what does this really mean? What do we mean in general when we say that something is in X’s self-interest? Well, we observe X’s actions, draw reasonable inferences about what goals he’s pursuing, and then say that anything that tends to further those goals is in X’s self-interest. (If you can think of some other operational – which is to say, meaningful – definition of what it means to say that something is in a person’s self-interest, let me know.)

I tend to advocate a more direct method of determining X’s self-interest: ask X what he values, assume that he is telling the truth when he answers, and then say that anything that furthers those values is in X’s self-interest. I believe this method to be superior because…

But of course, since a person’s self-interest is defined by the goals that he in fact pursues, it is a tautology to say that everyone always pursues his self-interest.

…it avoids this problem. We might observe X’s actions and quite reasonably conclude that X values Y when, in fact, X values Z and makes very poor decisions when pursuing Z. It is quite possible that X, through a poor application of reason, is not pursuing his self-interest. This is why we have grounds to suggest that X ought to take different action. If X’s self-interest were defined as whatever X’s actions were actually in aid of, then we would never have grounds to suggest that X ought to have done something else.
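A toy contrast between the two methods (a wholly hypothetical example; the values and actions are invented for illustration):

```python
# Pomp's method: take X's stated values at face value.
stated_values = {"good_health"}
# What X is actually observed to do.
observed_actions = ["smoke", "skip_gym"]

# bd's method infers X's values from his actions, so by construction
# every action furthers the inferred values: "X pursues his
# self-interest" comes out true no matter what X does (the tautology).
inferred_values = set(observed_actions)
print(all(a in inferred_values for a in observed_actions))  # True, trivially

# The stated-values method keeps a gap between values and actions, so
# "X ought to have acted otherwise" is expressible: here X's actions
# fail to further his stated value.
furthers = {"smoke": set(), "skip_gym": set(), "exercise": {"good_health"}}
print(all(stated_values & furthers[a] for a in observed_actions))  # False
```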

But it’s often the case that one of the goals that a person pursues is the well-being of other people. (Not necessarily all other people; perhaps just some specific ones.) And it seems odd and pointless to say that in pursuing the goal of making someone else happy a person is “really” pursuing his own self-interest, especially once we realize that this statement is just an empty tautology.

As I’ve pointed out before, there is nothing unreasonable about the proposition that X values Y’s happiness because Y’s happiness feels good to X, either via empathic identification or simply because Y is pleasant to be around when Y is happy.

So it seems more productive (and more conducive to clear thinking) to talk about a person’s values, with the understanding that by a person’s values we mean the goals that he pursues, where we do not include the “goal” of pursuing his “self-interest” in the tautological sense, but rather the goals that constitute his self-interest.

If I understand you, you’re saying that we simply define “self-interest” as the sum of all the values pursued by an agent. Is that correct? That’s fine with me and is, in fact, how I usually think of “self-interest.” My apologies if I muddled that up somewhere.

With “values” defined in this reasonable, non-tautological sense, we can distinguish between the Jeffrey Dahmers and the Albert Schweitzers. Dahmer pursued his happiness at the expense of the happiness of others, while Schweitzer found his happiness in making others happy.

I don’t understand what you’re getting at. We could have made that distinction even under the loose terminology you’re cleaning up here.

Now let’s look at your case. You say that you derive happiness from the happiness of others. What is this saying if not that the happiness of these others is one of your values? Aren’t you saying that some of your values, at least, are directed toward the welfare of other people?

Yes, to the extent that their welfare makes my world a more pleasant place to live.

Why are you so intent on trying to deny this? Why do you keep insisting that, even though you help other people out of empathy, you’re “really” a completely selfish person?

The distinction is trivial in most cases. As you note, whether I’m helping people out of altruism (your model) or self-interest (my model), there is usually no practical difference in my actions. I think it is an important theoretical point, however, as it does make an appreciable difference when considering less mundane cases, such as the hypothetical “torture and death” case discussed above.

Perhaps so, but [other people’s happiness] doesn’t [make PB miserable] and you don’t [try to make others unhappy]. Unlike you, I don’t think this is just a lucky accident. It’s a natural, predictable result of K&U.

Well, to an extent, you’re correct. I think that a good case can be made for the evolutionary origin of empathy, so we can all be considered natural empathizers, to one degree or another, and the whole point of contractarianism is that principles you might consider “altruistic” tend to make everyone’s (and I am part of “everyone”) life better. In this light, it could certainly be argued that, given sufficient K&U, we would willingly choose to follow such principles in most cases. Where we diverge, however, is that I am not willing to state that such principles are always the best way for any given agent to pursue his or her values.

Not so. I consider the desire to do what one would choose to do if one had enough K&U to be a central component of rationality. I also think that there are rational reasons to prefer the welfare of everyone equally, and that, while empathy is not in itself a rational reason to do so, one of its important roles is to remind us of these rational reasons.

Fair enough. I agree that there are rational reasons to prefer something like the Principle of Equality generally, and have explained these reasons, and the rational reasons that might lead one to reject that principle situationally, in other threads.
Pomp is offline  
Old 03-14-2002, 08:30 AM   #74
A. Milos
Junior Member
 
Join Date: Nov 2000
Location: Ukraine
Posts: 13
Post

The morality of the people is the offspring of socialization and human ‘propaganda’. Morality is learnt, is ever-changing and plastic, is carved by the persons who win or hold power, and depends on the sender and receiver of the message. It is as moral for a fanatic Muslim to blow up himself and thousands of workers in New York as it is for mainstream Americans to justify the annihilation of thousands of Japanese. Your discussion is great but of no practical value. Knowing that morality is subjective, for example, does not help Milosevic at his trial. Believing that morality is objective, on the other hand, does not stop the American and British bombers from killing Iraqi civilians. Jehovah’s chosen are killing Allah’s Palestinians on moral grounds every day. Morality is our wish, our fear, our hope, guilt, goal, law, misunderstanding, profit, religion, science and politics.
A. Milos is offline  
Old 03-17-2002, 04:49 PM   #75
hammegk
Junior Member
 
Join Date: Feb 2002
Posts: 12
Post

Or, citing our elders:

"We all know what morality is: it is behaving as you were brought up to behave; that is, to think you ought to be punished for not behaving."
Chas. S. Peirce (1839–1914).

I'd say the "objectivity" of the "moral fact" is supplied by the society that an individual is a member of. If one's time-frame is "this moment--it feels good so do it" one may make a different choice than if one's time-frame is "my successful, future, life -- vocation, spouse, kids, grandkids -- how my DNA will fare into the future".

Each person appears to make his own choices via free will (or at least what one's ego perceives to be free will).

Just my 2cts...
hammegk is offline  
 
