FRDB Archives

Freethought & Rationalism Archive

The archives are read only.


Old 04-29-2002, 04:46 PM   #81
Veteran Member
 
Join Date: Mar 2002
Location: 920B Milo Circle Lafayette, CO
Posts: 3,515
Post

bd-from-kg

I need you to explain something to me.

I was reading through some old posts of yours on another thread (as you recommended), and came across the following:

"Utilitarianism, for example, is considered an objective moral theory by all moral philosophers that I know of."

My question: Do you consider it an objective moral theory? Or is every moral philosopher you know of wrong on this matter?

The reason that I ask: no utilitarian that I know of uses "should" (and comparable terms) the way that you say it must be used to count as a moral theory.

When an act-utilitarian says "X is right" their claims carry no implication about what a person would do if a person had sufficient knowledge and understanding. "X is right" is true if and only if "X maximizes utility compared with any alternative to X."

If asked whether they believed that a person with sufficient knowledge and understanding would always do X, most utilitarians would laugh. Bentham and Mill both said that a person would act to maximize their own pleasure and minimize their own pain. Legal and moral sanctions, not 'sufficient knowledge and understanding,' were required to get them to do what they ought.

More contemporary utilitarians have abandoned this simplistic psychological theory, but they still, almost without exception, argue that there is no link between what is right and what a person with sufficient knowledge and understanding would do.

And so it seems you must say that all utilitarians are trying to hijack moral terms and apply them to a nonmoral purpose, and every philosopher who calls utilitarianism a moral theory is unwittingly propagating a mistake.

Either that, or you must allow that the common usage of the word "moral" is more elastic than your theory allows.

[ April 30, 2002: Message edited by: Alonzo Fyfe ]
Alonzo Fyfe is offline  
Old 04-29-2002, 05:58 PM   #82
Veteran Member
 
Join Date: Mar 2002
Location: 920B Milo Circle Lafayette, CO
Posts: 3,515
Post

Just a note:

In spite of whatever differences may have appeared in debate, I am having the pleasure of debating with some of the most intelligent people I have had contact with in a long time.
Alonzo Fyfe is offline  
Old 04-30-2002, 07:05 AM   #83
Veteran Member
 
Join Date: Mar 2002
Location: 920B Milo Circle Lafayette, CO
Posts: 3,515
Post

BD

I shall seek to eliminate some confusion here.

Some of my statements appear contradictory only in the light of shifting assumptions.

(1) If the assumption is that no theory can properly be called a moral theory unless it contains an element whereby a person who understands that he morally should do something will actually do it (moral internalism) -- then I am a moral reductionist. Moral ought reduces to practical ought.

(2) If the assumption is that no theory can properly be called a moral theory unless it conforms to common practice (intuitionism), then I am a moral eliminativist. Those practices only make sense under an assumption of intrinsic values, and intrinsic values do not exist.

(3) If the assumption is that the term 'moral' is flexible enough to allow utilitarian theories to be called moral, even though they do not conform strictly to common usage and reject the proposition that those who understand that they should do something would actually do it, then I defend the moral theory best known as motive- (desire-) utilitarianism.

I try to be flexible and not argue over assumptions. Rather, I prefer to allow my opponent to pick whatever assumptions they like best and to go from there. Where I do not know, or where my opponent seems flexible enough to allow me to set the assumptions, I tend to go with (3).

You seem to be insisting that we use option (1) (though apparently denying that this is reductionist and claiming instead it is compatible with (2) -- with which I disagree).

Either way, statements that I make under one assumption do not necessarily contradict statements that I make under another assumption. For example, the fact that I argue for moral eliminativism under assumption (2) and propose a moral theory under assumption (3) is not a contradiction because the background assumptions are different.

I can well understand the confusion. I have tried, from time to time, to specify the assumptions, but I do sometimes take the assumptions for granted and even guess wrongly as to which assumptions we are working under.

But the distinctions above express my overall view.
Alonzo Fyfe is offline  
Old 05-01-2002, 10:36 AM   #84
Veteran Member
 
Join Date: Mar 2002
Location: 920B Milo Circle Lafayette, CO
Posts: 3,515
Post

Originally posted by bd-from-kg:

(1) We want (in a very real sense) to do what we would choose to do if we had enough knowledge and understanding (K&U).

(2) What we would do if we had enough K&U would be to act altruistically – i.e., to take everyone’s interests into account equally.

(3) An act is right if any fully rational agent with enough K&U would do it, and wrong if no such agent would do it.



I would like to note that (1) and (2) are parts of a psychological theory, not a moral theory.

This is not a criticism – every moral theory needs to be linked in some way to a psychological theory.

(Note 1: It does mean that the thesis needs to be stated in such a way that it is falsifiable so that it can be compared to other psychological theories.)

When I started my investigation into ethics, I began to look at several psychological theories and discovered I could easily get lost in the field and never find my way back out to study ethics. Therefore, I took a shortcut. I decided to work with the most common and pervasive theory within the field, under the assumption that professionals within the field generally know what they are talking about and the dominant position should be taken the most seriously.

That theory is BDI psychology – the idea that Beliefs + Desires form Intentions which, in turn, lead to action. Desires provide the motivating force and select the ends of human action, while beliefs are motivationally inert but useful in selecting the means to an end. Each person acts so as to maximize fulfillment of his existing desires and, given his beliefs, selects the best means to that end.

(Note 2: I actually found one of the challenging theories much more to my liking – script theory. But I had no reason to trust that my opinion based on limited research was better than the dominant opinion of professionals in the field.)

If I may, I would like to call your alternative theory "KU theory". KU theory differs from BDI theory in that, within BDI theory, understanding refers to collecting more and truer beliefs about a subject. However, because beliefs are motivationally neutral, understanding does not influence an agent’s desires. It simply allows an agent to more efficiently select a means to his desires.

KU theory, on the other hand, holds that understanding causes a change in desires so that the individual, though motivated by his own desires, also internalizes the desires of others that he sufficiently understands.

I asked you what you would do if it turned out that KU theory was false. You answered: “Of course, what I’d really do in that case is to conclude that my theory is wrong.” (Wrong in what way? You did not specify. Of course, proof that (2) was false would cause you to conclude that (2) was false, but would you also conclude that (3) was wrong? Would you no longer say that rape is permissible if even one person exists who would want to commit rape even with perfect knowledge and understanding?)

It seems fair that I should also answer what I would do if I discovered that BDI theory had to yield to the superiority of KU theory.

In fact, I would need to do nothing. I hold that what a person should do is what a person with good desires would do, and good desires are evaluated by how those desires stand in relationship to all other desires.

In practice, you seem to assert something very similar. KU theory simply adds to BDI theory a mechanism whereby agents can directly internalize the desires of others and consider all desires at once. The conclusion that he should do that which considers all desires at once remains the same.

What remains different on our two accounts, then, would be a purely intellectual difference. Your definition of ‘moral-should’ (and corresponding moral terms) still implies that it is fortunate that KU theory is true, because if KU theory is false – or ever becomes false – for even one person, then morality goes out the window.

While my definition of ‘moral-should’ (and corresponding moral terms) would continue to hold that, even if KU theory is false, people should still do what they would do if KU theory was true. Though we may still need legal and moral sanctions in order to get even the person with sufficient knowledge and understanding to do what he morally should.
Alonzo Fyfe is offline  
Old 05-02-2002, 06:03 AM   #85
Veteran Member
 
Join Date: Mar 2002
Location: 920B Milo Circle Lafayette, CO
Posts: 3,515
Post

Originally posted by bd-from-kg:

(1) We want (in a very real sense) to do what we would choose to do if we had enough knowledge and understanding (K&U).

(2) What we would do if we had enough K&U would be to act altruistically – i.e., to take everyone’s interests into account equally.



I would like to post another problem I have encountered with BD's theory (even if there is nobody out there; I'm saving this thread and would like to have all of my notes in one place).

The truth of (1), I realized, is theory-dependent.

Within BDI theory, knowledge and understanding are motivationally neutral. Yet, they are useful in selecting the most efficient means of fulfilling our desires. Thus, it makes sense to say that we at least hope that our actions are those that fulfill condition (1).

However, the cost of obtaining an additional piece of information can often be expected to exceed the benefit, in which case we rationally choose to remain ignorant and hope for the best.

But KU theory changes this and says there is a species of understanding that will actually change our desires -- whereby we internalize the desires of others by coming to a proper understanding of them.

Once we make this basic change, we need to revisit the other parts of the theory to see whether and to what degree they remain true. We need to ask whether this altered conception of understanding affects the role of understanding. What is the personal value (in terms of existing desires) of having our desires changed?

The justification for (1) within BDI theory no longer applies. It is no longer the case that the most efficient means of fulfilling our present desires is that which we would identify if we had perfect (or sufficient) understanding of the facts. A new justification (consistent with KU theory) is needed here.

Or, if we take (1) as a given, this implies that BDI theory trumps KU theory, because BDI theory provides a ready account of why (1) is true, while KU theory does not.

[ May 02, 2002: Message edited by: Alonzo Fyfe ]
Alonzo Fyfe is offline  
Old 05-02-2002, 09:32 AM   #86
Veteran Member
 
Join Date: Jul 2000
Location: King George, VA
Posts: 1,400
Post

Alonzo Fyfe:

No I haven’t abandoned this thread. But if you continue the practice of stringing together five posts to my one I’ll have to. The normal policy is to allow someone time to reply to one post before making another. Not everyone has unlimited time on their hands, or is willing to devote what time they have to one thread.

This is a reply to your first April 29 post.

Quote:
... one can ... raise legitimate objections against a stipulative definition
Objecting to a definition isn’t the same thing as disputing it. But this is truly a verbal quibble not worth arguing about.

Quote:
Against a normative definition, it is legitimate to raise objection that the definition fails to provide an accurate description of the way the term is used in society in fact.
I think you mean “descriptive” rather than “normative” here.

The problem here is that there is an ambiguity in phrases like “common usage” and “the way this term is commonly used”. In both cases one could be referring either to what most people understand themselves to be saying, or to a reasonable interpretation of the language in question. Of course this ambiguity doesn’t arise when what people understand themselves to be saying makes sense. But when it doesn’t (for example, when what they understand themselves to be saying involves beliefs about the existence of nonexistent entities), before concluding that they’re just talking meaningless nonsense (or simply that everything they say is necessarily false, which comes to much the same thing) one should analyze what they are saying, and the contexts in which they say it, to determine whether there is a meaningful, reasonable interpretation of their statements. This is what my theory purports to be.

It’s possible that there are other meaningful, reasonable interpretations as well, but the existence of even one is sufficient to show that moral language is not inherently nonsensical, as Mackie apparently claims. If it is possible to understand moral language in a way that does not require the existence of a mysterious property of “rightness” or “goodness”, then Mackie is wrong; there is no need to “reinvent right and wrong”; one need only understand moral language in a reasonable way. The fact that most people don’t understand it in that way is irrelevant. What people “understand” by moral language is all over the lot anyway. What I’m saying is “Here’s a way to understand moral language as it is actually used which is logically coherent, makes moral statements meaningful, and preserves the link between “rightness”, motivation of the agent, and the effects of the act on other people.”

So: yes, my theory purports to be a reasonable interpretation of moral language as it is actually used by most people. But no, it does not purport to be a description of what most people mean when they use moral language.

Quote:
Now, one of the implications of (3) is that those who use the term in this way must be saying, "When I tell you that you should not rape this person, I am saying that no person with sufficient knowledge and understanding would commit rape. If it turns out that I am wrong in this, that there exists even one person who would rape even with full knowledge and understanding, then my statement that you should not rape this woman is false, and you may proceed."
First, as we have seen, I don’t claim to be construing what people actually mean when they use moral language, but to be construing the moral language itself in what seems to me to be a reasonable and meaningful way. Second, your statement is a bit loose. Even if some fully rational person with enough K&U would rape someone under some circumstances, it may be that no such person would rape this woman under these circumstances. And of course, even if some such person would rape this woman under these conditions, it doesn’t follow that the agent would have my permission; that he “may” proceed as far as I’m concerned. It only follows that my (extreme) disapproval is not the only possible reaction that a rational person with full K&U might have.

There would still be two reasonable interpretations of “You shouldn’t rape that woman” that are roughly along the lines I suggest. The first is “If you were fully rational and had enough K&U, you wouldn’t rape that woman”. Basically this is an appeal to his “better nature” (assuming that he has one). The second is “Not only do I disapprove of your doing it, but I’m convinced that I would disapprove of it even if I were fully rational, and no matter how much K&U I had.” Essentially this is a warning. To oversimplify a bit, it means, “If you try to do this I intend to stop you or punish you, and there are no reasons you can possibly give that would persuade me not to.”

But IMO these are not moral interpretations. I think that it is inherent in the logic of moral discourse that moral statements involve a claim of universality. The statement “A should not do X” (when “should” is used in the moral sense) implies that reasons exist for doing (or not doing) the act in question that any rational (or at least rational human) agent would find compelling if he fully understood them. If no such reasons exist, the moral claim is false, and if no such reasons ever exist, the claim of universality inherent in the logic of moral discourse cannot be sustained.

In that case our standards for what constitutes a “reasonable” interpretation of moral statements would have to be relaxed: any interpretation in which some positive moral statements are true would not involve the claim to universality (in the sense that I mean it). Of course they could be interpreted as universal in a different sense; for example, “It would be wrong for A to do X” might mean that doing X violates a “moral principle” that I am willing to apply to all cases without exception, even when I am the agent. But this is a subjective interpretation, and as such violates the logic of moral discourse in other ways. In particular, if I say “It would be wrong for A to do X” and you say, “No, it wouldn’t be wrong”, on this interpretation we are not disagreeing any more than we would be if I said “Melons taste awful” and you replied, “No, they’re delicious”.

Perhaps the simplest and most satisfactory modification would be to limit the “universe” of persons to whom moral statements are (implicitly) claimed to be applicable. (Yes, I’m familiar with the objections to this: it violates the logic of moral discourse in certain ways. But on your hypothesis all moral theories that are not based on false factual assumptions violate this logic one way or another.)

A final note: if I’m right about what the logic of moral discourse implies, my interpretation is “minimalist” in that any “reasonable” interpretation – i.e., one that is faithful to the logic of moral discourse – must entail that moral statements imply what I say they mean. That is, any reasonable interpretation must imply that any sufficiently rational person with enough K&U would do what it says is “right”. (For example, theistic moral theories, IMO, entail that if one is perfectly rational and fully understands God’s nature, purposes, and intentions, as well as one’s own nature, one will do what’s “right”. [I know that some theists deny this, but I have never been able to follow their reasoning.]) Other moral theories are possible, but they cannot claim to be reasonable interpretations of moral language as it is commonly used. Even in this case, I think that the proper function of the moral philosopher is to find an interpretation that is as faithful as possible to the logic of moral discourse rather than ignoring this logic and redefining moral statements in a way that has no relationship to their function and purpose.

Quote:
In other words, you are offering an error theory.
Not having read Mackie himself, I’m not entirely clear about what he means by an “error theory”, so I won’t comment on this. But it should be clear that what I’m doing is quite different from what he was doing.
bd-from-kg is offline  
Old 05-02-2002, 01:29 PM   #87
Veteran Member
 
Join Date: Jul 2000
Location: King George, VA
Posts: 1,400
Post

Alonzo Fyfe:

This is a reply to your second April 29 post.

Quote:
My question: Do you consider utilitarianism an objective moral theory?
Yes, some versions of utilitarianism are moral theories.

Quote:
The reason that I ask: no utilitarian that I know of uses "should" (and comparable terms) the way that you say it must be used to count as a moral theory.

When an act-utilitarian says "X is right" their claims carry no implication about what a person would do if a person had sufficient knowledge and understanding.
Good heavens. If a theory had to be my theory to qualify as a moral theory at all, moral philosophy would be very simple indeed.

But I think your point is that an act-utilitarian would say that there is no reason to suppose that knowing that an act is “right” would be regarded by a rational agent as a reason for doing it. This is simply false, as I’ll show later.

Quote:
[For a utilitarian] "X is right" is true if and only if "X maximizes utility compared with any alternative to X."
Yes, but there are two possible meanings of “P if and only if Q”. One is that P and Q logically entail one another. For example, P and Q might be “X is a polygon with three sides” and “X is a polygon with three vertices”. The other meaning is that as a matter of fact P is true in just those cases where Q is true. For example, P and Q might be “The sun is on the other side of the earth” and “It’s the dead of night”. As a matter of fact, P is true just when Q is, but neither statement logically implies the other.

Now some utilitarians have written as though they believed that “X is right” means "X maximizes utility based on any alternative to X" (or whatever their exact criterion is). If this was actually their position, they were making an elementary error, which G.E. Moore famously referred to as the “naturalistic fallacy”. I do hold that such a theory is not a “moral theory”.

Moore described the problem very nicely in Principia Ethica. In one passage he comments regarding one version of this fallacy:

Quote:
[Exponents of naturalistic Ethics] are all so anxious to persuade us that what they call the good is what we really ought to do. “Do, pray, act so, because the word ‘good’ is generally used to denote actions of this nature”: such, on this view, would be the substance of their teaching. And in so far as they tell us how we ought to act, their teaching is truly ethical, as they mean it to be. But how perfectly absurd is the reason they would give for it! ‘You are to do this, because most people use a certain word to denote conduct such as this.’ ‘You are to say the thing which is not, because most people call it lying.’ That is an argument just as good!
In your case the argument would be even more absurd: “Do, pray, act so, because I use the word ‘good’ to denote actions of this nature.” But you avoid this fallacy by explaining that you aren’t using the word “good” in a moral sense at all: you do not mean to suggest by calling an action “good” that the agent has any reason to do it.

In either case, we do not have anything that can be meaningfully called a “moral theory”. Morality is about giving reasons for doing things (though not all reasons qualify as moral reasons). If a so-called “moral theory” implies that there is no rational reason for someone to regard either the “rightness” of a proposed action, or the reasons given for calling it “right”, as reasons for doing it; if there are no grounds for expecting that a rational agent would be more likely to do something if he considered it “right” than if he considered it “wrong”, it’s not a moral theory at all.

Another indication that there is a fallacy in defining “right” in terms of some state of affairs in the “natural world” is that people who do this often argue about whose “definition” is correct. As Moore puts it:

Quote:
They not only say that they are right as to what the good is, but they endeavor to prove that other people who say it is something else, are wrong. One, for instance, will affirm that good is pleasure, another, perhaps, that good is that which is desired; and each of them will argue eagerly to prove that the other is wrong. But how is that possible?
Such disputes are necessarily meaningless, yet advocates of such “moral theories” typically do not understand their pointlessness. They think that it is somehow possible to show that their definition of “good” or “right” is somehow “better” or more “correct” than some other one, without realizing that the only possible criterion of what’s “better” or “correct” in this context is their definition of “right” or “good”. And of course their definition agrees with their definition better than any other definition does.

Moore gives this illustration:

Quote:
It is absolutely useless, so far as ethics is concerned, to prove, as Mr. [Herbert] Spencer tries to do, that increase of pleasure coincides with increase of life, unless good means something different from either life or pleasure. He might just as well try to prove that an orange is yellow by shewing that it always is wrapped up in paper.
Yet another way (the standard one, in fact) to make the problem clear is the “open question”. If one asks of any natural property X, whether any action that has this property is right, one is obviously asking a significant question; one is not asking whether any action with property X has property X. It is meaningful – i.e., it is not self-contradictory – to assert that not all actions that have property X are right, or that some actions are right that do not have property X. And therefore to say that an action is “right” cannot mean that it has property X, even though it might be the case that all actions, and only actions, that have property X are right.

To illustrate this, suppose that I argue that actions with the “maximal utility” property (or whatever property it is that you propose) are not necessarily right, on the grounds that it is possible to imagine an action that made everyone stark raving mad but perfectly happy, and quite obviously (in my opinion) such an action would not be right. You might reply that this isn’t really possible, or that such an action really would be right (with some arguments to support this position), or decide to modify your definition of “right”. But what you would not do, I suspect, is to reply “That’s nonsense. Such an act would be right by definition, and that’s all there is to it.” But that’s just what you should say if your position really is that what it means to say that an act is right is that it satisfies your criterion.

As for whether utilitarians in general hold that “right” is defined as that which conduces to the greatest happiness, or maximum utility, or whatever, let’s look at the philosopher who is generally considered the quintessential utilitarian, John Stuart Mill. The very first sentence of Utilitarianism remarks on:

Quote:
...the little progress which has been made in the decision of the controversy respecting the criterion of right and wrong.
In Chapter 2 he refers to utilitarians as:

Quote:
...those who stand up for utility as the test of right and wrong...
And later:

Quote:
It is the business of ethics to tell us what are our duties, or by what test we may know them
Later in Chapter 4 he says:

Quote:
... happiness is the sole end of human action, and the promotion of it the test by which to judge of all human conduct; from whence it necessarily follows that it must be the criterion of morality...
It’s also worth noting that Chapters 3 and 4 are titled “Of the Ultimate Sanction of the Principle of Utility” and “Of what sort of Proof the Principle of Utility is Susceptible” respectively.

So according to Mill, utility is not the definition of right and wrong, but the criterion or test of it. The “principle of utility” has a “sanction” and is “susceptible of proof” – neither of which can be said of a definition. It seems quite clear that Mill’s position is that as a matter of fact right actions are always those which conduce to the greatest happiness, not that he simply defines “right” actions to be those which conduce to the greatest happiness. In short, Mill (unlike you) avoids the naturalistic fallacy.

Quote:
Bentham and Mill both said that a person would act to maximize their own pleasure and minimize their own pain.
Here is Mill again in Chapter 2 of Utilitarianism:

Quote:
Though it is only in a very imperfect state of the world's arrangements that any one can best serve the happiness of others by the absolute sacrifice of his own, yet so long as the world is in that imperfect state, I fully acknowledge that the readiness to make such a sacrifice is the highest virtue which can be found in man.
And a few paragraphs later:

Quote:
The utilitarian morality does recognize in human beings the power of sacrificing their own greatest good for the good of others.
He also comments that education should be directed to the goal of cultivating sentiments that will produce such altruistic acts.

Thus Mill not only does not say that a person will always act to maximize his own pleasure, but emphatically contradicts this claim. Indeed, he argues that the will is not always directed toward what one desires.

Quote:
More contemporary utilitarians have abandoned this simplistic psychological theory, but they still, almost without exception, argue that there is no link between what is right and what a person with sufficient knowledge and understanding would do.
I can’t really say whether this is true, but given your misunderstanding of Mill I see no reason to accept your characterization of what “most contemporary utilitarians” hold.

Quote:
And so it seems you must say that all utilitarians are trying to hijack moral terms and apply them to a nonmoral purpose, and every philosopher who calls utilitarianism a moral theory is unwittingly propagating a mistake.
As we have seen, this is simply false.
bd-from-kg is offline  
Old 05-02-2002, 03:32 PM   #88
Veteran Member
 
Join Date: Jul 2000
Location: King George, VA
Posts: 1,400
Post

Alonzo Fyfe:

Quote:
I would like to note that (1) and (2) are parts of a psychological theory, not a moral theory.
(2) seems to be tied to a psychological theory (although that isn’t entirely clear), but (1) isn’t. On analysis, one finds that (1) is equivalent to saying that it is rational to seek to do what one would do if one had enough K&U. This way of putting it makes it clear that it makes no empirical claims about what people actually do want.

However, I think it’s clear that one of the things that people want is to be rational. And this is the sense of “want” in the original formulation of (1). Moreover, to say that someone is more rational means (among other things at least) that this desire is stronger relative to other desires. In a perfectly rational person, it would control all other desires. (It wouldn’t override other desires, because its function would essentially be that of an arbitrator or mediator of other desires.)

As for the “psychological theories” you mention, they sound more philosophical than psychological to me. Be that as it may, while the claim that abstract knowledge is “motivationally neutral” is somewhat plausible, the idea that understanding is always neutral in this sense is patently absurd. See my example of learning to play chess at the end of my April 27 post to PB. See also the last part of my April 18 reply to PB on the Moral Subjectivism: One View thread (http://iidb.org/cgi-bin/ultimatebb.cgi?ubb=get_topic&f=52&t=000137).

Quote:
Wrong in what way? You did not specify.
AF, it really isn’t reasonable to ask someone what theory he would adopt if he became convinced that the theory he currently holds is false. Who’s prepared to answer such a question? Do you have a “theory B” lined up to replace every theory, in every field, that you hold, just in case that theory turns out to be untenable?

Quote:
In fact, I would need to do nothing.
Of course. That’s because your “moral theory” isn’t a moral theory. There is no link between what’s “right” and motives for action. Thus it has no “psychological component”. It’s rather as though you had a “car-repair manual” that made no claim that a car was actually more likely to run if you “repaired” it following its instructions. Naturally, there would be no need to update such a “manual” just because of new information about the conditions under which a car will actually run.

Quote:
The conclusion that he should do that which considers all desires at once remains the same.
I’m not sure that you understand what it means to “internalize” other people’s desires. It means that they become your desires, not merely that you’re aware of them. I assert (and I suspect that KU does as well) that a fully rational agent who is intimately aware (in the sense I explained earlier) of other people’s desires will take them into account in his actions. Not “could” or “should”, but will.

Quote:
Your definition of ‘moral-should’ (and corresponding moral terms) still implies that it is fortunate that KU theory is true, because if KU theory is false – or ever becomes false – for even one person, then morality goes out the window.
First, “fortunate” is not the right word. For beings for which KU theory was false, morality would be pointless. Indeed, they would be unable to form societies; they would be completely nonhuman, even if they had human form. If we were like that, it would not be “unfortunate” that morality had “gone out the window”, it would simply be a fact.

As for your “even one person” point, which is getting rather tiresome, if my theory is right, the human psyche has a certain intrinsic nature, and this nature is the ultimate foundation of morality. If one person were found to not have this nature, the sensible thing would be not to consider him a person rather than to “throw morality out the window”. Alternatively, one could declare that such a person was not part of the “universe” about which moral statements make “universal” claims. This would mean that he would not be subject to praise or blame for his actions, but neither would he have any rights. The rest of us would treat him pretty much the way we would treat a dangerous wild animal.

(By the way, so far as I know the only individuals who even come close to matching this description are pure psychopaths. They are apparently incapable of any degree of empathetic understanding (not just feelings like compassion, but understanding how other people tick). As my theory would suggest, they are also totally impervious to moral suasion. However, they are also plainly irrational in a number of ways, and there is significant evidence that their brains are all defective in essentially the same way.)

Quote:
While my definition of ‘moral-should’ (and corresponding moral terms) would continue to hold that, even if KU theory is false, people should still do what they would do if KU theory was true.
Sure, but your sense of “should” is an empty formality. In your theory, to say that A “should” do X means that X satisfies a certain formal criterion, nothing more. As you said yourself, you’ve abandoned traditional usage. It’s easy to forget this and imagine that “should” in your usage still has something like the meaning that it has in traditional usage, and that saying “A should do X” has something to do with A’s doing X. But it doesn’t. Your “should” is neither prescriptive nor predictive. There is no more reason for anyone to care whether A “should” or “should not” do X in your sense than there is to care whether a particular sample of water contains 1561 paramecia or 1562.

[ May 02, 2002: Message edited by: bd-from-kg ]
bd-from-kg is offline  
 
