Freethought & Rationalism Archive. The archives are read only. |
04-29-2002, 04:46 PM | #81 |
Veteran Member
Join Date: Mar 2002
Location: 920B Milo Circle
Lafayette, CO
Posts: 3,515
|
bd-from-kg
I need you to explain something to me. I was reading through some old posts of yours on another thread (as you recommended), and came across the following: "Utilitarianism, for example, is considered an objective moral theory by all moral philosophers that I know of." My question: Do you consider it an objective moral theory? Or is every moral philosopher you know of wrong on this matter?

The reason that I ask: no utilitarian that I know of uses "should" (and comparable terms) the way that you say it must be used to count as a moral theory. When an act-utilitarian says "X is right," the claim carries no implication about what a person would do if that person had sufficient knowledge and understanding. "X is right" is true if and only if "X maximizes utility based on any alternative to X." If asked whether they believed that a person with sufficient knowledge and understanding would always do X, most utilitarians would laugh.

Bentham and Mill both said that a person would act to maximize his own pleasure and minimize his own pain. Legal and moral sanctions, not 'sufficient knowledge and understanding,' were required to get people to do what they ought. More contemporary utilitarians have abandoned this simplistic psychological theory, but almost all of them still argue that there is no link between what is right and what a person with sufficient knowledge and understanding would do.

And so it seems you must say that all utilitarians are trying to hijack moral terms and apply them to a nonmoral purpose, and that every philosopher who calls utilitarianism a moral theory is unwittingly propagating a mistake. Either that, or you must allow that the common usage of the word "moral" is more elastic than your theory allows.

[ April 30, 2002: Message edited by: Alonzo Fyfe ] |
04-29-2002, 05:58 PM | #82 |
Veteran Member
Join Date: Mar 2002
Location: 920B Milo Circle
Lafayette, CO
Posts: 3,515
|
Just a note:
In spite of whatever differences may have appeared in debate, I am having the pleasure of debating with some of the most intelligent people I have had contact with in a long time. |
04-30-2002, 07:05 AM | #83 |
Veteran Member
Join Date: Mar 2002
Location: 920B Milo Circle
Lafayette, CO
Posts: 3,515
|
BD
I shall seek to eliminate some confusion here. Some of my statements appear contradictory only in the light of shifting assumptions.

(1) If the assumption is that no theory can properly be called a moral theory unless it contains an element whereby a person who understands that he morally should do something will actually do it (moral internalism), then I am a moral reductionist. Moral ought reduces to practical ought.

(2) If the assumption is that no theory can properly be called a moral theory unless it conforms to common practice (intuitionism), then I am a moral eliminativist. Those practices only make sense under an assumption of intrinsic values, and intrinsic values do not exist.

(3) If the assumption is that the term 'moral' is flexible enough to allow utilitarian theories to be called moral, even though they do not conform strictly to common usage and reject the proposition that those who understand that they should do something would actually do it, then I defend the moral theory best known as motive- (desire-) utilitarianism.

I try to be flexible and not argue over assumptions. Rather, I prefer to allow my opponent to pick whatever assumptions they like best and to go from there. Where I do not know, or where my opponent seems flexible enough to allow me to set the assumptions, I tend to go with (3). You seem to be insisting that we use option (1) (though apparently denying that this is reductionist and claiming instead that it is compatible with (2), with which I disagree).

Either way, statements that I make under one assumption do not necessarily contradict statements that I make under another assumption. For example, the fact that I argue for moral eliminativism under assumption (2) and propose a moral theory under assumption (3) is not a contradiction, because the background assumptions are different. I can well understand the confusion. 
I have tried, from time to time, to specify the assumptions, but I do sometimes take the assumptions for granted and even guess wrongly as to which assumptions we are working under. But the distinctions above express my overall view. |
05-01-2002, 10:36 AM | #84 |
Veteran Member
Join Date: Mar 2002
Location: 920B Milo Circle
Lafayette, CO
Posts: 3,515
|
Originally posted by bd-from-kg:
(1) We want (in a very real sense) to do what we would choose to do if we had enough knowledge and understanding (K&U). (2) What we would do if we had enough K&U would be to act altruistically – i.e., to take everyone’s interests into account equally. (3) An act is right if any fully rational agent with enough K&U would do it, and wrong if no such agent would do it.

I would like to note that (1) and (2) are parts of a psychological theory, not a moral theory. This is not a criticism – every moral theory needs to be linked in some way to a psychological theory. (Note 1: It does mean that the thesis needs to be stated in such a way that it is falsifiable, so that it can be compared to other psychological theories.)

When I started my investigation into ethics, I began to look at several psychological theories and discovered I could easily get lost in the field and never find my way back out to study ethics. Therefore, I took a shortcut. I decided to work with the most common and pervasive theory within the field, under the assumption that professionals within the field generally know what they are talking about and the dominant position should be taken the most seriously.

That theory is BDI psychology – the idea that Beliefs + Desires form Intentions which, in turn, lead to action. Desires provide the motivating force and select the ends of human action, while beliefs are motivationally inert but useful in selecting the means to an end. Each person acts so as to maximize fulfillment of his existing desires and, given his beliefs, selects the best means to that end. (Note 2: I actually found one of the challenging theories much more to my liking – script theory. But I had no reason to trust that my opinion, based on limited research, was better than the dominant opinion of professionals in the field.)

If I may, I would like to call your alternative theory KU theory. 
KU theory differs from BDI theory in that, within BDI theory, understanding refers to collecting more and truer beliefs about a subject. However, because beliefs are motivationally neutral, understanding does not influence an agent’s desires. It simply allows an agent to more efficiently select a means to his desires. KU theory, on the other hand, holds that understanding causes a change in desires, so that the individual, though motivated by his own desires, also internalizes the desires of others that he sufficiently understands.

I asked you what you would do if it turned out that KU theory was false. You answered: “Of course, what I’d really do in that case is to conclude that my theory is wrong.” (Wrong in what way? You did not specify. Of course, proof that (2) was false would cause you to conclude that (2) was false, but would you also conclude that (3) was wrong? Would you no longer say that rape is wrong if even one person exists who would want to commit rape even with perfect knowledge and understanding?)

It seems fair that I should also answer what I would do if I discovered that BDI theory had to yield to the superiority of KU theory. In fact, I would need to do nothing. I hold that what a person should do is what a person with good desires would do, and good desires are evaluated by how those desires stand in relationship to all other desires. In practice, you seem to assert something very similar. KU theory simply adds to BDI theory a mechanism whereby agents can directly internalize the desires of others and consider all desires at once. The conclusion that he should do that which considers all desires at once remains the same. What remains different on our two accounts, then, would be a purely intellectual difference. 
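(Editor's note: the structural difference between the two theories can be put in quasi-algorithmic terms. The following toy sketch is an editorial illustration, not anything either poster wrote; all names and numbers are invented. Under BDI, beliefs only route the agent's existing desires to means; under KU, sufficient understanding rewrites the desire set itself before the same choice procedure runs.)

```python
# Toy sketch (editor's illustration; all names and numbers are invented).
# BDI: beliefs are motivationally inert -- they only estimate how well each
# action would fulfill the agent's *existing* desires.
def bdi_choose(actions, desires, beliefs):
    def fulfillment(action):
        outcome = beliefs[action]  # belief about what the action brings about
        return sum(w for end, w in desires.items() if end in outcome)
    return max(actions, key=fulfillment)

# KU: sufficient understanding changes the desires themselves, internalizing
# the desires of sufficiently understood others, then chooses as before.
def ku_choose(actions, desires, beliefs, others_desires, understanding):
    merged = dict(desires)
    for end, w in others_desires.items():
        merged[end] = merged.get(end, 0) + understanding * w
    return bdi_choose(actions, merged, beliefs)

beliefs = {"keep": {"my_gain"}, "share": {"their_gain", "my_small_gain"}}
desires = {"my_gain": 2, "my_small_gain": 1}   # the agent's own desires
others_desires = {"their_gain": 3}             # desires of another person

print(bdi_choose(["keep", "share"], desires, beliefs))                    # keep
print(ku_choose(["keep", "share"], desires, beliefs, others_desires, 1))  # share
```

The two agents share one choice rule; they differ only in whose desires enter it, which is the "purely intellectual difference" the post describes.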
Your definition of ‘moral-should’ (and corresponding moral terms) still implies that it is fortunate that KU theory is true, because if KU theory is false – or ever becomes false – for even one person, then morality goes out the window. My definition of ‘moral-should’ (and corresponding moral terms), by contrast, would continue to hold that, even if KU theory is false, people should still do what they would do if KU theory were true – though we may still need legal and moral sanctions in order to get even the person with sufficient knowledge and understanding to do what he morally should. |
05-02-2002, 06:03 AM | #85 |
Veteran Member
Join Date: Mar 2002
Location: 920B Milo Circle
Lafayette, CO
Posts: 3,515
|
Originally posted by bd-from-kg:
(1) We want (in a very real sense) to do what we would choose to do if we had enough knowledge and understanding (K&U). (2) What we would do if we had enough K&U would be to act altruistically – i.e., to take everyone’s interests into account equally.

I would like to post another problem I have encountered with BD's theory (even if there is nobody out there; I'm saving this thread and would like to have all of my notes in one place).

The truth of (1), I realized, is theory-dependent. Within BDI moral theory, knowledge and understanding are motivationally neutral. Yet they are useful in selecting the most efficient means of fulfilling our desires. Thus, it makes sense to say that we at least hope that our actions are those that fulfill condition (1). However, the cost of obtaining an additional piece of information can often be expected to exceed the benefit, in which case we rationally choose to remain ignorant and hope for the best.

But KU theory changes this and says there is a species of understanding that will actually change our desires -- whereby we internalize the desires of others by coming to a proper understanding of them. Once we make this basic change, we need to revisit the other parts of the theory to see whether and to what degree they remain true. We need to ask whether this altered conception of understanding affects the role of understanding. What is the personal value (in terms of existing desires) of having our desires changed?

The justification for (1) within BDI theory no longer applies. It is no longer the case that the most efficient means of fulfilling our present desires is that which we would identify if we had perfect (or sufficient) understanding of the facts. A new justification (consistent with KU theory) is needed here. Or, if we take (1) as a given, this implies that BDI theory trumps KU theory, because BDI theory provides a ready account of why (1) is true, while KU theory does not.

[ May 02, 2002: Message edited by: Alonzo Fyfe ] |
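(Editor's note: the cost-benefit point about rational ignorance is a standard value-of-information calculation, which the toy sketch below makes concrete. The numbers are invented for illustration and come from neither poster: when the expected gain from learning which state of the world obtains is smaller than the price of finding out, the agent does better, by his own existing desires, to remain ignorant.)

```python
# Toy value-of-information calculation (editor's illustration, invented numbers).
# An agent can act now under uncertainty, or pay a cost to learn which of
# two states of the world ("A" or "B") actually obtains.
p_state_a = 0.5                 # prior probability of state A
payoff = {                      # payoff of each action in each state
    ("act1", "A"): 10, ("act1", "B"): 0,
    ("act2", "A"): 0,  ("act2", "B"): 6,
}

def expected(action, p_a):
    return p_a * payoff[(action, "A")] + (1 - p_a) * payoff[(action, "B")]

# Acting in ignorance: pick the action with the best expected payoff.
best_uninformed = max(expected("act1", p_state_a), expected("act2", p_state_a))

# Acting informed: pick the best action in each state, weighted by the prior.
informed = p_state_a * payoff[("act1", "A")] + (1 - p_state_a) * payoff[("act2", "B")]

value_of_information = informed - best_uninformed
cost_of_information = 4.0
# If the information costs more than it is worth, remaining ignorant is the
# rational choice -- exactly the case described in the post above.
print(value_of_information)                       # 3.0
print(cost_of_information > value_of_information) # True: stay ignorant
```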
05-02-2002, 09:32 AM | #86 |
Veteran Member
Join Date: Jul 2000
Location: King George, VA
Posts: 1,400
|
Alonzo Fyfe:
No, I haven’t abandoned this thread. But if you continue the practice of stringing together five posts to my one, I’ll have to. The normal policy is to allow someone time to reply to one post before making another. Not everyone has unlimited time on their hands, or is willing to devote what time they have to one thread.

This is a reply to your first April 29 post.
The problem here is that there is an ambiguity in phrases like “common usage” and “the way this term is commonly used”. In both cases one could be referring either to what most people understand themselves to be saying, or to a reasonable interpretation of the language in question. Of course this ambiguity doesn’t arise when what people understand themselves to be saying makes sense. But when it doesn’t (for example, when what they understand themselves to be saying involves beliefs about the existence of nonexistent entities), before concluding that they’re just talking meaningless nonsense (or simply that everything they say is necessarily false, which comes to much the same thing) one should analyze what they are saying, and the contexts in which they say it, to determine whether there is a meaningful, reasonable interpretation of their statements.

This is what my theory purports to be. It’s possible that there are other meaningful, reasonable interpretations as well, but the existence of even one is sufficient to show that moral language is not inherently nonsensical, as Mackie apparently claims. If it is possible to understand moral language in a way that does not require the existence of a mysterious property of “rightness” or “goodness”, then Mackie is wrong; there is no need to “reinvent right and wrong”; one need only understand moral language in a reasonable way.

The fact that most people don’t understand it in that way is irrelevant. What people “understand” by moral language is all over the lot anyway. What I’m saying is “Here’s a way to understand moral language as it is actually used which is logically coherent, makes moral statements meaningful, and preserves the link between “rightness”, motivation of the agent, and the effects of the act on other people.”

So: yes, my theory purports to be a reasonable interpretation of moral language as it is actually used by most people. 
But no, it does not purport to be a description of what most people mean when they use moral language.
There would still be two reasonable interpretations of “You shouldn’t rape that woman” that are roughly along the lines I suggest. The first is “If you were fully rational and had enough K&U, you wouldn’t rape that woman”. Basically this is an appeal to his “better nature” (assuming that he has one). The second is “Not only do I disapprove of your doing it, but I’m convinced that I would disapprove of it even if I were fully rational, and no matter how much K&U I had.” Essentially this is a warning. To oversimplify a bit, it means, “If you try to do this I intend to stop you or punish you, and there are no reasons you can possibly give that would persuade me not to.”

But IMO these are not moral interpretations. I think that it is inherent in the logic of moral discourse that moral statements involve a claim of universality. The statement “A should not do X” (when “should” is used in the moral sense) implies that reasons exist for doing (or not doing) the act in question that any rational (or at least rational human) agent would find compelling if he fully understood them. If no such reasons exist, the moral claim is false, and if no such reasons ever exist, the claim of universality inherent in the logic of moral discourse cannot be sustained.

In that case our standards for what constitutes a “reasonable” interpretation of moral statements would have to be relaxed: any interpretation in which some positive moral statements are true would not involve the claim to universality (in the sense that I mean it). Of course they could be interpreted as universal in a different sense; for example, “It would be wrong for A to do X” might mean that doing X violates a “moral principle” that I am willing to apply to all cases without exception, even when I am the agent. But this is a subjective interpretation, and as such violates the logic of moral discourse in other ways. 
In particular, if I say “It would be wrong for A to do X” and you say, “No, it wouldn’t be wrong”, on this interpretation we are not disagreeing any more than we would be if I said “Melons taste awful” and you replied, “No, they’re delicious”. Perhaps the simplest and most satisfactory modification would be to limit the “universe” of persons to whom moral statements are (implicitly) claimed to be applicable. (Yes, I’m familiar with the objections to this: it violates the logic of moral discourse in certain ways. But on your hypothesis all moral theories that are not based on false factual assumptions violate this logic one way or another.)

A final note: if I’m right about what the logic of moral discourse implies, my interpretation is “minimalist” in that any “reasonable” interpretation – i.e., one that is faithful to the logic of moral discourse – must entail that moral statements imply what I say they mean. That is, any reasonable interpretation must imply that any sufficiently rational person with enough K&U would do what it says is “right”. (For example, theistic moral theories, IMO, entail that if one is perfectly rational and fully understands God’s nature, purposes, and intentions, as well as one’s own nature, one will do what’s “right”. [I know that some theists deny this, but I have never been able to follow their reasoning.])

Other moral theories are possible, but they cannot claim to be reasonable interpretations of moral language as it is commonly used. Even in this case, I think that the proper function of the moral philosopher is to find an interpretation that is as faithful as possible to the logic of moral discourse rather than ignoring this logic and redefining moral statements in a way that has no relationship to their function and purpose.
|
05-02-2002, 01:29 PM | #87 |
Veteran Member
Join Date: Jul 2000
Location: King George, VA
Posts: 1,400
|
Alonzo Fyfe:
This is a reply to your second April 29 post.
But I think your point is that an act-utilitarian would say that there is no reason to suppose that knowing that an act is “right” would be regarded by a rational agent as a reason for doing it. This is simply false, as I’ll show later.
Now some utilitarians have written as though they believed that “X is right” means "X maximizes utility based on any alternative to X" (or whatever their exact criterion is). If this was actually their position, they were making an elementary error, which G.E. Moore famously referred to as the “naturalistic fallacy”. I do hold that such a theory is not a “moral theory”. Moore described the problem very nicely in Principia Ethica. In one passage he comments regarding one version of this fallacy:
In either case, we do not have anything that can be meaningfully called a “moral theory”. Morality is about giving reasons for doing things (though not all reasons qualify as moral reasons). If a so-called “moral theory” implies that there is no rational reason for someone to regard either the “rightness” of a proposed action, or the reasons given for calling it “right”, as reasons for doing it; if there are no grounds for expecting that a rational agent would be more likely to do something if he considered it “right” than if he considered it “wrong”, it’s not a moral theory at all. Another indication that there is a fallacy in defining “right” in terms of some state of affairs in the “natural world” is that people who do this often argue about whose “definition” is correct. As Moore puts it:
Moore gives this illustration:
To illustrate this, suppose that I argue that actions with the “maximal utility” property (or whatever property it is that you propose) are not necessarily right, on the grounds that it is possible to imagine an action that made everyone stark raving mad but perfectly happy, and quite obviously (in my opinion) such an action would not be right. You might reply that this isn’t really possible, or that such an action really would be right (with some arguments to support this position), or decide to modify your definition of “right”. But what you would not do, I suspect, is to reply “That’s nonsense. Such an act would be right by definition, and that’s all there is to it.” But that’s just what you should say if your position really is that what it means to say that an act is right is that it satisfies your criterion.

As for whether utilitarians in general hold that “right” is defined as that which conduces to the greatest happiness, or maximum utility, or whatever, let’s look at the philosopher who is generally considered the quintessential utilitarian, John Stuart Mill. The very first sentence of Utilitarianism remarks on:
So according to Mill, utility is not the definition of right and wrong, but the criterion or test of it. The “principle of utility” has a “sanction” and is “susceptible of proof” – neither of which can be said of a definition. It seems quite clear that Mill’s position is that as a matter of fact right actions are always those which conduce to the greatest happiness, not that he simply defines “right” actions to be those which conduce to the greatest happiness. In short, Mill (unlike you) avoids the naturalistic fallacy.
Thus Mill not only does not say that a person will always act to maximize his own pleasure, but emphatically contradicts this claim. Indeed, he argues that the will is not always directed toward what one desires.
|
05-02-2002, 03:32 PM | #88 |
Veteran Member
Join Date: Jul 2000
Location: King George, VA
Posts: 1,400
|
Alonzo Fyfe:
However, I think it’s clear that one of the things that people want is to be rational. And this is the sense of “want” in the original formulation of (1). Moreover, to say that someone is more rational means (among other things at least) that this desire is stronger relative to other desires. In a perfectly rational person, it would control all other desires. (It wouldn’t override other desires, because its function would essentially be that of an arbitrator or mediator of other desires.)

As for the “psychological theories” you mention, they sound more philosophical than psychological to me. Be that as it may, while the claim that abstract knowledge is “motivationally neutral” is somewhat plausible, the idea that understanding is always neutral in this sense is patently absurd. See my example of learning to play chess at the end of my April 27 post to PB. See also the last part of my April 18 reply to PB on the <a href="http://iidb.org/cgi-bin/ultimatebb.cgi?ubb=get_topic&f=52&t=000137" target="_blank">Moral Subjectivism: One View</a> thread.
As for your “even one person” point, which is getting rather tiresome: if my theory is right, the human psyche has a certain intrinsic nature, and this nature is the ultimate foundation of morality. If one person were found not to have this nature, the sensible thing would be not to consider him a person, rather than to “throw morality out the window”. Alternatively, one could declare that such a person was not part of the “universe” about which moral statements make “universal” claims. This would mean that he would not be subject to praise or blame for his actions, but neither would he have any rights. The rest of us would treat him pretty much the way we would treat a dangerous wild animal.

(By the way, so far as I know the only individuals who even come close to matching this description are pure psychopaths. They are apparently incapable of any degree of empathetic understanding (not just feelings like compassion, but understanding how other people tick). As my theory would suggest, they are also totally impervious to moral suasion. However, they are also plainly irrational in a number of ways, and there is significant evidence that their brains are all defective in essentially the same way.)
[ May 02, 2002: Message edited by: bd-from-kg ] |