FRDB Archives

Freethought & Rationalism Archive

The archives are read only.


Old 04-14-2002, 03:39 PM   #1
Veteran Member
 
Join Date: Aug 2000
Location: Indianapolis area
Posts: 3,468
Post Moral Subjectivism: One View

In this thread, I intend to present as complete a description of my own view of morality as I am able. My intent is, first, to provide a fleshed-out example of subjectivist moral thought for the several posters who have expressed confusion regarding how such a theory would work and, second, to expose my own thinking to constructive criticism in the hopes of further refining it. This is going to be a long post and I understand that few people will find it interesting enough to read in full, but the process of typing it has helped me immensely in organizing my own thinking, so it’s been worthwhile to me even if it generates no discussion.

Obligatory disclaimer: I am but one moral subjectivist. My views should not be taken to represent any sort of subjectivist consensus. Although I believe that my basic interpretation of morality is fairly uncontroversial among subjectivists, there are many points on which my interpretation may be highly idiosyncratic.

In this first post, I intend to define the terms I will be using and describe my basic metaethical thinking and interpretation of agents’ behavior. At least one, and possibly two posts will follow, dealing with explicitly moral decision making and the role of contract theory in my moral thought. I will use boldface for important terms in the sentences in which I define them. Some terms may be defined more than once, to incorporate ideas discussed since the first definition was given. Such redefinitions will not contradict the original definitions, but restate them in new terms. Here goes nothing.

First, I need to explain what I mean by “morality,” or “ethics,” two terms that I use interchangeably. Morality, or ethics, is the study of what moral agents ought to do. A moral theory, or an ethical theory, is any theory that generates prescriptive statements for moral agents.

In order to explain what I mean by “moral agent,” I must first explain what I mean by “agent”:

An agent is any entity that possesses both values and the volition to pursue them. To say that an entity possesses values is to say that that entity prefers certain states of affairs to certain other states of affairs. For example, A may prefer the state of affairs where (s)he possesses a new car to the state of affairs where (s)he does not possess a new car. In this case we would say that A values a new car. To say that an entity has volition is to say that that entity’s actions are determined, at least in part, by processes internal to the agent, rather than solely by external forces, as in the case of a rock rolling down a hillside. Most, if not all, earthly animals can be considered agents under this definition.
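The two conditions in this definition can be sketched as a toy model (the class, state names, and numeric scores below are purely illustrative, not part of the original post):

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """A toy agent: values are a preference ordering over states of
    affairs; volition is the internal act of choosing among them."""
    # Higher score = more preferred state of affairs (scores are an
    # illustrative device; the post only requires an ordering).
    preferences: dict[str, float] = field(default_factory=dict)

    def prefers(self, state_a: str, state_b: str) -> bool:
        """True if this agent values state_a over state_b."""
        return self.preferences.get(state_a, 0.0) > self.preferences.get(state_b, 0.0)

# A prefers the state of affairs where (s)he possesses a new car,
# so we say that A values a new car.
a = Agent(preferences={"has new car": 1.0, "no new car": 0.0})
print(a.prefers("has new car", "no new car"))  # True
```

The model deliberately says nothing about *why* an agent holds the preferences it does; that matches the definition, which requires only that preferences exist and can guide action.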

A subset of agents can be referred to as rational agents. A rational agent is any agent that routinely uses reasoning and abstract thought as tools in the pursuit of its values. Human beings are the only rational agents that I am aware of.

A subset of rational agents can be referred to as moral agents. A moral agent is any rational agent whose actions have the potential to affect other rational agents’ pursuit of their values, and whose own pursuit of values is potentially affected by the actions of other rational agents. I would like to point out several implications of this definition:

First, only rational agents can be moral agents. A moral theory is useless if those agents to whom it applies are incapable of applying the prescriptive statements it generates to the modification of their behavior. In order to do so, those agents must be able to understand the statements, which requires rationality.

Second, social interaction is an essential component of what we call morality. Interaction with other rational agents both defines and necessitates morality.

Third, all known rational agents are also moral agents. In order for a rational agent not to be a moral agent, that agent would have to either a) live in absolute isolation, so that its actions would never affect other rational agents and the actions of other rational agents would never affect it, b) interact with the world so ineffectually (in other words, be so “weak”) that its actions never have the potential to affect others, or c) interact with the world in such an effective manner (in other words, be so “strong”) that the actions of others never have the potential to affect it.

Building on those definitions already given, morality, in my view, can be more precisely described as the study of what entities who use rational thought in the pursuit of their values ought to do in social interactions with other such entities when all involved have the ability to affect each other’s pursuit of values.

What, then, do I mean by “ought”? This is one of the more difficult questions that any moral theory must answer. I will describe here my view on the sensible interpretation of prescriptive statements. I’d like to take a moment here to thank bd-from-kg who, although he disagrees with me on nearly every topic related to morality, has been indispensable, as a debate antagonist, in refining this view.

A prescriptive statement is any statement that prescribes some course of action for an agent. In other words, any statement that may be sensibly formulated as “A ought to do X,” or “A should do X,” is a prescriptive statement.

How do we know which prescriptive statements are true and which are false? An important clue lies in our answers to questions such as “Why ought A do X?” Such answers are usually of the form “A ought to do X because X will lead to Y,” where Y is assumed to be a desirable state of affairs.

Before I continue, I would like to address the observation that some answers to such questions are not of the form “A ought to do X because X will lead to Y.” Some answers are of the form, “A ought to do X because X is the right thing to do.” The proper response to such answers is “Why is X the right thing to do?” Answers to this question are usually of the form “X is the right thing to do because X leads to Y,” where Y is assumed to be a desirable state of affairs, or else of the form “X is the right thing to do because X is the right thing to do (or because X is intrinsically right, or virtuous, or whatever).” Answers of the first type are equivalent to “A ought to do X because X will lead to Y,” so we can continue down our original line of reasoning. Answers of the second type are, in my view, invalid, as they presuppose some intrinsic property of “rightness” that is inherent in certain acts and Occam’s Razor cuts that presupposition away. Additionally, it is very unsatisfying to answer with what amounts to “Just because” when other avenues of explanation are available. Theories which rely on these sorts of answers, deontological moral theories, are not usually taken very seriously by modern mainstream philosophers, at any rate.

My stance is that prescriptive statements, if they are to be meaningful at all, must reference, explicitly or implicitly, some end, a state of affairs that the prescribed course of action will bring about. Without such an end, any answer to “Why should A do X?” will amount to “Just because.”

Now, how do we know whether the statement “A ought to do X because X will lead to Y,” is true or not? First of all, we can obviously ask if X will really lead to Y. If not, then the statement is surely false. But, even if X will certainly lead to Y, is it true to say that A ought to do X? What if Y is not a desirable state of affairs for A?

As stated previously, an agent prefers certain states of affairs to others and has the capability to modify its own behavior in order to bring those states about. Further, a rational agent has the additional capability to use reasoning and abstract thought as tools to guide this behavioral self-modification. How, then, does a rational agent determine how to behave? Ideally, (s)he will consider the consequences of every possible behavior rationally and modify his/her own behavior to match whichever behavior will best transform the current state of affairs into the state of affairs that most closely resembles the state of affairs that (s)he prefers.

Now, the ideal case does not always obtain. In many situations, a rational agent may not have enough information to determine what behavior is ideal, or an agent may be imperfectly rational, resulting in a less than ideal conclusion’s being reached. In addition, time constraints prevent rational agents from considering all possible behaviors. Nevertheless, it remains a fact that the agent, ideally, will behave in the manner which will bring about the state of affairs preferred by that agent.

My stance is that this ideal is the ideal by which prescriptive statements are measured. In other words, when faced with the choice between a number of behaviors, I assert that an agent should behave in such a way as to bring about its preferred state of affairs. I maintain, then, that a prescriptive statement is true if, and only if, the course of action it prescribes is the course of action that will bring about the state of affairs that, out of all feasible states of affairs, most closely resembles that state of affairs preferred by the subject of the prescriptive statement. “A ought to do X,” is true if, and only if, X is the course of action that will bring about Y, where Y is the state of affairs, out of all feasible states of affairs, that most closely resembles the state of affairs preferred by A.
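This truth condition lends itself to a small formal sketch. In the toy rendering below, each feasible action is mapped to the state of affairs it brings about, and each state is scored by how closely it resembles A's preferred state; both mappings, and the scores themselves, are invented for illustration:

```python
def ought(actions: dict[str, str], resemblance: dict[str, float], x: str) -> bool:
    """Toy rendering of the truth condition: "A ought to do X" is true
    if, and only if, X, out of all feasible actions, brings about the
    state of affairs that most closely resembles A's preferred state.

    actions: maps each feasible action to the state it brings about.
    resemblance: scores each state by closeness to A's preferred state
    (higher = closer).
    """
    best = max(actions, key=lambda act: resemblance[actions[act]])
    return x == best

actions = {"work overtime": "has new car", "do nothing": "no new car"}
resemblance = {"has new car": 1.0, "no new car": 0.0}
print(ought(actions, resemblance, "work overtime"))  # True
print(ought(actions, resemblance, "do nothing"))     # False
```

Note that truth here is always relative to the subject A's own preference ordering, which is the subjectivist core of the view.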

As limited and imperfectly rational beings, we are obviously unable to say with certainty which courses of action will lead to which states of affairs, particularly to ideal states of affairs, consisting of many complexly interacting variables. Further, few of us can say with certainty exactly what state of affairs we would prefer to all others. As such, prescriptive statements can also be interpreted in a more limited fashion: To truthfully say that “A ought to do X,” is to say that X will lead to Y, where Y is some state of affairs that A prefers to the current state of affairs, even if Y is not ideal for A. This more limited sense of “A ought to do X,” is the sense that I will use, unless I explicitly note otherwise, when discussing applied ethics.

Another way to think about prescriptive statements, consistent with my interpretation, is to consider them to be a special case of descriptive statement. By this interpretation, the prescriptive statement “A ought to do X,” is simply an alternative way to phrase the descriptive statement “X is the most efficient means to Y, which is valued by A.”

Combining everything I have said so far, we are now ready to redefine morality yet again. Morality is the study of the sorts of behavior which will allow a rational entity to bring the existing state of affairs closer to the state of affairs that (s)he prefers when other such entities are also attempting to bring the existing state of affairs closer to the states of affairs that they prefer.

I’m going to stop at this point to allow for criticism of my thoughts so far. My next post will specifically discuss moral decision-making or, that is to say, decision-making when the values of two or more individuals conflict.
Pomp is offline  
Old 04-14-2002, 04:49 PM   #2
Veteran Member
 
Join Date: Jun 2001
Location: my mind
Posts: 5,996
Thumbs up

Very interesting. I basically agree with everything you said so far. I am looking forward to your next installment
99Percent is offline  
Old 04-14-2002, 08:37 PM   #3
Veteran Member
 
Join Date: Mar 2002
Location: 920B Milo Circle Lafayette, CO
Posts: 3,515
Post

First, I do not know why you use the term 'moral' in this. Everything you say provides a reasonable account of practical-ought. You can eliminate a lot of unnecessary confusion simply by tossing away any use of moral terms. They seem to serve no practical purpose.

Second, it still seems, on your account, that a person may -- and perhaps should -- advance their own interests at the expense of others whenever they encounter a situation where they may do so with impunity. And that, in the real world (as opposed to some hypothetical iterated prisoner's dilemma) these instances are quite common.

By way of translation, I can accept your account that the value of a proposition X can be known relative to the desires of a single person P in a society with others who have different desires. And yet it remains the case that X also has a value relative to the desires of all people. That these are not always the same. That the cases where they differ are precisely those cases where P can pursue X -- which thwarts the desires of others -- with little or no risk of being made to suffer ill consequences. That, in such instances, your theory would count the pursuit of X moral (perhaps even obligatory) in virtue of its being the case that X fulfills P's desires -- while I would count X (or, more precisely, P's pursuit of X) as morally bad in virtue of the fact that X thwarts far more desires than it can fulfill.
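The divergence between the two accounts can be made concrete with a toy tally (the names and numbers are invented for illustration; positive values stand for desires fulfilled, negative for desires thwarted):

```python
def value_to(x_effects: dict[str, int], people: list[str]) -> int:
    """Net desires fulfilled (+) minus thwarted (-) by act X,
    tallied over the given set of people (hypothetical figures)."""
    return sum(x_effects.get(p, 0) for p in people)

# X fulfills P's desires but thwarts those of Q and R,
# and P can pursue X with impunity.
effects = {"P": 2, "Q": -3, "R": -4}
print(value_to(effects, ["P"]))            # 2  -> X counts as good relative to P's desires
print(value_to(effects, ["P", "Q", "R"]))  # -5 -> X counts as bad relative to all desires
```

The disputed cases are exactly those where the first tally is positive and the second negative.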

Third, I hold that morality is ultimately concerned with what people ought to desire -- and that the morality of actions, laws, social customs, and the like are all derived from an evaluation of desires.

The right act, for example, is that act which a properly motivated person would perform, where a properly motivated person is a person with good desires, and where desires are evaluated according to their tendency to thwart or fulfill other desires, either directly or through their effects (the actions they tend to cause). Your account, as I read it, does not seem to have much to say about what a person should want.
Alonzo Fyfe is offline  
Old 04-14-2002, 09:39 PM   #4
Veteran Member
 
Join Date: Aug 2000
Location: Indianapolis area
Posts: 3,468
Post

Alonzo Fyfe,

First, I do not know why you use the term 'moral' in this. Everything you say provides a reasonable account of practical-ought. You can eliminate a lot of unnecessary confusion simply by tossing away any use of moral terms. They seem to serve no practical purpose.

As noted in the original post, I haven’t really begun to address actual morality as such. My first post was meant to define some terms and lay down a foundation for a future discussion of morality, which I intend to post in a day or two, once anyone who cares to has had a chance to criticize my groundwork (and, incidentally, once I’ve had a chance to finish typing it).

I agree that my theory has, thus far, dealt exclusively with what you refer to as “practical-ought.” Lest there be any doubt, I will assure you up front that every line of moral reasoning I pursue once I begin down that path will rest firmly upon a practical-ought foundation. As I have explained in the past, I find this to be the greatest strength of my own moral view, and of the subjectivist view in general: every “moral-ought,” to use your terms, is, indeed, based firmly on a “practical-ought.” As a result, this view of morality is equipped with a built-in compelling reason for a moral agent to do what (s)he ought to do, even if that reason does not apply in all cases (it is sometimes possible to act unethically and avoid consequence). As I find the effectiveness with which a moral theory handles questions such as “Why ought A do X?” a critical metaethical standard, this puts my view, which has a built-in answer to that question in most cases, head and shoulders, in my own estimation, above views that do not have such an answer in any case.

Second, it still seems, on your account, that a person may -- and perhaps should -- advance their own interests at the expense of others whenever they encounter a situation where they may do so with impunity. And that, in the real world (as opposed to some hypothetical iterated prisoner's dilemma) these instances are quite common.

Alonzo, if you can give A some compelling reason not to advance his own interest at the expense of B’s interest if he has the opportunity and the desire to do so, then I invite you to present it. A foundation based on self-interest is the only feasible foundation for a moral theory, in my view, because it is the only foundation that relies on motivations that an agent does, in fact, hold. You can no doubt give me half a dozen reasons for A to sacrifice his own good for the good of others, but if A does not actually hold any of those reasons, they amount to nothing more than wishful handwaving.

That, in such instances, your theory would count the pursuit of X moral (perhaps even obligatory) in virtue of its being the case that X fulfills P's desires

It is certainly not obligatory for P to pursue X. P is free to pursue some goal other than X, but if X is what P truly prefers, then P would have to be irrational or deluded to do so. My theory does not create any obligation for P to pursue X. My theory simply notes that, if P is acting rationally and is cognizant of the situation, P will pursue X.

-- while I would count X (or, more precisely, P's pursuit of X) as morally bad in virtue of the fact that X thwarts far more desires than it can fulfill.

And why on earth is P concerned with the thwarting of values that are not his own?

Incidentally, I have said nothing that would preclude your calling P’s actions “morally bad.” It is, indeed, possible that P ought to do things that others would call “morally bad” if they knew about them. I’ll deal with this in more detail in my second or third expositional post, depending on where I end up putting the break between them.

Third, I hold that morality is ultimately concerned with what people ought to desire -- and that the morality of actions, laws, social customs, and the like are all derived from an evaluation of desires.

Can you provide some convincing argument that anyone objectively ought to desire anything other than what they actually do desire, except possibly as a means to some more fundamental desire?

Your account, as I read it, does not seem to have much to say about what a person should want.

No, because I see no way to support the assertion that any individual should want anything in particular except, as noted, in those cases where one desire acts as a means to another.

It seems that your objections all reduce to one root objection: this is not what I think morality should be. I understand that, but I’m primarily interested in objections that address my view more directly. Can you point to any logical inconsistencies in the theory itself? Can you point to any observed facts that, if taken into account, would invalidate the theory?
Pomp is offline  
Old 04-16-2002, 08:43 AM   #5
Veteran Member
 
Join Date: Jul 2000
Location: King George, VA
Posts: 1,400
Post

Pompous Bastard:

1. I’m not at all sure that “deontological theories are not taken very seriously by modern mainstream philosophers” as you claim. Theories with a deontological element, at any rate, are certainly taken seriously by the great majority of people, including many serious thinkers. For example, imagine a situation in which the public welfare would be best served by deliberately convicting an innocent man and sentencing him to death through excruciating torture. The great majority of people (and many moral philosophers, I suspect) would say that this would be morally wrong regardless of the “good” results it would have.

2. “A ought to do X” certainly does not mean that “X will lead to Y, where Y is some state of affairs that A prefers to the current state of affairs”. This is very easy to show. For example, suppose that A has two choices, X and Y, both of which will lead to a situation preferable to the current situation, but also such that the situation resulting from Y is one that almost anyone (including A) would prefer to the situation resulting from X. No reasonable person would say that A ought to do X under these conditions. A better try would be: “A ought to do X” means “X will lead to a state of affairs that A prefers to any of those that would result from any possible alternative.”

3. While this revised interpretation might seem plausible when A stands for oneself, it doesn’t seem at all plausible when A stands for someone else. For example, if Jones is thinking of breaking his promise to repay a large loan that you made to him, and it will clearly be in his interest to do so (i.e., he will prefer the consequences of not paying to the consequences of paying), according to your interpretation you must say that it would be “right” for him to do so. But even according to most subjective theories, while from Jones’s point of view welshing on his debt might be “right”, from yours it would clearly be “wrong”. To avoid this, you need to say something like “A ought to do X” means “I would prefer the consequences of A doing X to the consequences of any possible alternative.”

4. You say “answers of the second type [‘A ought to do X because X is the right thing to do’] are, in my view, invalid, as they presuppose some intrinsic property of ‘rightness’ that is inherent in certain acts and Occam’s Razor cuts that presupposition away.” This statement has several problems.

(A) The statement “A ought to do X because X is the right thing to do” is a tautology. No serious person offers a tautology as a substantive answer to a question. This is a straw man.

(B) Objective theories do not necessarily presuppose some intrinsic property of “rightness”. This is false for all consequentialist theories – for example, any version of utilitarianism. The consequences of an act (under a given set of circumstances) cannot reasonably be called an “intrinsic property” of the act.

(C) Occam’s Razor is really irrelevant here (it doesn’t “cut away” any supposed presupposition; in fact it’s completely out of place in this context). But there is an interesting point relating to it: it has interesting parallels with moral principles. Thus, there is no reasonable way to construe Occam’s Razor as a proposition. (What proposition does it state? How do you determine whether it’s true or false?) The only reasonable interpretation is that it is a valid principle of action. In other words, a proper statement of Occam’s Razor looks something like “Do not multiply entities needlessly” or “Choose the simplest explanation consistent with the facts”. But there is no way to show that this principle is valid. In fact, it’s difficult to say just what it means to say that it’s valid. Yet many people assert that it is objectively justified, or rational, to act in accordance with it anyway. According to many objective moral theories, all of these things are true of certain moral principles. For example, most such theories say that “Do unto others” is not a proposition, but a principle of action, which is valid even though there’s no way to show that it’s valid. In fact, it’s difficult to say just what it means to say that it’s valid. Yet many people assert that it is objectively justified, or rational, to act in accordance with it anyway.

Thus it would seem to be difficult to find grounds for dismissing the claim that "do unto others" is an objectively valid principle of action which are not also grounds for dismissing the claim that Occam's Razor is an objectively valid principle of action.

5. As always, you stubbornly ignore the fact that one’s goals or preferences might change. Thus (assuming that you accept the technical correction in point 2) you say that “A ought to do X” means that “X will lead to a state of affairs that A prefers [or that I prefer] to any of those that would result from any possible alternative.” But this ignores the fact that A might prefer the consequences of some other choice if he had more knowledge and understanding (K&U).

Here’s an example. Given your current K&U, you might find the consequences of killing Smith preferable to the consequences of leaving him alone and going on your way. But if you had more K&U – specifically, if you knew Smith and his family, and had a deep, empathetic understanding of them, so that you could imagine perfectly the suffering that murdering Smith would cause, and the future joys and satisfactions that you would be depriving him of, you might very well find the consequences of killing him vastly inferior to the consequences of leaving him alone. Would you still say that it’s right to kill Smith?

To put a finer point on this, suppose that on Monday you contemplate killing Smith and clearly prefer the state of affairs that would result from killing him to the one that would result from not killing him. According to your theory, it would be “right” to kill Smith – you “ought” to do it. So you do, and you “get away with it”; the consequences are exactly what you anticipated. But by Friday you’ve learned a lot more about Smith and his family, and no longer find the state of affairs that has resulted from killing him nearly so appealing. In fact, you wish you hadn’t done it. According to your theory, it is now “wrong” for you to have killed him. But surely this is a very odd way to use this kind of language? Doesn’t it make a lot more sense to say that on Monday you thought that killing Smith was the right thing to do, but by Friday, with your increased knowledge and understanding, you realized that it wasn’t? But in that case, to be consistent you have to say that it was wrong to kill Smith regardless of whether you later come to prefer that you hadn’t. All that matters is that you would come to prefer the consequences of not killing him to the consequences of killing him if you were to gain enough K&U.
bd-from-kg is offline  
Old 04-16-2002, 01:45 PM   #6
Veteran Member
 
Join Date: Apr 2002
Location: California
Posts: 2,029
Post

Quote:
Originally posted by bd-from-kg:
Here’s an example. Given your current K&U, you might find the consequences of killing Smith preferable to the consequences of leaving him alone and going on your way. But if you had more K&U – specifically, if you knew Smith and his family, and had a deep, empathetic understanding of them, so that you could imagine perfectly the suffering that murdering Smith would cause, and the future joys and satisfactions that you would be depriving him of, you might very well find the consequences of killing him vastly inferior to the consequences of leaving him alone. Would you still say that it’s right to kill Smith?

To put a finer point on this, suppose that on Monday you contemplate killing Smith and clearly prefer the state of affairs that would result from killing him to the one that would result from not killing him. According to your theory, it would be “right” to kill Smith – you “ought” to do it. So you do, and you “get away with it”; the consequences are exactly what you anticipated. But by Friday you’ve learned a lot more about Smith and his family, and no longer find the state of affairs that has resulted from killing him nearly so appealing. In fact, you wish you hadn’t done it. According to your theory, it is now “wrong” for you to have killed him. But surely this is a very odd way to use this kind of language? Doesn’t it make a lot more sense to say that on Monday you thought that killing Smith was the right thing to do, but by Friday, with your increased knowledge and understanding, you realized that it wasn’t? But in that case, to be consistent you have to say that it was wrong to kill Smith regardless of whether you later come to prefer that you hadn’t. All that matters is that you would come to prefer the consequences of not killing him to the consequences of killing him if you were to gain enough K&U.


Here is another example.

You have the opportunity to kill Smith because he might possibly pose a threat to you or to others. But you ultimately choose not to kill Smith because, under your current understanding of the situation, the negatives of killing Smith outweigh the good.

But later on you find out Smith had already murdered many people, and many more after you chose not to kill him.

So bd-from-kg, are you saying it was wrong NOT to kill Smith, purely on the basis of an incomplete understanding of the situation?
vixstile is offline  
Old 04-16-2002, 02:25 PM   #7
Veteran Member
 
Join Date: Jul 2000
Location: King George, VA
Posts: 1,400
Post

vixstile:

Quote:
So bd-from-kg, are you saying it was wrong NOT to kill Smith, purely on the basis of an incomplete understanding of the situation?
Under the circumstances you stipulate, yes.

I'm not sure what your point is, but it might be that I couldn't have been justified in killing Smith on the basis of information I didn't have. The question of whether the rightness of an action depends on its actual consequences or on the consequences that could reasonably have been anticipated by the agent is very old, and arguments can be made both ways. I personally prefer the approach of saying that an action of the sort you describe (which turns out to be for the best although you didn't know it at the time) is right, but that it is also right to blame (and punish) the agent for doing it because he had every reason to think it wrong.

Anyway, this is PB's thread, and we're wandering pretty far from its subject (I think).
bd-from-kg is offline  
Old 04-16-2002, 04:45 PM   #8
Veteran Member
 
Join Date: Aug 2000
Location: Indianapolis area
Posts: 3,468
Post

bd-from-kg,

1. I’m not at all sure that “deontological theories are not taken very seriously by modern mainstream philosophers” as you claim.

I don’t want to argue about this. I retract my statement. Many thinkers do take deontological ethics very seriously. However, I am presuming consequentialism here. If you, or anyone, has a serious objection to consequentialism in general, as opposed to my particular theory, please take it up in a new thread. I personally find deontological theories undeserving of serious consideration, for reasons that I will explain shortly.

2. “A ought to do X” certainly does not mean that “X will lead to Y, where Y is some state of affairs that A prefers to the current state of affairs”. This is very easy to show. For example, suppose that A has two choices, X and Y, both of which will lead to a situation preferable to the current situation, but also such that the situation resulting from Y is one that almost anyone (including A) would prefer to the situation resulting from X. No reasonable person would say that A ought to do X under these conditions. A better try would be: “A ought to do X” means “X will lead to a state of affairs that A prefers to any of those that would result from any possible alternative.”

(I am going to use X’ and X’’ instead of X and Y to represent A’s possible choices, because I have already established a convention in this thread of using X to represent actions and Y to represent goals.)

This is a bit of a quibble, isn’t it?

First of all, the interpretation of “A ought to do X,” you’re criticizing, that “A ought to do X because X will lead to Y, where Y is some state of affairs that A prefers to the current state of affairs,” is one that I explicitly indicated as a practical limitation on the more general interpretation that “A ought to do X because X will lead to Y, where Y is the state of affairs, out of all feasible states of affairs, that most closely resembles the state of affairs preferred by A,” necessitated by the fact that A has limited knowledge of all feasible states of affairs. As such, I would think that a charitable interpretation of my statements, bearing in mind the general case where A wants to achieve a state of affairs as close to his ideal as possible, would lead one to conclude that, if A’s two choices, X’ and X’’, lead to two states of affairs, Y’ and Y’’, and Y’’ more closely resembles the ideal state of affairs preferred by A, then A ought to do X’’. Your suggested alteration of my limited interpretation suffers from the same practical problem as my general interpretation: A rarely has the necessary information to choose from among all possible alternatives.

Further, it could easily be argued that, if A prefers Y’’ to Y’, and to the current state of affairs, then A would not really prefer Y’ to the current state of affairs, as Y’, presumably, does not include the opportunity to attain Y’’, while the current state of affairs does.
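The decision rule under discussion, pick the action whose predicted outcome most closely resembles the agent’s preferred state of affairs, can be put as a short sketch. This is purely illustrative: the set-based states, the resemblance measure, and all of the names here are invented stand-ins, not anything specified in the thread.

```python
# Illustrative sketch of the decision rule: among the actions A knows
# about, pick the one whose predicted outcome most closely resembles
# A's ideal state of affairs. States are modeled as sets of features;
# the similarity measure is a hypothetical stand-in.

def resemblance(state, ideal):
    """Crude similarity score: fraction of the ideal's features the state satisfies."""
    return len(state & ideal) / len(ideal)

def ought_to_do(actions, predict, ideal):
    """Return the action whose predicted outcome best resembles the ideal.

    actions: candidate actions known to the agent
    predict: maps an action to its predicted state (a set of features)
    ideal:   the agent's preferred state (a set of features)
    """
    return max(actions, key=lambda x: resemblance(predict(x), ideal))

# Example: X'' leads to a state closer to A's ideal than X' does,
# so the rule selects X''.
ideal = {"wealthy", "healthy", "respected"}
outcomes = {"X'": {"wealthy"}, "X''": {"wealthy", "healthy"}}
choice = ought_to_do(outcomes, outcomes.get, ideal)
print(choice)  # X''
```

Note that the sketch maximizes only over the actions the agent actually knows about, which reflects the practical limitation at issue: A rarely has the information necessary to canvass all possible alternatives.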

3. While this revised interpretation might seem plausible when A stands for oneself, it doesn’t seem at all plausible when A stands for someone else…To avoid this, you need to say something like “A ought to do X” means “I would prefer the consequences of A doing X to the consequences of any possible alternative.”

For the purposes of this theory, the relevant point of view is always A’s. The idea is to ask what reasons A might have for acting in a particular fashion. B’s opinion is irrelevant, except insofar as it motivates B to interfere with A’s actions.

4. You say “answers of the second type [‘A ought to do X because X is the right thing to do’] are, in my view, invalid, as they presuppose some intrinsic property of ‘rightness’ that is inherent in certain acts and Occam’s Razor cuts that presupposition away.” This statement has several problems.

(A) The statement “A ought to do X because X is the right thing to do” is a tautology. No serious person offers a tautology as a substantive answer to a question. This is a straw man.


Do you honestly mean to tell me that you have never heard anyone respond to such a question with “It’s the right thing to do,” or, “It’s just right,” or some similar statement? Perhaps this would be a straw man if I were addressing your view in particular, but these are very common responses to such questions, in my experience.

(B) Objective theories do not necessarily presuppose some intrinsic property of “rightness”. This is false for all consequentialist theories – for example, any version of utilitarianism. The consequences of an act (under a given set of circumstances) cannot reasonably be called an “intrinsic property” of the act.

When did I say that all objective theories presuppose some intrinsic rightness? I am fully cognizant of the fact that many objective theories are consequentialist, something that they have in common with my theory. To clarify:

My theory is consequentialist because it maintains that an act is to be judged based on its consequences, rather than on any intrinsic property of the act itself.

My theory is subjectivist because it maintains that the standard by which to judge those consequences is the preference of the agent performing the act, rather than any external standard.

(C) Occam’s Razor is really irrelevant here (it doesn’t “cut away” any supposed presupposition; in fact it’s completely out of place in this context).

I’m not sure how this is true. Deontological theories uniformly presuppose unevidenced metaphysical properties or entities, whether those be the intrinsic goodness of an act, rights that exist without being granted by any being, etc. Occam’s Razor states that unevidenced entities are not to be held to exist. Occam’s Razor, therefore, defeats deontological ethical theories, QED.

But there is an interesting point relating to it, namely that it has instructive parallels with moral principles. For one thing, there is no reasonable way to construe Occam’s Razor as a proposition. (What proposition does it state? How do you determine whether it’s true or false?)

We’ve had this discussion before. Occam’s Razor can be restated as the following proposition: “Multiplying entities unnecessarily is an inefficient and unreliable means by which to attain knowledge.” We determine this to be true by pragmatic means. Abiding by Occam’s Razor seems to be an efficient and reliable means by which to attain knowledge. Multiplying entities unnecessarily seems to lead to increased confusion and ignorance. Without some standard to judge our knowledge outside of our knowledge, this is the best means of judgment available to us.

According to many objective moral theories, all of these things are true of certain moral principles. For example, most such theories say that “Do unto others” is not a proposition, but a principle of action, which is valid even though there’s no way to show that it’s valid. In fact, it’s difficult to say just what it means to say that it’s valid. Yet many people assert that it is objectively justified, or rational, to act in accordance with it anyway.

The so-called Golden Rule can also be restated as a proposition: “Treating others as one would like to be treated is an effective means by which to foster mutual respect and cooperation.” We can also judge this to be true by pragmatic means but, unlike Occam’s Razor, we can verify empirically that our judgments are sound. We can directly observe that others are much more likely to treat us with respect and much more willing to cooperate with us when we treat them as the Golden Rule advises.

One can, of course, object that the Golden Rule is not a means to any end at all, and this does leave one without any means by which to judge the truth of the Rule. I’m not sure what would inspire one to do this.

Thus it would seem to be difficult to find grounds for dismissing the claim that "do unto others" is an objectively valid principle of action which are not also grounds for dismissing the claim that Occam's Razor is an objectively valid principle of action.

I would dismiss neither. The Golden Rule is an objectively valid means by which to attain the end of respect and cooperation. Occam’s Razor is, to the best of our ability to judge it, an objectively valid means by which to attain knowledge of the Universe. Asserting that the two principles are somehow universal, that is, asserting that someone who does not value respect or cooperation ought to obey the Rule, or that someone who desires ignorance ought to obey the Razor, is just silly.

5. As always, you stubbornly ignore the fact that one’s goals or preferences might change. Thus (assuming that you accept the technical correction in point 2) you say that “A ought to do X” means that “X will lead to a state of affairs that A prefers [or that I prefer] to any of those that would result from any possible alternative.” But this ignores the fact that A might prefer the consequences of some other choice if he had more knowledge and understanding.

I am not ignoring the fact that one’s values may change at all.

First, it is quite possible that, when A is not entirely sure which of the states of affairs he is able to choose between most closely resembles his ideal preferred state of affairs, A will prefer that possible state of affairs in which he makes no concrete decision but, rather, elects to gather more information with which to guide his decision.

Second, although I have not explicitly stated as much, the fact that A’s future preference may differ from A’s current preference is a factor that A must consider when making decisions, just as A must consider future reward in addition to immediate reward when deciding where to invest his money.

Third, it is simply unreasonable to expect A to make decisions based entirely on future preferences that he may someday hold. Unless A, who prefers chocolate ice cream to vanilla, has some good reason to suspect that he will prefer vanilla to chocolate tomorrow, it is irrational for A to purchase vanilla ice cream instead of chocolate based on the possibility that his preference will change. Likewise, to use an example from another thread, unless A has some good reason to suspect that he will come to have greater empathy for Smith and his family in the future, it is irrational for A to elect not to kill Smith based on this possibility when A clearly values Smith’s money more than he devalues the suffering of Smith’s family.
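The weighing described in the second and third points, counting a possible future preference only in proportion to how likely the change seems, mirrors ordinary expected-value reasoning. A minimal sketch, with every number and name invented for illustration:

```python
# Hypothetical sketch: value an option by its appeal to current
# preferences plus its appeal to a possible future preference,
# weighted by the estimated probability that the change of preference
# actually occurs. All figures are invented for illustration.

def expected_appeal(current_appeal, future_appeal, p_change):
    """Blend current and possible-future appeal by the probability of change."""
    return (1 - p_change) * current_appeal + p_change * future_appeal

# A prefers chocolate now; absent good reason to expect his taste to
# flip, he assigns the change only a small probability.
chocolate = expected_appeal(current_appeal=1.0, future_appeal=0.0, p_change=0.05)
vanilla = expected_appeal(current_appeal=0.0, future_appeal=1.0, p_change=0.05)
print(chocolate > vanilla)  # True
```

On these made-up figures, buying chocolate wins unless A has good reason to assign a substantial probability to his preference changing, which is just the point being made here.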

I realize that your position is that 1) rational agents always seek greater and greater information and that 2) such information always leads inexorably toward a greater and greater preference for perfect altruism. Please refer to the thread <a href="http://iidb.org/cgi-bin/ultimatebb.cgi?ubb=get_topic&f=52&t=000138" target="_blank">here</a> for a brief critique of 1). Regarding 2), while I don’t have any specific objection that I wish to air in this thread, I feel that the case you have made in the past is weak and speculative.

Here’s an example. Given your current K&U (knowledge and understanding), you might find the consequences of killing Smith preferable to the consequences of leaving him alone and going on your way. But if you had more K&U – specifically, if you knew Smith and his family, and had a deep, empathetic understanding of them, so that you could imagine perfectly the suffering that murdering Smith would cause, and the future joys and satisfactions that you would be depriving him of – you might very well find the consequences of killing him vastly inferior to the consequences of leaving him alone. Would you still say that it’s right to kill Smith?

With the caveat that I am claiming that it is rational to kill Smith, not that it is necessarily “right,” yes, unless I have some good reason to suspect that I will later develop the sort of intimate relationship with Smith’s family necessary to possess such deep empathy with them and to regret having killed Smith. It is irrational to make decisions to pursue preferences that I do not have and never expect to have.

To put a finer point on this, suppose that on Monday you contemplate killing Smith and clearly prefer the state of affairs that would result from killing him to the one that would result from not killing him. According to your theory, it would be “right” to kill Smith – you “ought” to do it. So you do, and you “get away with it”; the consequences are exactly what you anticipated. But by Friday you’ve learned a lot more about Smith and his family, and no longer find the state of affairs that has resulted from killing him nearly so appealing. In fact, you wish you hadn’t done it.

You’ve contradicted yourself. If the consequences were “exactly what (I) anticipated” then I would have anticipated this enhanced knowledge, and this regret, and I would not have killed Smith. You’re attempting to introduce negative consequences into a hypothetical situation that was stipulated to have no negative consequences. I would kill Smith if I had reasonable assurance that there would be no negative consequences to the act. Moving the negative consequence from Monday to Friday does not change my answer.

According to your theory, it is now “wrong” for you to have killed him. But surely this is a very odd way to use this kind of language? Doesn’t it make a lot more sense to say that on Monday you thought that killing Smith was the right thing to do, but by Friday, with your increased knowledge and understanding, you realized that it wasn’t?

If my increased K&U had led me to the realization that there were negative consequences to my action that I hadn’t previously been aware of then, yes, I would agree with you that an action I thought was right for me was actually wrong for me. In your example, however, the negative consequences are a direct result of the increased K&U I have gained. In this case, I maintain that I ought to have killed Smith and I ought not have gotten involved with his family, leading to increased empathy and, thus, regret afterwards.

But in that case, to be consistent you have to say that it was wrong to kill Smith regardless of whether you later come to prefer that you hadn’t. All that matters is that you would come to prefer the consequences of not killing him to the consequences of killing him if you were to gain enough K&U.

You’re confusing the role of information that simply leads me to reach new conclusions and information that actually changes me. The former reveals pitfalls that I wasn’t previously aware of. The latter actually creates its own pitfalls.

I hope I've expressed my views clearly.
Pomp
Old 04-18-2002, 01:19 PM   #9

Pompous Bastard:

Quote:
For the purposes of this theory, the relevant point of view is always A’s.
This is one of the logically available options, but it’s not a very attractive one. For one thing, it makes a complete hash of a good bit of what is ordinarily regarded as “moral discourse”. Thus, suppose that A says “Jones should not have sold out his friends to save his own butt”. B replies: “No, I think he should have. The criminal justice system would break down entirely if the police didn’t get confessions that way”. A responds, “Yes, but in this case his friends were actually innocent”. On your interpretation A is saying that it was not in Jones’s interest to save his own butt by selling out his friends, B is saying that it was in Jones’s interest because the criminal justice system would break down otherwise, and A is saying that just the same it wasn’t in his interest because his friends were innocent. (To make this interpretation even more absurd, we can suppose that Jones got a lighter sentence for the crime he was charged with by telling the police that his friends were guilty of crimes that he had also committed, thus avoiding punishment for those crimes altogether.) This interpretation turns the whole conversation into total nonsense. But surely it isn’t total nonsense; it was about something meaningful. The job of the moral philosopher is not to arbitrarily assign novel meanings to moral terms, but to analyze what conversations of this sort are actually about.

Quote:
My theory is subjectivist because it maintains that the standard by which to judge those consequences is the preference of the agent performing the act, rather than any external standard.
On the contrary, strange as it may seem, according to most definitions of an objective moral theory your theory is objective! That is, if A really does prefer the consequences of X to the consequences of X', then it really is true that he ought to do X rather than X'. It’s true for you or me or anyone else. It always was true and always will be true.

Quote:
Occam’s Razor ... defeats deontological ethical theories.
Well, I don’t really want to waste much time disputing this since I don’t think much of deontological theories anyway. But many philosophers reject the notion that Occam’s Razor can properly be applied to philosophical theories. In particular, the idea that it “refutes” idealism is flatly rejected by idealists. And the idea that you can show that an alleged property doesn’t exist by applying Occam’s Razor is ridiculous. Properties aren’t “entities” – at least not according to anyone but Platonists – they’re concepts.

One reason Occam’s Razor is widely considered to be inapplicable to such things is precisely that its “usefulness” in this kind of context cannot be “determined” by “pragmatic means”. This may be a justification for using it in scientific and other empirical contexts, but it simply doesn’t apply in other contexts.

But discussing whether adopting Occam’s Razor as a “principle of action” can be justified is unnecessarily complicated. To illuminate the key issues more clearly, I propose to consider the Principle of Induction instead.

You say:

Quote:
Abiding by Occam’s Razor seems to be an efficient and reliable means by which to attain knowledge.
Presumably you would say the same about the Principle of Induction. But the problem here was explained admirably by David Hume: the evidence that induction is a “reliable means to attain knowledge” itself depends on the Principle of Induction. That is, the argument is that using it seems to have “worked” pretty reliably in the past, and so on the basis of the Principle of Induction we are justified in supposing that it will work pretty reliably in the future. This is obviously circular.

Quote:
Without some standard to judge our knowledge outside of our knowledge, this is the best means of judgment available to us.
That is, we believe without proof or evidence that it’s the best means of judgment available to us.

In short, we all accept the Principle of Induction as a principle of action and apply it constantly, but we do so without any logical basis for believing it to be valid or useful.

Quote:
The so-called Golden Rule can also be restated as a proposition: “Treating others as one would like to be treated is an effective means by which to foster mutual respect and cooperation.”
No, it cannot be restated in this way. The Golden Rule is a principle of action, not an observation or factual claim. One might choose to follow this principle based on a belief that it “fosters mutual respect and cooperation”, or one may choose not to, or one might choose to follow it for some other reason. The Golden Rule cannot be identified with one possible reason for following it.

More to the point, I was using it as shorthand for “always take the interest of everyone who might be affected by your actions equally into account”; that is, as a statement of the “Principle of Altruism”. Obviously you do not regard this as a valid principle of action, but many people do. My point is that it is no more possible to show whether the Principle of Induction is a valid principle of action than it is possible to show whether the Principle of Altruism is a valid principle of action. If you reject the Principle of Altruism because it can’t be shown to be valid, to be consistent you must discard the Principle of Induction for the same reason.

Quote:
I am not ignoring the fact that one’s values may change at all.

... the fact that A’s future preference may differ from A’s current preference is a factor that A must consider when making decisions, just as A must consider future reward in addition to immediate reward when deciding where to invest his money...

[but] it is simply unreasonable to expect A to make decisions based entirely on future preferences...
As you know (or should know) very well, my point has nothing to do with whether or not you actually will have different “values” in the future; it has to do with whether you would have different values now if you had more knowledge and understanding.

Quote:
I realize that your position is that 1) rational agents always seek greater and greater information ...
It would be quite irrational to “always seek greater and greater information”. My claim is that rational agents always want (in an important sense) to make the choice that they would make if they had sufficient K&U. Since the critique you cite is based on your misunderstanding of this principle, it is irrelevant.

Quote:
bd:
... suppose that on Monday you contemplate killing Smith and clearly prefer the state of affairs that would result from killing him to the one that would result from not killing him. According to your theory, it would be “right” to kill Smith – you “ought” to do it. So you do, and you “get away with it”; the consequences are exactly what you anticipated. But by Friday you’ve learned a lot more about Smith and his family, and no longer find the state of affairs that has resulted from killing him nearly so appealing. In fact, you wish you hadn’t done it.

PB:
You’ve contradicted yourself. If the consequences [of killing Smith] were “exactly what (I) anticipated” then I would have anticipated this enhanced knowledge, and this regret, and I would not have killed Smith.
I said nothing about this enhanced knowledge being a consequence of your having killed Smith. Actually what I had in mind is that it was sheer coincidence. So there’s no contradiction.

Quote:
bd:
According to your theory, it is now “wrong” for you to have killed him. But surely this is a very odd way to use this kind of language? Doesn’t it make a lot more sense to say that on Monday you thought that killing Smith was the right thing to do, but by Friday, with your increased knowledge and understanding, you realized that it wasn’t?

PB
If my increased K&U had led me to the realization that there were negative consequences to my action that I hadn’t previously been aware of then, yes, I would agree with you that an action I thought was right for me was actually wrong for me.
So we’re agreed? By Friday your increased K&U has led you to the realization that the action you had thought was right was actually wrong? (To keep to your terminology, I suppose that I should put it this way: would you say that on Wednesday (after the murder but before acquiring the increased K&U) you thought that you should have done it, but by Friday you realized that you should not have done it?)

Quote:
You’re confusing the role of information that simply leads me to reach new conclusions and information that actually changes me... The latter actually creates its own pitfalls.
So your position is that you should beware of obtaining knowledge and information that might actually change your goals and values? You should deliberately choose to remain ignorant lest you learn something that might change you? And you call this rational?

Your attitude is reminiscent of the boy who said “It’s a good thing I don’t like spinach, because if I liked it I’d eat it, and I hate spinach!” But actually it’s even more irrational than that. It’s more like saying “I’m not going to try eating spinach. I’m sure I don’t like it, but if I tried it I might find out that I do like it, and then I’d eat a lot of it. And I’m sure I’d hate spinach.”

Let’s look at how your point of view plays out in real life.

1. Suppose that you are in the habit of mowing your lawn every Sunday afternoon. One day you visit the neighbors across the street, and get to know their child quite well. You become good friends. Eventually you discover that your habit of mowing the lawn on Sunday is causing him a good bit of distress (from the noise or pollen – whatever). Since you like him very well, you switch to mowing the lawn on Saturday, when he’s at his Grandma’s, even though this is somewhat less convenient for you.

Your analysis: Dumb! It was a mistake to get to know this boy because it changed your values. Because you now value the boy’s welfare and are aware of the effect your mowing was having on him, you changed your behavior in ways that impacted your old values negatively. Bummer!

2. You live in the antebellum South. You get to know some slaves and discover that, contrary to what you’ve been taught, they’re ordinary people like yourself, and are very unhappy about being slaves. After a time, you realize that you have come to hate the institution of slavery. So, at considerable risk and expense, you try to help some of them escape.

Your analysis: what an idiot! You should never have gotten to know those slaves. You could have lived out your life in the blissful illusion that slavery was a benign institution and that in any case the slaves were subhuman, so it really didn’t matter. Instead, you’re giving up valuable time and resources – severely impacting your “old” values – and risking your life to boot. What were you thinking?

3. You get to know a woman, fall in love, and get married. True, you now have deep satisfactions of a kind you didn’t even know existed before, but you also have a lot less money to spend on things you want, a lot less free time, and a lot less freedom in general.

Your analysis: Fool! How could you? You had to know that getting to know that woman might lead to this. Now your old values are being given short shrift; they’re being crowded out by all of these pesky new values that you’ve acquired.

My point is that this way of thinking is deeply, profoundly irrational. Acquiring new values is an important part of what we do as human beings. No one who has actually acquired new values based on an increase in knowledge and understanding wishes that he didn’t have them. Your point of view gives preference to values that are the product of ignorance over values that are the product of knowledge and understanding.

The final point is that, if a fully rational person knows that he would have a certain value if he had enough K&U, he will seek to act on that value. More concretely, suppose that you would choose to do X except for the fact that you happen to know that (for reasons you don’t fully understand) you would choose to do X' instead if you had enough K&U. I contend that under these conditions it is rational to choose X' and irrational to choose X. Thus, if you know that you would mow the lawn on Saturday if you knew enough about the negative effects you were causing by doing it on Sunday (even if you don’t know what these negative effects are), it is rational to mow the lawn on Saturday and irrational to continue mowing it on Sunday. If you know that you would hate slavery if you knew and understood enough about it, it is rational to oppose slavery and irrational not to. If you know that you would get to know a woman if you fully understood the consequences, it is rational to do it and irrational not to.

It seems to me that this principle is universally valid. Thus if your position is essentially that one should always do what is rational, in light of this principle you must change your definition of “A should do X” to “A would prefer X over all possible alternatives if he had enough knowledge and understanding of the consequences of all alternatives”.

It’s true, of course, that one does not ordinarily know what one would do if one had more K&U. But I contend that one of the major purposes of a moral system is to provide guidance on this. Thus, the point of “thou shalt not steal” is that in almost all cases, if you had sufficient knowledge and understanding of the consequences, you would choose not to steal – and therefore you should not – i.e., it is irrational to do so – even though you do not have this K&U. Of course, in this case you have only a pretty good assurance (rather than definite knowledge) of what you would do if you had enough K&U, but this doesn’t change the principle. The rational thing to do is not just to figure out what you want to do, or which consequences you prefer, but to make your best guess as to what you would want to do, or which consequences you would prefer, if you had enough K&U.

[ April 19, 2002: Message edited by: bd-from-kg ]
bd-from-kg
 
