FRDB Archives

Freethought & Rationalism Archive

The archives are read only.


Old 04-17-2002, 06:23 AM   #21
Veteran Member
 
Join Date: Mar 2002
Location: 920B Milo Circle Lafayette, CO
Posts: 3,515
Post

Quote:
Originally posted by turtonm:
I don't think any distinctions between practical-oughts and moral-oughts are defensible, because of the subjectivity of making a determination about what acts are moral and which are not. I do not see any exit from this web of subjectivity.
Distinctions are defensible if they refer to different things and everything said about those things is true.

All distinctions are subjective -- that is an unavoidable fact of language -- because language is an invention. There is no natural law governing which particular sets of squiggles on a page, or sounds, or flags, or dots and dashes, refer to which things.

But it is a mistake to confuse the subjectivity of language with the subjectivity of the thing to which the term refers. The subjectivity of the word 'atom' (and the fact that it originally meant 'thing without parts' and people later -- subjectively -- decided to use the term to refer to something that does have parts) does not make atomic theory subjective.

I understand your objection. You fear that in my using the term 'moral' I am sneaking in some parts of the definition used in common discourse that are not true of the things to which I refer with this term. It is a legitimate concern. But if I am doing this, then please identify the property that I am falsely attributing.

Besides, I handle this objection by saying, "if that is your fear, then call the things to which I refer 'moral repoints' or 'interpersonal relpoints' if it makes you feel more comfortable."

Like I said, language is subjective. We can call things whatever we like -- as long as we use our terms consistently.

Quote:
Originally posted by turtonm:
What reasons exist independent of the person who has them?
Perhaps not the most clearly stated sentence in my essay. But, as illustrated by example, the reasons that Smith has not to be shot and robbed are reasons that exist -- they are a part of the real world, nobody is denying their existence. And they are independent of the reasons that the assailant has for or against shooting Smith for his money.

I distinguish the moral-ought of the assailant's actions, which is based on all of the reasons that exist (whether they belong to Smith or the agent), from the practical-ought of the assailant's actions, which is based only on the assailant's reasons and ignores Smith's reasons.

[ April 17, 2002: Message edited by: Alonzo Fyfe ]
Alonzo Fyfe is offline  
Old 04-17-2002, 09:50 AM   #22
Veteran Member
 
Join Date: Jul 2000
Location: King George, VA
Posts: 1,400
Post

turtonm:

Quote:
Well, in your previous posts it seems you've decided that there is some "objective" or "default" position of "self-interest" which qualifies as a moral ought. Except that you have subtly shifted the goalposts here.
On this thread my only concern is to point out that the way people like tron and PB use moral language is so nonstandard, and specifically that it violates the logic of moral discourse so radically, as to be inherently misleading and confusing. I am not arguing here that there is an objective morality, only that the kind of “subjective morality” advocated by tron and PB cannot reasonably be called a “moral theory”. What PB, for example, is actually proposing is to use “ought” exclusively in the advisory sense (as in “You should change your car’s oil every 3000 miles”), which Alonzo calls the “practical-ought” sense, but to call this a “moral” sense of the word, or (what amounts to the same thing) to use it in this sense in contexts where the moral sense is clearly called for, expected, and understood. He also proposes to use “right” and “wrong” in a purely descriptive sense (i.e., “this act is wrong” means “this act violates some so-called moral code or other – which, by the way, I don’t subscribe to”).

I am not concerned here to dispute the position that moral statements do not express propositions; I am disputing that it is reasonable to use them to express the kinds of propositions that tron and PB use them to express, given the way these terms are used by almost everyone else. For example, I think it unreasonable to use “ought” and “wrong” (in moral contexts) in such a way that one can truly say “It would be wrong to do X, but you ought to do it”. This is a gross violation of the logic of moral discourse.

But finally, I have never suggested that “self-interest” has much to do with “moral ought”. It’s tron and PB who have done that.

Quote:
Now it is no longer a moral ought to screw people when you get the chance, but simply a blunt declaration on your part that it is nearly impossible to convince you not to screw people with impunity.
Wrong. It is nearly impossible to convince me that screwing people with impunity is never in my self-interest. But I have no intention of screwing someone even if I am absolutely certain that it is in my self-interest. And I believe that this is perfectly rational. Moreover, I don’t think that one must believe in an objective morality in order to believe that this is rational. One need only reject the unsupported claim that “rational action” is synonymous with “self-interested action”.

Quote:
I do not even see what this discussion is about; even if I put forth irrefutable evidence for an objective morality existing outside of our conversation, how would it stop you from killing Smith for the three dollars in his pocket?
Before producing “evidence for an objective morality” one would have to give a reasonably clear account of what it means to say “A ought to do X”. And IMHO any plausible account of what this means would have to have the property that, if A is sufficiently rational and understands what it means to say that he ought to do X, and knows that he ought to do X, he will do X. In other words, any valid account of objective morality must entail a motivation for “doing the right thing” that any sufficiently rational person would find compelling.

But since I’m not arguing here for the existence of an objective morality, this is pretty much beside the point.

Quote:
[By “A should do X”] I mean "It is in MY judgement of A's self-interest." I have no idea how A perceives her own self-interest; I only speak for myself.
If I understand you correctly, you are saying that by “A should do X” you mean “In my judgment it is in A’s self-interest to do X”. But I find it hard to believe that you really mean this. If A definitely prefers the consequences of doing Y to those of doing X, and would continue to prefer them no matter how much knowledge and understanding of these consequences he had, would you still say that he “should” do X if in your judgment the consequences of X are more in his self-interest than the consequences of Y? If so, I have no idea what you mean by saying that doing X is in A’s self-interest.

Quote:
Only if you convert opportunities to inflict harm with impunity into moral oughts.
I’m not the one who’s doing this; tron and PB are. This is exactly the maneuver that I object to.

Quote:
So far you have not offered a single argument that says I should obey your construction of my self-interest.
I don’t say that you should. But you seem to be saying that I should obey your construction of mine. Otherwise what do you mean when you say that “A should do X” means “In my judgment it is in A’s self-interest to do X”? (Unless, of course, I misunderstood you earlier.)

In any case, I don’t think that you should obey anyone’s construction of your self-interest – even your own. But what tron and PB are saying is that you should obey your construction of your self-interest.

Quote:
Further, you have offered no argument that says I am compelled to screw people whenever the opportunity presents itself.
Please try to get this straight. I am not “saying” any of these things; I am construing what tron and PB have said. And even tron and PB don’t say that you should screw people whenever the opportunity presents itself, but only that you should do it whenever you prefer the consequences of screwing them to the consequences of not screwing them, which is what they mean by “being in your self-interest”.

I also should note that PB definitely would not say that you are “compelled” to do so, only that it would be irrational not to. I’m not sure what tron would say on this point, but I suspect that he’d agree with PB.

Quote:
Just as you have averred that there is no way I can convince you not to take advantage of Smith -- back at you -- there is no way you can convince me that it is in my interests to kill Smith for the money in his pocket.
Ah, then perhaps we are merely saying the same thing in different ways after all. It’s quite clear that tron and PB believe that it is sometimes in their self-interest to knowingly harm other people, and that in those cases they ought to do it. And moreover, that regardless of what they perceive as being in their self-interest, if Jones perceives it to be in his interest in some cases to harm other people, then he ought to harm them in those cases. I disagree, saying that one shouldn’t harm other people even if one prefers the consequences of harming them to the consequences of not doing so, and express this by saying that one shouldn’t do it even if it’s in one’s self-interest. If I understand you, you’re agreeing that one shouldn’t do it, but express this by saying it’s never “really” in one’s self-interest to harm people, even if one prefers the consequences.

I would remind you at this point that the comment by Alonzo that you cited to kick off this thread was addressed to PB: “Second, it still seems, on your account, that a person may - and perhaps should - advance their own interests at the expense of others whenever they encounter a situation where they may do so with impunity.” There’s no question that PB does in fact say that such situations really do arise, and that when they do the person “should” (not just “may”) advance his interests at the expense of others. It seems reasonably clear that tron agrees with this. Apparently you don’t. In that case your quarrel is with tron and PB, not with Alonzo and me.

Quote:
No, it is never in my self-interest to knowingly harm others. That is my particular definition of my self-interest.
Good. Now if you could just persuade tron and PB...

The point is, morality isn’t all about you. What about those who have a different “definition” of self-interest? Do you say that they should harm others when they perceive it to be in their self-interest? If so, why all this harping on your definition of self-interest? Alonzo’s original comment wasn’t even addressed to you.

Quote:
Explain to me why I have to adopt your definition of my self-interest. On what grounds are you proposing this mysterious universal definition of "self-interest."
You don’t have to, and I propose no “universal” definition other than this: it’s in A’s self-interest to do X rather than Y if A prefers the consequences of X to those of Y.

Quote:
...the question is, given the massive benefits of cooperation, why should anyone ever defect?
No, the question is, why shouldn’t they? Look, the vast majority of the time it is not in my self-interest to take Continental Flight 757 to Cleveland. How does that imply that tomorrow, when for the first time in decades I have reasons for wanting to go to Cleveland, it will still not be in my self-interest to take this flight?

Similarly, it may well be that the benefits of social cooperation are so great in general that it is rarely in my interest to defect. But how does it follow that, on the rare occasions when it is in my interest to defect, I shouldn’t do so?
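The cooperate-or-defect exchange above is the logic of a one-shot prisoner's dilemma. A minimal sketch (the payoff numbers are illustrative assumptions, not anything stated in this thread) shows why "the benefits of cooperation are massive in general" does not by itself answer "why shouldn't I defect this once?":

```python
# One-shot prisoner's dilemma payoffs (illustrative values):
# each entry maps (my_move, their_move) -> my payoff.
PAYOFF = {
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"):    0,
    ("defect",    "cooperate"): 5,
    ("defect",    "defect"):    1,
}

def best_response(their_move):
    """Return the move that maximizes my payoff against a fixed opponent move."""
    return max(["cooperate", "defect"], key=lambda m: PAYOFF[(m, their_move)])

# Whatever the other party does, defection pays more in the one-shot case,
# which is exactly the worry raised above: the rare safe defection still pays.
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"
```

In repeated play with the same partners the calculation changes, which is one standard way of cashing out "the massive benefits of cooperation"; but that is precisely what a one-off opportunity to defect with impunity escapes.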

Quote:
turtonm:
A third problem is that it assumes, irrespective of other issues, that killing Smith has no effect on me.

bd:
Not so. All that’s needed is that the effect on me of that particular act does not outweigh the advantages of doing it.

turtonm:
BD, you seem to be under the impression that there is some objective way I can calculate costs and benefits.
Not at all. I assume that the “costs and benefits” will have to be “calculated” by the agent himself, and that we must simply accept his results. I meant only that, in order for killing Smith to be in my self-interest as I perceive it (which IMHO is the only reasonable way to define “self-interest”) all that is necessary is that I prefer the overall results of doing it, including the effects on me, to the overall results of not doing it.
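The notion of self-interest described here (the agent's own preference over overall outcomes, effects on the agent included) can be put as a toy sketch. The valuation function, outcome labels, and numbers below are hypothetical illustrations, not anything bd or turtonm proposed:

```python
def in_self_interest(valuation, act_outcome, alt_outcome):
    """True iff the agent's own valuation ranks the act's overall
    consequences (including effects on the agent) above the alternative's."""
    return valuation(act_outcome) > valuation(alt_outcome)

# A hypothetical agent whose valuation just sums weighted consequences;
# the weights are this agent's own, and another agent may weigh the very
# same outcomes differently -- there is no outside "objective" calculation.
def my_valuation(outcome):
    return sum(outcome.values())

# Harming Smith: small gain, large subjective costs for this particular agent.
act = {"gain": 3, "guilt": -10, "risk": -5}
refrain = {"gain": 0, "guilt": 0, "risk": 0}
assert not in_self_interest(my_valuation, act, refrain)
```

The point of the sketch is only that the comparison is internal to the agent: swap in a valuation with a zero weight on "guilt" and the same act comes out "in that agent's self-interest".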

Quote:
Further, you seem to be operating under the assumption that I have to hand you ONE ALL-ENCOMPASSING REASON not to take advantage of Smith...
You don’t have to give me any reason (or at any rate not a self-interested one) not to take advantage of Smith. But it appears that you have to give tron and PB a reason or reasons, and I seriously doubt (on the basis of my discussions with PB) that you can find any. Their position is that it is sometimes in their interest to do such things, and that when it is, it would be irrational not to do them. Since they both seem to be strongly committed to trying to act rationally, I can only conclude that either of them, if presented with a situation where it is in his interest (as he perceives it) to kill Smith, would kill him. Worry about them, not me.

Quote:
And if we take your swindling example, there are plenty of reasons aside from empathy not to do so.
There are also plenty of reasons to do so. Who are you to say that the “pro” reasons always outweigh the “con” reasons?

Quote:
In any case, I think taking a human life is a heinous act that will forever change me in a negative way, and so I will refrain from doing it.
All that you’re saying is that you personally would never find it in your self-interest to kill someone. Fine. Would you also never find it in your self-interest to steal, go to bed with someone else’s spouse, or lie under oath? If your answer to all such questions is “yes”, I can only conclude that you have very unusual values, which are shared by only a tiny minority. What about those who don’t share your admirable values? Would it be right for them to harm other people, to lie, steal, swindle, blackmail when they perceive it to be in their self-interest? If not, why not?
bd-from-kg is offline  
Old 04-17-2002, 11:57 AM   #23
Veteran Member
 
Join Date: Jul 2000
Location: King George, VA
Posts: 1,400
Post

Pompous Bastard:

Quote:
When I do say that an act is "wrong" I mean that it violates some normative ethical principle. This is roughly equivalent to how most people use the term.
Um, no. When most people say that an act is wrong they mean that you shouldn’t do it. (Ordinarily I wouldn’t put it that way because it looks tautological. But that’s the point; you deny that it’s tautological.) They do not mean, for example, that it violates the ethical principles that prevail in Pakistan. They don’t even mean that it violates the ethical principles most widely held in their own society. They mean that it violates their own ethical principles.

Quote:
Where I depart from most people, however, is that I do not hold that we always ought to obey normative ethical principles.
Yes, I remember now that you explained this in the past. But it’s really hard to keep track of all the weird nonstandard ways that various people use moral language once they depart from standard usage. That’s one reason why using these words in this kind of nonstandard way is inherently misleading and confusing.

You seem to be under the impression that in using these words in this way you are expressing a substantive disagreement with those who use them in the standard way. But you aren’t. To illustrate: suppose that I said “Where I depart from most people, however, is that I do not hold that merely because A is greater than B and B is greater than C, A must necessarily be greater than C.” Would you conclude that I had a substantive disagreement with those who say otherwise? Or would you conclude that I was using the phrase “greater than” in a nonstandard way?

Now when you say that you deny that we ought to obey normative ethical principles, you might conceivably mean that you deny that we ought to obey all such principles, even those you regard as invalid. But everyone denies this. So anyone reading this will naturally presume that you’re referring to normative ethical principles that you consider valid. But to say that you deny that we ought to obey normative ethical principles that you consider valid makes no sense; it is a gross violation of the logic of moral discourse, just as saying that you deny that, if A is greater than B and B is greater than C, A must be greater than C, is a gross violation of the logic of discourse about relative sizes. (Note: “Greater than” can mean different things in different contexts. But if the relationship involved is not transitive it is an abuse of language to use the term “greater than” to refer to it.) Most likely what you really mean is that you deny that there are any valid normative ethical principles. But in that case it would be far clearer if you just said so.

Quote:
I'm not sure how I can be any clearer.
I just explained how you could.

Quote:
Ordinary moral language assumes that one always "ought to" do the "right" thing.
I deny that assumption.
Once again, it isn't a matter of denying an "assumption". It's a matter of refusing to use moral language in a way that is consistent with the fundamental logic of moral discourse. It is not an "assumption" that if A is greater than B and B is greater than C, A must be greater than C; it's part of the common understanding of the appropriate use of the phrase "greater than". In the same way, it is not an "assumption" that one ought to do the right thing. This is part of the common understanding of the appropriate use of these words; in other words, part of the logic of moral discourse.

Quote:
I have chosen the latter and typed up an extensive post detailing the manner in which I use "ought to."
That's the problem with using such terms in such nonstandard ways. You have to use thousands of words (which may be read by perhaps a dozen or two people) to explain your idiosyncratic usage. Do you plan to refer everyone to this post every time you use these words in your own idiosyncratic way? What if everyone did this as a common practice? Communication would become impossible.

Quote:
I suppose to completely avoid abusing the language I could throw out moral language altogether and invent new terms out of whole cloth to describe my thoughts, but I'm sure that would cause an even greater degree of confusion.
Wrong. You wouldn't have to invent any new terms at all. There are perfectly good ways to say what you mean without hijacking moral language to do it. There's no need to mislead or confuse anyone at all.

Quote:
I'm not sure how denying that a moral theory creates any obligations that must be met constitutes inventing my own "personal, private language."
OK, let's take a simple example. Suppose you say to John, "If you loan me $5000 today, I promise to pay it back six months from now." John loans you the money. The six months elapse. You refuse to pay it back. John protests that you're morally obligated to repay him. You reply that he's talking nonsense; no one ever has a moral obligation to do anything. But, he says, it's wrong to extract money from someone by promising to repay and then breaking that promise when the time comes. You agree that it's "wrong", but deny that this implies that you shouldn't do it. But, he says, you ought to keep your promise. You reply that this is merely a claim on his part that it's in your interest to keep this promise. But you're the final arbiter of your self-interest, and you have decided, after due reflection, that it's in your interest not to repay.

I say that you've invented your own personal, private language, and that by using it you have deceived and defrauded John. The normal, accepted meaning of "I promise to do X" is that you are undertaking a moral obligation to do X. By denying that there is such a thing as a moral obligation you have made the word "promise" meaningless. But John, being unaware of this, loaned you the money on the understanding that you actually meant something when you said "I promise", and moreover that what you meant was what pretty much everyone means by it.

Let's take another example. Jerry keeps marijuana in his coat. One day he takes it to the Goodwill shop, neglecting to remove the weed. Jeff buys it. As it happens, the police find the marijuana and arrest Jeff. But Jeff is basically a lazy bum, while Jerry is an ambitious, hard-working fellow with a promising future. He asks you if you think he should confess that it was his marijuana and that Jeff knew nothing about it. You tell him that no, you don't think he should. Jerry feels better, because he is under the mistaken impression that you are making a moral judgment, but actually you were just saying that you thought it was in his interest to keep quiet and let Jeff take the fall. Oddly enough, if Jerry had asked you whether you thought it would be wrong to keep quiet, you would have said that you did. And if he had known what you meant by "should", "wrong", etc., he would never have bothered to ask. The questions that can be dealt with by moral language as you define it are simply not the ones that people generally use moral language to discuss. (More on this below.)

Now of course, you can avoid all such misunderstandings by carefully explaining your "moral philosophy" to each and every individual before using moral terms in his presence. But what's the use of these definitions if you have to do that? The point of choosing this definition rather than that is to facilitate communication, not to impede it. How are you facilitating communication by using these highly nonstandard definitions, which are such that other people cannot safely draw even the most obvious, trivial inferences (such as that if you say that doing something is "morally right", you will also say that they "should" do it, or that if you say that you promise to do something, you understand yourself to have undertaken a moral obligation)?

Thus, I can see no practical use for such definitions other than to mislead and confuse people. Perhaps that was not your intention, but if not you should reconsider using these words in this way, because that is inevitably the effect.

Quote:
It's not as though I'm keeping my views secret.
What percentage of the people who post to this forum are familiar with the way you use moral terms? What percentage of the general public? It’s not as though they’re being broadcast on CBS.

As I pointed out above, you have two choices, neither of them attractive. You can make sure that everyone who hears or sees you using moral terms understands the highly idiosyncratic way that you use them, or you can mislead and confuse people.

Of course, this is not a problem when you're using them in the course of explaining what you mean. But this is an empty exercise unless you then propose to actually use these terms that way in other contexts. For example, if the discussion is about whether the President should sign a certain bill, when you say that he "should" you will mean that it's in his interest to do so regardless of its effects on the country, or on you for that matter. If, on the other hand, you say that it would be "wrong" for him to do so, you will mean that signing it would violate some “normative ethical principles” that you don't consider valid. Similarly, if you say that people should use pooper-scoopers when they walk their pets, you will mean that it's in their self-interest to do so (a highly implausible proposition unless there's a law requiring it where you live). To make sure that you are properly understood, you'll have to take up a lot of other people's time explaining your personal, private moral language. And once they understand, it's not likely that they'll be much interested in what you have to say about what various people "should" do or about what's "right" or "wrong". Chances are that they know at least as much about prevailing "normative principles" as you do, and they really won't be interested in discussing whether it's in the interest of pet owners to let their pets make a mess on other people's lawns, or whether it will benefit the city council to increase the budget for road building, or whether it's in the interests of the owner of the factory across town to keep polluting. To say anything of interest, you'll have to abandon moral language entirely.

Quote:
I'm not entirely convinced that there are no situations in which an agent would not be better off to temporarily abandon rationality.
You cannot “temporarily” abandon rationality. If you abandon it you have embraced insanity. There’s no telling what you’ll decide to do later. But that’s a subject for another day.

[ April 17, 2002: Message edited by: bd-from-kg ]
bd-from-kg is offline  
Old 04-17-2002, 02:27 PM   #24
Contributor
 
Join Date: Jan 2001
Location: Barrayar
Posts: 11,866
Post

All that you’re saying is that you personally would never find it in your self-interest to kill someone. Fine. Would you also never find it in your self-interest to steal, go to bed with someone else’s spouse, or lie under oath? If your answer to all such questions is “yes”, I can only conclude that you have very unusual values, which are shared by only a tiny minority. What about those who don’t share your admirable values? Would it be right for them to harm other people, to lie, steal, swindle, blackmail when they perceive it to be in their self-interest? If not, why not?

Hmmm. I'm a little bewildered. If you insist we are actually agreeing... then we must be. But I certainly wasn't reading you that way.

Would it be right for them to harm other people, to lie, steal, swindle, blackmail when they perceive it to be in their self-interest?

No, it would not. That is where PB and I diverge. I do not think "self-interest," at least the way PB constructs it or the way it is constructed here, is the proper foundation for moral behavior. In fact I posted a question for PB on how his theory differs from the utility theory in economics, but the post disappeared.

I'd like to be fair and post my own "system," but I don't have one. Sorry, BD and PB.

Michael
Vorkosigan is offline  
Old 04-17-2002, 02:57 PM   #25
Veteran Member
 
Join Date: Jun 2001
Location: my mind
Posts: 5,996
Post

pug846: Do you really think it is irrational to kill someone merely because you are shutting them off as a source of knowledge?

Yes, that would be the primary reason, considering the situation of a pure cold-blooded killing carried out in the most supposedly rational and controlled circumstances (say, a state execution). Then there is the additional consideration of the irrationality of violence in itself, which causes an uncontrolled state of affairs and therefore less predictable results, predictability being an important feature of rationality.

What about cases where the person is mentally handicapped? If their IQ is sufficiently low, it stands to reason that it would be highly probable that you will never gain any bit of knowledge from them.

I admit that there is nothing objectively morally wrong in killing a mentally handicapped person, but take into consideration that this person would have to be severely retarded, so retarded as not to be able to communicate any humanly useful and meaningful information. However, most retarded people are not killed, because of empathy, much as pets are not killed.

Your argument boils down to a cost-benefit analysis: you would have to assign a value to the potential knowledge you might gain from someone you would kill. But if we are operating under such a model, how can you not take into consideration other factors? Are you going to claim that the value assigned to that potential will always necessarily be higher than the sum total of value for killing the person? Is that really a rational assumption? No matter how outlandish the individual situation, whatever we might gain from a person's death would have to be outweighed by the potential knowledge we might gain?

Again, what is important is not the value of the particular information gained, but the act of intentionally shutting down this potential source of practically unlimited useful information; that is what makes murder irrational and therefore immoral.

Would you say that it is always wrong to shut off potential sources of information, whether they be moral agents, books or any object in the world?

Yes, if you want to be rational, it is always wrong to intentionally shut off sources of useful information.

Let’s say you and I walk into a bar and have a drink. You unintentionally insult me. I get angry and push you down. I’ve initiated violence. Under what you described, whatever your reaction might be, it can’t be rational. If you were to walk away from the event or take a gun and blow my head off, both would be equally irrational? In the face of violence, aren’t there some responses that are in fact rational? In this situation, assuming that you value my friendship and enjoy talking with me, from these premises it would be a better strategy to try and comfort me, calm me down, etc.

In fact, you recognize that I am not being rational, and am therefore acting immorally, when I get a gun and blow your head off instead of being reasonable with you and trying to sort out our differences, or our severe breakdown of communication, mainly because there is an intent behind my actions.

My whole argument is to prove through reason that objective morality does in fact exist. It exists because we are human beings who survive through our use of reason. When we intentionally betray our use of reason we put our lives at risk, and therefore live less. Initiating violence by the murder of another rational human being is an intentional betrayal of reason because it is ultimately an irrational act no matter how much you want to falsely rationalize it.
99Percent is offline  
Old 04-17-2002, 04:04 PM   #26
Veteran Member
 
Join Date: Aug 2000
Location: Indianapolis area
Posts: 3,468
Post

99Percent,

Before anything else, I want to say that I really value your courteous discussion with me… I find it highly ironic that a person whose handle is Pompous Bastard is anything but.

I try. Thank you for your courtesy in return.

The question is not whether there is any consideration for proper values in the supposedly rational decision here, but rather if the decision is irrational or not to begin with.

I’m not sure that I see the distinction you’re making between the two. I think you’re saying that any decision to destroy a potential source of information is, by definition, irrational. An essential property of rational thought, then, is a priority for preserving information. Is this correct?

If so, I disagree, but I don’t think that disagreement important for my purposes. I view rational thought as a tool used by an agent to fulfill its ends, not as an end in itself. Even if the decision to destroy information were irrational, if it served one’s ends, it would still be the appropriate decision for one to make, in my view. Value fulfillment has a higher priority than rationality, in my scheme.

You may, of course, argue that rationality ought to have a higher priority than value fulfillment, but I’m not sure how you’d go about it.

I’m going to skip some of your remaining comments, as they are essentially further argument to establish that decisions that destroy information are irrational. As I’ve said, granting you that point for the sake of argument doesn’t do much to my theory, so I’m not particularly concerned with it. Let me know if I skip anything that you’d particularly like addressed.

I think I have gone over this a number of times, but in case I haven't: in a situation where violence has been initiated (violence meaning an immediate threat to your own life or the life of someone you hold dear), there can be no rational, and therefore no moral, decision.

Actually, this is the first time I’ve seen this sort of statement from you. I had previously thought that you held that violence was irrational and, therefore, immoral, unless one was responding to another’s initiation of violence, in which case a violent response was rational and, therefore, moral. Apparently, I’ve misinterpreted you. I suppose this makes sense, though, given that you hold all violence to be irrational. What about a nonviolent response? If you attack me and I don’t fight back, am I behaving rationally or has your initiation of violence robbed me of my ability to act rationally?

Your introduction of AI into this discussion is bizarre, to say the least…

Sorry. AI and related topics have been on my reading list lately, so it’s been on my mind, and the topic seemed relevant.

I think the problem resides in your definition of "limitless amount of information". This basically equates to "noise".

Yes. My point was that most of the information out there to be had is noise from the perspective of any given agent. In the overwhelming majority of cases, I do not need to know what Smith knows in order to make an informed decision, and access to his information would actually hinder my decision making process by forcing me to waste time filtering it for useful information.

As you note, this is more or less tangential to our conversation anyway.

An undeniable source of already filtered and processed, and therefore meaningful, information.

I disagree. This was my point when I brought up the AI topic. Smith’s information is not filtered for my use. It’s filtered for his use. Most of what he knows has meaning for him, but not for me. I don’t need to know his wife’s middle name, his son’s shoe size, or where he left his car keys. The vast majority of Smith’s meaningful information is noise to me.

Information that can be very concrete in our human understanding, say an invention of cold fusion, or a cure for Parkinson's, from which your grandmother is suffering.

Here, I think, you are veering away from whether a decision to destroy information is rational and into whether such a decision is in my best interests as I seek to fulfill my values, which include things like cheap energy and my grandmother's health. This is certainly a valid point but, to my mind, the infinitesimal chance that any given individual (assuming that individual is not actively engaged in cold fusion or Parkinson's research) might contribute to such an advance gives such considerations very low weight when I weigh my interests.
Pomp is offline  
Old 04-17-2002, 04:26 PM   #27
Veteran Member
 
Join Date: Aug 2000
Location: Indianapolis area
Posts: 3,468
Post

turtonm,

bd-from-kg: But that’s my point. why is self-interest self-evidently a more valid reason for doing something than other reasons? And if it isn’t, what sense does it make to erect a “moral system” on the assumption that it is?

Turtonm: You'll have to get Pompous to answer this one. My thinking doesn't work like his.


Briefly, because self-interest is the only motivating force that self-evidently exists and, as far as I can determine, the only motivating force that is observed to exist. It’s more valid than other reasons because it is the only reason that possesses the capacity to motivate agents to act, again, as far as I am able to determine.

An important point here is that, as you note, the vast majority of us define our self-interest to include the well-being of others. We don't require any special moral reason not to kill Smith and take his money because, by some combination of genetics and socialization, we simply find such killing abhorrent. I think this point often gets lost in the consideration of bizarre hypotheticals. The agent in such hypotheticals, who would feel no remorse after killing Smith, would have to be a clinical psychopath. Am I prepared to say that, from such a rare agent's perspective, killing Smith is the rational decision? Yes, without hesitation. I am also prepared to say, without hesitation, that from the perspective of almost any other agent in society, locking such psychopathic agents up where they cannot harm us is the rational decision.

In fact I posted a question for PB on how his theory differs from the utility theory in economics, but the post disappeared.

I’m sorry the post vanished. I’d welcome your criticism of my theory.

To answer your question, although I haven’t studied utility theory in depth, I don’t believe that my theory is all that different from it. If I have time later on, I’ll try to read up on the theory and answer your question more completely.
Pomp is offline  
Old 04-17-2002, 08:47 PM   #28
Veteran Member
 
Join Date: Aug 2000
Location: Indianapolis area
Posts: 3,468
Post

bd-from-kg,

I am going to quickly run through your conversation with turtonm and provide a few clarifications of my own moral view.

He also proposes to use “right” and “wrong” in a purely descriptive sense (i.e., “this act is right” means “this act violates some so-called moral code or other – which, by the way, I don’t subscribe to”).

You've misunderstood my position. This is understandable, as I have not yet reached the point in my description of my own moral theory where I explain fully what I mean by "right" and "wrong." Briefly, I believe that there is a moral code that all rational persons normally ought to abide by, and that that code can be discovered, in an approximate manner, through the contractarian model. When I say that an act is "right" or "wrong" I mean that it is right or wrong under the version of contractarianism that I subscribe to. In the vast majority of situations, an agent ought to do the right thing, as defined by the contract. There are situations, however, where I maintain that an agent ought to defect from the contract, and this is, apparently, the point you are stuck on.

To restate my position, when I say that an act is "right" I do not mean that it is in accord with some normative principle of some moral theory held by some unspecified individual, I mean that it is in accord with some normative principle in the moral theory that I subscribe to. I apologize if I have been unclear on this point in the past.

Good. Now if you could just persuade tron and PB...

He doesn't have to persuade me. Except in certain rare cases (self-defense being the most obvious) I don't consider harming others to be in my self-interest.

You don’t have to give me any reason (or at any rate not a self-interested one) not to take advantage of Smith. But it appears that you have to give tron and PB a reason or reasons, and I seriously doubt (on the basis of my discussions with PB) that you can find any.

What does this mean? In the real world, I have dozens of reasons not to kill Smith. You started this line of discussion by saying:

Quote:
It means that, if you can safely kill Smith to get his wallet and have no particular reason not to (such as empathy for Smith or his family), you should kill him.
You have stipulated, for the purposes of this hypothetical, that I have a reason to kill Smith and no reason not to kill him. Of course turtonm will be unable to give tron or me a reason not to kill him! You've set up the hypothetical so that no such reason exists!

[ April 17, 2002: Message edited by: Pompous Bastard ]
Pomp is offline  
Old 04-18-2002, 04:37 AM   #29
Veteran Member
 
Join Date: Mar 2002
Location: 920B Milo Circle Lafayette, CO
Posts: 3,515
Post

Quote:
Originally posted by bd-from-kg:
<strong>And IMHO any plausible account of what this means would have to have the property that, if A is sufficiently rational and understands what it means to say that he ought to do X, and knows that he ought to do X, he will do X. In other words, any valid account of objective morality must entail a motivation for “doing the right thing” that any sufficiently rational person would find compelling.</strong>
This goes to my comment that, sometimes, as there was with the concept of "atoms without parts", the language itself contains problems.

If you insist on this internalist definition of morality, it seems to me that you have two options.

Either you must hold that moral ought is intimately connected to an agent's desires, such that it is never the case that an agent ought not to do that which maximizes fulfillment of his own desires. (In other words, you are forced into PB's camp: since it is sometimes the case that a rational agent with a full understanding of the situation WILL shoot Smith for his money, morality must allow that there are cases where he SHOULD, or at least MAY, do so.)

Or, you must postulate a strange sort of entity that has magical influence over a person's actions independent of his desires -- some strange force of nature by which Smith's reasons not to be killed have the capacity to prevent the robber from killing Smith independent of the robber's desires. No such force of nature exists, so moral concepts would have as much real-world relevance as Santa Claus and the Easter Bunny.

These are the only two options that I see if one accepts internalism. PB's type of theory, or moral eliminativism.

I hold that morality is objective, but I distinguish between the reasons that exist for the robber not to kill Smith and the reasons that the robber himself has for and against killing Smith. Smith's reasons not to be killed exist; the robber cannot rationally deny their existence even if he does not care about them.

Nor are we justified in saying that Smith's reasons not to be killed for his money are morally irrelevant just because the robber happens not to care about them, or because they lack some magical power to influence the robber's action independent of what the robber does care about. Their existence is sufficient for moral relevance -- no other condition or criterion need be added.

[ April 18, 2002: Message edited by: Alonzo Fyfe ]
Alonzo Fyfe is offline  
Old 04-18-2002, 06:15 AM   #30
Veteran Member
 
Join Date: Mar 2002
Location: 920B Milo Circle Lafayette, CO
Posts: 3,515
Post

Objections to 'game theory' and contract moral theories.

For moral theorists who like to point out the benefits of cooperation and talk of "prisoner's dilemmas", I find those benefits to be greatly exaggerated.

For example, the standard iterated models in game theory contain a built-in assumption that all defections are immediately known to the other player, who can then retaliate. This exaggerates the costs of defection, which in turn exaggerates the relative benefits of cooperation.

If we change just this one rule and allow for the possibility of undiscovered defection, a strategy of "cooperate when you must, defect when you can" wins out over "tit for tat" for any player sufficiently good at determining when he can get away with defection -- particularly in a population of tit-for-tat players.

If we make game theory the model for morality, then "cooperate when you must, defect when you can" becomes our moral imperative.
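The claim about undiscovered defection can be sketched with a small simulation. (This is my own illustration, not anything from the thread; the payoff values, the detection probability, and the idealization that the opportunist knows in advance when a defection will go unseen are all assumptions.) An "opportunist" who defects exactly when the defection will go unnoticed is played against tit-for-tat:

```python
import random

# Payoffs for one round of the prisoner's dilemma (T > R > P > S):
# T = temptation to defect, R = reward for mutual cooperation,
# P = punishment for mutual defection, S = sucker's payoff.
T, R, P, S = 5, 3, 1, 0
PAYOFF = {
    ("C", "C"): (R, R),
    ("C", "D"): (S, T),
    ("D", "C"): (T, S),
    ("D", "D"): (P, P),
}

def opportunist_vs_tit_for_tat(rounds, detect_prob, rng):
    """Iterated PD in which a defection is observed by the opponent
    only with probability `detect_prob`.  The opportunist (idealized
    as a perfect judge of when he can get away with it) defects
    exactly when the defection would go unnoticed."""
    opp_score = tft_score = 0
    observed = "C"  # tit-for-tat starts by assuming cooperation
    for _ in range(rounds):
        tft_move = observed  # tit-for-tat copies the last *observed* move
        undetected = rng.random() > detect_prob
        opp_move = "D" if undetected else "C"
        o, t = PAYOFF[(opp_move, tft_move)]  # scored on the actual moves
        opp_score += o
        tft_score += t
        # An unseen defection looks like cooperation to tit-for-tat.
        observed = "C" if (opp_move == "D" and undetected) else opp_move
    return opp_score, tft_score

rng = random.Random(0)
N = 1000
opp, tft = opportunist_vs_tit_for_tat(N, detect_prob=0.7, rng=rng)
mutual = R * N  # what each of two tit-for-tat players would earn
print(opp, tft, mutual)
```

Every round the defection goes unseen, the opportunist collects T instead of R while tit-for-tat collects S, and tit-for-tat never retaliates because it only ever observes cooperation. So in a tit-for-tat population the opportunist out-earns mutual cooperation whenever any defection escapes detection, which is the objection in the post above.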

A similar objection can be raised against most types of contract theory. When an agent finds himself in a position where he can act contrary to the contract without activating the noncompliance clauses, the rational agent should do so.

One type of contractarian theory that avoids this objection (and one in which I find some merit) is one in which the hypothetical agents negotiate a contract that they will psychologically bind themselves to through guilt, pride, shame, and the like. But this is not a contract that they accept independent of their desires and aversions; it is a contract to determine what desires and aversions they should adopt.

Still, contract theory has a more serious problem. What relevance does what I would agree to in some imaginary world, where people negotiate a social contract, have to what I should do in this one? To obey such a contract sounds to me like saying, "Because in some alternative world where the building I am in is on fire I would flee the building as quickly as possible, I should flee this building in this world as quickly as possible even though there is no fire."

Which strikes me, ultimately, as quite irrational.

[ April 18, 2002: Message edited by: Alonzo Fyfe ]
Alonzo Fyfe is offline  