FRDB Archives

Freethought & Rationalism Archive

The archives are read only.


Old 03-29-2002, 08:59 PM   #21
Veteran Member
 
Join Date: Mar 2002
Location: 920B Milo Circle Lafayette, CO
Posts: 3,515
Post

Quote:
Originally posted by Malaclypse the Younger:
Moral questions frequently are answered by who has the largest army. This is a fact.
Moral questions typically are not understood as questions about who has the largest army. (That is, the content of the moral question is not often thought to be that of determining who has the greatest might.)

Yet individual subjectivism (or limited subjectivism) often does reduce discussion to exactly that content, because it leaves nothing to say against another person other than "I have the power to defeat you unless you do as I say."

Recall, I am a subjectivist. I argued against the idea of intrinsic values here and elsewhere. But there are different types of subjectivism, and individual subjectivism turns all discussion of right and wrong into a discussion of dominance and weakness -- leaving nothing more to be said on any issue.

[ March 29, 2002: Message edited by: Alonzo Fyfe ]
Alonzo Fyfe is offline  
Old 03-29-2002, 09:06 PM   #22
Regular Member
 
Join Date: Apr 2001
Location: nowhere
Posts: 416
Post

Well, there are ways to persuade others to change their values, or change the expression of their values.

1. You can show that their own values are internally contradictory.
2. You can show that their own values rest on assumptions that are false-to-fact.
3. You can show that others disapprove of their value; since social approval is usually itself a value, that is often enough to create a value conflict according to (1).
4. You can show that abandoning the fulfillment of a particular value is the best strategy for maximizing their overall value system.
5. You can explain a value, and hope they will admire it sufficiently to adopt it.

Or you can just coerce them.

[ March 29, 2002: Message edited by: Malaclypse the Younger ]
Malaclypse the Younger is offline  
Old 03-29-2002, 09:09 PM   #23
Veteran Member
 
Join Date: Aug 2000
Location: Indianapolis area
Posts: 3,468
Post

Alonzo Fyfe,

And how is this different from saying that under subjectivism, moral questions are ultimately nothing more than questions about who has the largest army -- that people may morally do whatever they have sufficient force (or cunning) to get away with?

It's a fine point, but a significant one. You're hung up, I think, on the notion that the sole concern is telling an agent what (s)he can and cannot do. I'm more concerned with what an agent ought to do in order to maximize his/her happiness. The fact that Genghis has the power to rape and pillage his way across the steppes does not necessarily imply that he ought to do it. It simply means that he can do it, and that he will do it unless there exists sufficient reason for him not to.

I think we can probably agree that a person who actively values conflict as an end in itself, to the extent that (s)he would be willing to forego all the advantages of social cooperation, is an aberrant case. Even warmongers such as our friend Genghis generally enjoy the fruits of cooperation within their own societies, and some set of moral principles is necessary to enable that cooperation. Further, it is the general historical case that nomadic raiding societies did not value conflict so much that they were unwilling to give it up when the opportunity to settle down and rake in "protection" money instead presented itself.

At any rate, nearly everyone values the fruits of cooperation, and those who don't generally can't be reasoned with anyway. Hence, it is nearly always the case, under a subjectivist system, that any given agent ought to act "morally," at least most of the time.
Pomp is offline  
Old 03-29-2002, 09:09 PM   #24
Veteran Member
 
Join Date: Mar 2002
Location: 920B Milo Circle Lafayette, CO
Posts: 3,515
Post

Quote:
Originally posted by Malaclypse the Younger:
Prisoner's Dilemma analyses are about moral strategy theory, which is obviously objective. It doesn't speak to moral value theory.
I would hesitate to call it a moral strategy theory. It concerns practical reason only -- and yields the conclusion that it is often practical to act as a moral person would act.

It says nothing about what one should do in rare circumstances where the practical thing to do would go contrary to morality.

And there is nothing in game theory that rules out the possibility of such circumstances arising -- cases where one person can screw another over without that other ever finding out about it or getting a chance to retaliate in any meaningful way.

Indeed, the basic assumptions used in the study of the prisoner's dilemma say that, if presented with such an opportunity, one should take it. There is no reason not to.
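The one-shot reasoning here can be made concrete. Below is a minimal sketch, using the conventional illustrative payoff numbers rather than anything from this thread, showing that defection is the best reply whatever the other player does:

```python
# Minimal one-shot Prisoner's Dilemma sketch. Payoffs use the conventional
# ordering T > R > P > S (5 > 3 > 1 > 0); the numbers are illustrative only.

# payoffs[(my_move, their_move)] = my score; "C" = cooperate, "D" = defect
payoffs = {
    ("C", "C"): 3,  # R: reward for mutual cooperation
    ("C", "D"): 0,  # S: sucker's payoff
    ("D", "C"): 5,  # T: temptation to defect
    ("D", "D"): 1,  # P: punishment for mutual defection
}

def best_reply(their_move):
    """Return the move that maximizes my payoff against a fixed opposing move."""
    return max(("C", "D"), key=lambda my: payoffs[(my, their_move)])

# Defection strictly dominates: it is the best reply to either opposing move.
assert best_reply("C") == "D"
assert best_reply("D") == "D"
```

With no prospect of retaliation, nothing in the payoff structure itself counsels cooperation; that is the point being made above.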

[ March 29, 2002: Message edited by: Alonzo Fyfe ]
Alonzo Fyfe is offline  
Old 03-29-2002, 09:26 PM   #25
Veteran Member
 
Join Date: Aug 2000
Location: Indianapolis area
Posts: 3,468
Post

Alonzo Fyfe,

How do you define the issues of "Cooperation" and "Defection" if one person obtains a positive value from the suffering of another? Assume that my end is to make sure that you end up with the lowest score possible. Then I have no reason to play the game by your rules.

You have to play by the rules, even if you have a non-standard end. You can't alter the rules of the game (by analogy, the rules of social interaction) on a whim.

The rational choice for me, once I realize that you are a consistent Defector, is to stop playing the game with you. In real-world terms, this means that either I will terminate my relationship with you or, if the circumstances of your "Defection" are extreme enough, I will contact a law-enforcement agency, or otherwise engage in forceful coercion.

Or what if I think that it is sick or perverse for you to have a score of "5" in a particular iteration of the game -- that you having such a score offends me and offends my God?

Again, the rational response is to stop playing the game with you, at least in the circumstances in which you have demonstrated that you find certain scores perverse.

Or, what if something that you think has a score of 5 really, in my view, has a score of 1 for you -- and something else has a score of 5? And so I hold that you suffer from some perversity -- some defect in sensibility. Out of generosity, I prevent you from obtaining what you desire because I believe that some other alternative is better "for you"?

If you behave in a bizarre and unpredictable manner while playing the game, again, it is rational for me to stop playing the game with you.

Do you see a theme developing here?

Granted, the PD is a rather loose analogy for most social situations, but it makes the basic case. If you pursue unusual values in a manner that causes you to play the game so that my score is low, I will bow out of the game. You lose whatever benefit you may have derived from social interaction with me. If you behave similarly in your interactions with others, you will eventually lose all the benefits of social interaction.
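The "stop playing the game" response described above can be modeled directly. In this illustrative sketch (the class, threshold, and partner names are assumptions made up for the example, not anything from the thread), an agent tracks each partner's defection rate and refuses further interaction once it crosses a tolerance:

```python
# Sketch of the exit response: refuse further games with consistent defectors.
# The 0.5 tolerance and the bookkeeping scheme are illustrative assumptions.
from collections import defaultdict

class ExitingPlayer:
    def __init__(self, tolerance=0.5):
        self.tolerance = tolerance
        self.defections = defaultdict(int)  # partner -> defections observed
        self.rounds = defaultdict(int)      # partner -> rounds played

    def will_play(self, partner):
        """Play with strangers; refuse partners who defect too often."""
        n = self.rounds[partner]
        return n == 0 or self.defections[partner] / n <= self.tolerance

    def record(self, partner, their_move):
        self.rounds[partner] += 1
        if their_move == "D":
            self.defections[partner] += 1

me = ExitingPlayer(tolerance=0.5)
for move in ["C", "D", "D", "D"]:
    me.record("genghis", move)
assert not me.will_play("genghis")  # 3/4 defections: no more games with him
```

A defector who exhausts every partner this way ends up exactly where the post says: outside the benefits of social interaction.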
Pomp is offline  
Old 03-29-2002, 09:39 PM   #26
Regular Member
 
Join Date: Apr 2001
Location: nowhere
Posts: 416
Post

Alonzo Fyfe

Quote:
I would hesitate to call it {the Prisoner's Dilemma} a moral strategy theory. It concerns practical reason only -- and yields the conclusion that it is often practical to act as a moral person would act.
That's what I mean by moral strategy theory.

Quote:
It says nothing about what one should do in rare circumstances where the practical thing to do would go contrary to morality.
What is practical is practical only in the sense that it fulfills your values. Practicality is meaningless in the absence of goals, which are moral values. So there are, by definition, no logically possible circumstances where it is "practical" to contravene one's own moral values.

It is logically possible only that there are circumstances where one's own moral values are internally contradictory, or too vague to provide sufficient guidance to determine what is or is not practical.

Quote:
And there is nothing in game theory that rules out the possibility of such circumstances arising -- cases where one person can screw another over without that other ever finding out about it or getting a chance to retaliate in any meaningful way.
It should be noted that what constitutes "screwing another over" is a matter of (subjective) opinion; there is no objective definition for this term. And generally, that determination is made according to the moral beliefs of the victim, not the perpetrator.

Quote:
Indeed, the basic assumptions used in the study of the prisoner's dilemma say that, if presented with such an opportunity, one should take it. There is no reason not to.
That is correct. There isn't a reason not to, other than that you personally would disapprove. However it is stipulated that such a person would not value your approval.

You are noting a fact. It is a fact that if someone has the power and desire to do something, they will probably do it. Whether you like that fact is a matter of subjective opinion.

[ March 29, 2002: Message edited by: Malaclypse the Younger ]
Malaclypse the Younger is offline  
Old 03-29-2002, 10:09 PM   #27
Contributor
 
Join Date: Jan 2001
Location: Barrayar
Posts: 11,866
Post

And there is nothing in game theory that rules out the possibility of such circumstances arising -- cases where one person can screw another over without that other ever finding out about it or getting a chance to retaliate in any meaningful way.

Well of course. Game theory says that cooperation is more common when participants will interact in the future. In any case where future interactions are unexpected, where there are great disparities in information or power, etc., game theory says that one party will tend to screw the other. That is in fact what we see in real life.
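That tendency is easy to reproduce in a toy model. The following sketch uses standard textbook assumptions (tit-for-tat versus always-defect, conventional payoffs), not anything from this thread, to show the effect of an expected future:

```python
# Toy iterated Prisoner's Dilemma. PAYOFFS[(a, b)] gives (a's score, b's score).
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    # Cooperate first, then copy the opponent's previous move.
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strat_a, strat_b, rounds):
    """Play `rounds` iterations and return the two cumulative scores."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)
        pa, pb = PAYOFFS[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

# With an expected future, mutual conditional cooperation far outscores
# mutual defection over the same number of rounds.
assert play(tit_for_tat, tit_for_tat, 10) == (30, 30)
assert play(always_defect, always_defect, 10) == (10, 10)
```

As the post says, this describes tendencies; nothing in the model tells a player what she ought to value.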

Indeed, the basic assumptions used in the study of the prisoner's dilemma say that, if presented with such an opportunity, one should take it. There is no reason not to.

I do not believe this is correct. There is nothing normative about the Prisoner's Dilemma. All it can tell you is what the tendencies will be, or how behaviors might have arisen.

I'm not sure anymore what your objection is. Is it that you think subjectivism has no basis? Or that it is impotent? Or what?

Michael

[ March 29, 2002: Message edited by: turtonm ]
Vorkosigan is offline  
Old 03-30-2002, 04:55 AM   #28
Veteran Member
 
Join Date: Mar 2002
Location: 920B Milo Circle Lafayette, CO
Posts: 3,515
Post

Quote:
Originally posted by turtonm:
I'm not sure anymore what your objection is. Is it that you think subjectivism has no basis? Or that it is impotent? Or what?
My objection is not against subjectivism, because I am a subjectivist. My objection is to a type of subjectivism (individual subjectivism) that allows each person to choose his or her own morality. My objection is that this is the same as saying that there is no morality at all -- that individual subjectivism eventually reduces to "You do what you want. I'll do what I want. And if you want to do something I don't like, then we'll see who can raise the largest army."

Ultimately, individual subjectivists are moral eliminativists.

The problem with game theory as a way around this is that, while it may provide a good descriptive account of how cooperation evolved, taken as a prescriptive theory its presumption that each person always seeks to maximize his or her own score makes it obligatory for participants to make others worse off whenever they can actually benefit from doing so -- whenever the other party either will not know whom to retaliate against, or could not retaliate effectively even if they did know.
Alonzo Fyfe is offline  
Old 03-30-2002, 07:33 AM   #29
Regular Member
 
Join Date: Apr 2001
Location: nowhere
Posts: 416
Post

Alonzo Fyfe

Quote:
My objection is not against subjectivism, because I am a subjectivist. My objection is on a type of subjectivism (individual-subjectivism) that allows each person to choose his or her own morality.
How do you disallow a person from choosing his own morality? You simply cannot make me think a thought I do not choose to think. Subjectivism is fundamentally personal. This is simply a fact, and the best you can do is state that you individually-subjectively dislike this fact.

Quote:
My objection is that it is the same as saying that there is no morality at all -- that individual subjectivism eventually reduces to "You do what you want. I'll do what I want. And if you want to do something I don't like then we'll see who can raise the largest army."
According to your definition, it might be true that there is indeed no morality at all, just like under certain definitions there is no god at all, however much one might desire the opposite.

Saying that a particular position does not provide the answer you individually-subjectively desire is not an argument against that position.

Quote:
Ultimately, individual subjectivists are moral eliminativists.
I am not sure what you mean by moral eliminativism.

If you mean that the subjectivist asserts that there is no objective definition of what "morality" means, that's just linguistic intersubjectivism (language means what we agree it means; there is no objectively correct language), an obviously true position.

If you mean that there are no objective definitions of what is "good" under moral subjectivism, that's an inherent feature of moral subjectivism.

If you mean that it's logically contradictory for a moral subjectivist to object to any behavior, that's simply false.

Quote:
The problem with game theory as a way around this is that, while game theory may provide a good descriptive account of how cooperation evolved, if it is taken as a prescriptive theory then its presumption that each person always seek to maximize their own score makes it obligatory on participants to make others worse off whenever they can actually benefit from it -- when the other either will not know who to retaliate against or could not retaliate effectively if they did know.
Firstly, game theory is prescriptive only to strategy. It simply assumes that there are certain values which can be represented by scalar or vector quantities. If your values permit maximization by minimizing the values of others, then game theory prescribes a particular strategy that will indeed result in the minimization of the values of others. Indeed if you are hostile to the fulfillment of the values of others, then game theory will prescribe strategies that always (or almost always) minimize the values of others.

If you directly value the maximization of the values of others (partial or complete altruism), then game theory cannot, by definition, prescribe a course of action that minimizes the values of others, because such a strategy would thus minimize your own value. Under this value set, game theory by definition will always prescribe strategies to maximize the values of others as a consequence of maximizing your own values.

If you are neutral to the maximization of the values of others, the situation is more complex. The effects of game theory prescribed strategies on the fulfillment of the values of others depends strongly on your other personal values. It is possible that, while you do not value others' fulfillment per se, the strategic way to fulfill your other values might entail fulfilling others' values.

Indeed we can categorize value sets according to the presence or absence (or rather, relative strength) of the value held by the individual with regard to others' values. We will arbitrarily label these various sets as stances:
  • hostile: the individual directly values the minimization of others' values.
  • friendly: the individual directly values the maximization of others' values.
  • neutral/hostile: the individual has no direct value with regard to others, but the fulfillment of his own values tends to entail the minimization of others' values
  • neutral/friendly: the individual has no direct value with regard to others, but the fulfillment of his own values tends to entail the maximization of others' values
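As an illustration only, the two "direct" stances and the neutral baseline can be written as utility functions over the pair (own payoff, other's payoff). The weights here are arbitrary assumptions chosen just to separate the cases; the indirect neutral/hostile and neutral/friendly stances would additionally depend on how the environment couples the two payoffs:

```python
# Illustrative stance functions over (own_payoff, other_payoff).
# The linear weights are assumptions for the sketch, not from the post.

def hostile(own, other):
    return own - other  # directly values minimizing the other's payoff

def friendly(own, other):
    return own + other  # directly values maximizing the other's payoff

def neutral(own, other):
    return own          # no direct value with regard to the other

# A neutral agent ranks (5, 0) and (5, 5) identically; the others do not.
assert neutral(5, 0) == neutral(5, 5)
assert hostile(5, 0) > hostile(5, 5)
assert friendly(5, 5) > friendly(5, 0)
```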

Indeed, an individual might have different stances under different circumstances and contexts; a person might even employ more than one stance at the same time!

For instance, if I am playing a competitive game (e.g. Chess, Go, Parcheesi) with someone, I assume both he and I value winning. In order to win, the other person must lose. Therefore I adopt a hostile or neutral/hostile stance wrt my moves: I want to win (and maximize my own value), I want the other to lose (and minimize his value).

Even though I want to win, I also want to know my inferiority/superiority at the game itself (and not just achieve a win or suffer a loss). Therefore I am neutral/friendly with regard to following the rules. If I cheat, I fail to fulfill my own value of knowing my relative ability at the game. So the fulfillment of my own value directly, irrespective of the other person's value, tends to maximize the fulfillment of his value, presuming he also wants to know his relative ability.

Furthermore, we also enjoy playing a competitive game, for our mutual benefit and enjoyment, win or lose. We want to use the game to establish or reinforce our friendship, and I get genuine value from pleasing other people with a game played to the maximum of our abilities. In this regard, I have an explicitly friendly stance with regard to not only following the rules, but observing the etiquette and rituals of the game.

So a single activity can employ multiple stances on different levels of interaction.

Now it is an observed fact that different people have different stances with regard to each level of the game.

For instance, someone might have a friendly stance with regard to the game-play. For such a person, it is thus rational to intentionally lose. Again it is an observed fact that some people actually do this. I have an opinion about such a stance (I disapprove of it, and I won't play a game with such a person), but, given the value, it would be irrational for such a person to play to win.

Another person might have a hostile or neutral/hostile stance towards the rules. It is rational for such a person to cheat if he believes he can get away with it. Again, I have an opinion about such a stance (again, I disapprove), but it is actually irrational for such a person not to cheat if he feels he can do it undetected. Of course, that person knows that he is not omniscient, and cannot know for certain his cheating will actually go undetected, therefore strategically he must weigh the non-zero risk of discovery and the consequent minimization of other values (i.e. once exposed as a cheater it is likely few will play with him in the future). Again, it is an observed fact that some people do indeed behave in this manner.

Game theory does not prescribe any particular stance. It merely prescribes strategies for fulfilling an arbitrary stance vs. another arbitrary stance.

But there is no objective way to compare the stances themselves. They simply exist as facts of particular individuals. The best we can do is hold opinions about various stances under various circumstances.

Indeed, under certain circumstances, I have a negative opinion about friendliness. When I play a game, I do not want a person who has a friendly stance towards winning, because that stance undermines the fulfillment of my "higher-level" value of knowing if I am actually better or worse than that person at that game. I insist upon an opponent who has a hostile stance towards winning or losing. But other people are different. Some might actually enjoy having an opponent they know will intentionally lose to them. Again, there is no objective way to compare these values; one can note only that they are different.
Malaclypse the Younger is offline  
Old 03-30-2002, 11:47 AM   #30
Veteran Member
 
Join Date: Mar 2002
Location: Planet Lovetron
Posts: 3,919
Post

Interesting stuff, guys. I apologize for my prolonged absence.

ReasonableDoubt:

Three questions for you:

1) Historically, which of the religious verses that you cite on the first page came first?

2) Who exactly is Hillel?

3) I had always been led to believe that the Confucian rendering of this philosophy was "Do NOT do unto others as you would NOT have them do unto you." Is the passage you quoted an accepted interpretation?
luvluv is offline  
 
