FRDB Archives

Freethought & Rationalism Archive

The archives are read only.


Go Back   FRDB Archives > Archives > IIDB ARCHIVE: 200X-2003, PD 2007 > IIDB Philosophical Forums (PRIOR TO JUN-2003)
Old 01-30-2002, 08:22 PM   #21
Veteran Member
 
Join Date: Sep 2000
Location: Massachusetts, USA -- Let's Go Red Sox!
Posts: 1,500
Post

Quote:
Originally posted by tronvillain:
<strong>God Fearing Atheist:


At most that means individual strategies must take into account the strategies of others. In a single encounter, if one can predict the opponent will cooperate, then defection could be a "superior" strategy to cooperation.

[ January 30, 2002: Message edited by: tronvillain ]</strong>
The argument given for that position is as follows: let u equal the utility gained by the actor if all individuals act on individual strategies, u' equal the utility gained should all act cooperatively on a joint strategy, and u'' equal the utility should everyone choose a joint strategy but himself. So suppose he adopts a policy of straight maximization (that of one who seeks to maximize his utility using individual, as opposed to joint, strategies). If others base their actions on a joint strategy, he expects utility u''. If he expects others to act on individual strategies, so does he, and expects u. If the probability that others will base their actions on a joint strategy is p, then his overall expected utility is [pu'' + (1 - p)u]. And suppose instead he adopts constrained maximization. If others base their actions on a joint strategy, so does he, and expects utility u'. If others act on individual strategies, so will he, and expects u. In this case, his expected utility will be [pu' + (1 - p)u]. Since u'' is greater than u', he should adopt straight maximization.

This would be true if the probability of others basing their actions on a joint strategy were independent of one's own disposition. But it is not: only those disposed to keep agreements are suitable partners for cooperation. Thus, constrained maximizers have options open to them that straight maximizers simply don't.
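The Foole's comparison, and the rebuttal, can be sanity-checked numerically. A minimal Python sketch, with payoff values assumed purely for illustration (any values satisfying u &lt; u' &lt; u'' would do):

```python
# Assumed payoffs (illustrative only), ordered u < u' < u''.
u = 0.4         # everyone acts on individual strategies
u_prime = 0.7   # everyone cooperates on a joint strategy (u')
u_dprime = 0.9  # everyone cooperates but the actor defects (u'')

def straight_max(p):
    """Expected utility of straight maximization: pu'' + (1 - p)u."""
    return p * u_dprime + (1 - p) * u

def constrained_max(p):
    """Expected utility of constrained maximization: pu' + (1 - p)u."""
    return p * u_prime + (1 - p) * u

# Holding p fixed, SM always beats CM -- the Foole's conclusion:
for p in (0.25, 0.5, 0.75):
    assert straight_max(p) > constrained_max(p)

# But if CMs will only cooperate with fellow CMs, the SM faces p = 0
# while the CM does not, and the comparison reverses:
assert constrained_max(0.75) > straight_max(0.0)
```

The rebuttal is the last line: p is not independent of one's own disposition.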
God Fearing Atheist is offline  
Old 01-30-2002, 08:37 PM   #22
Veteran Member
 
Join Date: Sep 2000
Location: Massachusetts, USA -- Let's Go Red Sox!
Posts: 1,500
Post

Quote:
Originally posted by hedonologist:
<strong>(edited to add the sentence in bold)

What about people who practice forms of employment involving aesthetics or porn, have no relatives as beneficiaries, and aren't politically active, etc, so their only basic effect on society is trading aesthetics for money and money for what they need? Would watching one of them die be something more than an "offence to the palate"? How about pro football players, if all they do is provide a sort of aesthetic image of a game on a TV screen? Musicians? What music can stir the soul of some people (or induce production of certain hormones and brainwaves, if that is what you value) so much as those little bundles of cute?

We would have to consider what is MORE utilitarian, saving the baby or not, and see how much effort it is "worth" to whoever holds the utilitarian value and is considering the question, to try to accomplish either the killing or the saving. Similarly, someone could benefit from a fridge MORE than a person would benefit from destroying a fridge for no reason, so doesn't a fridge have "rights" in the utilitarian sense? (hed cries, "VCRs are people too! Doesn't anyone care about them?") What about games (babies are also interactive)?

A lot of pain and maybe loss of work has already been "invested" in a baby who made it out of the womb-- the pregnancy and labor. Now they are autonomous and could be given or sold, without harming an unwilling mother. If you consider the above employees/people or products to be valuable wouldn't this mean the babies at least have a similar value to "society", if someone would want to adopt them even for just a little while?

[ January 30, 2002: Message edited by: hedonologist ]</strong>
Hedon, that entire post made absolutely no sense to me.
God Fearing Atheist is offline  
Old 01-30-2002, 08:50 PM   #23
New Member
 
Join Date: Jan 2002
Posts: 3
Post

Quote:
Originally posted by Polycarp:
<strong>

While you may simply think of these people as “Johnny Brain-damaged”, I happen to think they have rights worth protecting just as much as your own. Would you take a bat to the head of a severely “Johnny Brain-damaged” individual and beat the hell out of his brains only if he was unable to reciprocate such an action?
</strong>
I believe that Polycarp answers his own query in this passage. The social contract is not only about being beaten up if you steal from people or murder them. It is more than that.

I will illustrate with an example. Polycarp believes that invalids have rights. Suppose one day, he is walking along and notices GFA beating the hell out of some brain dead child with a baseball bat. Polycarp is going to have some particularly strong feelings about this, which will have a particular bearing on his future conduct towards GFA...

The point is that the social contract is about what is most advantageous to the individual. Is GFA going to gain a great benefit from beating up invalids? Probably not. But he has much to lose, if all those who feel as Polycarp does (and let's face it -- that's most people) refuse to associate with him any more.

Social contract, in my understanding, is concerned with interpersonal relations. A big part of these relations is understanding that not everyone is going to be your kind of contractarian! However, if we WANT to deal with these people, we have to accept certain adjustments in our behavior. There is a certain disposition we must adopt. What that disposition might be depends mostly on what sort of people you're surrounded by, and what sort of dispositions THEY'VE adopted...
Freedom's Minion is offline  
Old 01-30-2002, 09:15 PM   #24
Veteran Member
 
Join Date: Oct 2000
Location: Alberta, Canada
Posts: 5,658
Post

God Fearing Atheist: Nothing you said contradicts the statement you quoted. Try harder.
tronvillain is offline  
Old 01-31-2002, 07:09 AM   #25
Banned
 
Join Date: Jul 2001
Location: South CA
Posts: 222
Post

(Edited to change "utilitarian 'ethic'" to just "'ethic'", and another thing. The edited parts are in bold.)
Quote:
Originally posted by God Fearing Atheist:
Hedon, that entire post made absolutely no sense to me.
Hehe. Sorry, I figured after I posted it I should have started with how I was interpreting your words. In this quote:
Quote:
Originally posted by God Fearing Atheist
If moral duties arise from the rationality of individual utility maximization, from the realization that your fellows can harm you if you them, we must also conclude that there are no moral duties toward infants.
I was basically inferring that you are suggesting an "ethic" where the way a person is treated is determined basically according to their power to repay you, i.e. "if your fellows can harm you, you had better be nice to them (assuming you don't want to be harmed yourself, etc)". And I would assume that would mean "if your fellow helps you, you had better be nice to them", too.

Then in this quote:
Quote:
Originally posted by God Fearing Atheist
The pain we feel when an infant is abused or killed is not an actual "pain"; it is not an act of force against us. It is, really, the statement of an aesthetic preference, a sign that our palate has been offended in some way.
Babies only provide an "aesthetic" contribution, thus you say that only our palate is offended, correct?

You know, many people could die without exactly causing me a great deal of suffering, other than the empathy (but heck, I kinda enjoy that although it does persuade me to try to save them). It seems basically the only effect many people have on society as a whole, is "merely aesthetic". What more than *our* ("our" not including the aesthetic contributor like you don't include the baby's pain in your equation) aesthetic preferences are offended if an adult who contributes nothing more than aesthetics, is killed? (That is meant as a question not a statement.)

[ January 31, 2002: Message edited by: hedonologist ]
hedonologist is offline  
Old 01-31-2002, 02:21 PM   #26
New Member
 
Join Date: Jan 2002
Posts: 3
Post

Quote:
Originally posted by hedonologist:
<strong>(Edited to change "utilitarian 'ethic'" to just "'ethic'", and another thing. The edited parts are in bold.)

Babies only provide an "aesthetic" contribution, thus you say that only our palate is offended, correct?

You know, many people could die without exactly causing me a great deal of suffering, other than the empathy (but heck, I kinda enjoy that although it does persuade me to try to save them). It seems basically the only effect many people have on society as a whole, is "merely aesthetic". What more than *our* ("our" not including the aesthetic contributor like you don't include the baby's pain in your equation) aesthetic preferences are offended if an adult who contributes nothing more than aesthetics, is killed? (That is meant as a question not a statement.)

[ January 31, 2002: Message edited by: hedonologist ]</strong>
This is a good point and I have to agree. Even if a person's pain over the death of infants is merely an "aesthetic preference", that is no reason to dismiss that preference as unimportant. The preferences of individuals -- all their preferences -- are what influence their conduct towards me. If they prefer not to associate with a baby killer, I could find myself as vulnerable as a baby if I proceed to slaughter infants left and right.
Freedom's Minion is offline  
Old 01-31-2002, 07:32 PM   #27
Veteran Member
 
Join Date: Sep 2000
Location: Massachusetts, USA -- Let's Go Red Sox!
Posts: 1,500
Post

Tron:

You read the argument backward. In my post, I said that CMs will only cooperate with those similarly disposed. Obviously, given those circumstances, SMs lose... it doesn't matter if they're great at picking out CMs, as CMs will always defect against them.

But your *sort* of objection can be rephrased. Hobbes's Foole, or Machiavelli's Prince, under the weight of the argument, might concede generally, but claim that the truly prudent utility-maximizer won't fully adopt CM... only make it appear to others that he has. This sidesteps your specific objection, but deserves serious consideration in its own right.

Ideally, all actors are "transparent", that is, their dispositions are known by all, and known by all to be known. Deception, in ideal situations, is impossible. But of course, this is a much stronger position than the facts of the matter warrant. We can downgrade this a bit, and submit that real-world people are "translucent": their dispositions can't be known with certainty, but can be guessed at better than by flipping a coin. If actors are translucent, then CMs sometimes fail to identify each other, and act non-cooperatively. They will sometimes fail to detect SMs, and get taken advantage of. Translucent CMs must then expect to do less well than transparent CMs, and translucent SMs better than transparent SMs, so it's not necessarily always rational to adopt CM. This is the Foole's position.

But is this true? Even assuming translucency, do SMs come out on top? To illustrate, let's be clearer about what's being said. A CM can expect the utility u' unless she (1) identifies and cooperates with another CM, or (2) is taken advantage of by an SM. The probability of (1) is the combined probability that she (a) comes into contact with a CM, r, and (b) that they recognize each other as such, p; so rp. If both happen, she gains (u'' - u') over the non-cooperation utility of u'. So the effect of (1) is to increase her expected utility by rp(u'' - u'). The probability of (2) is the combined probability that she happens upon an SM, 1 - r, and that she fails to recognize him but is recognized herself, q; so (1 - r)q. In that case, she loses the non-cooperation expectation of u' and gets nothing, 0. So the effect of (2) is to reduce her utility by (1 - r)qu'. Taking both possible outcomes together, she expects u' + rp(u'' - u') - (1 - r)qu'.

On the other hand, we have the SM. He can expect u' unless he exploits a CM, where the probability is that of interaction, r, and of recognizing the CM without being recognized himself, q; so rq. If both hold, he gains (1 - u') over his non-cooperative gain u' (normalizing the exploitation payoff to 1), increasing his expectation by rq(1 - u'). So this SM expects, in full, u' + rq(1 - u').

So, it's only rational to adopt CM if p/q is greater than (1 - u')/(u'' - u') + [(1 - r)u']/[r(u'' - u')] (the first term relating the gains made by defection and by cooperation, the second weighing in the probability of encountering each type). We see, then, that it is rational to adopt CM only to the extent that the ratio of p to q exceeds this threshold. Nearly as important, the ratio of p to q increases as CMs increase, making CM increasingly more rational.

So suppose the population were divided 50/50, one half being CMs, the other SMs. Suppose CMs can expect successful cooperation between themselves 2/3 of the time, and to detect and defect against SMs 4/5 of the time. Even if the number of CMs is on the low side, it's still going to be rational to adopt CM.
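Those formulas are easy to sanity-check in code. A sketch with illustrative numbers (u1 stands for u', u2 for u''; the specific values of r, p, q, u', u'' are my assumptions, not figures from the thread, and the exploitation payoff is normalized to 1):

```python
# Assumed values: u' = non-cooperation, u'' = mutual cooperation, 1 = exploitation.
u1, u2 = 0.5, 0.8   # u', u''
r = 0.5             # probability a random partner is a CM
p = 2.0 / 3         # probability two CMs recognize each other

# The threshold from the argument: CM is rational iff p/q exceeds this.
threshold = (1 - u1) / (u2 - u1) + ((1 - r) * u1) / (r * (u2 - u1))

for q in (0.1, 0.25, 0.5):  # q: SM recognizes a CM without being recognized
    cm = u1 + r * p * (u2 - u1) - (1 - r) * q * u1   # translucent CM expectation
    sm = u1 + r * q * (1 - u1)                        # translucent SM expectation
    # The direct comparison and the p/q threshold must agree:
    assert (cm > sm) == (p / q > threshold)
```

With q = 0.1 the CM comes out ahead; with q = 0.25 or 0.5 the SM does, exactly as the threshold predicts.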
God Fearing Atheist is offline  
Old 01-31-2002, 11:04 PM   #28
Veteran Member
 
Join Date: Oct 2000
Location: Alberta, Canada
Posts: 5,658
Post

God Fearing Atheist, in what sense is it "ideal" for actors to be transparent?

Anyway, let's go back to game theory. We can set up a Prisoner's Dilemma with these conditions:

1)If you both cooperate, you each receive the "reward" of three points.

2)If you both defect, you each receive the "punishment" of one point.

3)If one of you defects and one of you cooperates, the cooperator gets the "sucker's pay-off" of nothing and the defector gets the "temptation" of five points.

Single Game:

1)If p is your estimation of the probability of your partner cooperating, then the expected utility of cooperation is 3p and the expected utility of defection is 5p + 1(1 - p). You should defect, since 5p + 1(1 - p) > 3p.

2)If q is your estimation of the probability of your partner cooperating if you cooperate, and r is your estimation of the probability of your partner cooperating if you defect, then the expected utility of cooperation is 3q and the expected utility of defection is 5r + 1(1 - r). You should only defect if 3q < 5r + 1(1 - r), i.e. if 3q < 1 + 4r.

Multiple Games:

Every game should be played as in a single game, unless past games affect the probabilities relating to the current game.

An example of this is playing against tit-for-tat for n games:

1)always defect utility: 5 + 1(n - 1), which is 4 + n
2)always cooperate utility: 3n
3)cooperate, but defect on n utility: 3(n - 1) + 5, which is 3n + 2
4)any other strategy has a utility greater than or equal to 4 + n but less than 3n.

So, the best strategy when playing against tit-for-tat is "cooperate, but defect on n". Of course, the more difficult n is to predict, the worse this strategy will be.
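Those payoffs can be verified with a small simulation (the helper function below is mine, written for illustration against the payoff table above):

```python
def score_vs_tit_for_tat(my_moves):
    """Total score for a sequence of 'C'/'D' moves against tit-for-tat,
    using the payoffs above: R = 3, P = 1, T = 5, S = 0.
    Tit-for-tat opens with 'C', then repeats our previous move."""
    payoff = {('C', 'C'): 3, ('D', 'D'): 1, ('D', 'C'): 5, ('C', 'D'): 0}
    score, tft = 0, 'C'
    for move in my_moves:
        score += payoff[(move, tft)]
        tft = move  # tit-for-tat copies our last move
    return score

n = 10
assert score_vs_tit_for_tat(['D'] * n) == 4 + n                    # always defect
assert score_vs_tit_for_tat(['C'] * n) == 3 * n                    # always cooperate
assert score_vs_tit_for_tat(['C'] * (n - 1) + ['D']) == 3 * n + 2  # defect on n
```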

[ February 01, 2002: Message edited by: tronvillain ]
tronvillain is offline  
Old 02-01-2002, 12:38 AM   #29
Veteran Member
 
Join Date: Oct 2000
Location: Alberta, Canada
Posts: 5,658
Post

I'm having a few problems with your last post:

1)You haven't really defined u, u', or u'', since you obviously can't be using the definitions from your previous post.

2)Attempting to figure out the definitions yields something like this:
  • Let the utility of not playing during a given round be u'.
  • Let the utility of cooperating with a cooperator be u".
  • Let the utility of cooperating with a defector be zero.
  • Let the utility of defecting from a cooperator be u.

Assuming these are the definitions, can you explain the reasoning behind them? Specifically, why did you define one utility but leave the rest as variables? It seems that you should have at least defined them relative to each other (i.e. u is greater than u" and u" is greater than u').

3)The use of r and 1-r in your utility functions seems to indicate that a partner is chosen each round, but that a game isn't necessarily played. Is that correct?

4)It seems like your utility functions should be:

CM: (1-r)qu' (since SMs will be encountered with frequency 1-r, recognized with frequency q, and if recognized will yield a utility of u') plus rpu" (since CMs will be encountered with frequency r, recognized with frequency p, and if recognized will yield a utility of u").

SM: rqu (since CMs will be encountered with frequency r, will fail to recognize the SM with frequency q, and if deceived will yield a utility of u) plus (1-r)pu' (since SMs will be encountered with frequency 1-r, recognized with frequency p, and if recognized will yield a utility u'). Of course, you could put in a different variable for the ability of SMs to recognize SMs, but why bother?

So, it would seem that it is rational to be an SM if rqu + (1-r)pu' > (1-r)qu' + rpu".

What's going on here?
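Plugging illustrative numbers into those candidate utility functions shows either disposition can come out ahead depending on p and q (all the values below are my assumptions, chosen only to exhibit both cases):

```python
# Assumed ordering u > u'' > u': defecting on a cooperator pays best.
u, u2, u1 = 1.0, 0.8, 0.5
r = 0.5  # frequency of CMs in the population

# Good mutual recognition (p = 0.8, q = 0.2): the CM function wins.
cm_hi = (1 - r) * 0.2 * u1 + r * 0.8 * u2
sm_hi = r * 0.2 * u + (1 - r) * 0.8 * u1
assert cm_hi > sm_hi

# Easily-deceived CMs (p = 0.2, q = 0.8): the SM function wins.
cm_lo = (1 - r) * 0.8 * u1 + r * 0.2 * u2
sm_lo = r * 0.8 * u + (1 - r) * 0.2 * u1
assert sm_lo > cm_lo
```

So under these definitions, too, which disposition is rational turns on the ratio of p to q rather than on either alone.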

[ February 01, 2002: Message edited by: tronvillain ]
tronvillain is offline  
Old 02-01-2002, 08:44 AM   #30
Veteran Member
 
Join Date: Sep 2000
Location: Massachusetts, USA -- Let's Go Red Sox!
Posts: 1,500
Post

Sorry. Double-post.

[ February 01, 2002: Message edited by: God Fearing Atheist ]
God Fearing Atheist is offline  
 

This custom BB emulates vBulletin® Version 3.8.2
Copyright ©2000 - 2015, Jelsoft Enterprises Ltd.