FRDB Archives

Freethought & Rationalism Archive

01-15-2002, 08:44 PM   #1
God Fearing Atheist
Veteran Member

Join Date: Sep 2000
Location: Massachusetts, USA -- Let's Go Red Sox!
Posts: 1,500

Gauthierian Contractarianism

Since the last time I posted to this forum, I've grown fond of a particular type of social contract theory pioneered by David Gauthier, which, I believe, provides *the* best secular defense of moral norms. What follows is a quasi-technical account.

(Note: this is not intended as a formal argument. The numbering is just to help me keep a rough order for its tenets. Most of the examples were taken from Morals by Agreement. And please forgive any spelling errors... I'm lazy.)

1. Preference is one's desire for a certain outcome:

This, I think, is pretty uncontroversial. To say one prefers an apple to an orange is to say one prefers eating the apple to eating the orange.

2. Utility is the measure of preference:

Utility is to preference what temperature is to heat. It is ascribed to states of affairs considered in their preference relations, and ranked in such a way that we can infer an agent's preferences vis-a-vis different outcomes.

3. Value can be equated with utility:

In this conception, value, like utility, is a measure of preference. The value of a state of affairs is the extent to which one prefers it to others.

4. Value is subjective (dependent on affective relationships):

Value is not inherent in objects, or more precisely, in states of affairs involving those objects; it is not part of the "ontological furniture of the universe", as Gauthier says. Rather, it is created and determined via preference. This is not to be confused with the idea that value is arbitrary or unknowable.

5. Value is relative (dependent on each particular individual's affective relationships):

This is also fairly clear. People of different stripes accord different states of affairs different values. Just a cursory glance at history (or at your friends and family) will show this.

6. While revealed preference is useful in economics, choice is not always rational:

Consider Queen Gertrude, who chooses the poisoned cup. It would no doubt be wrong to suppose she chose that outcome because she preferred it. Although mistaken, the choice was not irrational, because she did not have the relevant information. Had she known about the poison, and supposing her aim was not suicide, then it *would* have been irrational. We might then speak of rationality concerning itself with "considered preference": preference formed with the relevant information and due consideration.

7. Practical rationality is concerned with the maximization of individual utility, the measure of (considered) preference. In the real world, this often means choosing actions not only for their outcomes, but for the probabilities of those outcomes:

On this conception, a rational individual treats an action as a "lottery" whose prizes are possible outcomes, and chooses so as to maximize expected utility. If Billy knows smoking might cause health complications down the road, and values his later life more than his current smoking, it would be rational for him to quit. If, in the same situation, he continues, he is acting irrationally. It should be stressed that the *content* of the preferences (as of now) doesn't matter for us. We will defend Hume when he says it is "not contrary to reason to prefer the destruction of the whole world to the scratching of my finger."
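For concreteness, here is a minimal sketch in Python of the expected-utility calculation this picture assumes. The numbers for Billy are invented purely for illustration, not taken from Gauthier.

[code]
# A lottery is a list of (probability, utility) pairs over possible outcomes.
def expected_utility(lottery):
    return sum(p * u for p, u in lottery)

# Invented numbers: quitting is a sure, moderately good outcome; continuing
# to smoke gambles on avoiding health complications later.
quit_smoking = [(1.0, 0.7)]
keep_smoking = [(0.6, 0.8), (0.4, 0.1)]

print(expected_utility(quit_smoking))   # 0.7
print(expected_utility(keep_smoking))   # roughly 0.52, so quitting maximizes expected utility
[/code]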

8. When interacting with others, that is, under conditions of strategic choice, the definition of an outcome used under conditions of parametric choice has to be altered: an outcome is now the product of several actions, one for each person involved, in a set of determinate circumstances:

This is simply to say that, when dealing with you, the outcomes I prefer are not determined solely by what I do; they depend as much on what you do.

9. Strategic rationality itself rests on three conditions: A) each person's choice must be a rational response to the choices she expects the others to make, B) each person must expect every other person's choice to satisfy condition A, and C) each person must believe her choice and expectations to be reflected in the expectations of every other person.

Condition A relates the rationality of each agent to her expectations about the others in the interaction, B makes explicit that all agents in the interaction are rational, and C makes explicit the assumption that each person views the situation as if her knowledge of the grounds for choice were complete (that is, shared by all and known by all to be shared).

10. A strategy is a lottery over an actor's possible actions in interaction with others. A "pure" strategy assigns a probability of 1 to one action and 0 to the rest. A "mixed" strategy assigns non-zero probability to more than one action, the probabilities summing, of course, to 1.

Another simple matter of definition. It should be noted that this is not, as before, a case of actions performed under risk, with outcomes as the prizes. Rather, this is *choice* as a lottery with actions as the prizes. The two should not be confused.
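As a quick illustration (my own, not Gauthier's), a strategy can be represented as a probability distribution over one's own actions:

[code]
# A pure strategy puts probability 1 on a single action; a mixed strategy
# spreads non-zero probability over several. Either way the probabilities sum to 1.
pure  = {'go': 1.0, 'stay': 0.0}
mixed = {'go': 0.25, 'stay': 0.75}

for strategy in (pure, mixed):
    assert abs(sum(strategy.values()) - 1.0) < 1e-9
[/code]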

11. The expected outcome of an interaction is determined by the combination of strategies chosen by the agents, one for each.

Another definition.

12. An expected outcome is in equilibrium if and only if it is the product of strategies each of which is utility-maximizing given the others.

Yet another. This one is highly important.

13. Given the conditions for strategic rationality, and equating a rational response with a utility maximizing response, each actor must expect his outcome to be in equilibrium (from 7, 9 and 12):

To illustrate, let's suppose there are n people, and let person 1 choose strategy S1. By condition A, S1 maximizes person 1's utility given the strategies (S2, S3, ..., Sn) of the other actors, which he expects them to choose. According to B, S2 must maximize person 2's utility given the strategies S1, S3, ..., Sn, which person 1 expects person 2 to expect. And S3 must maximize person 3's utility given the other strategies person 1 expects person 3 to expect, and so on. By C, person 1's choice and expectations must be reflected in his beliefs about the others' expectations, so that S1 = S1' = S1''..., S2 = S2'..., S3 = S3'..., and so on. Hence each strategy must be utility-maximizing given the others, and so the expected outcome must be in equilibrium. As Nash showed, every such situation (with finitely many actors and actions) has at least one outcome in equilibrium, provided mixed strategies are allowed. Note that this does leave room for incorrect expectations, in which case the solution would not be an equilibrium, but that's the fault of the expectations, not the choice.

This can be illustrated again with a concrete example. Suppose there are two people, Smith and Jones. Smith wants to go to Fisher's party but, even more, wants to avoid Jones, who may be there. Jones wants to avoid the party, but would love to see Smith. If Smith expects Jones to go to the party, Smith will stay home. If Jones expects Smith to go, so will he. They face, as it were, a problem of interaction. None of the possible outcomes afforded by pure strategies is in equilibrium, so we have to introduce mixed strategies and assign utilities to their outcomes.

Let's suppose that Smith is indifferent between her second preference (staying home if Jones goes) and a lottery with a 2/3 chance of her first preference (going to the party if Jones stays home) and a 1/3 chance of her fourth preference (both going to the party). Further, let's suppose she is indifferent between her third preference (both staying home) and a lottery with a 1/3 chance of her first preference and a 2/3 chance of her fourth. We can then assign utilities of 1, 2/3, 1/3 and 0 to the four outcomes in order of Smith's preference. Now let's suppose Jones is indifferent between his second preference (both staying home) and a lottery with a 1/2 chance of his first preference (both going) and a 1/2 chance of his fourth (staying at home while Smith goes). And let's suppose that he is indifferent between his third preference (going if Smith stays) and a lottery with a 1/6 chance of his first preference and a 5/6 chance of his fourth. To these we assign utilities of 1, 1/2, 1/6 and 0 in order of his preference. We can then write the following matrix (entries are Smith's utility, Jones's utility):

Smith goes, Jones goes: 0,1
Smith stays, Jones goes: 2/3, 1/6
Smith goes, Jones stays: 1, 0
Smith stays, Jones stays: 1/3, 1/2

Suppose that Smith chooses a mixed strategy assigning a probability of 1/4 to going to the party and 3/4 to staying at home. Then, if Jones goes, his expected utility is [(1/4 x 1) + (3/4 x 1/6)] = 3/8. If Jones stays at home, his expected utility is [(1/4 x 0) + (3/4 x 1/2)] = 3/8. Each action yields the same utility. Suppose we say Jones assigns a 1/2 probability to going and a 1/2 probability to staying. Again, whatever Smith chooses, her expected utility will be 1/2. Since neither can improve on these expected utilities by changing strategy, the pair of mixed strategies yields an expected outcome in equilibrium.
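If it helps, here is a short Python check of those numbers; the payoffs and mixed strategies are exactly the ones given above.

[code]
from fractions import Fraction as F

# (smith_action, jones_action) -> (Smith's utility, Jones's utility)
payoff = {
    ('go',   'go'):   (F(0),    F(1)),
    ('stay', 'go'):   (F(2, 3), F(1, 6)),
    ('go',   'stay'): (F(1),    F(0)),
    ('stay', 'stay'): (F(1, 3), F(1, 2)),
}

smith = {'go': F(1, 4), 'stay': F(3, 4)}   # Smith's mixed strategy
jones = {'go': F(1, 2), 'stay': F(1, 2)}   # Jones's mixed strategy

def smith_value(action):   # Smith's expected utility for a pure action against Jones's mix
    return sum(p * payoff[(action, j)][0] for j, p in jones.items())

def jones_value(action):   # Jones's expected utility for a pure action against Smith's mix
    return sum(p * payoff[(s, action)][1] for s, p in smith.items())

print(smith_value('go'), smith_value('stay'))   # 1/2 and 1/2
print(jones_value('go'), jones_value('stay'))   # 3/8 and 3/8
# Neither can raise their expected utility by shifting probability between
# actions, so the pair of mixed strategies is in equilibrium.
[/code]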

14. Although there is always at least one outcome that is in equilibrium, there can be more than one.

Consider the following sort of matrix (again, entries are Smith's utility, Jones's utility):

Smith does, Jones does: 1, 1
Smith does, Jones doesn't: 0, 1
Smith doesn't, Jones does: 1, 0
Smith doesn't, Jones doesn't: 0, 0

Each expected outcome is in equilibrium.
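A quick way to check this, along the same lines as before, is to notice that each person's utility here depends only on what the *other* does, so no unilateral switch can ever pay:

[code]
actions = ('does', 'doesnt')
# (smith_action, jones_action) -> (Smith's utility, Jones's utility)
payoff = {
    ('does',   'does'):   (1, 1),
    ('does',   'doesnt'): (0, 1),
    ('doesnt', 'does'):   (1, 0),
    ('doesnt', 'doesnt'): (0, 0),
}

def in_equilibrium(s, j):
    # Neither person can do strictly better by switching their own action.
    smith_ok = all(payoff[(s, j)][0] >= payoff[(alt, j)][0] for alt in actions)
    jones_ok = all(payoff[(s, j)][1] >= payoff[(s, alt)][1] for alt in actions)
    return smith_ok and jones_ok

print(all(in_equilibrium(s, j) for s in actions for j in actions))   # True
[/code]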

15. An equilibrium outcome is dominated if there is another outcome that affords at least one person greater utility and no person lesser utility. The undominated outcome here is the one (weakly) preferred by everyone.

In the example above, the bottom three are dominated, and the top is undominated.

16. Although an undominated outcome may be preferred in the situation described, there is no way for either person to affect the utility she gets by her own choice:

This sort of example is going to be central to our later account of morality. Because we have not yet dealt with the content of preference, we can say both people are formally selfish. They could maximize each other's utilities at no cost but, we can say, have no reason to do so.

Things can actually get quite a bit worse, as in the Prisoner's Dilemma. I have a thing you like, you have a thing I like. I like your thing more than my thing (and vice versa), but I like having both better than having either one alone. We could (my ranking, your ranking):

I trade, you steal: 4th, 1st
I trade, you trade: 2nd, 2nd
I steal, you trade: 1st, 4th
We both do nothing: 3rd, 3rd

According to the logic of maximization so far, I should reason as follows: "If you attempt trade, I'd be better off stealing, since having both is better than having just one. And suppose you try to steal. Obviously it isn't wise to try to trade, as having my thing alone is better than having nothing at all. In either case, I should not try to trade." Each of us aims at utility maximization, but this ultimately leaves each of us worse off.
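The same reasoning can be written down mechanically. In the sketch below I use 3 > 2 > 1 > 0 as stand-ins for 1st > 2nd > 3rd > 4th preference; that assignment is mine, for illustration only.

[code]
actions = ('trade', 'steal')   # 'steal' here covers simply refusing to trade
# (my action, your action) -> (my utility, your utility)
payoff = {
    ('trade', 'steal'): (0, 3),
    ('trade', 'trade'): (2, 2),
    ('steal', 'trade'): (3, 0),
    ('steal', 'steal'): (1, 1),   # "we both do nothing"
}

# Whatever you do, stealing gives me strictly more than trading...
for yours in actions:
    assert payoff[('steal', yours)][0] > payoff[('trade', yours)][0]

# ...and the situation is symmetric, so each of us steals; yet mutual trade
# would have been better for both of us.
print(payoff[('steal', 'steal')], payoff[('trade', 'trade')])   # (1, 1) versus (2, 2)
[/code]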

17. A joint strategy is a set of strategies, one for each actor involved in the interaction. These, in turn, can be mixed or pure.

18. To solve the sort of problem above, we introduce the principle of minimax relative concession: given a range of outcomes, each of which requires concessions by some or all persons if it is to be selected, an outcome is to be selected only if the maximum relative concession it requires is as small as possible, that is, a minimum.

This is just to say that the greatest relative concession anyone must make under the selected outcome should be no greater than the greatest relative concession that would be required under any alternative outcome. In the Prisoner's Dilemma above, it requires each of us to concede a single point of utility (giving up our best possible outcome).

19. By minimizing the maximum relative concession, one maximizes the minimum relative benefit (maximin relative benefit).

Again, in the PD situation above, you will see this is the case. Writing u# for a person's claim (the most she could get from cooperation) and u* for her no-agreement baseline, relative benefit and relative concession sum to unity: [(u - u*)/(u# - u*)] + [(u# - u)/(u# - u*)] = 1.
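To make the bookkeeping concrete, here is a small sketch for the Prisoner's Dilemma above, again with 3 > 2 > 1 > 0 standing in for the four ranks. Taking each person's claim to be their best outcome (3) and the baseline to be mutual non-cooperation (1) is my assumption for illustration.

[code]
from fractions import Fraction as F

claim, baseline = 3, 1    # u# and u* for each (symmetric) person, assumed for illustration

def concession(u):        # relative concession: (u# - u) / (u# - u*)
    return F(claim - u, claim - baseline)

def benefit(u):           # relative benefit: (u - u*) / (u# - u*)
    return F(u - baseline, claim - baseline)

candidates = {            # candidate joint outcomes: (my utility, your utility)
    'I trade, you steal': (0, 3),
    'mutual trade':       (2, 2),
    'I steal, you trade': (3, 0),
}

for name, (mine, yours) in candidates.items():
    print(name, max(concession(mine), concession(yours)))
# 'mutual trade' has the smallest maximum relative concession (1/2, versus 3/2
# for the lopsided outcomes), so minimax relative concession selects it; and
# since benefit(u) + concession(u) == 1, it also maximizes the minimum relative benefit.
[/code]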

20. The maximin relative benefit outcome is an optimal outcome, one from which no one can be made better off without someone being made worse off. In Prisoner's Dilemma type situations, this is superior to the outcome of straight utility-maximization.

Once again, we can see this is true. The second outcome from the top (mutual trade) is such that making either person better off would make the other worse off.
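A minimal check of that claim, reusing the 3 > 2 > 1 > 0 utilities from before:

[code]
outcomes = [(0, 3), (2, 2), (3, 0), (1, 1)]   # the four PD outcomes, (mine, yours)

def optimal(o):
    # Optimal (Pareto-optimal): no other outcome is at least as good for both
    # of us and different from this one.
    return not any(other != o and all(a >= b for a, b in zip(other, o))
                   for other in outcomes)

print(optimal((2, 2)))   # True: mutual trade is optimal
print(optimal((1, 1)))   # False: mutual trade makes both of us better off
[/code]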

21. It is rational to adopt a metachoice (a choice about how to make choices) which we will call constrained maximization.

A constrained maximizer is someone conditionally disposed to base her actions on joint strategies with others, in accordance with the principle of minimax relative concession. Adopting this disposition is itself utility-maximizing.

22. This conditional disposition can be rephrased as the Lockean proviso.

Do not worsen the condition of others, provided they afford you the same.

Objection to 21: It is not rational to adopt constrained maximization as a disposition. Rather, one should comply only insofar as one's expected utility from compliance is greater than (or equal to) that from non-compliance:

The argument given is as follows: let u equal the utility the actor gains if all individuals act on individual strategies, u' the utility he gains should all act cooperatively on a joint strategy, and u'' the utility should everyone but him act on the joint strategy while he defects (so u'' is greater than u', which is greater than u). Suppose he adopts a policy of straight maximization (seeking to maximize his utility using individual, as opposed to joint, strategies). If others base their actions on a joint strategy, he expects utility u''. If he expects others to act on individual strategies, so does he, and expects u. If the probability that others will base their actions on a joint strategy is p, then his overall expected utility is [pu'' + (1 - p)u]. Now suppose he adopts constrained maximization. If others base their actions on a joint strategy, so does he, and expects utility u'. If others act on individual strategies, so will he, and expects u. In this case, his expected utility is [pu' + (1 - p)u]. Since u'' is greater than u', he should adopt straight maximization.

This would be true if the probability of others basing their actions on a joint strategy were independent of one's own disposition. But that is not correct. Only those disposed to keep agreements are suitable partners for cooperation. Thus, constrained maximizers have options open to them that straight maximizers simply don't.
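Here is the comparison in miniature, with invented numbers satisfying u'' > u' > u. The point of the reply is that the probability p of being admitted to cooperation is not the same for the two dispositions.

[code]
u, u_joint, u_defect = 1.0, 2.0, 3.0   # u, u', u'' from the argument above (invented values)

def straight(p):      # expected utility of straight maximization
    return p * u_defect + (1 - p) * u

def constrained(p):   # expected utility of constrained maximization
    return p * u_joint + (1 - p) * u

# The objection: holding p fixed, defection always looks better.
print(straight(0.5), constrained(0.5))   # 2.0 versus 1.5

# The reply: if dispositions are transparent (or translucent enough), a known
# straight maximizer is excluded from cooperation, so his p collapses, while
# the constrained maximizer's does not.
print(straight(0.0), constrained(0.8))   # 1.0 versus 1.8
[/code]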

Objection to the solution to the former objection: This merely shows that it's wise to make people think you're cooperative, and then rob them blind behind their backs.

For theoretical purposes, we can simply treat ideal actors as "transparent"; that is, their dispositions are known by all and known by all to be known. To accommodate the real world, we can modify this slightly and suggest that actors are "translucent": sometimes known to be either straight maximizers or constrained maximizers, but sometimes not.

I know the foregoing was a bit rough in spots, but I'd enjoy comments.

-GFA