Freethought & Rationalism Archive. The archives are read only.
#1 |
Senior Member
Join Date: Jun 2004
Location: Gaunilo's Island
Posts: 768
Here I want to discuss in more detail some of the reasons why the lack of an available desire calculus is fatal to D.U.
Consider the following proposition: "There is, at present, no such thing as an objective measure of desire fulfillment across all desires. There is no way to tell whether the 'universe of desires' refers to all actual desires at present, all actual desires past and present, only 'the right kind of actual desires' past and present, postulated future desires, all desires of hypothetically ideally rational persons, or all desires of the persons I subjectively care about. There is, at present, no such thing as an objective measure of whether desires are 'point particles' with a value of 1.0, or whether some have a value of 7.835, or 7,835,947,012. There is at present no way of answering whether, if someone ingests a chemical which 'artificially' 'strengthens' some desires, those desires count for more, or whether this applies to otherwise rational persons who have innate chemical imbalances such as anxiety, depression, etc."

Now, I think this is correct, and Alonzo seems to agree with me that this is the case, but as has been pointed out he conceives of this as an epistemological difficulty rather than an ontological one. BD from KG and I have already argued that this makes absolute hash of any claim that DU is an accurate description of the way moral language is actually used -- how can DU be a "good" or "correct" theory if the kinds of measurements it says moral statements are making are, well, never made? This is why I have referred to DU as "philosophitis" and "an exercise in calling a tail a leg"; I think it's just obvious that when a philosopher starts off with a theory about a certain domain by claiming that he doesn't care what's actually in the domain, and that he'd rather talk about something else, this is profoundly misguided methodology.
Call me old fashioned, but I just don't see the value in engaging in a language game that we're going to call an "explanation" if at the outset we've decided that we're not going to explain anything in the real world, or even attempt to refer to anything in the real world. I simply don't see what could possibly motivate someone to adopt a theory which neither has any explanatory power whatsoever nor makes any predictions about what anyone will do in the real world, and which compounds this by failing to supply a calculus whereby I could at least speak intelligibly about moral propositions.

But here I want to challenge the ontology of D.U. on an even deeper level. I submit that "the sum of all desires" is by its very formulation an incoherent notion. Even in principle, "the sum of all desires" gives me no clue how to proceed with determining the content of its reference. There are problems with "sum", "all", and "desire". What does "sum" mean? I'm still terribly unclear about this: whether desires are point particles, whether felt strength matters, whether it's even meaningful to compare felt strength across persons, whether felt strength at time t or at time t+1 is supposed to count, whether any desire can be considered innately more valuable. What does "all" desires mean -- all actual desires at present, all desires at present and in the past, all desires that ever have been held and will be held, all desires in all possible worlds? All conscious desires? Suppose I express my desire to be buried at sea. When I'm dead, by definition I no longer possess the desire -- does my previous desire "count"? Finally, I think you simply haven't adequately defined what a "desire" is. Even in your simplest Out To Lunch example, you haven't convinced me that what's going on is a calculus of desires.
Is my desire to eat Chinese for lunch that day the "same as" my desire to eat lunch at all that day, a component of the desire to eat lunch that day, or a completely separate desire from my desire to eat lunch that day? Is my desire to eat Chinese for lunch that day the same as, a component of, or separate from my desire to eat food generally? Of my desire to go on living? Of my desire to eat a diversity of foods in my lifetime? Of my desire to support the local immigrant family who runs the restaurant? Of my desire to eat lunch with my friends?

It seems to me that the ontology of D.U. is mistaken insofar as it conceives of desires as part of the furniture of the universe, as things, when they are in fact events rather than entities. It seems further that any claim, even in principle, to be able to isolate once and for all what a desire is, in a way that would enable us to consistently employ the definition of "desire", fails in the face of the fact that the psychology of desire suffers from radical overdetermination. It is simply nonsensical to suppose that we can isolate single desires in such a way as to compare them, even as point particles, against everything else we might ordinarily want to refer to as a desire. Rather, the choice of what we consider to be a "desire in question" is entirely context dependent, and entirely a subjective choice. Desires are simply not entities to which logical atomism can be made to apply, but this is precisely what would have to be the case for "the sum of all desires" to even be a meaningful concept.
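To make the ambiguity concrete, here is a toy sketch in Python; every name, number, and scoring rule below is invented purely for illustration. The point is that any attempt to compute "the sum of all desires" must first answer the questions above, and different but equally defensible answers rank the very same state of affairs differently:

```python
# Toy illustration (all data hypothetical): before summing desires one must
# decide whose desires count, whether the desires of the dead count, and
# whether felt strength counts. Two defensible answers, two different sums.

desires = [
    # (holder, felt_strength, fulfilled, holder_is_alive)
    ("alice", 7.8, True,  True),
    ("bob",   1.0, False, True),
    ("carol", 3.5, True,  False),  # e.g. a burial-at-sea wish of the deceased
]

def sum_point_particles(ds):
    """Every fulfilled desire counts as exactly 1.0; living holders only."""
    return sum(1.0 for _, _, fulfilled, alive in ds if fulfilled and alive)

def sum_weighted_all(ds):
    """Felt strength counts, and past (dead holders') desires count too."""
    return sum(strength for _, strength, fulfilled, _ in ds if fulfilled)

print(sum_point_particles(desires))  # 1.0
print(sum_weighted_all(desires))     # 11.3
```

Neither scoring rule is privileged by the phrase "the sum of all desires" itself, which is the objection in a nutshell.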
#2
Senior Member
Join Date: Jun 2004
Location: Gaunilo's Island
Posts: 768
Soon I will be travelling for an extended period; in light of this I thought I'd give this unanswered thread a bump before I left in hopes of getting some response to it.
This passage from the introductory chapter to J.L. Austin's Sense and Sensibilia struck me as a fittingly epigrammatic take on a certain method of doing philosophy - and the type of philosophical doctrine that is the characteristic product of that method - which certain readers of IIDB may find valuable: Quote:
#3 |
Veteran Member
Join Date: May 2005
Location: Next smoke-filled cellar over from Preno.
Posts: 6,562
For the aid of your more ignorant readers, could you explain the following:
D.U.
epistemological
ontological

I mean this sincerely. This looks like an interesting discussion. Perhaps you could give us a boost into it so we could participate.
#4
Veteran Member
Join Date: Mar 2002
Location: UK
Posts: 5,932
Quote:
As for "epistemological" and "ontological", I always either use a dictionary or search Google.

Chris
#5 |
Veteran Member
Join Date: Oct 2003
Location: Melbourne, Oz
Posts: 1,635
When are you heading off, Hiero5ant, and will you still be checking in occasionally? I've saved a copy of this thread to read properly back at home, but it might take a while before I can get around to it.
#6 |
Veteran Member
Join Date: Sep 2003
Location: zero point
Posts: 2,004
And here I was thinking he was talking about (D)epleted (U)ranium.
#7
Senior Member
Join Date: Jun 2004
Location: Gaunilo's Island
Posts: 768
Quote:
#8
Veteran Member
Join Date: Oct 2003
Location: Melbourne, Oz
Posts: 1,635
Hey Hiero5ant,
Here's my take on your comments:

Caveat 1: I've only read a couple of Alonzo's shorter pieces, so your understanding of DU may be clearer than mine. For my money it's still a form of PU; clarified, but maybe not uniquely so.

Caveat 2: I've never really read the discussions between you and Alonzo. They've always been longer than I've had time for. Now I've got some free time... but you seem to be referring to specific discussions between the two of you, which isn't a great way of encouraging other participants (*wrist-slaps H5*). And I'm sorry for repeating anything you two have said before.

Caveat 3: I'm unfamiliar with some of your terminology, so I may have misunderstood some of your comments - I'll ask when I'm uncertain.

Caveat 4: I'm not a moral objectivist as Alonzo purports to be. I'm probably an intersubjectivist, though I've only heard the term secondhand; I'm not actually sure that I'm not describing the same justification as he is with a different phrase, but I'll use mine to be on the safe side.

Caveat 5: Just for the record, I'm pretty much a classical utilitarian (CU) - I think preference utilitarianism (PU) is a useful way of clarifying how to achieve CU goals in many (perhaps most) situations, but where the two appear to have conflicting goals I'll always pick happiness over preference satisfaction/desire fulfillment. So I'm not necessarily defending Alonzo - but since many of your criticisms apply to all forms of util, I figured I might as well have a go at them. I'm also curious what system of ethics you prefer (and why)... saves me having to be on the defensive team all the time.

Caveat 6: Since I view DU and PU in much the same light, I'll use the terms interchangeably unless I specify otherwise.

Caveat 7: I have no idea what your background/quality of understanding is, besides that you argue with Alonzo a lot.
I'm not even sure what mine is (I did an English/Phil BA in England, and have read a bit of ethics since then, since it's one of the only areas of phil that sustained my interest). So I'm not sure whether to talk down, up or sideways to you. I'll try to do all three.

Quote:
[5] Do you mean he conceives of it as a difficulty in measuring success in DU terms, whereas you consider it a problem in accepting DU at all? Assuming that's the case, I support Alonzo on this one. Generally speaking, with most of your objections, you seem to be claiming "I can't judge x, therefore x is impossible to judge."

[1] & [3] (which look to me like much the same objection): Agreed. But I don't think this prevents us from recognising that there are different degrees of desire/preference satisfaction, or from claiming that we can sometimes recognise a greater degree vs a lesser. If it did, similar logic would have held that all objects moved at the same speed before the speedometer was created.

[2] I've never heard the phrase "universe of desires" - is it yours or Alonzo's? Regardless, this is only partially true. I'd say some of these questions are perfectly answerable, even though there may not be a consensus on the answer:

(e) It's not the case that the desires of your friends are more morally important than others'; either morality means nothing at all, in which case such a suggestion would be meaningless, or morality is in some sense universalisable, in which case it can't make any specific claims about person p - or his friends.

(b) doesn't really make sense to me. I don't think anyone would argue that desires that I used to hold and no longer care about are at all important; I presume that's not what you're getting at. Possibly you mean desires I held before but currently don't have the capacity to hold (e.g. the desire to continue being kept alive even though I'm now comatose). Where I'll never again have the capacity, this is a relatively straightforward question for me, since it's one of the areas where PU and CU clash: no-one is being harmed if a braindead - or just generally dead - person's desires are thwarted.
I suspect Alonzo would disagree, but that's his problem.

(d) appears to be two scenarios in one: {i} What about the normal desires of a person who will/may regain the capacity to have them? {ii} Should we respect desires we can comfortably predict (e.g. a baby's desire not to grow up into a life of brutal slavery)?

{ii} Answers differ, but most modern utilitarians seem to have an answer. Peter Singer briefly raises the question in Practical Ethics, and calls the negative response "prior-existence utilitarianism" (PEU), the positive one "totalising utilitarianism" (TU). He briefly discusses pros and cons of each. For PEU, there is the inference that if we are to consider creating life doomed to suffering a Bad Thing, we're obliged to consider creating life blessed with happiness/preference satisfaction a Good Thing, and hence are morally obliged to create as much happy life as the universe can comfortably sustain. For TU, there is simply the fact that most people will agree that creating life doomed to suffer is just as Bad as any other action which will eventually cause suffering. Singer seems to favour PEU, though he admits Derek Parfit refuted his original justification. I prefer TU for the reasons I just gave (I don't consider the PEU criticism damning). At any rate, it's not the case that it's impossible to form a substantial opinion on the question.

{i} is a little trickier, but the rational answer might be to tie it to the probability of recovery. On the other hand, I'm not convinced that it's always wrong to murder someone in their sleep, but that's a contentious claim that I'm too lazy to defend here. It also doesn't solve the problem - if it's wrong in some circumstances, the problem still applies to those people we aren't planning to kill.
But then the probability of recovery still works - it's worse to elope with your best friend's girlfriend if he's passed out from an alcoholic stupor than if he's been hit by a bus and left with a 98% chance of brain-death, especially assuming no-one else is going to be offended in either case. I don't think it's unusual in classical utilitarian reasoning to apply probability to moral questions (IIRC Jeremy Bentham did it). In fact, I think people do it all the time (e.g. considering drunk-driving worse than speeding even though the problem with both is that they increase the likelihood of the same kind of harm).

(c) also covers two scenarios: {i} Should we prioritise a person's actual preferences, or the preferences they would have if they were fully informed (and possibly if they were also capable of deducing the consequences)? {ii} Should we prioritise either of the options in {i}, or should we use our own capacity for logic if theirs is lacking and do what's "best" for them (e.g. denying a desperate heroin addict a hit from an HIV-infected needle)?

{i} I think that most PUs prefer the second option (not that I've read anything close to a representative sample, but I don't remember ever hearing of any PUs who preferred the first option), and since most people, so long as they trusted the moral actor, would probably opt for the second option regarding themselves (e.g. being grateful for a surprise party thrown by friends who knew the beneficiary would appreciate it), I think it's another relatively straightforward answer.

{ii} I'd say this is essentially the CU vs PU question. So it's another straightforward answer - for me (at least insofar as I'm offering my opinion as evidence that it's possible to answer the question, though I don't plan to defend it here).

(a) I think I've covered under (b) and (c).

[4] is less of a problem for CU, I think, though it's possibly the most awkward objection.
I can think of a few potential responses, whose priority partly depends on the drug's exact effects:

(a) Someone with noticeably stronger desires does get more consideration than someone with lesser ones. Any ethical theory has to bite some unintuitive bullets, and this one isn't internally contradictory, so it's not necessarily fatal.

(b) It doesn't matter; each person's maximum happiness can count as 1, their minimum happiness as 0 (or -1), and though this may mean such a drug-user requires more perks to reach the same proportion of his maximum happiness, any person on such a drug would forgo a large stake in his claim to such "standard" happiness, since the resources used to make him reach it would be more efficiently spent elsewhere.

(c) We simply treat all people in terms of their potential, in which case the fact that someone has taken the drug that helps them achieve their potential is virtually irrelevant: we want as many people to take it as possible (so long as they're in a position to make use of it).

(d) We consider happiness relative rather than fixed by x amount of brain activity. It's immoral for Hugh Hefner to relocate his Playboy mansion next to some 3rd-world farmer's mud hut, because even though the farmer isn't harmed directly, he sees a level of happiness that he hadn't previously been aware existed. So it is immoral to take the drug in the first place if the user will visibly become much better off than those around them, assuming the drug is denied them (if everyone in the community takes the drug your question is moot, since everyone's desires are equally as comparable as before).

(e) We account for personal responsibility, not in any metaphysical sense, but simply in the CU sense that if we hold people responsible for good and bad choices, more will prefer the former. Therefore, no-one who takes such a drug gets any special treatment, no matter the consequences to them.
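Option (b), incidentally, is just per-person rescaling. A toy sketch in Python, with the intensity numbers invented purely for illustration:

```python
def normalise(raw, person_min, person_max):
    """Map a raw 'happiness' reading onto that person's own 0..1 scale,
    so each person's maximum counts as 1 regardless of absolute level."""
    return (raw - person_min) / (person_max - person_min)

# Hypothetical readings: the drug-user's raw intensities run 0..200, the
# non-user's 0..100, yet both sit at half of their own maximum, so they
# count equally on the normalised scale.
drug_user = normalise(100, 0, 200)
non_user = normalise(50, 0, 100)
print(drug_user, non_user)  # 0.5 0.5
```

Whether raw readings like these are even available is, of course, exactly the measurement question under dispute; the sketch only shows that *if* they were, (b) is arithmetically straightforward.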
I'm sure there're other possible answers, but I haven't thought this through too much, so they'd be pretty arbitrary. At any rate, there are numerous possible ways of responding, though they're not necessarily exclusive (I lean slightly towards (d) and (c) at the moment); and I can't see any reason why some shouldn't be more defensible than others in light of any given basis for whichever form of utilitarianism is in question. I'd say the burden of proof lies wholly with you if you're going to deny the possibility of reaching a reasoned conclusion. I won't cover the question of depression, since my previous answers are mostly applicable to some degree, and I want to finish this reply before Christmas!

Quote:
Quote:
Quote:
The rest of your comments mostly seem to relate directly to Alonzo. A couple I wanted to respond to anyway: Quote:
If you accept the greatest-good principle, though, it certainly makes sense to try to refine it (hence my qualified acceptance of PU)... And this is why I'm curious where you're coming from - it's easy to criticise from an obscure position, but you either need to offer an alternative interpretation or deny the principle altogether. Having said that, I'd agree that "the sum of all desires" isn't a complete interpretation, since it potentially conflicts with CU. So the rest of this is mostly Alonzo's problem, though it seems to be similar material to the first paragraph I quoted. If Alonzo wants to replace PU with DU I think he's got a lot of work to do in clarifying it, but I don't think, as you seem to, that present vagueness necessarily makes the theory invalid (as with the previous speedometer analogy).

Quote:
Quote:
Quote:
Quote:
What's the context? Specifically, which "doctrine" is it attacking?
#9
Senior Member
Join Date: Jun 2004
Location: Gaunilo's Island
Posts: 768
Quote:
Let me see how concisely I can say it. I'm not saying "it's impossible to judge" whether you're right; I'm saying "it's impossible for you to be right, because there is no such thing as maximum desire/happiness fulfilment." Desires or "happiness units" or whatever aren't "things" like pebbles on a beach which, if only you had the time, you could count up and come to a conclusion. I am expressing doubt that the property of the universe the theory refers to as desire maximization even exists, or could exist in principle. I have very good reasons to suppose that such a property cannot exist (see the Moral Noncognitivism thread), and if some utilitarian wants to convince me that their theory is correct, the burden is on them to show that it meets even minimal standards of coherence and to provide evidence that it exists.

More later.
#10
Veteran Member
Join Date: Mar 2002
Location: 920B Milo Circle
Lafayette, CO
Posts: 3,515
Quote:
I am not talking here about a theory of value. Desires, and desire fulfillment, are elements in a theory of intentional action. They, and their counterpart "beliefs", are postulated as a theory of human action -- acts, for example, such as the writing of a post on an internet discussion forum.

I agree that, as soon as a better theory of intentional action comes along, the theory that states that we always act so as to fulfill the more and the stronger of our own desires given our beliefs will have to be abandoned in favor of this better theory. However, I think that a better theory is in order before we abandon the existing one.

I also hold that this desire-fulfillment theory of human action is the theory that you use when you seek to explain and predict the people around you -- and when you explain your own actions to them. Asked the question, "Why do you come to this site and submit posts?", I will bet that your answer will take the form of identifying a set of beliefs and desires and asserting that your actions best fulfill those desires given your beliefs.

"Desire fulfillment", as a theory of value, adds nothing to this theory of intentional action. It is not an "add-on" with additional features. It simply restates one of the propositions within this theory of action. The belief-desire-intention theory of action gives us the concept of "Agent has a desire that 'P', for some proposition 'P'." From this, the concept that "any state of affairs in which 'P' is true is such as to fulfill this desire of Agent" simply restates the same fact.

So, if you do not think that "desires" and "desire fulfillment" exist, then I would like to hear your theory of intentional action. Try answering the question "Why are you responding to this post?" without talking about desire fulfillment.
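The belief-desire picture described above can be put schematically. The following Python sketch is purely illustrative (the agent, the propositions, and the desire strengths are all invented, and this is not offered as anyone's precise formulation): an agent picks the act it believes will make the most strongly desired propositions true, and "fulfillment" is nothing over and above a desired proposition being true.

```python
# Illustrative sketch of a belief-desire model of intentional action.
# All names and numbers are hypothetical.

class Agent:
    def __init__(self, beliefs, desires):
        # beliefs: {action: tuple of propositions the agent believes
        #           that action would make true}
        # desires: {proposition P: strength of the desire that P}
        self.beliefs = beliefs
        self.desires = desires

    def choose(self, options):
        """Act so as to fulfill the more and the stronger desires, given
        beliefs: pick the act believed to make the most desired
        propositions true."""
        def value(act):
            return sum(self.desires.get(p, 0)
                       for p in self.beliefs.get(act, ()))
        return max(options, key=value)

def fulfills(state_of_affairs, p):
    """'Desire fulfillment' merely restates the model: a desire that P is
    fulfilled in exactly those states of affairs where P is true."""
    return p in state_of_affairs

# Hypothetical example: why does the agent post a reply?
agent = Agent(
    beliefs={"post reply": ("discussion continues",), "stay silent": ()},
    desires={"discussion continues": 3},
)
print(agent.choose(["post reply", "stay silent"]))  # post reply
```

Note that `fulfills` adds no new machinery beyond what `choose` already uses, which is the restatement point being made above.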