Freethought & Rationalism Archive. The archives are read only.
02-20-2002, 07:41 PM | #61
Regular Member
Join Date: Sep 2000
Location: New York, NY, USA
Posts: 214
Quote:
So I agree with your conclusion that Johnny should eat given that he desires to satisfy his hunger. Thus, you have derived an "ought" from an "is." But what about deriving values from them?
02-21-2002, 12:17 AM | #62
Veteran Member
Join Date: Aug 2000
Location: Indianapolis area
Posts: 3,468
|
Brad Messenger,
Quote:
This argument shows that statements SingleDad called "moral strategies" are in fact objective. Yet, this does not go to show that objective moral values exist.

Go back and read my posts more carefully. I'm contesting the notion that there are any such things as "objective moral values." We're in complete agreement here.

Quote:
So I agree with your conclusion that Johnny should eat given that he desires to satisfy his hunger. Thus, you have derived an "ought" from an "is." But what about deriving values from them?

I'm not trying to derive any values from this. The relevant value (Johnny wants to satiate his hunger) is one of the premises from which the prescriptive statement is derived.

Edited to change "incomplete" to "in complete."

[ February 21, 2002: Message edited by: Pompous Bastard ]
02-21-2002, 08:18 AM | #63
Veteran Member
Join Date: Sep 2000
Location: Massachusetts, USA -- Let's Go Red Sox!
Posts: 1,500
|
Quote:
The great thing about contractarianism, I think, is that it is able to extend this sort of hypothetical imperative to preference in general; i.e., whatever you prefer, you ought to do X.
02-21-2002, 10:40 AM | #64
Veteran Member
Join Date: Aug 2000
Location: Indianapolis area
Posts: 3,468
|
God Fearing Atheist,
Quote:
The great thing about contractarianism, I think, is that it is able to extend this sort of hypothetical imperative to preference in general; i.e., whatever you prefer, you ought to do X.

I don't know if I'd say that's quite true. At best, contractarianism (I am a contractarian of sorts, btw) can provide us with normative principles, but there are certainly situations in which it would be accurate to say that some agent should violate those principles. To re-use my previous derivation routine:

P1) Agent X is not in state Y.
P2) X values state Y (that is to say, the relationship between X and Y possesses the highest positive CH).
P3) Act A, if performed by X, will bring about Y.
P4) All else being equal, agents will act so as to further their own values.
C1) All else being equal, X would perform A. (from P1-4)
P5) To say that an agent should do something is to say that that agent would do that thing, given sufficient information.
C2) X should perform A. (from C1 and P5)
P6) A is unethical (that is to say, to perform A would violate a behavioral norm established through contractarian argument).
C3) X should perform an unethical act. (from C2 and P6)

Of course, this is very simplified, and assumes that the positive "CH" of the relationship between X and Y is greater than the negative "CH" of the relationship between X and Z, the state of affairs representing the consequences of X's violation of behavioral norms. Does that make sense?

Edited to add the clause ", if performed by X," to P3.

[ February 21, 2002: Message edited by: Pompous Bastard ]
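As an aside, the validity of this chain (as distinct from the truth of its premises) is plain propositional logic, so it can be machine-checked. A minimal sketch in Lean 4; the proposition names and the reading of P5 as a bare implication are illustrative glosses on the post, not notation anyone in the thread used:

Code:
-- Propositional skeleton of the P1-P6 derivation above (names are illustrative).
-- wouldA  : all else being equal, X would perform A   (C1, granted from P1-P4)
-- shouldA : X should perform A
-- unethA  : A is unethical                            (P6)
example (wouldA shouldA unethA : Prop)
    (c1 : wouldA)               -- C1, taken as given from P1-P4
    (p5 : wouldA → shouldA)     -- P5, read as: "would" entails "should"
    (p6 : unethA) :             -- P6
    shouldA ∧ unethA :=         -- C3: X should perform an unethical act
  ⟨p5 c1, p6⟩

Whether C1 really follows from P1-P4, and whether the "would" in P5 matches the "would" in C1, is exactly the gap bd-from-kg presses in post #67 below.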
02-21-2002, 02:52 PM | #65
Veteran Member
Join Date: Aug 2000
Location: Australia
Posts: 4,886
|
Quote:
02-21-2002, 04:01 PM | #66
Regular Member
Join Date: Sep 2000
Location: New York, NY, USA
Posts: 214
|
Quote:
Suppose, though, that Johnny says, "I have this feeling that people call hunger and I wish to get rid of it." He then decides which course of action will serve this purpose. It is objectively true that the strategy that will reach this goal is eating rather than not. Thus, he should eat. If a person is still deciding amongst certain strategies to fulfill a desire, I think the word "should" is appropriate when describing the best strategy.
02-21-2002, 04:09 PM | #67
Veteran Member
Join Date: Jul 2000
Location: King George, VA
Posts: 1,400
|
Pompous Bastard:
In your post to Dr. Retard you said:

Quote:
Anyway, let's turn to your responses to my posts.

Quote:
In the first place, when we set out to discuss something like "moral philosophy" (especially with a bunch of strangers) it is implicit in the project that we intend to use terms like "moral", and by extension moral terms, in a more or less "standard" way, meaning the way most people use them. If, for example, Jabberwocky were to make an appearance and make some seemingly nonsensical remarks, only to reveal that what he meant by saying "X ought to do Y" was that Y would enrich Joe Blow from Boise more than any other available choice, we would rightly conclude that he wasn't "getting with the program", but was just being disruptive. It follows that considerations of what "most people" mean by terms like "right", "good", and "ought" are very much relevant to the discussion.

Just the same, I didn't find it necessary to refer explicitly to what "most people" mean until my response to Dr. Retard's post. His hypothetical scenario began by supposing that PB "believed that [having the property CH] was a pretty good rational reconstruction of 'right' and 'good'". I interpreted this to mean that he thought that it was a pretty good attempt to make sense of what most people mean when they use terms like "right" and "good". In other words, it seemed to me that he was doing what I was talking about in the "How Can Morality be Objective" thread when I said:

Quote:
Quote:
Quote:
Your "proof" is now as follows:

Quote:
(1) C1 does not follow from P1-P4. What follows is "All else being equal, Johnny will eat." To get to C1 you need to replace P4 with something like:

P4': Given enough K&U, agents would act so as to further the values that they now have.

I left out the "all else being equal" clause because it seems problematic. It's possible that all things could look equal to the agent in his current state of K&U, but wouldn't if he had enough K&U. And of course, with increased K&U the agent's values might change. (Both of these possibilities seem most unlikely in this case, but we're looking at logical gaps in the "proof". These gaps will look pretty important if you try to use this form of argument to derive a conclusion that isn't so trivially obvious.)

(2) P5 is not really an "is" statement; it's a definition of what you mean by "should". It can be restated as "An agent should do what he would do if he had enough K&U." In this form it's clear that this is a "should" premise. Whether it's a prescriptive premise depends, I suppose, on what you mean by "prescriptive". At any rate, my earlier comment about my proposed "missing premise" is just as true of this one:

Quote:
Quote:
Quote:
At any rate, although I wouldn't say that your argument really derives an "ought" from an "is", it does derive an "ought" from statements that are purely about the natural world in some sense. This is a built-in feature of my moral theory, which carries over into yours, since yours is based on mine in a sense. But then my intent was to develop a "naturalistic" theory of objective morality which avoids the naturalistic fallacy.

Quote:
But it isn't really worth arguing about whether such a statement is "factual", or whether it's possible in principle to derive an "ought" from an "is". My point, you may recall, was not even about whether this is possible. It was that principles of action like the Principle of Induction, Occam's Razor, and the Golden Rule yield prescriptive statements because they are prescriptive statements, which is to say that they are not all that different from one another in their fundamental nature.

Finally, you seemed to take umbrage at one of my remarks:

Quote:
Quote:
Of course there is a sense in which the motive for doing anything is "happiness". But on analysis this turns out to be a meaningless tautology which has nothing to do with values. Thus, if we define "happiness" to be "obtaining what we desire", then the object of any act is by definition "happiness", since the object of any act is to obtain something we desire.

But if we are going to talk meaningfully (i.e., non-tautologically) about values, we have to distinguish between the values that involve achieving happiness for ourselves and values that involve achieving happiness for others. And in the sense in which these are different values (and not just different versions of the universal "value" of doing what makes one happy at the moment), it is truly a cause for pity if the things that a person values consist entirely of things that (he believes) conduce to a "happy" life for himself and none of them are directed toward the welfare of anyone else. Actually, I'm pretty sure that this isn't true of you anyway, so there's no reason to take offense.

[ February 21, 2002: Message edited by: bd-from-kg ]
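bd's point (1) above is, at bottom, a derivability claim: the original P4 delivers only "will eat", while P5 trades in "would, given enough K&U", so the chain closes only once P4 is swapped for P4'. A companion sketch of the repaired step, again in Lean 4, with the same caveat that the proposition names and the biconditional reading of P5 are glosses of mine rather than bd's notation:

Code:
-- wouldKU : given enough K&U, Johnny would eat   (from P1-P3 together with P4')
-- shouldE : Johnny should eat
example (wouldKU shouldE : Prop)
    (p4' : wouldKU)              -- supplied by P1-P3 plus the substituted P4'
    (p5  : wouldKU ↔ shouldE) :  -- P5, read as a definition of "should"
    shouldE :=
  p5.mp p4'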
02-21-2002, 05:43 PM | #68
Veteran Member
Join Date: Mar 2001
Location: Somewhere
Posts: 1,587
|
I apologize for not having replied to this thread in several days. I woke up a few mornings ago and realized I had tests coming up over material I hadn't even purchased yet, let alone read. I will (hopefully) reply to BD's post of February 19th in the not too distant future.
02-21-2002, 07:33 PM | #69
Veteran Member
Join Date: Sep 2000
Location: Massachusetts, USA -- Let's Go Red Sox!
Posts: 1,500
|
PB said,
Quote:
I elaborated in the "Infanticide and the Social Contract" thread, if you're at all interested.
02-21-2002, 07:51 PM | #70
Veteran Member
Join Date: Aug 2000
Location: Indianapolis area
Posts: 3,468
|
God Fearing Atheist,
Quote:
I elaborated in the "Infanticide and the Social Contract" thread, if you're at all interested.

I'll pop over and take a look. BTW, I'm going to be away from my PC for most of the weekend, starting tomorrow, so I won't be responding to anything until Monday, including bd's latest post.