FRDB Archives

Freethought & Rationalism Archive

The archives are read-only.


FRDB Archives > Archives > IIDB ARCHIVE: 200X-2003, PD 2007 > IIDB Philosophical Forums (PRIOR TO JUN-2003)
Old 02-19-2002, 11:42 AM   #41
Veteran Member
 
Join Date: Mar 2001
Location: Somewhere
Posts: 1,587
Post

Regarding the “is”/“ought” problem: instead of typing out my own response, I’ll quote from MacIntyre’s After Virtue:

Quote:
There are several types of valid argument in which some element may appear in a conclusion which is not present in the premises. A.N. Prior’s counter-example to this alleged principle illustrates its breakdown adequately: from the premise ‘He is a sea-captain’, the conclusion may be validly inferred that ‘He ought to do whatever a sea-captain ought to do’….Both of these arguments are valid because of the special character of a watch and of a farmer. Such concepts are functional concepts; that is to say, we define both ‘watch’ and ‘farmer’ in terms of the purpose or function which a watch or a farmer are characteristically expected to serve.
I edited out long-winded examples of properly drawing the conclusion that, given a certain set of characteristics, you could call a watch or farmer ‘good’ in that it either performed its function well or it didn’t.

Rand’s ethics is a form of Aristotelian ethics: she views man as having a certain function to perform (i.e., to live). Given this function or purpose, she builds her ethic, deriving from the “is” of human nature the “ought” to fulfill this function. Viewing moral statements this way, it’s clear how she evaluates a moral expression and assigns it a truth-value.

[ February 19, 2002: Message edited by: pug846 ]
pug846 is offline  
Old 02-19-2002, 12:35 PM   #42
Veteran Member
 
Join Date: Jul 2000
Location: King George, VA
Posts: 1,400
Post

jlowder:

I wasn’t really “question-begging” because my statement wasn’t intended to be an argument. It was a statement about what I understand it to mean to say that something is a moral principle or moral fact. It seems totally weird to me to say that it is a moral fact that X should do Y, yet even if X knows what it means to say he “should” do Y in the moral sense and also knows that he should do Y in this sense, he might still have no rational reason for doing Y. It seems to me that this is a fundamental difference between “moral” statements and other statements about actions. The statement “Doing Y would be morally right” is just not the same kind of statement as “Doing Y would be unusual”. Unlike other properties that actions might have, “being morally right” contains within itself a rational reason for doing the thing in question. But again, I can’t prove this. All I can say is that if this is not a part of what you mean by saying that an action is “morally right” we have fundamentally different understandings of what it means to say that an action is morally right.

Pompous Bastard:

Quote:
Of course, such principles are themselves prescriptive statements, but I don't think that's why we can derive other prescriptive statements from them. We can derive prescriptive statements (P, here) from descriptive statements (D, here), providing that one of our D's is a value statement...
I don’t see your point. A typical derivation of a specific “moral fact” would be something like this:

1. Moral principle: Under such-and-such conditions any X ought to do Y.
2. Factual premise: The specified conditions hold in the case of Smith right now.
3. Conclusion: Smith ought to do Y now.

Since the conclusion is obviously prescriptive, so must be one of the premises (or so we will assume for now). But Premise 2 is clearly factual, not prescriptive. So the reason we are able to derive a prescriptive conclusion is that Premise 1 (the moral principle) is prescriptive. This doesn’t seem terribly deep or controversial.
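The derivation bd-from-kg sketches is ordinary universal instantiation plus modus ponens. A minimal Python sketch of the schema (not from the thread; the names and the one-line “principle” are purely illustrative):

```python
# The moral principle is a universally quantified conditional: for any
# agent, if the specified conditions hold, the obligation follows.
def ought_to_do_Y(conditions_hold: bool) -> bool:
    """Premise 1: under such-and-such conditions any X ought to do Y."""
    return conditions_hold

# Premise 2 (factual): the specified conditions hold for Smith right now.
smith_conditions_hold = True

# Conclusion: Smith ought to do Y now.
smith_ought_to_do_Y = ought_to_do_Y(smith_conditions_hold)
print(smith_ought_to_do_Y)  # True
```

The prescriptive force of the conclusion is inherited entirely from Premise 1, exactly as the post argues; Premise 2 merely selects the instance.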

Quote:
Of course [the Principle of Induction and Occam’s Razor are prescriptive], in at least one sense: If X desires an increased understanding of the Universe, then X ought to follow principle Y.
I strongly recommend reading Clifford’s excellent essay (“The Ethics of Belief”). It’s mainly directed against religious beliefs, and makes a strong case that believing in Christianity (or almost any other religion) is not only wrongheaded but immoral. But the argument applies to pretty much any irrational belief.

Here’s an updated example. Suppose that you adopt the belief that any black man will rape a white woman if he has the opportunity and thinks he can get away with it. In itself, this belief does no harm, disgusting though it might be. But if you then find yourself on a jury in a case where a black man is accused of raping a white woman, this irrational belief might contribute to convicting an innocent man of a terrible crime. So it is immoral to allow yourself to accept beliefs without sufficient rational justification, because doing so increases the probability of being involved in unjust actions. It doesn’t matter whether you think it’s wrong, and it doesn’t matter what your own personal goals are; it’s immoral, period.

Of course, if you believe that convicting an innocent black man of rape is not “wrong” unless it happens to interfere with your ability to accomplish your own personal goals, I suppose this argument will have no force for you.

But come to think of it, there’s no need to think up hypothetical cases. I need only point out that irrational beliefs have been known to induce people to fly large airplanes into tall buildings. Of course the people who did this were (presumably) accomplishing their personal goals, but these goals were themselves based on irrational beliefs.

This is the basic case for saying that the POI and OR (and other principles involving rational justification of beliefs) are ultimately moral principles. And their validity does not depend on whether following them furthers your personal goals.

[ February 19, 2002: Message edited by: bd-from-kg ]
bd-from-kg is offline  
Old 02-19-2002, 01:18 PM   #43
Veteran Member
 
Join Date: Aug 2000
Location: Indianapolis area
Posts: 3,468
Post

bd,

I don’t see your point...Since the conclusion is obviously prescriptive, so must be one of the premises (or so we will assume for now).

We can't assume that for now because I just demonstrated that prescriptive conclusions can, indeed, be derived from purely descriptive premises, providing that at least one of those premises is a descriptive statement about a value held by some agent.

BTW, I'm using "should" in this discussion roughly as you have used it in the past: X should do something if that something is what X would do, given sufficient information about the situation.

I strongly recommend reading Clifford’s excellent essay.

I've had it open in a separate browser window since you originally suggested it, but I'm trying to get some work done and follow several threads here at the same time, so I haven't actually gotten around to reading beyond the first few paragraphs yet.

Of course, if you believe that convicting an innocent black man of rape is not “wrong” unless it happens to interfere with your ability to accomplish your own personal goals, I suppose this argument will have no force for you.

As you may have surmised by now, I don't believe that anything is "wrong," per se, merely conducive or non-conducive to the ability of individuals to lead happy lives. Most human beings find that living in a social group with principles that govern behavior (ethical principles) contributes to their overall happiness. I find a contractarian model to be useful for theorizing about what ethical principles will work best. It is trivially easy to build a strong contractarian case that arbitrary criminal conviction is undesirable so, in a roundabout way, I agree with you that it could be considered unethical to hold irrational beliefs in some circumstances.

But come to think of it, there’s no need to think up hypothetical cases. I need only point out that irrational beliefs have been known to induce people to fly large airplanes into tall buildings. Of course the people who did this were (presumably) accomplishing their personal goals, but these goals were themselves based on irrational beliefs.

What of the opposing case? Irrational beliefs have also been known to induce people to behave in any number of altruistic manners. Witness the vast number of established religious charities. Should this lead us to believe that irrational beliefs are inherently moral?
Pomp is offline  
Old 02-19-2002, 02:26 PM   #44
Junior Member
 
Join Date: Feb 2002
Posts: 78
Post

bd-from-kg,

You misunderstood me when I said that 'I am curious about what you think the fact that you can't prove the various things you mention shows'. I was curious about what you think the fact that you can't prove these things shows. I don't have the difficulties you suggest that you have.

Quote:
The simple answer is that I think that it shows that they are not propositions.
Unless you have some very special notion of 'proposition' this is just silly. If you do have some special notion of 'proposition' then neither its meaning nor its relevance is clear.


I said
Quote:
2. The validity of modus ponens as a rule of inference can be demonstrated with a simple truth-table.
You said
Quote:
A demonstration and a proof are two different things. I can demonstrate that I have two hands by raising both hands so that you can see them, but that’s not a proof. But let’s suppose that you are claiming that you can prove the validity of modus ponens by using a truth-table. I imagine that this “proof” might look something like this: (i) By definition, if a proposed rule of inference preserves truth for all possible combinations of truth-values of the propositions “going in” to it, it is valid. (ii) [A demonstration that modus ponens preserves truth for all possible combinations of truth-values of the propositions “going in” to it]. (iii) Conclusion: Modus ponens is a valid rule of inference. Formally, this is a perfectly valid argument. Unfortunately, (iii) follows from (i) and (ii) by modus ponens.
If this is a stipulation with respect to what you are going to call a proof, then absolutely nothing of any philosophical interest follows from what you say here.

If it is supposed to employ ordinary notions of 'proof' and 'demonstration' to make some point, it is incoherent. There is no significant distinction between 'a demonstration that modus ponens preserves truth for all possible combinations of truth-values of the propositions “going in” to it' and 'a proof that modus ponens is a valid rule of inference'.
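For what it’s worth, the truth-table demonstration both posters are discussing can be mechanized. A minimal Python sketch (not from the thread): enumerate every assignment of truth-values and check that no row makes both premises of modus ponens true while the conclusion is false.

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    """Material conditional: p -> q is false only when p is true and q is false."""
    return (not p) or q

# Modus ponens is truth-preserving iff no row of the truth table has
# both premises (p -> q, and p) true while the conclusion (q) is false.
counterexamples = [
    (p, q)
    for p, q in product([True, False], repeat=2)
    if implies(p, q) and p and not q
]
print(counterexamples)  # [] -- no row falsifies modus ponens
```

Note that this settles nothing in the philosophical dispute: concluding “therefore modus ponens is valid” from the empty counterexample list is itself an application of modus ponens, which is exactly bd-from-kg’s point about circularity.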

Tom
Tom Piper is offline  
Old 02-19-2002, 02:27 PM   #45
Senior Member
 
Join Date: Jun 2001
Location: Australia
Posts: 759
Post

Quote:
Originally posted by Sivakami S:
<strong>

But why should objective facts/strategies do that ? They just give you information to make better decisions with, thats all.
Science can never give you any ought-to, just what-is and what-if's.

- Sivakami.</strong>
I agree. However, this just makes the ESSs facts. It does not make them moral facts.
David Gould is offline  
Old 02-19-2002, 02:30 PM   #46
Senior Member
 
Join Date: Jun 2001
Location: Australia
Posts: 759
Post

Quote:
Originally posted by jlowder:
<strong>

Again, if the thing in question meets the definition of a moral agent, then it would need to follow the standard to be considered moral. The issue is whether the artificial intelligence has free will. It doesn't seem to make much sense to (morally) judge the 'actions' of a thing that lacks free will.

(snip)</strong>
How do we know that humans have free will? What is free will?

If you cannot prove that I have free will, how can you morally judge my actions?

The only way we can make a judgement on whether someone has free will or not is by their actions.
And this is circular, because we are judging them before knowing whether they have free will or not.

Thus, free will seems irrelevant to the debate.
David Gould is offline  
Old 02-19-2002, 06:43 PM   #47
Regular Member
 
Join Date: Jun 2000
Location: USA
Posts: 274
Post



[ February 19, 2002: Message edited by: jlowder ]
jlowder is offline  
Old 02-19-2002, 06:47 PM   #48
Veteran Member
 
Join Date: Jul 2000
Location: King George, VA
Posts: 1,400
Post

pug846:

Ok, here finally is a response to your post of Feb. 17.

Quote:
We use [the Principle of Induction and Occam’s Razor] because they are (generally) useful to our ends. I assume (?) you would agree with this statement.
No, I don’t agree. We use them because we hope and expect them to be useful to our ends, which is to say that we hope and believe that they are valid principles of action. But we don’t know that they’ll be “useful”, and we have no way of estimating how likely it is that they will be. Our acceptance of principles like the POI and OR cannot be based on experience. They are necessarily prior to any experience, because they determine how we interpret our experiences.

Quote:
And in the same sense, the “Golden Rule” is only useful to our ends and in that respect and in that respect only, it is “true.”
But the “Golden Rule” is not useful to our ends, unless our ends are to maximize the general welfare rather than just our own. The Golden Rule is essentially a statement that the proper ends are altruistic. It would be wonderful if altruistic goals never conflicted with selfish ones, but surely you realize that they often do.

Quote:
I don’t have the thread url saved, but in your discussion of metaphysical assumptions, you argued that certain presuppositions are useful, which is why they are presupposed.
The thread is <a href="http://iidb.org/cgi-bin/ultimatebb.cgi?ubb=get_topic&f=21&t=000384" target="_blank">On the nature of metaphysical axioms</a> as cited earlier.

Actually I argued that they are necessary in a fundamental way – namely, they are necessary for carrying out the “rational project”, which is pretty much the same thing as saying that they are necessary for functioning as a rational agent.

Quote:
A presupposition of God is worthless; therefore, there is no reason to make such an assumption.
True but irrelevant, since my ethical philosophy does not depend on God in any way.

Quote:
... the phrase, “X should do Y,” is meaningless unless it is qualified with more information.
Presumably the “more information” that you have in mind is a statement of X’s goals. But of course the statement “To achieve his goals of A, B, and C, X should do Y” is not a moral statement at all. It is a statement of fact: doing Y will accomplish A, B, and C. For example, saying “You should visit your dying aunt” (meaning that it would be the “right thing to do”) is hardly the same as saying “If you want to be included in her will, you should visit your dying aunt”.

Quote:
I of course deny that the statement of the form “X should do Y” is true for any one person at any one time that X should do Y.
Sorry, I can’t reconstruct your intended meaning here.

Quote:
First, lets look at a statement of the form you suggested: Tim ought to give money to his sister so she won’t starve. I don’t think this is valid for every possible person in Tim’s position...
This paragraph is just a longish way of saying that the only possible meaning of “X should do Y” is “To achieve his goals of A, B, and C, X should do Y”. But you offer no argument to support this statement.

In any case, this is completely implausible on its face. Do you really believe that “Schmidt (a guard at Auschwitz) should not have gassed those Jews” really means “Schmidt would have achieved his goals more successfully by not gassing those Jews”? It would be more plausible to say (as many modern moral philosophers do say) that statements like “X should do Y”, when used in a moral context, don’t mean anything at all (or more precisely, that they don’t express propositions) than to engage in this kind of grotesque reinterpretation.

Quote:
In the case of the holocaust...
Ditto.

Now at this point I haven’t given much of a hint as to what my moral philosophy actually is. This is presented in some detail in the thread <a href="http://iidb.org/cgi-bin/ultimatebb.cgi?ubb=get_topic&f=14&t=000458" target="_blank">How can morality be objective?</a>; in particular in the OP and in my posts of June 10, 2001 03:55 PM and June 21, 2001 12:28 PM. For those who don’t have time to wade through all this, here’s a very short version:

(1) It is an intrinsic aspect of rationality that a rational person would always prefer to do what he would do if he had enough knowledge and understanding (K&U) of all aspects of his choice, including the consequences that each of his possible choices would have.

In fact this is so very basic that it is rare for even a halfway rational person to not prefer (as an abstract proposition) to do what he would do if he had enough K&U.

(2) But what any rational person would do if he had enough K&U is to act altruistically.

A good bit of verbiage in the posts referenced above is devoted to justifying this statement; I don’t want to try to summarize these arguments here.

(3) Statements of the form “X should do Y” are often correctly construed as “X would do Y if he had enough K&U”. And when the “should” is meant in the moral sense, this is practically always part of the meaning. The rest of the meaning is that any rational agent with sufficient K&U would also prefer that X do Y under the given conditions.

(4) Moral principles are properly interpreted as statements to the effect that one would very likely make a particular choice (or more often, would not make a particular choice) if one had enough K&U, and therefore wants (in the sense of (1)) to make that choice (or to not make it in the second case).

Thus an understanding of what it means to say that one should do Y, together with the knowledge that one should do Y, is a rational reason for doing Y, and a perfectly rational person who understands both of these things will choose to do Y.

I’ll probably have to flesh out some or all of these points later, but that’s all I have time for today.

I should point out that not everyone seems to regard this as an objective theory of morality. Frankly, I’m sick of debating this question. I think it is objective because it implies that, if “X should do Y” is true for any one person at any one time, it is true for all persons at all times, and that this fact is not contingent on the accident of what specific people may be living now, or have lived in the past, or will live in the future, or on what any of these people thinks or feels, or has thought or felt, or will think or feel. This is the basic criterion of an objective moral theory. But some feel that something more is needed to make a theory “truly” objective. So be it. I don’t really care.

[ February 19, 2002: Message edited by: bd-from-kg ]
bd-from-kg is offline  
Old 02-19-2002, 07:41 PM   #49
Veteran Member
 
Join Date: Sep 2000
Location: Massachusetts, USA -- Let's Go Red Sox!
Posts: 1,500
Post

Quote:
Originally posted by bd-from-kg:
<strong>pug846:
Finally, of course, if “X should do Y” is objectively true, we might call it an “objective moral fact”.

(5) We’re finally ready to say exactly what is meant by saying that a statement of the form “X should do Y” is an objective moral fact. We mean that, if it is true for any one person at any one time that X should do Y, then it is true for all persons at all times that X should do Y.
[ February 17, 2002: Message edited by: bd-from-kg ]</strong>
Just wanted to make two points:

1) This is something I simply can't stress enough: being "objective" is logically independent of *relativity*. To be objective is to be independent of affective relationships. To be *absolute*, which is what I believe the original poster (and probably you) intended, is not to vary from person to person. We can logically have a subjective, absolute ethic, or an objective, relative ethic.

2) As to your fifth point, specifically the statement quoted above, consider a couple of things:

Firstly, be careful where you place the quantifiers. Contractarians (like myself) aim where Kant did: to find moral constraints that must apply in the absence of other-directed interests; indeed, whatever preferences people may have. But there are two ways to put this. On the strong view (Kant's view), the existential quantifier over constraints comes first: there is a rational constraint on conduct that applies no matter what people's preferences happen to be. On the weak, neo-Hobbesian view, the universal quantifier over preferences comes first: no matter what you prefer, there is some rational constraint on conduct. The weaker one has to be accepted by Hobbesians, because we/they hold an instrumental conception of practical rationality. Both, however, are perfectly absolute.
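The quantifier-order point can be made concrete with a toy model (entirely my own illustration, not from the thread): "some one constraint binds under every preference profile" (exists-forall) is strictly stronger than "under every preference profile some constraint binds" (forall-exists), and the second can hold while the first fails.

```python
# Toy model: does constraint c bind an agent with preference profile p?
# The table entries are arbitrary illustrative assumptions.
binds = {
    ("keep_promises", "selfish"): True,
    ("keep_promises", "altruistic"): False,
    ("dont_kill", "selfish"): False,
    ("dont_kill", "altruistic"): True,
}
constraints = ["keep_promises", "dont_kill"]
preferences = ["selfish", "altruistic"]

# Strong (Kantian) reading: some ONE constraint binds whatever the preferences.
strong = any(all(binds[(c, p)] for p in preferences) for c in constraints)

# Weak (neo-Hobbesian) reading: whatever the preferences, SOME constraint binds.
weak = all(any(binds[(c, p)] for c in constraints) for p in preferences)

print(strong, weak)  # False True -- the weak claim holds while the strong one fails
```

In this model no single constraint covers both profiles, so the strong claim fails; yet every profile is covered by some constraint, so the weak claim holds, which is why the exists-forall form entails the forall-exists form but not conversely.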

Secondly, it's simply not true that an ethic has to cover *everyone*. Consider what Morris calls "moral standing": in every ethic it must be decided just who ought to be afforded moral consideration, the question typically being addressed in terms of nonconventional attributes or properties. Kantians have rationality; classical act-utilitarians, the capacity for pain/pleasure; and so forth. Every ethic will exclude some categories of entity (human, non-human, or even non-organic), but that is certainly no reason to reject it or label it relative.
God Fearing Atheist is offline  
Old 02-19-2002, 08:17 PM   #50
Veteran Member
 
Join Date: Aug 2000
Location: Indianapolis area
Posts: 3,468
Post

Hey, bd, the link you posted for your How Can Morality Be Objective thread actually points to your On the Nature of Metaphysical Axioms thread.

<a href="http://iidb.org/cgi-bin/ultimatebb.cgi?ubb=get_topic&f=14&t=000458" target="_blank">This</a> is the proper link.
Pomp is offline  
 
