FRDB Archives

Freethought & Rationalism Archive

The archives are read only.


FRDB Archives > Archives > IIDB ARCHIVE: 200X-2003, PD 2007 > IIDB Philosophical Forums (PRIOR TO JUN-2003)

 
 
Old 06-24-2002, 02:44 PM   #181
Veteran Member
 
Join Date: Jul 2000
Location: King George, VA
Posts: 1,400

Alonzo Fyfe:

Quote:
I hold that there is no rationality of ends.
I’m not sure what you mean by this. Do you mean that there is no way to demonstrate that some ends are more rational than others? Or do you mean, for example, that it is just as rational for a human being to have the end of suffering eternal, agonizing pain as it is for him to have the end of boundless, eternal joy? If the latter, I find your position incomprehensible.

However, my theory does not even depend on the premise that it is rational for humans to pursue happiness and irrational for them to pursue misery. In fact, the only “end” that I claim is common to all rational beings is the “end” of adhering to the principles of rational action. And as far as I’m concerned at least, this is simply a tautology.

Quote:
bd:
The obvious answer is that all of them help us to function effectively in the real world.

AF:
This type of talk fits quite comfortably within traditional means rationality.
Sure. But adhering to the principles of rationality is, so to speak, a “meta-means”. That is, it is a means for obtaining information that will both help determine what ends to pursue and what means to use to pursue them.

Quote:
... could you provide me with a single example of how K&U can be used to determine the rationality of ends?
That’s not what I said. I said that K&U is used to determine what one’s goals will be. Obviously K&U doesn’t logically entail any particular goal. Such a statement would be meaningless, since the only things that can be entailed (or implied, for that matter) are propositions. Goals are not propositions. One might as well try to demonstrate “Close the door” or “Egad!”

Quote:
An agent who desires that P has a sentence programmed in the brain that says "make P true."
I suspect that few if any desires are “programmed in the brain”. But humans are clearly predisposed to have certain desires once the foundation has been laid in the form of acquiring the relevant K&U. For example, I have a desire for a strawberry milkshake (or if you prefer, the pleasure derived only from drinking a strawberry milkshake). But I couldn’t have had this desire until I at least knew of the existence of such things as strawberries and milk.

Quote:
In other words, ends can be determined quite independently of belief
Not so. Beliefs certainly do not logically entail ends, but they are necessary prerequisites for having them. (There might be some exceptions to this, but it is certainly true for a very large class of ends, including many “final” ends.)

Quote:
bd:
A rational agent presupposes that true beliefs are desirable, because that is of the essence of rationality.

AF:
Below, I will give an example of a case where true beliefs are not desirable.
Yes. That’s why I didn’t say that true beliefs are always desirable. Perhaps I shouldn’t have used the term “presuppose” since presupposition has come to have unfortunate associations. Let’s try “presume”, as in “presumed innocent until proven guilty”. A rational agent presumes that it is desirable (or at least not undesirable) to have a true belief regarding any given proposition until he has clear, convincing evidence to the contrary.

Quote:
bd:
1. Always act in a way that corresponds to your beliefs.
2. Always act in a way that corresponds to your desires.

AF:
The received view is that these are not principles of rationality but are, instead, essential components to intentional action.
In the second case you’re right. Again I was a little careless in stating the principle; I’m not used to having to deal with such nitpicky people.

I should have said “Act in a way that corresponds to (or at least does not conflict with) your strong and stable desires”. One is always acting on the basis of some desire, but a rational person does not ignore his longer-term interests (aka desires) to indulge a transient whim.

As for the first, it’s not clear to me that a person cannot act in a way that is contrary to his beliefs. But let’s assume for the sake of argument that this is true. An essential aspect of rationality is trying to form an accurate representation of reality (or at any rate one that “works” reliably) and making use of it consistently in deciding what to do. It is irrational to ignore one’s best guess as to the nature of “reality” and choose instead to just “go with the flow”; to allow oneself to be carried away by a passing emotion and do something that one’s best guess, based on years of trying to understand how the world works, says will work against one’s strong and stable desires (and perhaps even one’s transient ones).

Quote:
bd:
3. Believe only things that are consistent with your observations, and only things for which you have evidence.

AF:
This principle is vague.
So are most principles of rational action. Is Occam’s Razor an invalid principle because it’s vague?

Quote:
For example, some presently argue that certain core beliefs evolved - that there is a type of innate knowledge.
Unfortunately the track record for such “core beliefs” demonstrates pretty clearly that they cannot be regarded as innate knowledge. But we’re getting pretty far afield here. It was not my intention to conduct a seminar on the nature of rationality, but just to give some examples of “principles of rational action”. If you want to refine them, feel free. But surely you agree that the principle of believing only things for which you have evidence (properly refined or clarified) is a valid principle of rational action? If so, let’s just drop it. This principle plays no special role in my moral theory anyway, any more than the first two. I didn’t bother to be precise because the idea was just to give simple illustrations of what I mean by “principles of rational action”.
bd-from-kg is offline  
Old 06-24-2002, 02:45 PM   #182
Veteran Member
 
Join Date: Jul 2000
Location: King George, VA
Posts: 1,400

Alonzo Fyfe:

Quote:
You are continuing to draw an invalid inference from the subjectivity of the language ... to the subjectivity of the theory itself.
Not so. The fundamental questions that any theory must at least purport to answer in order to qualify as a moral theory at all are “What shall I do? How shall I live?” Your answer is that these questions are meaningless; they amount to asking “what do I desire to desire?” Thus the question of how to define “should” or “morally right” becomes purely linguistic. To you, calling something “morally right” has nothing to do with whether to do it; the fact that your definition yields something that could be called a “moral code” has nothing to do with how to live. Rather, according to your theory, terms like “morally right” or “should” are purely descriptive. That is, you define “X should do Y” to mean that Y satisfies a certain criterion. But it could just as well be used to mean that Y satisfies some other criterion. Which definition to use is simply a matter of convenience – of how best to facilitate communication.

But there seems to be an inconsistency in your position here. Your discussion about hanging Jones makes it apparent that, like practically everyone else, you prefer actions that satisfy the criterion of “rightness” that you chose. In fact, it’s pretty clear that this is why you chose to adopt this particular definition. And naturally someone with different preferences in this regard will adopt a different criterion of “rightness” – one that reflects his preferences. Oddly enough, you deny the obvious reason for this, namely that in saying that someone “should” do something (in the moral sense) you are expressing approval of this choice; you are recommending or advising it; you are giving notice that you are prepared to praise or otherwise reward the agent for doing it.

This doesn’t mesh very well with your insistence that the definition of “should” is purely a matter of linguistic convenience. In reality, since everyone understands that calling an action “right”, or saying that the agent “should” do it, is expressing approval, etc., it would be absurd to use any definition other than the one that corresponds to your preferences – i.e., to the actions you really do approve of.

But your position is that there is no objective reason for having one set of preferences rather than another, except in the sense that one’s preferences have an objective cause. And there is no reason to suppose that one person’s preferences will be like anyone else’s. So the natural, expected state of affairs is for everyone to have a different “moral theory”. In other words, everyone will define “morally right” so that it corresponds to his own preferences. And all of these moral theories are equally “correct” or “valid” insofar as they simply specify what actions the person in question prefers. Of course, some of them will be invalid in the sense that they will involve extraneous assumptions or beliefs about nonexistent entities or properties. But if the things defined as “right” by such a theory really do express that person’s preferences, that part of the theory is just as valid as anyone else’s.

As I have pointed out repeatedly, this is what is generally understood by “subjective morality”: that moral statements are simply expressions of the speaker’s preferences.

Of course, if you insist on denying that moral statements even express personal preferences, and insist that the definition of “morally right” and “should” has nothing to do with the questions of what to do or how to live, then your position is properly called moral nihilism.
bd-from-kg is offline  
Old 06-24-2002, 02:47 PM   #183
Veteran Member
 
Join Date: Jul 2000
Location: King George, VA
Posts: 1,400

The AntiChris:

Quote:
When you say that “The choice that one would approve of if one had sufficient knowledge and understanding is preferable to any other choice”, who is the "one" tasked with dispensing approval here?
Well, of course no one is “tasked” with this job; one simply does approve of certain actions and disapprove of others. But the “one” that I had in mind here (in the context of this “principle of rational action”) is the agent. The idea is that a perfectly rational agent will prefer (and therefore make) a certain choice if he knows that he would approve of it if he had sufficient K&U. Of course, this situation is so rare as to be practically nonexistent; one never knows what action one would prefer if one had sufficient K&U. But it is often the case that there is strong evidence that one would prefer a different choice than the one that one is currently inclined to make if one had enough K&U. And a common function of statements like “You should do Y” is to remind the agent that this is the case. In other cases it is accompanied by an argument to this effect, or by an attempt to enhance the agent’s understanding of the effects that a proposed action will have. This is even more true of statements like “You should not do Y”; the supporting arguments are likely to consist of pointing out (or reminding him of) the harm that it will do to others, and/or trying to produce empathetic understanding of these effects as opposed to the mere abstract knowledge that so-and-so will be hurt.
bd-from-kg is offline  
Old 06-24-2002, 02:59 PM   #184
dk
Veteran Member
 
Join Date: Nov 2001
Location: Denver
Posts: 1,774

Quote:
bd-from-kg: Now even actions that violate these very simple, fundamental principles are not illogical in the sense of violating some axiom or theorem of logic. Nevertheless almost everyone would agree that they are contrary to reason. Thus we see already that reason and logic are two different things; one can be unreasonable (even to the point of insanity) without being illogical.
-Another such principle which almost everyone agrees to in principle, although it is not so universally followed in practice, is that beliefs about the “real world” should be based on evidence. Thus, if Brown believes that the family next door is from Mars, but admits that he has no evidence that they are, we send him off to the funny farm without further ado. Once again there is no axiom or theorem of logic that demands that beliefs be based on evidence.
- Two other principles of this sort that have been discussed before are the Principle of Induction and Occam’s Razor. At this point we will just note that both of these principles are in the same category as the earlier ones: rejecting them is plainly irrational, but they cannot be derived from pure logic; i.e., they are not tautologies.
dk: Your first principle should have been ad populum. This is an a posteriori appeal to consensus. Whereas Ockham’s Razor and Induction are a priori principles of knowledge. I don’t see how you can throw them into the same bushel basket of any sort. Passions unbounded by reason are irrational, and desires are an aspect of passion (along with feelings).
Quote:
bd-from-kg: At this point we need to consider whether there is a common thread that ties together all of these aspects of rationality. The obvious answer is that all of them help us to function effectively in the real world. In particular, many of them are essentially strategies for acquiring true beliefs. A rational agent presupposes that true beliefs are desirable, because that is of the essence of rationality. And the point of having true beliefs, in the final analysis, is to make better choices. However, rationality per se does not involve any presuppositions as to the appropriate ends to which these choices are to be directed. In fact, one of the purposes of acquiring true beliefs (what I often refer to as “knowledge and understanding”, or K&U) is to determine what these goals will be. In the absence of any knowledge about the real world there could be no desires at all, except for the baseline desire (based only on the assumption that the agent in question is rational) to acquire K&U. Thus the question of what a rational agent would do if he had more K&U is highly relevant, since the essence of rationality itself is maximizing the probability of acquiring true beliefs while avoiding false ones.
dk: - How does a functional, efficient murderer differ from a dysfunctional, inefficient murderer? One might ANSWER, “a dysfunctional, inefficient murderer gets caught; a rational murderer gets away with the dirty deed”. Honestly, my question is ill posed (even loaded) as an examination of humanity’s rational nature, but well posed to determine the pretext of rationalized behavior. So the appropriateness of the question hinges on whether people are rational creatures by quantity or by quality. If it’s a matter of quantity, then a rational murderer is superior. If it’s a matter of quality, then a functional, efficient murderer lacks quality.
In the post-modern world, moral relativists assert that a person’s actions are a function of situation and circumstance, not reason, because most people lack the necessary cognitive faculties. The assertion is conflicted: rational qualities impute to people dominion over their actions, which therefore tend toward an end of their own will and reason. In the post-modern secular world, opinion makers, social institutions, and governments predetermine what’s appropriate on the basis of quantitative norms. In the modern world, “what people appropriately believed” was bounded (distinguished) by reason. In contrast, in the post-modern world, “what people appropriately believe” is dictated by opinion makers, subject-matter experts, social institutions, and governments with normative processes. For example, I won’t get a response to this post because I’ve violated the post-modernist philosophical norms.
The question of what a rational agent would do with prescient knowledge is pretext, because any act based upon foreknowledge changes the future. Once changed, the prescient information becomes suspect, because reality doesn’t have a replay button like a VCR. Voilà: a rational person can’t act with foreknowledge of the future; that would be the province of omniscient people (and they don’t exist in a rational world). For example, Martha Stewart sold 4,000 shares of ImClone Systems on an insider tip. It was a great tip; unfortunately, the subsequent scandal has driven her Omnimedia (Stewart’s company) stock down 20%, costing Stewart close to $100 million. Who do Stewart’s shareholders blame? Did the FDA leak their report? Did ImClone leak the leak? Did the broker act on insider information, or did he play an educated guess? What about the press and scientific journals that touted ImClone’s breakthrough against cancer? What about the faulty protocols employed in the clinical studies? What can a rational person know, and who is to blame? It seems pretty clear to me that this idea of moral relativism in the age of misinformation has some serious philosophical flaws where the rubber meets the road. All this [un][mis]directed blame (along with Enron, stock analysts, etc.) erodes the public trust necessary to a vibrant stock market. How does one maximize the probability of acquiring true beliefs about the future? Answer: (a) with accountability and blame, or alternatively (b) by making trust meaningless in a relativistic world, or, to paraphrase Bill Clinton, by saying that it depends upon “what ‘is’ is”. So if the effect of moral relativism is distrust, blame, and unaccountability, then trust, blame, and accountability are the force vectors innate to non-theistic objective morality.
Quote:
bd-from-kg: This gives us another principle of rationality, which I think few people would dispute, and which in any case derives from the essence of what it means to be a rational being. This might be called the “Desirability of Knowledge” principle:
The choice that one would approve of if one had sufficient knowledge and understanding is preferable to any other choice.
A few explanations:
I add “understanding” because merely knowing a lot of facts isn’t enough; it is also necessary to be able to “connect the dots”. Also, some relevant knowledge consists of intimate acquaintance with something rather than mere abstract knowledge of the truth of certain propositions.
By “sufficient” K&U I mean enough so that still more K&U would not result in one’s preferring some other choice.
As for the “would approve of” clause, this needs to be understood properly in order to make the principle strictly valid. If we interpret it as meaning that the preferable choice is always what one would do if one had sufficient K&U, it is easy to think of counterexamples. E.g., take the statement “You should read this mystery novel; I think you’ll like the ending.” Trying to interpret this statement along these lines would yield something like: “if you knew enough about this novel (including the ending) you would choose to read it.” But of course if you knew the ending you might choose not to read it, for that very reason. The problem is that the hypothetical “you” who is aware of all of the relevant facts is not the actual “you” who will be making the decision. And sometimes this affects which choice is the most rational one. Thus the correct understanding of this principle is that a “hypothetical X” who has the “sufficient K&U” that the actual X lacks would have the actual X do Y. Or, to get a little closer to moral language, we can say that this “hypothetical X” would approve of the actual X doing Y. This seems to work in all cases.
dk: - A rational agent is imputed to have dominion over their actions. Still, envy, greed, sloth and deceit impugn a person’s will in lieu of K&U, no matter how rational a person may be.
Quote:
bd-from-kg: Just the same, I will often abuse language slightly by representing the principle as saying that the choice that we would make if we had enough K&U is always the preferable one.
An interesting but subtle point about all of these “principles of rationality” is that none of them can be interpreted meaningfully as expressing propositions; none of them can really be said to be “true” or “false”. (This is why it is impossible to “prove”, or even to give evidence in favor of, any of them.) Each of them is actually a rule or guideline – a piece of advice or instruction to act in a certain way. Thus, they can be restated:
1. Always act in a way that corresponds to your beliefs.
2. Always act in a way that corresponds to your desires.
3. Believe only things that are consistent with your observations, and only things for which you have evidence.
4. Expect that patterns or regularities that hold in the part of the “real world” that you know about also hold in the part that you don’t. (The Principle of Induction, or Regularity).
5. As between explanations that fit the facts that you know, always prefer a clearly simpler one to a more complicated one.
6. Always try to do what you would approve of if you had sufficient knowledge and understanding.
Another way of looking at these principles is as a partial description of what it means to be a rational agent. If someone asks why he “should” adhere to these principles (or prescriptions, if you will) the answer in each case is simply that to do otherwise is irrational. If he then asks why he “should” be rational, there is no answer to this question; one cannot reason with a madman, or with someone who does not accept the compelling authority of Reason itself. I call such things “Principles of rational action”.
Each of them can be stated in the form of “One should...”, or in imperative form: “Act in accordance with the following rule:...” Thus they at least resemble objective moral principles. To be sure, most of them wouldn’t ordinarily be called “moral principles”, but only because this term is generally used in a more restricted sense. In any case they are objectively valid principles of action that any perfectly rational being will find compelling, for the simple reason that rejecting or violating them is ipso facto irrational. And this is just the sort of thing that would have to constitute anything that could meaningfully be called “objective morality”.
dk: - Let us examine envy: “the grief I feel for the good another possesses that surpasses the good I possess.” If I suffer because of my friend’s good, then rationally I should deprive my friend of the good. Whether it’s fame, popularity, talent or a car, the satisfaction of depriving my friend of the good that causes me grief satisfies my desires.
Let’s run envy through principles 1-6.

(1) Yes, I believe I should act to abate my grief by the most efficient and effective means.
(2, 3) I act according to my desires, my observations, and the evidence… to destroy my friend’s good that causes me grief.
(4) I expect my actions to abate the envy I suffer whether by scandal, hook, or crook. Once my friend is deprived of the good I don’t possess, we are normal peers.
(5) Yes, my actions are inductively consistent with my K&U.
(6) Naturally, when someone causes me grief they deserve to suffer in return, and when they suffer for the grief they caused me, my beliefs are vindicated.
Yet, it does seem odd that a rational person should seek retribution against a friend for the good of friendship. How do you explain the discrepancy?

[ June 25, 2002: Message edited by: dk ]
dk is offline  
Old 06-25-2002, 02:14 AM   #185
Veteran Member
 
Join Date: Mar 2002
Location: UK
Posts: 5,932

bd-from-kg

Quote:
But the “one” that I had in mind here (in the context of this principle of rational action) is the agent.
Thanks. I just wanted to be sure you weren't proposing a theoretical impartial observer as the perfectly rational arbiter of moral judgements.

As a complete layman in this subject, I'd find it extremely helpful if you could explain what your theory sets out to achieve (I mean in a practical sense rather than as a description of an ideal). Specifically, I'd be interested to know in what ways you consider your theory superior to others and to what extent, if any, its superiority is dependent on it being accepted as "objective".

Apologies for asking such rudimentary questions, but I want to be sure any comments I make aren't based on misconceptions.

Chris

[ June 25, 2002: Message edited by: The AntiChris ]
The AntiChris is offline  
Old 06-25-2002, 07:42 AM   #186
dk
Veteran Member
 
Join Date: Nov 2001
Location: Denver
Posts: 1,774

Quote:
Koya: Incorrect.
Just three circumstances in which setting off a "doomsday" device--and by that I'll assume total annihilation of humanity on Earth--could be considered "morally right:"
1. It is learned that an Asteroid is on a collision course and the planet will be completely destroyed, but not instantly, so as an act of mercy, the nation's leaders--with popular support--mutually agree to detonate their respective "doomsday" devices, which end human existence in a flash, rather than over days/weeks/months.
2. By one of our space probes, it is discovered that we here on Earth are a lost tribe of the Galaxy, which is full of humans, the majority of which travel in one gigantic spaceship, being the explorers we all are. We find all of this out because they have found our space probe and reprogrammed it with necessary information to send back to us in preparation for their arrival in three weeks. (snip) Thus, the only hope for the continued survival of the entire human race is for these scientists to take it upon themselves to blow up the Earth before the ship can arrive.
3. It is the year 2158 and humanity has (snip…snip) Only a select group can be chosen for survival on the enormous space ark, of course, (snip) particular despicable humans are not allowed to either destroy yet another planet in the manner they did to this one or be the final legacy of the human race.

There you go. Three scenarios in which triggering a doomsday device is the morally "right" thing to do.
After all, it's objectively right, right? Therefore, there is no further discussion and I am absolutely correct and these scenarios prove that there is no other morally right action, yes?
dk: Not so fast. Sorry I confused you and 99.
On hypotheticals 1 and 2: There exists any number (perhaps an infinite number) of natural catastrophic phenomena that could destroy all human beings and the planet Earth. However imminent the catastrophe might be is relative to the age of the universe, suns, supernovas, etc. All epochal “first-time historical events” render inductive methods void, and hence render scientific certainty impossible. To set off a doomsday device on speculation is irrational, stupid, ignorant, and objectively wrong (in a non-theistic objective way).
On hypothetical 3: You conclude with, “Only a select group can be chosen for survival”. So this scenario doesn’t even contemplate a doomsday device, but the survival of a select, self-ordained group of powerful people.

[ June 25, 2002: Message edited by: dk ]
dk is offline  
Old 06-25-2002, 08:46 AM   #187
Veteran Member
 
Join Date: Sep 2000
Location: Yes, I have dyslexia. Sue me.
Posts: 6,508

Quote:
Originally posted by dk:
Not so fast. Sorry I confused you and 99.
No problem, but I doubt he was delighted, considering my reputation.

Quote:
MORE: On hypotheticals 1 and 2: There exists any number (perhaps an infinite number) of natural catastrophic phenomena that could destroy all human beings and the planet Earth. However imminent the catastrophe might be is relative to the age of the universe, suns, supernovas, etc. All epochal “first-time historical events” render inductive methods void, and hence render scientific certainty impossible. To set off a doomsday device on speculation is irrational, stupid, ignorant, and objectively wrong (in a non-theistic objective way).
So, you disagree that these two scenarios would be examples of morally good uses of the doomsday device: since people have been proved to be wrong in the past, they may be wrong in these situations.

Thus, according to your qualifications of what is or is not the "morally good" thing to do for you personally, you simply choose inaction: you would deprive humans of a quick and painless death in favor of a devastating and lingering death when the asteroid destroys the Earth, and allow the infection and eventual destruction of the entire human race, because in your opinion, based upon historical precedent, the chance that these events would not happen as speculated far outweighs the certainty involved in setting off the doomsday device.

Thus, for you, "certainty" defaults to "inaction" on the outside chance of some form of miraculous event; aka, let "nature" take its course.

That's an interesting subjective standard you've set for yourself, except that it isn't logically consistent, considering that the doomsday device would equally be an uncertainty, and that it isn't only the result of an action that can be considered "morally good" but also the intent of the action, which was also my point.

I take it then that you consider the practice of suicide in Asian culture to also be "immoral" since this would be an example of an individual taking direct action in regard to their death, rather than the inaction you herald?

Quote:
MORE: On hypothetical 3. You conclude with, “Only a select group can be chosen for survival”. So this scenario doesn’t even contemplate a doomsday device, but the survival of a select self ordained group of powerful people.
No, it involves the complete destruction of the last remnants of the human race by the slaves of that "select group."

The slaves would be killed as well when the doomsday device is triggered. Perhaps that wasn't clear?

[ June 25, 2002: Message edited by: Koyaanisqatsi ]
Koyaanisqatsi is offline  
Old 06-25-2002, 12:33 PM   #188
Veteran Member
 
Join Date: Jul 2000
Location: King George, VA
Posts: 1,400

dk:

Quote:
Your first principle should have been ad populum.
The only area where I appeal to what “most people think” is in regard to what things are aspects or components of rationality. I give some theoretical reasons for so regarding them in the thread <a href="http://iidb.org/cgi-bin/ultimatebb.cgi?ubb=get_topic&f=21&t=000384" target="_blank">On the nature of metaphysical axioms</a> cited earlier. But I think that almost everyone regards this as self-evident, so for most people no argument is really needed.

Quote:
Whereas Ockham’s Razor and Induction are a priori principles of knowledge.
You say they’re “a priori principles of knowledge”; I say they’re valid principles of rational action directed at acquiring true beliefs. Are we really saying different things?

Quote:
I don’t see how you can throw them into the same bushel basket...
What bushel basket would that be?

Quote:
A rational agent is imputed to have dominion over their actions. Still, envy, greed, sloth and deceit impugn a person’s will in lieu of K&U.
Yes, real people are far from being perfectly rational. Your point being?

Quote:
Let us examine envy... If I suffer because of my friend’s good, then rationally I should deprive my friend of the good.
Non sequitur. This would follow only if you could show that (1) you would desire to deprive your friend of the good in question regardless of how much knowledge and understanding you had, and (2) it is rational to act purely in one’s own self-interest rather than taking others’ interests into account as well. I’m quite confident that you can’t show either of these things, much less both.

Quote:
Yet, it does seem odd that a rational person should seek retribution against a friend for the good the friend possesses.
A fully rational person will not seek retribution against a friend for the good he possesses. Stay tuned.

[ June 25, 2002: Message edited by: bd-from-kg ]
bd-from-kg is offline  
Old 06-25-2002, 01:32 PM   #189
Veteran Member
 
Join Date: Mar 2002
Location: 920B Milo Circle Lafayette, CO
Posts: 3,515
Post

bd-from-kg:

What would you say to somebody who posted that "We must somehow decide between Einstein's theory of relativity written in German and Einstein's theory of relativity written in French, and unless a person can come up with an objective reason to prefer one theory over the other, they are a subjectivist concerning the laws of physics?"

We could, of course, define "subjectivist" as one who believes that there is no objective reason to prefer Einstein in German over Einstein in French. In which case no reasonable person can be an objectivist. But this is not saying anything significant.

You have stated that your conditions for an objective theory include that there be some objective grounds for preferring this particular moral theory over other possible ones.

My response is that if the two theories are in fact different, then I can give you objective grounds for preferring one over the other. And if they are not -- if they are the same theory in two languages (e.g., Einstein in German vs. Einstein in French) -- then we are not in any relevant way talking about different theories.

Now, let's apply this to your remarks to see if you are saying anything that I can provide an objective reason to reject, or if we are merely debating the relative merits of Einstein in German vs. Einstein in French.

As a first test, one can begin by looking at whether the argument is about the thing itself, or about what it is called. If it is an argument over whether to call this a 'moral theory' or to call that 'objective', then the argument is fit for a debate like that between Einstein in German vs Einstein in French. Nothing of consequence is at stake in accepting or rejecting the conclusion.

Your last posting contained three criticisms. The first is that what I call a moral theory has nothing to do with answering the question "What shall I do?" Whether this is true depends on what the question "What shall I do?" means. If, at this point, we get into a dispute over the proper meaning of "What shall I do?" then we are debating Einstein in German vs. Einstein in French. If one is actually debating the merits of different theories, the task then becomes to look at what different theories say about the implications of "What shall I do" in various languages (i.e., under different meanings).

"What shall(1) I do?" means "What practical-shall I do?" The theory does have an answer to this question -- it is that action which maximizes fulfillment of one's desires -- or that action that one would take, given one's desires, if all of one's relevant beliefs were true. This is what one would do "with sufficient knowledge and understanding," which applies the principles of means-rationality to evaluate an agent relative to the agent's own desires.

"What shall(2) I do?" means "What all-things-considered-shall I do?" The theory does have an answer to this question -- it is that action which maximizes fulfillment of all desires regardless of who has them -- or that action that one would take, given one's desires, if all of one's relevant beliefs were true and all of the desires that exist were one's own. This is what one would do "with sufficient knowledge and understanding and all desires," which applies the principles of means-rationality to evaluate an agent relative to all desires that exist.

"What shall(3) I do?" means "What would God want me to do?" Because there is no God, it is true that I have no advice to offer the person who is seeking to answer this question. Any statement that option A or option B is more pleasing to God is false.

"What shall(4) I do?" means "What desire-independent-shall I do?" Because there is no desire-independent reason for action, it is true that I have nothing to say to the person who is seeking to answer the question. Any statement to the effect that option A or option B has more desire-independent value than the other is false.

"What shall(5) I do?" means "What ends-as-ends-rational-shall I do?" Because there is no ends-as-ends rationality, I have nothing to say to the person who asks "What shall(5) I do?", except to say that anybody who claims that option A or option B is more ends-as-ends rational than the other is making a false claim.

"What shall(6) I do?" means "What would please members of the KKK?" For any two options A and B, there will be objectively true cases in which option A will be more pleasing to members of the KKK than option B.

A "different theory" is a theory that takes any one of these translations and provides a different answer than I do here. A theory that says that my answer to shall(3) as defined is incorrect, because there is a God and God would be more pleased with option A than option B, would be a competing theory. Against such a competing theory, I can offer an objective reason for rejecting that theory. In other words, if the debate is between shall(x) and shall(x') where shall(x) ≠ shall(x'), then we are talking about different theories. Here, if a person cannot come up with an objective reason to prefer one over the other, this is significant.

However, if the comparison at issue is (for example) shall(2) vs. shall(6), then this is logically equivalent to comparing Einstein in German to Einstein in French. And the fact that a theory cannot provide an objective reason for elevating shall(2) above shall(6) is no more a reason for calling that theory subjective than the lack of an objective reason for preferring Einstein in German over Einstein in French means that Einstein's theory of relativity is subjective.

I may, in fact, express a preference for Einstein in German. Perhaps I speak German rather than French. I may even go so far as to recommend Einstein in German to others on the grounds that the French language has ambiguities that make it more difficult to understand the theory. Again, the fact that I pick Einstein in German, do so because of my own personal preferences, and recommend it to others, still provides no reason to argue that Einstein's theory of relativity is subjective.

Perhaps the debate over objective ethics is precisely this type of debate -- like having one group of people saying that Einstein in German is objectively better than Einstein in French. Objectivists are saying that there is an objective reason for preferring the theory in one language over the same theory in another, while subjectivists are saying this is not the case. If this accurately models the ethics debate, I would certainly have to side with those who claim that there is no theory-dependent reason to prefer the theory in German over the theory in French. In fact, it would be absurd even to ask for a theory-dependent reason for preferring one over the other, given that we are talking about one theory.

This actually describes the situation in which we find ourselves. I have asserted that I will use Einstein's theory in German. And you have taken this to mean, on my part, "I hereby assert that there is an objective theory-dependent reason for preferring Einstein's theory in German over Einstein's theory in French." You have challenged me to provide this objective reason. I have failed. Thus, you triumphantly assert that I should call myself a subjectivist. Which is true -- I am a subjectivist about language. But you have gone further and inferred, from the fact that I can give no theory-dependent reason for preferring Einstein's theory in German, that Einstein's theory itself is subjective, and this does not follow. The theory is objective; only my choice of language is subjective.

When I say "moral shall = shall(2)", I am simply selecting a language in which to express the theory. If somebody else were to say "moral shall = shall(1)", they are picking a different language. And if any opponent were to say to me, "Let's use moral shall = shall(1)", I would have no problem with this -- I would simply need to translate my theory into this new language, and we can go on from there as if nothing has happened. The content of the theory is not affected by the language used to express the theory.

In other words, it is a mistake to treat the same theory expressed in a different language as a different theory, and to insist that either we must be able to offer an objective reason for selecting among them or the theory itself is subjective. The claim that we are offering two different theories is true only if, somewhere within the theory, for some n, something is declared true of shall(n) in theory 1 and not true of shall(n) in theory 2. If this happens, then one must be able to provide an objective reason for preferring the shall(n) of theory 1 over the shall(n) of theory 2.

[ June 25, 2002: Message edited by: Alonzo Fyfe ]
Alonzo Fyfe is offline  
Old 06-26-2002, 09:31 PM   #190
dk
Veteran Member
 
Join Date: Nov 2001
Location: Denver
Posts: 1,774
Post

Quote:
dk: Your first principle should have been ad populum.
bd-from-kg: The only area where I appeal to what “most people think” is in regard to what things are aspects or components of rationality. I give some theoretical reasons for so regarding them in the thread On the nature of metaphysical axioms cited earlier. But I think that almost everyone regards this as self-evident, so for most people no argument is really needed.
-Whereas Ockham’s Razor and Induction are priori principles of knowledge.
-You say they’re “priori principles of knowledge”; I say they’re valid principles of rational action directed at acquiring true beliefs. Are we really saying different things?
dk: Ok, here’s my problem
How can a person go about the process of acquiring appropriate beliefs, if the rational nature of a person is contingent upon belief? I speculate that the quality of being rational resonates in the judgment of the active intellect; it is not an act of acquisition. A person can’t rationally murder another. A murderer must dehumanize the victim to mitigate the offensive act with a rationalization. To commit justifiable homicide, a rational person mitigates guilt with intent, such that the victim’s death was a secondary consequence.
Quote:
dk: I don’t see how you can throw them into the same bushel basket...
A rational agent is imputed to have dominion over their actions. Still, envy, greed, sloth and deceit impugn a person’s will in lieu of K&U.
bd-from-kg: Yes, real people are far from being perfectly rational. Your point being?
dk: According to Ockham’s Razor, greater quantities of K&U often obscure the rational priori with chaff. You’ve differentiated the “real you (present)” from a “hypothetical you (prescient or futuristic)” to construe that sufficient K&U of the future should tell a rational person how to act. I challenge the proposition.
In Oedipus, the tragedy could have been averted if only Oedipus’s mother had discarded the Oracle of Delphi’s foreknowledge instead of her son. What of Shakespeare’s Hamlet: did the ghostly revelation about his father’s murder change or determine his fate? In Romeo and Juliet, were the star-crossed lovers fated by a waylaid message, or betrayed by uncontrolled passion? If a person acts upon foreknowledge to change the future, then they render the foreknowledge false. The point is that a person can’t have sufficient K&U to make a moral decision, except as a priori.
Quote:
dk: Let us examine envy... If I suffer because of my friend’s good, then rationally I should deprive my friend of the good.
bd-from-kg: Non sequitur. This would follow only if you could show that (1) you would desire to deprive your friend of the good in question regardless of how much knowledge and understanding you had, and (2) it is rational to act purely in one’s own self-interest rather than taking others’ interests into account as well. I’m quite confident that you can’t show either of these things, much less both.
dk: - I might respond that your self-evident proposition was likewise a non sequitur, but I won’t. Intelligibles (K&U) are the substance of symbols and forms, so to affirm knowledge on the basis of reality makes what I know about reality unreal. I submit we must posit (fix and affirm) the K&U of reality on the basis of being; then, being rational creatures, people can proceed to explain and understand how reality is ordered, structured, etc.... A baseball player doesn’t set about knowing how to catch a fly ball by studying projectile force vectors or quantum states. A ballplayer knows how to catch a fly ball by being a ballplayer. A person knows about morality from the quality of being rational. Was Descartes irrational before he doubted? No, his doubt was fixed by his innate rational nature. A rational mind must assent to certain principles, if they are known and properly understood. Why? Because, being rational, the human mind is determined to truth, i.e. predisposed to know. Properly known and understood, priori knowledge transmits form absent information, so to deny the a priori renders knowledge unintelligible, which of course is absurd. Either it is the truth that “being exists” as the highest priori, or realism is not realism. Note: (Paragraph updated 6/27)
- I do understand I’ve circumnavigated the question, “How do we know, we know reality?”
Quote:
dk: Yet, it does seem odd that a rational person should seek retribution against a friend for the good the friend possesses.
bd-from-kg: A fully rational person will not seek retribution against a friend for the good he possesses. Stay tuned.
dk: - I kinda agree with your conclusion, except that “fully” is quantitative and rationality is qualitative.

[ June 27, 2002: Message edited by: dk ]
dk is offline  
 
