FRDB Archives

Freethought & Rationalism Archive

The archives are read only.


Old 12-21-2002, 01:05 PM   #121
Veteran Member
 
Join Date: Jul 2000
Location: King George, VA
Posts: 1,400
Post

Darkblade:

As I said before, this has all been discussed at great length in earlier posts on this thread.

Anyway, if you want to be taken seriously, you have to respond to your opponents' arguments, not just quote the conclusion and reiterate that you disagree with it.
bd-from-kg is offline  
Old 12-21-2002, 01:07 PM   #122
Veteran Member
 
Join Date: Jul 2000
Location: King George, VA
Posts: 1,400
Post

OK, it’s time to begin my second argument regarding the rationality of altruism.

For the sake of conciseness, I will refer to the view that only acting in one’s own self-interest is rational as “egoism”. (This term is often used for the related doctrine that only acting in one’s self-interest is morally right, but I am not using it in this sense here; in fact, I’m not going to refer to moral concepts at all.)

In this post I will not be trying to show that altruism is rational, but only that egoism – the doctrine that only acting in one’s self-interest is rational – is untenable.

The heart of this argument is an analysis of the concept of “self”. I expressed the basic argument as follows some time ago on another thread:

Quote:
When we drop a stone into the water, it makes a series of ripples. We can follow each ripple as it spreads outward until it disappears. But what is a ripple, anyway?

The water that “constitutes” a ripple is constantly changing. The water itself does not move outward, but up and down. So a “ripple” is really just a pattern, not a “physical thing”. It exists only in our minds, which is to say that it is a “mental construct”.

But human beings are just like that ripple. The matter that “constitutes” us is constantly changing; all that persists is a pattern – and even that pattern is constantly changing. In a very real sense, “I” do not exist as an entity that persists from birth to death. Or rather, “I” am a pattern, and a constantly changing one at that. And if “I” am identified with my “mind”, this is even clearer: the pattern changes much more rapidly than the mere physical constituents of my body.

Now when one pattern changes continuously to another, it is arbitrary whether we identify the pattern at time A with the pattern at time B. Think of a kaleidoscope: after a full turn it seems absurd to say that the pattern we observe is the “same” as the one we saw initially. If it is convenient for our purposes (i.e., for conceptualizing what is going on) we say that they are the “same” pattern, but one that has changed; otherwise we say that they are “different patterns”. Neither statement is “objectively true”.

In fact, from the point of view of strict logic, there is an objectively right answer, and that is that A and B are always different patterns. The only thing that relates them is that one can trace a continuous sequence of “in-between” patterns, all of which existed for an instant, with the earlier ones more closely resembling A and the later ones B.

Thus in identifying the Joe Smith of 6 PM today with the Joe Smith of 11 AM tomorrow we are not recognizing an objective truth but engaging in a convenient fiction: creating a mental construct that helps us to conceptualize reality. When today’s Joe Smith thinks of tomorrow’s Joe Smith as “himself” he is merely organizing his perceptions in a way that helps him to make sense of things: basically, to predict future perceptions more accurately. Tomorrow’s Joe Smith is, in a strict objective sense, no more “him” than I am. And therefore it is irrational for him to prefer the good of this Joe Smith of tomorrow over my good, other things being equal.
Now this is fine as far as it goes, but it’s a bit abstract. Let’s try to put some bones on it by considering Red, who is thinking about whether he should sacrifice some present happiness in order to be better off forty years from now. When Red talks about “himself” forty years from now, what exactly is he talking about?

Certainly he doesn’t mean that there will be someone identical to his present self in forty years, and that this is who he’s referring to. On the contrary, even if it happens that there will be such a person forty years from now, he is certainly not referring to him. He is referring to someone – call him Future Red, or Fred for short - who is forty years older than his present self, and who, by virtue of this fact alone, will be radically different from his present self in many ways. In addition, Fred may be different in a number of other ways; for example, he might be far more mature, more knowledgeable, more mellow. He may have a wife, a family, and any number of other interests and obligations that Red doesn’t have. On the other hand, he might be a hopeless alcoholic, a petty thief, in and out of jail. He might be a snob, a Scrooge, or a devout fundamentalist Baptist. In short, he might be pretty much anyone at all. But to make my point as sharply as possible, let’s suppose that Fred opposes everything that Red stands for. Suppose that Fred is a fanatical Moslem, on a mission to murder as many Americans as possible and cripple our economy to the point where millions of others die of starvation, while Red is a deeply patriotic American who is profoundly horrified at the thought of doing such things.

Why would Red be interested in the welfare of this stranger Fred? If they were to meet, they would hate each other’s guts. Are we supposed to believe that the only rational policy for Red is to look after Fred’s interests – to arrange for him to have lots of money and influence and to see that he lives as long and happily as possible – so that Fred will be enabled to bring about a future state of affairs that Red would find horrifying?

The egoist says “Yes, that’s the only rational thing to do, because there’s a continuous chain of entities stretching from the present to forty years in the future, each one very similar to the one preceding and the one following, and Red is at one end of this chain while Fred is at the other.”

Now certainly this is an interesting fact, but it’s not clear why, if Red is at all rational, it would have any effect on his attitude toward Fred. To make the point even more sharply, suppose that another person, Red-2, will be living at the same time, who is far more similar to Red than Fred is: he has the same interests, the same values, the same attitudes and lifestyle, etc. If Red were to meet Red-2, they would get along famously, because they would understand one another almost perfectly. And Red has the power to arrange things so that Red-2 will attain a position of great power and influence, but only if he ignores Fred’s interests entirely. What rational reason would there be for Red not to do so? What rational reason would he have for favoring Fred, whom he regards as virtually the embodiment of evil, over Red-2, who stands (in Red’s mind at least) for all that is good?

The point here is that Fred is not Red; Fred is the person whom Red will become. These are not the same thing. When the egoist says that the only rational policy is to look after “your own” interests, what he really means is that the only rational policy is to look after the interests of the people whom you will become to the exclusion of everyone else. And there seems to be absolutely no way to justify this claim: why are these future entities necessarily worthy of your concern, and why are they the only ones worthy of it?

Here’s a “thought experiment” that may serve to illustrate the point. Imagine a Star-Trek-type “transporter” which destroys a person’s body at the “sending” end but in the process builds up a database from which it can be “reconstructed” at the “receiving” end. Now suppose that Smith gets into the transporter chamber. Seconds later his original body (call it Smith-1) has been annihilated, but a body (call it Smith-2) of someone who thinks of himself as the same person is created at the other end. Now, has Smith actually been “transported” to the new location, or is the “true” state of affairs that the original Smith is dead and has been replaced by another “Smith”? Or in other words, are Smith-1 and Smith-2 “really” different bodies of the same person, or bodies of two different people?

It seems clear to me that the correct answer is that this is a meaningless question. If you want to think of Smith-2 as the “same person” as Smith-1, fine – no one can say you nay. But if you prefer to think of Smith-2 as a “different person”, this is also fine. In terms of facts there is nothing to choose between these views; they are simply different ways of conceptualizing the same set of facts.

But for the egoist this poses a dilemma. For example, suppose that the original Smith-1 is in a position to sell Smith-2 into slavery in return for an incredibly pleasant subjective experience right now. (We’ll assume that Smith-1 has no interest in Smith-2’s welfare; whether he should have is another question.) Would it be rational for him to do so? According to the egoist, the answer is that it would, if Smith-1 is a “different” person from Smith-2, but that it wouldn’t if they are the “same” person. But as we have seen, the answer to the question of whether Smith-1 and Smith-2 are the same or different people is a matter of interpretation, not a matter of fact. So (according to this account) the answer to the question of what it would be rational to do depends, not on what the situation is, but on how one chooses to conceptualize it. One consequence of this is that, if I consider Smith-1 to be a different person from Smith-2, for me it is rational for Smith-1 to sell Smith-2 into slavery, while if you consider them to be the “same” person, for you it is irrational for Smith-1 to do so.

Thus, on this account, whether an action is rational or irrational is subjective. Note that this is not a matter of our having different opinions about whether Smith-1’s act would be rational; on this account it would be really true for me that it would be rational for Smith-1 to sell Smith-2 into slavery and really true for you that it would be irrational. Thus if I say to you that it would be rational and you reply that it would be irrational, we would not be disagreeing.

Now ordinarily when we say that it is “rational” for someone to believe something or to do something, we mean something objective; i.e., if it is true for me that Hunter’s belief that the earth is only 6,000 years old is irrational, it is also true for you, and most importantly, it is true for Hunter. It simply cannot be “true for me” that Hunter’s belief is irrational and “true for you” that it is rational. In other words, when I say that it’s irrational and you say that it’s rational, we are disagreeing.

To be sure, as often happens, there are marginal cases, because there are degrees of irrationality. In the same way, if Jim is 5’11’’, we may disagree as to whether he is “tall”, but his “tallness” is an objective property; all that we’re disagreeing about in this case is whether his height is sufficient to warrant using the term “tall”. But if John is three feet tall and you insist nevertheless that he’s tall, you’re simply wrong; this is not a marginal case. In the case of Smith, we do not have a marginal case. Either Smith-2 is the same person as Smith-1 or he isn’t; he’s not 95% the same person and 5% a different person. Thus our disagreement about whether it is rational for Smith-1 to sell Smith-2 into slavery is not a disagreement over a marginal case; I say that it is completely, unequivocally rational and you say that it is completely, unequivocally irrational. This certainly appears to be a real disagreement. And yet, according to the egoist, we aren’t disagreeing at all: both of us are right. And when two people both say true things, they obviously cannot be disagreeing; truth does not conflict with truth.

So we are forced to conclude that when egoists say that it is rational to act solely in one’s self-interest, they are using the word “rational” in a subjective, and thus a nonstandard, way. Whatever it is that they mean by saying this, they do not mean what they would seem at first sight to be saying, and it is incumbent on them to explain just what it is that they do mean.

I think that this is enough to show that, at the least, the egoist position, far from being self-evident, is extremely problematic and fraught with difficulties. It’s possible that it can be patched up, but at the very least, anyone who wishes to take this position has the burden of justifying it, rather than taking it for granted that any rational person will agree with it. And I think that once it’s clear that this position is not self-evidently correct but is actually intuitively quite implausible, and therefore needs to be justified, it is also clear that there is no way to justify it, and that there is no reason for any rational person to agree with it.
bd-from-kg is offline  
Old 12-24-2002, 02:59 AM   #123
Regular Member
 
Join Date: Oct 2002
Location: I am both omnipresent AND ubiquitous.
Posts: 130
Post

Smith-2 is not the same person as Smith-1 any more than Hydrogen-Atom-2 is the same atom as Hydrogen-Atom-1. The very fact that two Smiths could simultaneously exist (via use of a similar scenario with a replicator instead of a transporter (although I still don’t know how the Smiths would be perfectly identical unless they existed in the exact same space at the exact same time)) without time travel having been used (although, of course, the Smiths would still not be identical) proves that they must not be the same person. Therefore, your conclusion is completely unfounded.

Also, you seem to make the assumption that Smith-1 should sell Smith-2 into slavery if it would benefit him. However, it would probably be (spectacularly) uncommon that Smith would, at the drop of a hat, willingly throw away his anti-slavery (and all other applicable) sentiments just so he could experience one last jolt of happiness (his sentiments would otherwise prevent him from having enough happiness). Of course, if you assume that Smith-1 is a sociopath, all bets are off. But we could all invent any number of scenarios that exist only in a vacuum, having no application for real life, where (general) morality and self-interest tend to coincide.

Why should Red care about Fred and not Red-2? Because Red will become Fred. You seem to think that Fred has no relation to Red. Red becomes Fred; he will feel things as Fred. That’s why Fred is important to Red. A Fred so vastly different (a fanatical Moslem as opposed to a deeply patriotic American) is not very likely to exist, and I seriously doubt that Red would assume that that Fred would exist as opposed to a deeply patriotic Fred, especially because it is so unlikely. In any case, Red, not knowing that he would become anti-Reddish in the future, would want to benefit Fred, as he would feel things as Fred. It’s not as if we all die every attosecond. (Or is that your claim?)

The reason I decided not to attack all of your ideas is a combination of the fact that no one else’s arguments were persuasive to you (which gave me the idea that you probably wouldn’t change your mind because of any arguments that I made) and my own inherent pessimism. I do not take pleasure in debating without purpose, and I do not gain conviction for my own beliefs from debate, as I have already thought things through by the time I have a belief. I do like to read the debates here at Infidels, however, so I do so.

What I was trying to do (although I should have known it would be unsuccessful) was show that your claim that not all motives are self-interested – based, as you put it, on the fact that people value things other than happiness – was unfounded, because those cases can be explained in terms of happiness together with the fallibility of the system, which is due, at least in part, to the fact that people’s minds do not spontaneously change to accommodate their environment (we are not that adaptable). (I know I didn’t really express all of that in my other post; I’m just elaborating.)

And I believe that it is simply unnecessary to believe in the existence of altruism, because no events, to my knowledge, need it to be true in order to be explained. Altruism is like the supernatural of the mind: sure, it could be true, but there is no reason to believe in it (I do not believe that you have demonstrated any reasons), and it is simpler, and therefore more reasonable by Occam’s Razor, not to. Furthermore, I find it to be irrational. Why do something that would never give you a reward? It is clear that altruism is not, as you put it, such an intrinsic good when people do not know the others as closely as friends and family, or don’t like them, or won’t be touched by any consequences of being altruistic. I would find a version of altruism acceptable – a communism of sorts, where people don’t hurt each other but treat each other equally, because they would then not be as likely to be hurt or treated unequally as they would if everyone were murdering and pillaging each other. But altruism without a personal motive must not be rational. Would you support people randomly stabbing themselves if it were an “intrinsic good”, even though it would not have a personal motive?

It is also interesting to me that the brain operates with the use of dopamine. The fact that altruism always seems to coincide with self-interest is highly suspicious to me. Perhaps it made more sense for people to have evolved wanting to do things because they benefited themselves via happiness (as opposed to having less adaptable instincts)? Even if altruism existed, I would find it rational to override it whenever it did not ultimately benefit you. So I would attempt to replace the irrational altruism with a rational one. (Of course, I think that this is what has happened in the first place.) Anyway, I ended my last post with an attempt to agree to disagree, as I was, and still am, pessimistic about convincing you of my position.

[ December 24, 2002: Message edited by: Darkblade ]
Darkblade is offline  
Old 12-31-2002, 12:44 PM   #124
Veteran Member
 
Join Date: Jul 2000
Location: King George, VA
Posts: 1,400
Default

Darkblade:

OK, I’ll make one more post on the subject of altruism in general – whether it’s possible and whether it’s rational. (For the moment I’ll set aside my own arguments to the effect that not only is altruism rational, but acting in a non-altruistic way is irrational.) Along the way we’ll unavoidably touch on the question of whether all acts are motivated by a desire on the part of the agent to maximize (in some sense) his own happiness. Please note that I intend to start a new thread on this latter question soon in the Philosophy forum, which is where it belongs.

1. Is altruism possible?

Claims that there are no altruistic acts whatever – that altruism is impossible - make me wonder what the speaker means by altruism; what he would count as altruism. For example, let’s consider the following simple scenarios.

Scenario 1:
Smith sees a little girl drowning in a pond. He judges that he could probably rescue her with little danger, but his clothes will be ruined. He contemplates the future state of affairs in which the girl has drowned and compares it to the one in which he has rescued her, but has ruined his clothes. He finds that he prefers the latter to the former because he doesn’t like the idea of the little girl drowning. So he acts accordingly: he saves the girl.

Scenario 2:
Jones sees a little girl drowning in a pond. He judges that he could probably rescue her with little danger, but his clothes will be ruined. He contemplates the future state of affairs in which the girl has drowned and compares it to the one in which he has rescued her, but has ruined his clothes. He finds that he prefers the former to the latter because he doesn’t like the idea of ruining his clothes. So he acts accordingly: he saves his clothes.

Now it is absolutely clear to anyone who is not a lunatic that:

(1) Scenarios like #1 actually occur. In fact, they are pretty frequent. The vast majority of people, presented with the choice described here, would make the same choice as Smith: they would choose to save the girl rather than save their clothes.

(2) There is an important difference between the choice that Smith made and the one that Jones made. People find it useful to have words to mark important differences. In this case the difference is marked by calling Smith’s choice altruistic and Jones’s selfish.

This concludes our discussion of whether altruism is possible. This example (or any of a million others) proves definitively that it is. People do sometimes act like Smith rather than Jones; there is an important difference between the choices Smith and Jones make; and the function of the term “altruistic” is to point to this difference.

2. But does true altruism exist?

At this point the “no altruism” advocate will object, “But you’re leaving out something vital! Smith isn’t really acting altruistically; his real motive is his own happiness. So at bottom there’s no real difference between Smith and Jones after all; each of them is just maximizing his happiness. That’s all that anyone ever does: all of us always act to maximize our own happiness. And therefore no one ever really acts altruistically.”

Let’s suppose that it’s true and see what it really implies. According to this claim, Smith saved the girl because doing so made him happier than saving his clothes would have, while Jones saved his clothes because doing so made him happier than saving the girl would have. Does it follow that there is no significant difference between Smith’s choice and Jones’s? Of course not! There’s a very important difference indeed, namely the difference between what makes Smith happy and what makes Jones happy. It would be reasonable to say that Smith has an altruistic character, meaning that it is so constituted that what makes him happy is closely aligned with what conduces to the general welfare, while Jones has a selfish character.

Still, if it were true that the motive for every act is to maximize the agent’s happiness, might it not be said that, in a sense, there are no “truly” altruistic acts? Well, that depends on what one means by saying that the motive for an action is to maximize the agent’s happiness. If one is referring to the happiness that he expects to achieve from doing the thing in question, then it is indeed reasonable to say that this implies that, in a certain sense, there are no truly altruistic acts. However, the claim that everyone always chooses to do what he believes will result in the most happiness for himself is completely untenable, as is shown by several of my examples as well as any number of real-life cases. For example, in the Vietnam War there were a number of instances of a soldier jumping onto a live grenade that had landed among his group with the intent of sacrificing his own life to save his buddies. In almost all cases it worked. Unless one is willing to argue that the soldiers in question were all suicidal (totally implausible, since if they had been they would have been dead long before the events in question) it is simply impossible to explain such acts as cases of choosing to do what would produce the most happiness for the people who did this.

Even aside from such examples, the claim that the motive for every act is to maximize the agent’s happiness entails that no one ever desires the good of others as an end in itself. (If anyone had such desires, they would be motives for some acts; that’s what it means to call something a desire.) But this is an empirical claim, so if you’re making it you have the obligation to present some evidence for it. After all, humans often act in ways that can easily be explained on the assumption that they desire that some other people be better off as an end in itself, and are not easily explained otherwise. So Ockham’s Razor demands that the simplest explanation – namely that they have such desires - be accepted absent strong empirical reasons to reject it. Moreover, it seems clear to me on introspection that I desire that other people be better off as an end in itself, and I suspect that most other people find the same thing. So your theory will have to explain why so many people are deluded in this respect; what is the mechanism that produces this delusion, and why would it exist in minds that were shaped by natural selection – i.e., how is it advantageous in terms of survival or reproduction?

In view of such considerations most “no altruism” advocates will say that the motive for every act is not the happiness that the agent expects to achieve by doing the thing in question, but the happiness that he does achieve, immediately, by deciding to do it.

We must be careful to understand what is being claimed here. The “no altruism” advocate is saying not only that deciding to do what one prefers always does make one happy (in the very short run), but that the motive for doing anything is always the desire to experience this very short-run happiness.

It might be objected that the “no altruism” advocate need not claim that this is the motive for all acts; perhaps he is claiming only that it is the motive for those acts that cannot otherwise be interpreted as “self-interested”. This position is ultimately incoherent, but that need not concern us here. The point is that this explanation is invariably invoked by the “no altruism” advocate to dismiss precisely those acts that seem most clearly to be altruistic, as not being “truly” altruistic. This is the argument we are really concerned with here.

Let’s think again about one of those soldiers – call him Brown - who sacrificed his life to save his buddies by falling on a live grenade. According to the theory we’re considering, Brown did this because deciding to fall on a live grenade made him happier right at that moment than deciding to do anything else that he could possibly have done would have.

For our purposes we can ignore the utter absurdity of supposing that the prospect of an immediate, very painful death could make a perfectly healthy young man with everything to live for happier than any alternative. (No one would fault him for simply trying to save his life, which is what almost everyone actually does in this kind of situation, after all.) Rather, let’s think about what this claim (if true) would imply about the altruistic nature of the act. What the “no altruism” advocate is saying here is that if Brown had found the prospect of being blown to smithereens absolutely appalling, while the prospect of living out a normal life had been deeply appealing, but he chose to sacrifice himself anyway, his act would count as being “truly” altruistic, but that since he chose it happily, perhaps even joyously, because it meant saving his buddies, it does not count as being “truly” altruistic. All that I can say about this notion is that it is not only completely unintelligible, but incredibly perverse. Why would the fact that a self-sacrificial act is completely voluntary, freely chosen, perhaps even performed in exuberant good spirits, disqualify it from being considered altruistic? This makes no sense whatsoever. On the contrary, this kind of act would represent the most perfect altruism imaginable.

Of course, I doubt that it is possible for a completely sane, well-adjusted person to throw himself on a live grenade with exuberant good spirits, or even that making this choice could make such a person happier in any meaningful sense than making any other choice. But the point is that if this were not only possible but actually happened, the act would certainly qualify, by any halfway sane standard, as altruistic. The definition of altruism being applied here by the “no altruism” advocate to justify saying that such an act is not “truly” altruistic is simply absurd; it bears no relation whatever to what is normally meant by altruism.

What is normally meant by altruism is giving preference to the good of other people over one’s own good, or alternatively giving equal consideration to everyone’s interests, including one’s own. Either of these is a reasonable definition; the term is commonly used in each of these senses. (Actually there’s not that much difference between them; altruism in the second sense will often result in acts that are altruistic in the first.) But what the “no altruism” advocate wants to do is to define altruism as sacrificing one’s own interests to those of others, even though one would prefer (all things considered) not to. No question about it: for this definition of altruism, there is no such thing. No one ever (intentionally) does one thing if he would prefer (all things considered) to do something else. Indeed, to say that someone did so would be a logical contradiction: it would contradict the meaning of the word “prefer”. So if all that the “no altruism” advocate is saying is that no one ever sacrifices his own interests to those of others unless he prefers to do so, we can agree with him wholeheartedly. But we need not accept his absurd definition of “altruism”. If we use it in either of the “standard” meanings, it seems clear to me that there plainly is such a thing as altruism; that people do sometimes give preference to the good of others over their own good.

3. Is altruism irrational?

Finally, let’s turn to the question of whether altruism is irrational.

Many of the same people who argue that altruism does not exist, and who give reasons that clearly imply that what they mean by “altruism” cannot exist as a matter of logical necessity, also claim that altruism is irrational. Now it should be obvious that these claims are logically contradictory. If something is impossible, it doesn’t make sense to talk about whether it’s rational or irrational. We don’t talk about whether it’s rational to fall to the ground if one is pushed out of a fifth-story window, because there is no choice in the matter. Similarly, if we cannot act altruistically, it’s meaningless to talk about whether it’s rational to act altruistically.

So when it is claimed that altruism is irrational, we must assume that what is being talked about is something possible. Presumably it’s something pretty much along the lines of the standard definition of altruism: giving preference to the good of others over one’s own good. And we must assume that all instances of giving preference to the good of others are not going to be ruled out of court on the grounds that the agent preferred to do so, since otherwise we are back to the “altruism is logically impossible” position, which makes the question of rationality meaningless.

Now let’s try to analyze just what it is about acting altruistically that supposedly makes it irrational. Here we must refer once again to the “standard account” (what I’ve called the preference-desire-belief-action model) of how humans decide between alternative courses of action:

(1) You prefer (other things being equal) that a certain state of affairs, A, should hold rather than that it not hold, and this gives rise to a desire that this state of affairs should hold. You also prefer (OTBE) that another state of affairs, B, should hold, and this also gives rise to a corresponding desire.

(2) You believe that doing X will bring about A while doing Y will bring about B (and there are no other considerations that matter to you that would affect your decision).

(3) Your preference for A over not-A is stronger than your preference for B over not-B, so your desire to do X is stronger than your desire to do Y. Therefore you do X.

This is oversimplified, of course, but something of this sort is necessary even to give meaning to concepts such as “self-interested” and “altruistic” acts.
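(To make the shape of this schema concrete, here is a minimal illustrative sketch in Python. The particular states of affairs, the actions X and Y, and the numeric “strengths” are invented purely for the example; the model itself says nothing about how the strength of a desire might be measured.)

    # A purely illustrative sketch of the preference-desire-belief-action model.
    # The states of affairs, actions, and numeric strengths below are invented
    # for this example; they are not part of the model itself.

    # (1) Preferences over states of affairs give rise to desires of some strength.
    desire_strength = {
        "A: certain other people are better off": 0.8,
        "B: I am better off": 0.5,
    }

    # (2) Beliefs about which action will bring about which state of affairs.
    belief = {
        "do X": "A: certain other people are better off",
        "do Y": "B: I am better off",
    }

    # (3) The act backed by the strongest desire is the one performed.
    def chosen_action(beliefs, strengths):
        return max(beliefs, key=lambda act: strengths[beliefs[act]])

    print(chosen_action(belief, desire_strength))   # prints "do X"

On this sketch, whether the chosen act counts as “self-interested” or “altruistic” depends entirely on what the preferred states of affairs are about, which is just the point taken up next.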

Now let’s suppose that the aspect of state of affairs A that makes you prefer it to not-A is that some other people are better off in some way, while the aspect of B that makes you prefer it to not-B is that you are better off in some way. Then according to (3), your desire that these other people be better off in this specific way is stronger than your desire that you be better off in that specific way. If this is granted, the rest follows automatically. So the thing that the “altruism is irrational” advocate must hold to be irrational is the desire that some other people be better off.

But it is difficult to see what grounds there could possibly be for saying that such a preference is irrational. In general, a desire that a certain state of affairs should hold for its own sake is thought to be neither rational nor irrational. Reason is generally held to have nothing to say about what one “ought” to desire for its own sake – i.e., what one “ought” to want; it can pronounce only on whether a given act is a reasonable means of achieving the end in question. Thus, if I have a desire that certain people be better off as an “end in itself” – i.e., if I regard such a state of affairs as “intrinsically good” – no one can say me yea or nay. My desire is just as rational as any other desire for something as an “end in itself”.

So unless you are prepared to present a theory of rationality that shows that it is rational to desire certain things as ends-in-themselves and irrational to desire others, it would seem that there is no conceivable way to justify a claim that a desire for other people to be better off as an end-in-itself is irrational. Which means that there is no way to justify the claim that altruism is irrational.

It seems to me that most people who claim that altruism is irrational have something like this in the back of their minds: “I have no desire that other people be better off as an end in itself. So for me, sacrificing my interests to those of other people would be completely irrational. Therefore it is completely irrational; anyone who does it is a nut case.” The problem with this kind of reasoning is not hard to spot. For example, so far as I can tell, I have no desire to have sex with people of the same sex as myself. So for me, having this kind of sex would be completely irrational. Does it follow that therefore it is completely irrational; that anyone who does it is a nut case? People have all sorts of desires; people who have different desires from yours aren’t irrational; they’re just different.
bd-from-kg is offline  
Old 01-01-2003, 03:08 AM   #125
Regular Member
 
Join Date: Oct 2002
Location: I am both omnipresent AND ubiquitous.
Posts: 130
Default

Quote:
Originally posted by bd-from-kg

What is normally meant by altruism is giving preference to the good of other people over one’s own good, or alternatively giving equal consideration to everyone’s interests, including one’s own.
“Unselfish concern for the welfare of others; selflessness” (dictionary.reference.com) is the actual definition of altruism. I admit that by your definition acts of altruism can, and do, occasionally occur. But the real definition of altruism is one where someone places all value on others, and none on himself. Anyway, I agree with your proposal that altruism exists, within the context of redefining altruism to your definition. I bet this thread has really just been so long because people could not agree with your definition of altruism; both sides are mostly right, if you accept the definition of that side.

Quote:
Originally posted by bd-from-kg

It seems to me that most people who claim that altruism is irrational have something like this in the back of their minds: “I have no desire that other people be better off as an end in itself. So for me, sacrificing my interests to those of other people would be completely irrational. Therefore it is completely irrational; anyone who does it is a nut case.” The problem with this kind of reasoning is not hard to spot. For example, so far as I can tell, I have no desire to have sex with people of the same sex as myself. So for me, having this kind of sex would be completely irrational. Does it follow that therefore it is completely irrational; that anyone who does it is a nut case? People have all sorts of desires; people who have different desires from yours aren’t irrational; they’re just different.
I do see where you’re coming from here, but I just think that the want (for increased total pleasure) is the end, not the other people. This creates a Theory of Everything, of sorts, for human actions. (Although, if there is no base want, is it random (and just happens to be beneficial) that people choose to have others be ends?)

What I meant before about irrational altruism was more like where it was based on instinct or reflex, rather than reason (the adaptable part of the mind). However, your definition of altruism is one I could classify as within rational thought, as long as it was situational (as it is observed to be) rather than blind. I don’t think that people can willfully do anything irrational; the mind is programmed throughout life and relies heavily on association (which explains, IMO, a lot). Whatever someone does must be either purely or partly instinctual, reflexive, et cetera, or purely or partly willful and rational, lest he be insane. So “malfunctions” are based more in external changes that the mind has not yet caught up with (or cannot catch up with, due to prior programming). I just prefer willful and rational to involuntary (or to insane), so, even if forced altruism somehow existed, I would attempt to use my own mind in place of it.
Darkblade is offline  
Old 01-03-2003, 11:01 AM   #126
Veteran Member
 
Join Date: Jul 2000
Location: King George, VA
Posts: 1,400
Default

Darkblade:

In your latest post you suggest that I’m “redefining altruism”. That’s not so. My definition has come to be pretty much standard usage, although the older meaning you refer to, of acting to further other people’s interests to the total exclusion of one’s own, is still sometimes used. I could give lots of references if you insist.

Quote:
I bet this thread has really just been so long because people could not agree with your definition of altruism
The only way to interpret this that makes any sense is that you think that many of the other posters misunderstood what I meant by “altruism” and “altruistic”, and that if they had understood what I meant we wouldn’t have disagreed. This is flat-out wrong. I made it very clear what I meant by “altruism” and “altruistic” from early on. I can quote what I said in a number of places to show this, but it would be a waste of time.

I can only surmise, from this and other statements, that you simply haven’t read this thread carefully. You’ve repeated many arguments that had been thoroughly discussed earlier as though they were new points, and now you accuse me of not having made my meaning clear when in fact I made it abundantly clear.

Anyway, what the dispute has been about is whether truly altruistic acts are possible, and all of my arguments and examples have concentrated (for obvious reasons) on cases in which (in my opinion) the motive is, or at least could be, purely a desire to benefit others. Such acts are altruistic under any reasonable definition.

Quote:
... both sides are mostly right, if you accept the definition of that side.
Not so. Many people (including you) have argued that the motives for all acts are ultimately self-interested. Such acts are clearly not altruistic under any reasonable definition. Thus it’s clear that we have been disagreeing, with respect to a large class of acts, about whether they are altruistic under any reasonable definition; whether one adopts the definition of “aimed exclusively at benefiting others” or “aimed at benefiting everyone (including oneself) on net balance” doesn’t really matter.

Moreover, some of the other posters (tronvillain and 99percent come to mind) have argued that the only rational policy is to always act in a purely self-interested way. This of course implies that acting altruistically in any reasonable sense is irrational. This is clearly a disagreement; we can’t all be right.

Quote:
I do see where you’re coming from here, but I just think that the want (for increased total pleasure) is the end, not the other people.
You can’t have it both ways. If the ultimate end of an act – the thing that the agent desires for its own sake – is his own pleasure, then the act isn’t truly altruistic. The agent isn’t sacrificing his own interest to the interest of others; he’s using others as a means to attaining pleasure for himself.

That’s what the “thought experiments” that Nozick, Pryor, and I have proposed were all about. This is the very question they were designed to shed light on. The point is, if the pleasure you obtain from benefiting other people is your real end - if benefiting others is just a means to this end – then it should be a matter of indifference to you whether you attain this end by actually benefiting people or by merely seeming to. Just as, if your end is to experience the thrill of riding on a roller coaster, it should be a matter of indifference to you whether you attain it by actually riding a roller coaster or merely seeming to (via a perfect simulation). But while most of us really would be indifferent as to whether we achieved this thrill by experiencing a perfect simulation of riding a roller coaster, or by actually riding a roller coaster, most of us would not be indifferent between experiencing the illusion of saving lives (while really causing millions of people to be tortured mercilessly), and really saving lives. This sort of example seems to show that what we are really after – our ultimate end – is not (at least not always) exclusively to experience some mental state. In many cases our ultimate end is (at least in part) to bring about a desired real state of affairs. And what we find desirable about the desired state of affairs is sometimes the welfare of other people.

Quote:
This creates a Theory of Everything, of sorts, for human actions.
I’ve addressed this point before; I really wish you would respond.

First, is this really a theory, in the sense of a hypothesis about how things are? If so, what would constitute falsification of it? For that matter, what would constitute evidence against it? It seems to me that what you really have isn’t a theory at all, but a conceptual framework that can accommodate any conceivable observations, or the results of any possible experiment, so that nothing would count as falsifying it, or even as evidence against it. If so, it’s not a theory about human behavior (i.e., about why people act the way they do); it’s just a description of how you conceptualize human behavior.

Second, as I just said in my last post, if it is an empirical theory - if some facts would actually count as evidence in its favor and others as evidence against it:

Quote:
... you have the obligation to present some evidence for it. After all, humans often act in ways that can easily be explained on the assumption that they desire that some other people be better off as an end in itself, and are not easily explained otherwise. So Ockham’s Razor demands that the simplest explanation – namely that they have such desires - be accepted absent strong empirical reasons to reject it. Moreover, it seems clear to me on introspection that I desire that other people be better off as an end in itself, and I suspect that most other people find the same thing. So your theory will have to explain why so many people are deluded in this respect; what is the mechanism that produces this delusion, and why would it exist in minds that were shaped by natural selection – i.e., how is it advantageous in terms of survival or reproduction?
Quote:
What I meant before about irrational altruism was more like where it was based on instinct or reflex, rather than reason ...
I totally fail to understand you here. An act which is literally reflexive (like jerking your leg when your knee is hit) isn’t even an intentional act. Before the question of whether an act is altruistic or self-interested can even arise, it has to be intentional and purposeful. That is, there must be a motive which necessarily consists of a desire that some state of affairs should come about and a belief that the act in question will tend to bring it about. And as I commented earlier, the standard view is that desires (or more precisely, desires for ultimate ends) cannot be based on reason; reason can only tell us how to attain ends, not what they should be.

So what exactly are you ruling in here and what are you ruling out? It’s often said, for example, that a mother will instinctively try to protect her child from harm, but it is more precise to say that she has an instinctive desire that her child not come to harm. Does the fact that this desire is instinctive make it irrational? If so, why? Are my instinctive desires to eat and to have sex also irrational? If so, why? If these aren’t irrational, please give an example of a desire that you do consider irrational, and explain why you so regard it; how does one distinguish between rational instinctive desires and irrational ones?
bd-from-kg is offline  
Old 01-03-2003, 11:07 AM   #127
Veteran Member
 
Join Date: Jul 2000
Location: King George, VA
Posts: 1,400
Default

Darkblade:

Now to address your comments that relate specifically to my argument.

But before proceeding, it may be a good idea to explain why I keep citing seemingly bizarre, off-the-wall, science-fiction-y scenarios in this discussion. Fortunately Pryor has already done this very well in another of his fine papers:

Quote:
Philosophical thought-experiments often involve pretty far-out science fiction. For instance, this term we'll be discussing brain transplants, teletransportation, and time-travel. Newcomers to philosophy tend to find all this science fiction bewildering. What relevance can science fiction cases have to real life?

To answer this question, you have to understand the nature of philosophical claims and what's required to produce a counter-example to them.

Professor Smith, for instance, is trying to tell us what death is. He's not just making a claim about actually existing creatures on the planet Earth, and what happens when they die. He's making a claim which purports to be true of any imaginable creatures anywhere, no matter how bizarre and science-fiction-y they may be.

Hence, Professor Smith's claim about what death is seems vulnerable to the following counter-example. Suppose Charles is put into suspended animation, and his body is frozen to near absolute zero. One week later, he is thawed out and revived. Now, during the period where he was frozen, all biological processes in his body had stopped. But it does not seem correct to say that Charles was dead during this period. Hence, Professor Smith's analysis of death is incorrect. Charles' biological processes had stopped but he was not dead.

Perhaps it is not in fact technologically possible to freeze a person and revive him again. This is not important. Professor Smith's claim purports to be true of any imaginable creatures anywhere. So if it's possible even in principle for someone to be frozen, and for his biological processes to stop, without his thereby dying, then Professor Smith's claim is false. This is what our counter-example purports to show.
Thus “science-fiction-y” scenarios are not only relevant, but essential, to serious philosophy. With this in mind, let’s move on to your comments.

Quote:
Smith-2 is not the same person as Smith-1 any more than Hydrogen-Atom-2 is the same atom as Hydrogen-Atom-1.
Fine. You’re sure that they’re not the same person. But millions of Star Trek fans disagree with you. Are all of them lunatics? (Granted, some of them are lunatics. But all of them?) Because to refute my argument, it’s not enough to say that you don’t regard them as the same person; you have to show that it’s irrational to regard them as the same person.

Quote:
The very fact that two Smiths could simultaneously exist (via use of a similar scenario with a replicator instead of a transporter ...
Well, one episode of TNG did seem to imply that this was possible. But Roger Penrose, in The Emperor’s New Mind, pointed out that it’s not even possible in principle because of quantum correlations.

Besides, your “continuity criterion” is subject to the same problem. It’s theoretically possible that one person could become two by a process of fission. (After all, a number of more “primitive” organisms reproduce in exactly this way. In the case of humans, of course, the process would have to be artificial, using technology that does not yet exist.) If that happened to you, which of the products of the fission would “you” be? By the “continuity” test, the answer would have to be both! (Not “neither”; both of them could be traced back through an unbroken, continuous chain of intermediates to the current “you”.)

But there’s another fundamental problem with the “continuity criterion”. To illustrate, suppose that you have an identical twin, Mo. Now let’s imagine two possible futures, F1 and F2. In F1, over the next forty years you’ll become D2 and Mo will become M2. But in F2, Mo becomes D2 and you become M2. (At some point in both F1 and F2, both twins develop amnesia, so neither D2 nor M2 remembers whether he “is” [or “was”] you or Mo.) Finally, suppose that F1 and F2 resemble one another closely enough so that the effects of benefiting D2 are essentially the same in the two futures, as are the effects of benefiting M2, but the effects of benefiting D2 are quite different from the effects of benefiting M2, and (in the absence of any knowledge as to which one you will become) you strongly prefer the effects of benefiting D2. Now suppose that you can arrange either to (1) benefit D2 regardless of which future comes to pass, or (2) benefit whichever of D2 and M2 you become (depending on which future comes to pass). Is there any rational reason whatever to choose option (2)? Why wouldn’t it be rational instead to produce the best outcome (from your point of view)?

The point here is that most people consider it rational to choose one option over another based on the consequences of each choice. But in this case making the choice based on consequences is incompatible with choosing to act solely in your self-interest. Why should the mere abstract fact that D2 is the person whom you will become in F1 make it rational to benefit him even though (in the abstract) you prefer the effects of benefiting M2? Why should the mere abstract fact that D2 is not the person whom you will become in F2 make benefiting him irrational in that case, when benefiting him produces results that you strongly prefer (in the abstract) to the results of benefiting M2? In short, how can it not be rational to choose to benefit D2 in both cases? What bearing does the question of whether there is a continuous chain of entities connecting you with D2 have on the question of whether to benefit him? Isn’t the rational policy to base this choice on the future consequences of doing so, rather than on some facts about (what at that point will be) the dead past?

Quote:
...although I still don’t know how the Smiths would be perfectly identical unless they existed in the exact same space at the exact same time...
Evidently you consider this relevant; otherwise why bring it up? But it’s only relevant if identity is a necessary condition for two entities to be the “same person”. But in that case Red and Fred are different persons. Thank you for making my point.

Quote:
... it would probably be (spectacularly) uncommon that Smith would, at the drop of a hat, willingly throw away his anti-slavery (and all other applicable) sentiments just so he could experience one last jolt of happiness.
What anti-slavery sentiments? We’re assuming that Smith-1 is rational, and that egoism is the only rational stance. It’s difficult to see how an egoist could have any real anti-slavery sentiments. True, for most of his life he might oppose slavery in his own society for practical reasons; for example, he might feel that it would make for a society that would be unpleasant to live in. But in his final hour or so, on a space ship far from Earth, it’s hard to see what practical objections of this sort would still be applicable. And he might be selling Smith-2 to aliens from a far-off planet, under circumstances such that no one else would ever know what he’d done. Under these conditions it’s hard to see what practical objections of any kind he could have to doing it. And that “last jolt” of happiness could seem to him to last for a very long time: years, perhaps, or even lifetimes. Of course, Smith-2 might be made immortal by his new masters and live in misery indefinitely, but what’s that to Smith-1?

In any case, the objection that the scenario described is unlikely to actually occur is irrelevant. See Pryor’s discussion above.

Quote:
Of course, if you assume that Smith-1 is a sociopath, all bets are off.
No, not a sociopath, an egoist. That is, he doesn’t care in the least what the effects of his actions are on another person unless he happens to care about that person.

Quote:
But we could all invent any number of scenarios that exist only in a vacuum, having no application for real life where (general) morality and self-interest tend to coincide.
You seem to be misunderstanding my point entirely. I’m not even talking about morality at this point; I’m talking about what’s rational. And since the egoist position is that the only rational policy is to act in the interests of one’s self, I’m exploring the concept of “self” in order to see exactly what this means, so that we can make a fully informed decision as to whether the egoist position is reasonable.

Quote:
Why should Red care about Fred and not Red-2? Because Red will become Fred.
This, of course, is exactly the position I’m arguing against. Do you have any arguments to offer for why this is a rational reason for Red to care about Fred?

More importantly, why is the fact that Red-2 is not Red a reason not to care about him?

Quote:
You seem to think that Fred has no relation to Red.
Huh? How do you know that Red has a relationship to Fred? Because I stipulated that he does. Please try to make at least minimal sense.

Quote:
Red becomes Fred; he will feel things as Fred. That’s why Fred is important to Red.
Please stop assuming the very point at issue. The question is, will Red become Fred in the sense that he will, in time, be transformed into the different person, Fred, or does he remain the same person even though his nature changes radically? (Or is this, as I contend, a meaningless question?) Anyway, the question is not whether Fred is important to Red, but whether it is rational that Fred should be more important to Red than anyone else who will be living at the same time.

Quote:
A Fred so vastly different (a fanatical Moslem as opposed to a deeply patriotic American) is not very likely to exist, and I seriously doubt that Red would assume that that Fred would exist as opposed to a deeply patriotic Fred, especially because it is so unlikely. In any case, Red, not knowing that he would become anti-Reddish in the future, would want to benefit Fred,
What you seem to be arguing is that it would be rational for Red to benefit Fred only because he holds false beliefs about him. Is this what you’re saying? If Red knew, or had good reason to suspect, that Fred’s values and interests will be radically different from his, would that be a rational reason for him not to look after Fred’s interests? If so, you must think that there is a rational criterion for deciding whose interests to look after other than that the person in question is one’s future “self”, which is to say that you repudiate egoism.

Quote:
...as he would feel things as Fred.
Once again you assume the very point at issue. Would Red “feel things as Fred”? Or would Fred feel these things as Fred?

Quote:
It’s not as if we all die every attosecond. (Or is that your claim?)
No. My claim is that one reasonable way of looking at things is that we become different people every attosecond (whatever an “attosecond” is). There are no facts that are inconsistent with this point of view. Whether to regard two entities that are not identical as the “same” entity in some sense is a matter of conceptual convenience, not a matter of fact. In fact, it’s quite likely that the question of whether it’s better to regard two entities as the “same” self at different times or as two different selves often depends on the context – i.e., on what issue or decision is involved.

The bottom line is that a “self” that persists through time is not a “thing” that really exists; it is a mental construct, a conceptual convenience, like money, a country, and lots of other things of that sort. And giving one’s allegiance to a mental construct is fundamentally irrational. “Myself, right or wrong” makes no more sense than “my country, right or wrong”.

Moreover, the decision whether to produce some effect in the future should depend solely on the nature of the effect, and what further effects it will have, not on the history leading up to the situation that you would be affecting. In other words, a rational person’s decisions are forward-looking; they take into account only the effects of the action on the future state of things. And taking into account whether a given entity that will exist in the future was once “you” in some sense (which is not a matter of fact in any case, but merely an interpretation which is usually, but not always, helpful in making sense of the flow of events) is fundamentally backward-looking; it means basing the decision, at least in part, on how the situation you will be affecting will have come into being.

A final reminder: At this point I’m arguing only that there’s no particular reason to regard a policy of acting solely to further the interests of one’s future “self” as uniquely rational, and many reasons to consider it irrational. I haven’t yet offered a “companion” argument that taking everyone’s interests equally into consideration is rational. (This is a conclusion of the “argument from empathy”, but that’s a completely separate, independent argument.) At some point I hope to get around to presenting such an argument.
bd-from-kg is offline  
Old 01-04-2003, 04:05 AM   #128
Regular Member
 
Join Date: Oct 2002
Location: I am both omnipresent AND ubiquitous.
Posts: 130
Default

People associate things with other things. Xerbo thinks of sacrificing himself on a grenade as good, and thinks of good things as happifying, so he sacrifices himself on the grenade. All sorts of things that people do are based in associative logic. Sometimes one belief induces another that is flawed. However, the person may not know this until it is too late (or never, if his mind doesn't adapt), because he didn't think that such an instance would occur, found it irrelevant, et cetera. People do not sit down and think about every action they'll take, from the ground level up, for minutes or hours, until a conclusion is made. The system in place gives humans greater adaptability than other organisms on earth, but, unfortunately, humans aren't intelligent/evolved enough to have the flawless system that you seem to believe is possessed by those people who throw themselves on grenades.

The Star Trek thing is flawed; two identical pieces of paper are not the same piece of paper, two identical stars are not the same star (or we wouldn't give them different names), two identical Qhozjops are not the same Qhozjop.

The fission thing is also flawed; the two things would not contain the same wave-particles of energy-matter (for they could not, unless you claim that you can magically transform one quantum of energy of mass x into two of mass x without adding any foreign energy; don't think that the Many Worlds Theory will save you here), and thus would not be the same as the original from which they were copied. Two identical phone jacks are not the same phone jack, two identical candied apples are not the same candied apple,...

As for Red not caring for Fred, this is, I believe, untenable. If you really acted as though you were not, properly speaking, you all the time, you wouldn't care about yourself at all, and would devise a plot to kill yourself while making it look like murder, and have all your insurance money sent to charities (since you are fully altruistic, I presume). If you say that as long as you are mostly you, you are you, then I can't debate with you, as it devolves into utter subjectivism. (Please do not take my example literally; I am sure you would find better ways to be fully altruistic.)

Either you are you all the time, or never, as any changes would make you into another person. So, if you can answer me, I would contend that you somehow remained you for long enough to write a response, even though you were changing all the while. (Disclaimer: Blah, blah, blah; I'm sure you'll use technicalities, et cetera to refute this, but I'm sure you understand what I mean: that the consciousness somehow lives on, even if it is modified (like a compound in different states; it is still the same compound; the same matter exists as before).)

Anyway, I actually have read the whole thread, and it wasn't particularly nice (or altruistic) of you to express that you thought otherwise. I haven't (until now) posted about how, from my perspective, your arguments all seem similar. I guess it is inevitable that we both are bored with each other's arguments. Anyway, I'm kind of tired of debating this with you. Think you've won if you want; it doesn't matter. I'm sorry if I sounded curt in this post; my humble apologies.
Darkblade is offline  
Old 01-07-2003, 12:36 PM   #129
Veteran Member
 
Join Date: Jul 2000
Location: King George, VA
Posts: 1,400
Default

Darkblade:

Frankly, your latest comments don’t strike me as serious attempts to engage my arguments. But then, that’s to be expected since, as you say, you’re tired of debating with me. I have no problem with this. The argument about psychological egoism is indeed getting old; I’m tired of it myself. But I’m a bit surprised that you’re already tired of discussing my critique of ethical egoism based on an analysis of the concept of “self”. This has hardly been talked to death yet.

You say that you apologize for “sounding curt” in this post. No problem. But I do have a problem with your comment a while back:

Quote:
The reason I didn’t decide to attack all of your ideas is a combination of the fact that no one else’s arguments were persuasive to you (forming the idea that you probably wouldn’t change your mind because of any arguments that I made) and pessimism inherent to myself. I do not take pleasure in debating without purpose, and I do not gain conviction for my own beliefs from debate, as I already have thought things through by the time I have a belief.
In other words, unlike the rest of us, you’ve thought these things through, so there’s no possibility that you might be persuaded to change your mind by anything anyone else might say (i.e., you don’t “gain conviction for your beliefs from debate”). On the other hand, the fact that someone else (me, for instance) isn’t persuaded by your arguments, or by arguments you agree with, is a clear sign that he’s a stubborn cuss who won’t listen to reason, or else is too stupid to understand the opposing arguments, so he’s not worth debating.

It might interest you (and some others here) to know that the position that you’ve been defending (which is technically known as psychological egoism; in your case the variety known as psychological hedonism) has been rejected by the vast majority of modern philosophers, essentially for the reasons that I’ve explained in this thread. You may think that these arguments aren’t even worth discussing seriously, but most philosophers have found them quite compelling.

As for me, I accepted psychological egoism for quite a long time, essentially for the reasons given by my opponents. (Believe me, I’m quite familiar with these arguments. I used them myself, many times.) But in recent years I’ve been led to reconsider as I came to recognize the power of the opposing arguments, and have now reversed my position. Thus, unlike you, I do sometimes “gain conviction for my beliefs from debate” - in this case from reading debates between other people, but sometimes from debates that I participate in. Perhaps if you put aside the arrogant assumption that you must be right and your opponents wrong, and that the only possible point of debate is to enlighten the benighted fools who hold opinions different from your own, you too will learn that you can “gain conviction from debate”, which is to say that you can sometimes actually learn something from those who disagree with you.
bd-from-kg is offline  
Old 01-07-2003, 10:46 PM   #130
Regular Member
 
Join Date: Oct 2002
Location: I am both omnipresent AND ubiquitous.
Posts: 130
Default

Quote:
Originally posted by bd-from-kg

Perhaps if you put aside the arrogant assumption that you must be right and your opponents wrong
Fine, if that's what you think. I am just pessimistic, as I have already said. You are the one who appears to be making assumptions about my state of being here. Whatever. Perhaps pessimism is self-fulfilling sometimes...
Darkblade is offline  
 
