FRDB Archives

Freethought & Rationalism Archive

The archives are read only.


Old 11-26-2002, 03:22 PM   #61
Veteran Member
 
Join Date: Jul 2000
Location: King George, VA
Posts: 1,400
Post

MadMordigan:

1. Regarding your Nov. 23 post:

Quote:
bd:
... the important thing at this point is simply the empirical fact that empathy (in the sense of knowledge and understanding of another person) does in fact have the effect described.

MadMordigan:
This fact is neither universal among people, nor universal across all decisions.
I disagree. I think that empathy, in the sense of really understanding how another person feels (not just abstract knowledge that propositions like “John is sad” or “John will become sad if I do X” are true, but real understanding of how John feels or will feel if...) does always produce the effect of causing one to take the effect of our actions on the other person into account. The kind of very partial, imperfect empathy that humans actually experience isn’t always (or even usually) strong enough to induce us to give the same consideration to the other person’s interests that we give to our own, but the effect exists.

Quote:
As I illustrated above, no matter how well you appreciated the fact that I honestly and sincerely want your cash ...you aren't going to [send me your life savings], are you?
Probably not, for a number of reasons:

(1) Oddly enough, most of us don’t define “altruism” as “devoting one’s life to MadMordigan”. It’s just barely possible that I wouldn’t send you my life savings even if I had the most perfect imaginable understanding of you. Even after taking your interests fully into account, I might conceivably decide to use the money in some other way.

Surely you don’t need to be told about all the very good reasons why it’s generally not desirable (from a purely altruistic point of view, of course) to give people other than good friends stuff that they didn’t earn and don’t deserve. And even among friends it’s best to limit this sort of thing pretty strictly.

(2) But of course I don’t have anything like a perfect understanding of your wants and needs, whereas I do have a very good understanding of my own. My theory says that people would act altruistically if they had a perfect understanding of everyone else. Of course, it also asserts that a person’s behavior changes rapidly in a more altruistic direction as his degree of understanding of the consequences to others increases. But it doesn’t say, absurdly, that they will act altruistically given their actual, present degree of understanding.

(3) I’m not perfectly rational. For example, I know perfectly well that I ought to eat less and lose weight, but that doesn’t mean that I’m going to do it. (I hope to be able to do it, but I might well fail in spite of my perfect understanding that it’s the rational thing to do.) In fact, to say that no one is always perfectly rational would be the understatement of the century. My argument is simply that acting altruistically is the rational thing to do, not that everyone will start acting purely altruistically the moment they understand this.

Quote:
Having been a professional salesman and independent contractor for over 7 years, I can claim some experience with manipulating people's behaviour. People do not generally buy cars based upon how much it would mean to the salesman.
As I pointed out above, not everyone defines altruism as “devoting one’s life to MadMordigan”. And amazing as it may seem, not everyone always acts altruistically. Where did you get the idea that I claimed that they do? It’s moderately well known that most people, most of the time, give somewhat more weight to their own interests than to others’.

Quote:
If we can agree that a social morality should be evaluated based upon how well it prevents or encourages behaviours, then the differing efficacies between appeals to empathy and appeals to self interest must be taken into account.
I’m not advocating a “social morality”. The proper criterion of my moral theory is not how it influences behavior, but whether it is true. My purpose is not to encourage people to act altruistically, but to justify acting altruistically. The justification I offer is that it is rational to do so once one understands certain things. And it is not necessary to do the impossible: to actually understand everyone else in a deep, empathetic sense. It is only necessary to understand how you would act if you had this kind of understanding. At that point acting altruistically would be rational. But I don’t claim that even in this situation most people would in fact act altruistically most of the time.

The type of argument you’re attempting here would, if valid, be fatal to virtually any “mainstream” moral theory. All such theories end up identifying “doing the right thing” with acting altruistically, but very few of us are Albert Schweitzers. That is, very few of us actually do what our moral theories recommend as “right” with any great consistency. Fortunately, this no more refutes such theories than the fact that very few of us will get the right answer if we add a long column of 20-digit numbers by hand disproves the claim that there is an objectively right answer which can be determined (in principle) by adding the numbers by hand.

2. Regarding your Nov. 26 post:

I’m not clear about what your point is here. Are you merely saying that giving up one’s life for purely altruistic reasons is rare? If so, this is of little if any interest to moral philosophy. Or are you saying that it’s impossible? If so, empirical arguments are beside the point. Or are you saying that it’s possible, but never actually happens? If so, I can only say that this is extremely implausible. Given the enormous variation in human personality, character, beliefs, and motivations, it would be nothing short of incredible if it were possible to act from purely altruistic motives, yet it just happened that no one had ever done it.
bd-from-kg is offline  
Old 11-27-2002, 04:29 AM   #62
Veteran Member
 
Join Date: Feb 2001
Location: ""
Posts: 3,863
Post

JamieL
Quote:
Are such cases of people giving their lives really motivated by self-interest?
I know of a neighbour whose house was broken into by armed robbers. They ransacked the house and took money and other valuables. Then as they were leaving, they saw his beautiful 18-year-old daughter.
They turned back and moved to drag her away by force - rape was in their glazed beady eyes. The father stood between them and her to thwart that abduction. They threatened to shoot him if he didn't get out of the way. He didn't budge.
They shot him.
He died.
They fled without her.

I have never been able to decide whether he did the right thing.
It may have been moral, but was it the right thing to do?

The way I see it, she could have survived the rape (as many have, albeit with scars and trauma). He could have attempted to pursue them as they fled, called the cops, etc.

I feel he would have been more effective as a living father and husband than as a dead brave man who defended his daughter's honour. He had other kids who depended on him, including her for her school fees, etc.

What do you think? Die for honour or suffer dishonour and deal with it while alive?

I think that was a really tough situation. I might do what he did if I were in his shoes - I mean, there is the hope that they won't actually shoot...

Of course I would be acting out of self-interest.
But would I be doing the objectively right thing?

bd-from-kg
Quote:
There seems to be a lot of confusion here caused by loose definitions. Let’s try to clarify things a bit.

(1) All intentional acts by definition have a purpose, which is to say that the agent expects the consequences to be preferable to him to what would have happened if he had acted otherwise. This logically implies that the agent has an interest in the outcome. In this sense, of course, all actions are self-interested by definition.

(2) But this isn’t what’s ordinarily meant by a self-interested action. What’s ordinarily meant is an act where the agent considered only what would benefit him, disregarding its effects on others. (Sometimes the meaning is broadened to allow consideration of the effect on those “near and dear” to the agent.) If such an act has a serious negative impact on others, it is commonly said to be not only self-interested but selfish...
You might be interested in seeing this post that I earlier posted:

Quote:
Now about motive and purpose. Our motives are our feelings, desires, emotions and natural inclinations. Motive is goal-creating and goal-propelling, and purpose is goal-achieving. The motive is the one that provides a driving force that will activate/compel/necessitate action towards a purpose. The motive is the desire to achieve a need. It bubbles from within; then one must act externally and "purposefully" to fulfil that desire. The motive is the one that propels a man enjoying his sleep to cut it short and rise up with the purpose of going to work.

Need I say more?
Going by that shooting example: the purpose of shooting a man through the head is always to kill. But the motives are not always the same. A soldier shooting another in a battlefield has totally different motives from a hired murderer shooting a man in a hotel room, and that murderer also has very different motives from a kid who shoots his father after being abused by him all his life. But the purpose is the same: to kill.

The purpose is the goal-orientedness of an act devoid of emotions. If I start choking you (action), someone will cry "Oh my God he is killing him!" (purpose). After my hands have been pried from your neck I will be asked "why were you trying to kill him?" (motive).
Most actions achieve certain known purposes. Though our motives propel us to choose certain lines of action, those actions do not always achieve the intended purpose/desires so motive does not equal purpose.

For example, I might kill my (ex)GF's BF hoping (motive) that she will then come back to me. If she hates me more instead, or commits suicide, then that act will have achieved an unintended purpose. It will not have achieved the desired purpose.
...
You can't mix intention with results. Unless this is a new kind of morality. Your morality is either results-driven or intention-driven. They don't have to be mutually exclusive, but it's important to note that you can't be guided by both in making moral decisions, because the moral agent would be locked in dilemmas many times and would have to resort to arbitrary means for arriving at moral decisions.

PS:
A rational agent's actions (linked to purpose) must ultimately derive from his motives/desires.

On this same desire/motive topic, moral acts are supposed to be carried out in response to the demands of an agent's moral principles, regardless of the agent's feelings or inclinations.

For example, Mother Teresa, moved by her deep empathy and sympathy for the poor, will not be acting morally by helping the poor (though I understand she only did it because she believed God wanted her to do so and that she only prayed for them), but if she acts out of a moral obligation and sense of duty, then her acts would be truly moral.
This is what distinguishes moral reasoning from prudential (rationality of self-interest) and natural (emotional, empathic, desire-based) action or "morality".
I am stuck with work and personal problems, but I will put aside some time and join this discussion.
Ted Hoffman is offline  
Old 11-27-2002, 06:21 AM   #63
Veteran Member
 
Join Date: Mar 2002
Location: UK
Posts: 5,932
Post

Quote:
Originally posted by bd-from-kg:
<strong>
Thus, this does not seem to me to be merely a verbal dispute – a matter of defining terms differently. It appears to me that Chris (and some others here) really believe that no one ever acts from any motives but self-interested ones in the second sense. This is the only way to make sense out of their insistence that Paul (or by implication anyone else) could not have acted from truly altruistic motives; his “real” reasons must have involved some expected benefit to himself. And this is simply false: people do occasionally act from purely altruistic motives.</strong>
I can assure you that this is not what I've been arguing.

I do not believe that all acts of altruism are motivated by what is ordinarily meant by self-interest. In other words, I accept that, by definition, altruistic acts are not motivated by any conscious expectation of self-benefit.

My interest is in the question of what it is that motivates us to be altruistic - an activity which, by definition, appears to create no personal benefit. My initial post on this thread was in response to LordSnooty's:

I don't know, maybe I give people undue credit, but acts of altruism can occur in which there is no positive payoff, emotionally or otherwise.

Whilst I agree that, by definition, no conscious assessment of "positive payoff" precedes a truly altruistic act, it seems to me that there must be, at some level of consciousness, an emotional need which drives us to act in the first place, and that it's the fulfilment of this need which is the positive payoff. Without some form of payoff, I can't see how we'd be motivated to act at all.

The fact that this emotional need can be explained by the evolutionary selection of a genetic disposition does not, for me at least, render altruism meaningless. Our evolved emotions often benefit our gene pool rather than us as individuals. In the case of altruism, it's the genes that are selfish, not us.

Chris

[ November 27, 2002: Message edited by: The AntiChris ]
The AntiChris is offline  
Old 11-27-2002, 10:21 AM   #64
Veteran Member
 
Join Date: Oct 2001
Location: U.S.
Posts: 2,565
Post

I'm still a little muddy on the "why" of a moral theory that says you ought to do such-and-such because it is altruistic.

Granted that people have motivations that don't consciously involve self-interest (like seeking a cure for AIDS in bd-from-kg's example). However, a moral system usually prescribes what a person is to do and why they should do it. "Because it's altruistic" doesn't seem quite good enough.

My thoughts on empathy are that empathy is a learned human behavior. It is possible to raise people who have no empathy whatsoever. Being a created and subjective thing, it seems empathy is not a good basis for a moral theory. To me, it seems empathy is what parents instill in their children to allow them to follow a moral theory - it creates an emotional drive to behave morally. But why should parents instill empathy and morality in their kids? To use empathy as part of this explanation seems somehow circular to me.

Jamie
Jamie_L is offline  
Old 11-27-2002, 03:12 PM   #65
Veteran Member
 
Join Date: Oct 2000
Location: Alberta, Canada
Posts: 5,658
Post

Uh, empathy is not a behavior, which sort of shoots down the "empathy is a learned human behavior" theory. Perhaps you meant "the development of empathy is facilitated by certain human social interactions" or something to that effect?
tronvillain is offline  
Old 11-27-2002, 07:22 PM   #66
Veteran Member
 
Join Date: Jul 2000
Location: King George, VA
Posts: 1,400
Post

The AntiChris:

Quote:
I can assure you that this is not what I've been arguing.

I do not believe that all acts of altruism are motivated by what is ordinarily meant by self-interest. In other words, I accept that, by definition, altruistic acts are not motivated by any conscious expectation of self-benefit.
Then perhaps we’re basically in agreement.

Quote:
Whilst I agree that, by definition, no conscious assessment of "positive payoff" precedes a truly altruistic act, it seems to me that there must be, at some level of consciousness, an emotional need which drives us to act in the first place and that it's the fulfillment of this need which is the positive payoff.
It’s a positive payoff. But there is another positive payoff: namely, we get what we’re aiming for. If we don’t get this (the primary positive payoff), we certainly won’t get the secondary payoff: the satisfaction that comes from getting what we aimed for. Moreover, this primary payoff is the one that actually provides the motive for the act. As Pryor puts it:

Quote:
The pleasure is not what you were primarily aiming at; rather, it came about because you achieved what you were primarily aiming at. Don't mistake what you're aiming at with what happens as a result of your getting what you're aiming at...

Our pleasure isn't some unexplained effect of our actions, and what we're really trying to achieve all along. Our pleasure comes about because we got what we were really trying to achieve
This puts the matter about as clearly as one could hope for.

At this point I want to try to tie this part of the discussion in with the original question.

The original impetus for getting into this (for me) was tronvillain’s statement:

Quote:
If murdering someone will benefit the individual and the individual can get away with the murder, then they should murder that someone.
It seems clear to me that what tronvillain is really saying here is that the only rational kind of motive for doing anything is a self-serving one – i.e., the expectation that doing it will benefit the agent personally. Let me explain why.

Let’s suppose for the sake of argument that when tronvillain said “benefit the individual” he really meant “benefit anyone or anything that the individual would like to see benefit”. In that case, all he was really saying is that one should do X (in this case murder) if one prefers the results of doing X taken as a whole to the results of doing anything else. But of course, if one prefers the results taken as a whole to the results of any alternative, one will do X. So on this interpretation he was really saying that one should always do whatever one does do. But using the term “should” in this way is rather silly and pointless. With this usage, if I say “You should do X”, I’m saying nothing more nor less than “You will do X”; if I ask you what you think I should do, I’m really asking you what you think I will do; if I say, “Smith shouldn’t have done X”, I’m just saying that Smith didn’t do X. This applies whether tronvillain meant “should” in a moral, or practical, or some other sense. Whatever sense he had in mind, on this interpretation that meaning is rendered trivial and useless. In particular, if he meant that it would always be rational to do X if one preferred the consequences to those of all alternatives, he was saying that it is always rational to do whatever one does do and would be irrational to do anything else.

Since this interpretation makes tronvillain’s statement trivial and pointless, I have to assume that this is not what he meant; that when he said “benefit the individual”, he really meant “benefit the individual”. And since it is clearly absurd to suppose that he meant that one should do whatever will most benefit oneself in the moral sense of “should” (as this is ordinarily understood) the only reasonable interpretation is that he meant that it is irrational to do otherwise. And this is the idea that I am primarily concerned to refute: the notion that it is irrational to take the effects of one’s actions on others into account in deciding what to do – or in other words, to act altruistically. My theory, after all, claims the exact opposite: that it is rational to act altruistically and irrational not to.

Now one way to attack this claim is to argue that it is impossible to truly act altruistically; that the only effects of his actions that anyone truly considers when deciding what to do is the effects on himself. And so, as a part of the defense of my claim, I have been arguing that it is really possible to take the effects of one’s actions on others into account, and therefore that it is really possible to act altruistically. But I do not argue (absurdly) that it is possible to act altruistically (or in any other way for that matter) in the absence of any motive to do so.
bd-from-kg is offline  
Old 11-27-2002, 07:49 PM   #67
Veteran Member
 
Join Date: Jul 2000
Location: King George, VA
Posts: 1,400
Post

Jamie_L:

Quote:
I'm still a little muddy on the "why" of a moral theory that says you ought to do such-and-such because it is altruistic.
My argument isn’t really about what one “ought” to do; it’s about what it is rational to do. It concludes that it is rational to act altruistically. If you subscribe to the principle that one ought to act rationally (at least so far as one is able) this leads immediately to the corollary that one ought to act altruistically.

By the way, I consider it to be nothing short of embracing insanity to reject the principle that one ought to try to act rationally, so I do indeed draw this conclusion. But at the moment I was concerned only to refute the notion that it is irrational to sacrifice one’s own interest to the interests of others, and to show that the opposite principle is true: it is irrational not to take the interests of others into account, and this will inevitably lead (sometimes) to sacrificing one’s own interests to others’.

Quote:
... a moral system usually prescribes what a person is to do and why they should do it. "Because it's altruistic" doesn't seem quite good enough.
Please look at my argument without even thinking about “ought”, “should”, etc. Perhaps the point will be clearer then.

Quote:
My thoughts on empathy are that empathy is a learned human behavior. It is possible to raise people who have no empathy whatsoever... But why should parents instill empathy and morality in their kids? To use empathy as part of this explanation seems somehow circular to me.
This misses the point entirely. My argument here doesn’t even address the question of whether we “ought” to instill empathy in our children, or even whether we ought to try to develop more of it ourselves. The argument is simply this:

(1) If you know (or have good reasons to believe) that you would make a certain choice if you had sufficient knowledge and understanding, it is rational to make that choice.

(2) Since empathy (in the sense in which I’m using the term) is a form of knowledge and understanding, it is rational to do what you have good reason to believe you would do if you had (in addition to any other relevant K&U) sufficient empathy.

(3) There are very strong reasons to believe that if you had sufficient empathy (regardless of what other K&U you might have) you would act altruistically.

The conclusion that it is rational to act altruistically follows immediately.

[ November 27, 2002: Message edited by: bd-from-kg ]
bd-from-kg is offline  
Old 11-28-2002, 03:00 AM   #68
Veteran Member
 
Join Date: Mar 2002
Location: UK
Posts: 5,932
Post

bd-from-kg

Quote:
It’s a positive payoff. But there is another positive payoff: namely, we get what were aiming for. If we don’t get this (the primary positive payoff), we certainly won’t get the secondary payoff: the satisfaction that comes from getting what we aimed for. Moreover, this primary payoff is the one that actually provides the motive for the act.
I disagree that, at a very fundamental level, the primary payoff really is the motive for the act. Without the 'carrot' of the secondary payoff there would be no primary motive.

In an earlier post you said:

Quote:
All intentional acts by definition have a purpose, which is to say that the agent expects the consequences to be preferable to him to what would have happened if he had acted otherwise.
Why would an agent find the consequences of an altruistic act "preferable"? Take the example of the charity donor who explains his action by saying that "it would be better if it were spent on someone that needed it". Now, assuming the act is truly altruistic and not a cold calculation designed to improve one's standing in the community or to gain a tax break, the only possible reason the donor can have for preferring that his money was spent "on someone that needed it", is that at some level of consciousness the plight of the needy has an emotional effect on him. In the absence of this emotional disposition (your secondary motive) there is no primary motive.

I therefore agree with Pryor that "Our pleasure isn't some unexplained effect of our actions" but disagree that, at a fundamental level, it is not what we're aiming at. The conscious or unconscious promise of a positive emotional payoff must exist for there to be a primary motive. Whether or not the emotional payoff is actually realised is not important - the mere perception of a potential emotional payoff at some level of consciousness is all that's needed to motivate altruistic behaviour.

Chris
The AntiChris is offline  
Old 11-28-2002, 10:03 AM   #69
Veteran Member
 
Join Date: Jul 2000
Location: King George, VA
Posts: 1,400
Post

The AntiChris:

Quote:
Why would an agent find the consequences of an altruistic act "preferable"?
I don’t really understand the question. Is there some kind of mystery here that needs to be explained?

Quote:
Now, assuming the act is truly altruistic and not a cold calculation designed to improve one's standing in the community or to gain a tax break, the only possible reason the donor can have for preferring that his money was spent "on someone that needed it", is that at some level of consciousness the plight of the needy has an emotional effect on him.
I’m not sure what you mean. For example, I might pick up a user’s manual for Microsoft Word because I want to learn how to format a paragraph the way I’d like. Perhaps one can argue that the motives for actions such as these involve emotions at a deep level, but it would be clearer and less controversial to say simply that they involve preferences. Now ultimately preferences are not based solely on reason, but it’s not clear that they are always attributable to emotions. In some cases, to make this plausible, it almost seems that one has to invent an appropriate emotion (for example, an emotion that favors knowing what’s happening). At the very least, one would have to say that sometimes the emotions involved are very weak; they aren’t at all the sort of emotions that can interfere with one’s ability to act rationally. In fact, I would say that beings like us, who have the capability of acting rationally, have a natural preference for acting rationally. So if you’re going to say that a motive must always involve an emotion, it would seem that you’ll have to invent an emotion that favors acting rationally – a strange sort of emotion indeed.

Quote:
In the absence of this emotional disposition (your secondary motive) there is no primary motive.
No, no, no. The preference for one outcome over another is not the secondary motive; it’s the primary motive. The secondary motive (on the occasions when it exists at all) is the desire to experience the satisfaction that comes from having accomplished one’s aim. Thus, in my computer example, the primary motive is your desire to fix the program. The secondary motive (which may not even exist) is a desire to experience the satisfaction that will result from [i]getting what you wanted[/i], which was to fix the program.

The reason that this must necessarily be a secondary motive is that it cannot exist in the absence of the primary motive. Thus, it is impossible even to imagine that one could aim to experience the satisfaction of solving a Times crossword puzzle unless one first had a desire to solve a Times crossword puzzle. How could one have any desire to experience the satisfaction of persuading a woman to marry you unless you first had a desire to persuade her to marry you? It is logically impossible to have a desire to experience the satisfaction of getting something you want unless you first want the thing in question. And it is completely implausible that the desire to experience the satisfaction that comes from achieving a goal might be stronger than the desire to achieve the goal itself. Thus the desire to experience the satisfaction that comes from achieving a goal (when it exists at all) is secondary both in the sense that it is logically dependent on the desire to achieve the goal, and in the sense that it is weaker than the latter desire.

Quote:
The conscious or unconscious promise of a positive emotional payoff must exist for there to be a primary motive.
This is the basic doctrine that there can be no such thing as a truly altruistic act: the real aim of every act must be some benefit to the agent. But what reason do you have to believe this? What reason do you have to think that the aim of some acts cannot be some benefit to someone other than the agent? Why do you find this so unimaginable?

Anyway, what reason is there to think that there must be an emotional payoff even in the case of self-interested acts? I can see that there must be a desire, which arguably is ultimately based on emotion (although there are grounds to doubt even this as I pointed out above). But I don’t see any reason at all to think that the desire must be a desire for an “emotional payoff”. For example, it is possible that I could become deliriously happy (starting immediately) by becoming insane, whereas I know that I’ll be miserable if I remain sane, but I might well desire to remain sane nevertheless. This is very difficult to reconcile with the “emotional payoff” theory. To maintain it you would have to say that there is an emotion of some kind favoring rationality (which I find highly implausible already) and that this desire is so strong as to outweigh my desire to be deliriously happy for the rest of my life instead of being miserable (which seems completely ludicrous). It looks very much as though I have a desire to be rational which is not based on any desire for an “emotional payoff”.
bd-from-kg is offline  
Old 11-28-2002, 11:14 AM   #70
Veteran Member
 
Join Date: Oct 2000
Location: Alberta, Canada
Posts: 5,658
Post

bd-from-kg:
Quote:
I’m not sure what you mean. For example, I might pick up a user’s manual for Microsoft Word because I want to learn how to format a paragraph the way I’d like. Perhaps one can argue that the motives for actions such as these involve emotions at a deep level, but it would be clearer and less controversial to say simply that they involve preferences. Now ultimately preferences are not based solely on reason, but it’s not clear that they are always attributable to emotions. In some cases, to make this plausible, it almost seems that one has to invent an appropriate emotion (for example, an emotion that favors knowing what’s happening). At the very least, one would have to say that sometimes the emotions involved are very weak; they aren’t at all the sort of emotions that can interfere with one’s ability to act rationally. In fact, I would say that beings like us, who have the capability of acting rationally, have a natural preference for acting rationally. So if you’re going to say that a motive must always involve an emotion, it would seem that you’ll have to invent an emotion that favors acting rationally – a strange sort of emotion indeed.
I think the source of the problem is that you see "emotion" as something in conflict with "reason", which is nonsense. Emotion informs reason: it is our motivation to act, which reason alone is incapable of providing. As for acting rationally, humans no more choose to act rationally than they choose to have arms and legs.

Quote:
No, no, no. the preference for one outcome over another is not the secondary motive; it’s the primary motive. The secondary motive (on the occasions when it exists at all) is the desire to experience the satisfaction that comes from having accomplished one’s aim. Thus, in my computer example, the primary motive is your desire to fix the program. The secondary motive (which may not even exist) is a desire to experience the satisfaction that will result from getting what you wanted, which was to fix the program.
No, no, no. The preference for one outcome over another is the motive for choosing one course over another, but what are the reasons for the preference? Emotions. Empathy. Fear. Desire.

Quote:
This is the basic doctrine that there can be no such thing as a truly altruistic act: the real aim of every act must be some benefit to the agent. But what reason do you have to believe this? What reason do you have to think that the aim of some acts cannot be some benefit to someone other than the agent? Why do you find this so unimaginable?
I think that there can be no such thing as a truly altruistic act, but I also think that anyone who laments that fact is being foolish. It is not a terrible thing! Altruism will continue as it always has.

Now, the aim of some acts can be some benefit to someone other than the agent, but the reason the agent has that aim will be selfish. No one giving to charity says to themselves, "I will give this person money because it will make me feel better, or at least not guilty", but those are the motivations. If they were lacking, or suddenly disappeared, one would stop giving money to charity (at least for "altruistic" reasons).
tronvillain is offline  
 
