Freethought & Rationalism Archive
11-28-2002, 11:37 AM | #71
Veteran Member
Join Date: Mar 2002
Location: UK
Posts: 5,932
bd-from-kg
Quote:
I've attempted to explain how I think human altruism works. I think it likely that we've evolved a genetic capacity, reinforced by social and cultural conditioning, for empathy (a disposition for specific emotional responses). I can't be sure that this is true, but it's the most plausible explanation I've encountered. You don't appear to agree, so I genuinely need to understand how you believe altruism can be explained. By the way, I don't think that "there can be no such thing as a truly altruistic act"; I just think that the popular concept of altruism needs updating.

Chris
11-29-2002, 07:05 AM | #72
Veteran Member
Join Date: Jul 2000
Location: King George, VA
Posts: 1,400
tronvillain:
Quote:
My main concern regarding this point is that emotions are often thought of as being in conflict with reason, so that saying that a motive or action is based on emotion is often taken as showing that it is irrational. Since you reject this idea, I have no real problem with your position on this point. Quote:
Are you under the impression that you’re contradicting something that I said other than the trivial point about whether motives always involve emotions rather than (nonrational) preferences? Quote:
One popular way of making the statement “true by definition” is this:

(1) Every act has a motive.
(2) A motive implies an interest by the agent in the outcome.
(3) An interest by the agent in the outcome implies that the action is self-interested.
(4) A self-interested action is by definition not altruistic.

This is a perfectly valid argument if what you mean by a “self-interested action” is such that (3) is true, and if what you mean by “altruistic” is such that (4) is true. But in that case the argument is completely tautological: it follows immediately from the way you define the terms involved. And of course I am not in the business of disputing tautologies; I agree entirely that in that sense there is no such thing as an altruistic action. That is, there is no such thing as an (intentional) action without a motive. But what you have to mean by “self-interested” and “altruistic” in order for this argument to be valid has little to do with what most people mean by these terms, and it certainly isn’t what I mean by them.
11-29-2002, 09:09 PM | #73
Veteran Member
Join Date: Jul 2000
Location: King George, VA
Posts: 1,400
The AntiChris:
Quote:
Quote:
I think that Pryor has it right: Quote:
If you believe that some people can sometimes be motivated by a desire to make someone else better off, then you believe that “truly altruistic acts”, as most people understand the term, exist. If you think that most people mean more than this by the term “altruistic act”, I think you’re mistaken. On the other hand, if you think that no one is ever motivated by such a desire, then you believe that “truly altruistic acts”, as most people understand the term, do not exist. But I’m baffled as to why anyone would believe this.
11-30-2002, 12:31 PM | #74
Veteran Member
Join Date: Jul 2000
Location: King George, VA
Posts: 1,400
Intensity:
1. Motive and purpose

Motive and purpose can’t be separated as neatly as you seem to think. Of course, what’s usually referred to as the “motive” is the desire that drives the action, whereas the purpose is to fulfill that desire. In that sense they’re clearly distinct. But in your example you say that the purpose of shooting a man through the head is to kill him, whereas there might be any number of motives. This is an artificial distinction. For example, if we say that the purpose was to kill him, we can just as well say that the motive was a desire to kill him. Or, if you say that the motive was a desire to get the money in his wallet, we can also say that the purpose was to get the money in his wallet. For any true statement about motive there is a corresponding statement about purpose such that the one is true if and only if the other is.

2. On consequentialism

Quote:
3. On motives and morality

Quote:
As Benjamin Franklin put it: Quote:
12-01-2002, 01:48 AM | #75
Veteran Member
Join Date: Mar 2002
Location: UK
Posts: 5,932
bd-from-kg
Quote:
In everyday use, it seems perfectly reasonable to me to use the "popular" concept of altruism when evaluating the moral worth of an act. However, in a forum such as this, where moral behaviour is examined and discussed in a little more depth, I find it strange that the role empathy and the promise of emotional payback play in motivating moral behaviour can sometimes go unacknowledged and, on occasion, denied. Quote:
In example B, the fact that you "don't like" the current position of the sphere suggests that you have, at some level of consciousness, an aesthetic sensibility which is offended in some way (an "itch", if you like). In the absence of this "itch" it makes no sense whatsoever to move the sphere; the only possible motivation for moving it is that, at some level of consciousness, you are aware of this "itch" and desire to do something about it. Quote:
Chris
12-01-2002, 10:09 AM | #76
Veteran Member
Join Date: Jul 2000
Location: King George, VA
Posts: 1,400
The AntiChris:
Quote:
Quote:
Quote:
It seems to me that (sometimes at least) this is exactly what you’re arguing: that an act cannot be altruistic because it is necessarily done to satisfy a desire of the agent’s rather than someone else’s desire. But of course this is what it means to say that the motive was the agent’s motive rather than someone else’s. So if you really believe that an act cannot be altruistic if it satisfies a desire of the agent’s, your argument is really the tautological one that I outlined in my last reply to tronvillain. As I said there, this argument is perfectly valid but uninteresting, since it relies on definitions that are neither my definitions nor the ones used by the vast majority of people. I think that there are few if any people so mad as to believe that it is possible to act on someone else’s motives, or in order to satisfy someone else’s desires. Quote:
Quote:
Really, I don’t understand what you’re trying to get at here. Certainly in Example B you have a desire to bring about a different state of affairs – namely, a state of affairs in which the sphere is in a different position relative to the sphinx. And no one disputes that a desire to bring about a different state of affairs will often motivate one to act in a way that one expects to bring about this state of affairs. As to what exactly produced a particular desire, this is often a mystery. (You can refer to an “aesthetic sensibility” if you like, but it’s not clear that this means anything other than a tendency to desire certain kinds of states of affairs in preference to others.) But in any case, I simply don’t see how the question of what causes a desire is relevant in any way to our discussion. The question is not whether the desire has a cause (obviously it does) or whether it is more appropriate to say that the act was “really” caused by the desire, or by what caused the desire; either way of speaking is perfectly OK. The real question here is whether the act was caused by an entirely different thing than either the desire to bring about a certain state of affairs, or whatever caused this desire – namely, the desire to experience the emotional satisfaction that will presumably be produced by fulfilling the desire for this state of affairs. Quote:
[ December 01, 2002: Message edited by: bd-from-kg ]
12-02-2002, 04:57 AM | #77 |
Veteran Member
Join Date: Oct 2001
Location: U.S.
Posts: 2,565
bd-from-kg:
Alright. Bear with me. I'm working my way through this. I went back and read your first post on empathy again, and I think I've found my stumbling block. It goes back to the notion that empathy as knowledge and understanding (E-k&u) leads to sympathetic empathy (E-s). If I understand you correctly, you essentially say that this is observed to be true, but offer no further explanation.

This relationship between E-k&u and E-s is vital to the argument. If the one did not lead to the other, it would not necessarily be true that if one had more E-k&u one would choose to act altruistically. So, my sticking point is that it seems to me that there are some people for whom E-k&u does not lead to E-s. In fact, it seems some people have a tendency to try to exploit the motives, desires, etc. of other people. The more knowledge they gain, the more drive these people seem to have to exploit it. It would seem their existence makes this moral foundation not universal. While it may be true that it is rational for ME to act altruistically, I'm not so certain it is rational for all other people to act altruistically.

Forgive me if I'm being obtuse. I'm not doing so intentionally.

Jamie
12-02-2002, 12:23 PM | #78
Veteran Member
Join Date: Jul 2000
Location: King George, VA
Posts: 1,400
Jamie_L:
Quote:
But first, it’s important to realize that even if this objection is valid, it doesn’t affect the validity of the claim that acting altruistically is rational as it applies to you. If you agree that it’s rational to do what you have good grounds for believing you would choose to do if you had enough K&U, and you agree that the evidence points strongly toward the conclusion that you would act altruistically if you had enough K&U, then it follows that it is rational for you to act altruistically. This is a rather significant conclusion, don’t you think? It’s significant even if it doesn’t apply to every single person in the world; in fact, it would be significant even if you were the only person in the world that it did apply to. After all, I’m not claiming to be showing that there is an objectively true morality, just that it is rational for human beings to act in a certain way. If there are a few for whom this isn’t true, it doesn’t invalidate the conclusion for the vast majority of people, any more than the discovery that a few people are color-blind invalidates the claim that human beings can see colors.

Also, if the conclusion is true for the vast majority of human beings, it seems reasonable to conclude that the relationship that I describe between E-k&u and E-s represents normal human function, and that those (if any) for whom it does not hold are defective in some way, just as those who don’t see colors aren’t just “different”, but defective: something is wrong or missing in their perceptive/cognitive apparatus.

But in fact I seriously doubt that there actually are any exceptions in this case. That is, I don’t think that E-k&u ever fails to lead to E-s. However, in evaluating this claim, we need to have a realistic idea of what kind of evidence is actually available at this time. We have (at present) no way of measuring either E-k&u or E-s, and no way of quantifying them if we could. This doesn’t make the claim meaningless; lots of other statements that virtually everyone considers meaningful are in the same boat – for example, the statement that we generally prefer (other things being equal) to spend time with people we like rather than with people we dislike. A “time-liking skeptic” could, of course, simply deny that this is so, and it’s hard to see how one could prove him wrong, but we all know that this statement is true even though we have no idea how to prove it. As to the idea that E-k&u leads to E-s in general, this is so obvious and well-known that it hardly needs to be argued. The only problem is that, as you put it, Quote:
As an analogy, suppose that Fred has read the owner’s manuals for a lot of cars, taken private lessons from expert drivers on the correct technique for driving a car, taken courses in safe driving, etc., but has never actually driven a car. Fred will have a lot of knowledge about driving a car, but he won’t have any real understanding of the experience of driving a car; he won’t understand what it’s like to drive a car.

This may seem fanciful or at least extremely speculative, but there is actually a substantial body of scientific evidence in its favor. The kind of people who manipulate others without the slightest shame or remorse have in fact been found to be severely lacking in the ability to achieve an empathetic understanding of others, and this seems to be associated with a severe lack of affect (emotion) and of any real understanding (or even any interest in achieving any real understanding) of themselves. So it appears that this kind of behavior really is linked to some kind of cognitive defect. These people have not simply chosen a different path from most of us; they are defective, just as color-blind people are defective. And you might note that they therefore are not really counterexamples to my claim, because they never do achieve E-k&u; to do so, their defect would have to be corrected. (Unfortunately we have no idea how to do this at present, but that doesn’t mean it’s impossible.) There is no reason to suppose that if they did acquire E-k&u they would not come to have E-s as a result.

I’ll have a good bit more to say about this later, but I’m pressed for time today, so this will have to do for now.
12-02-2002, 03:27 PM | #79
Veteran Member
Join Date: Mar 2002
Location: UK
Posts: 5,932
bd-from-kg
Quote:
Quote:
Quote:
Pryor suggests that, in example A, moving the sphere was merely a means to an end and, in example B, moving the sphere was the "end". I accept that the language most of us would use to describe our motives for moving the sphere in both scenarios would probably reflect Pryor's description. However, it seems to me that in both examples the underlying motivations are the same (to 'deal' with the "itch") and that the moving of the sphere in both cases is merely a means to an end. The fact that the second "itch" is perceived at a deeper or different level of consciousness doesn't seem to me to be relevant. Quote:
Quote:
I've been attempting to argue that the altruistic desire to "make someone else better off" is, at its fundamental roots, based on complex evolved emotions and that, given certain external stimuli, these emotions can give rise to the expectation, at some level of consciousness, that altruistic behaviour will generate an emotional payoff. In other words, I'm suggesting that in the absence of these evolved responses, there would be no motive to act altruistically (please note that I'm not suggesting these are the only motives for acting altruistically). It follows, therefore, that any motivation to act altruistically is logically dependent on the existence of this evolved human emotional response (i.e. the expectation of some positive "emotional payoff" or the resolution of some kind of internal emotional discomfort). This explanation may be trivially self-evident, but I think it is sufficient to justify the claim that altruistic acts are fundamentally, and often subconsciously, motivated by the expectation of an emotional payoff.

Chris
12-03-2002, 11:17 AM | #80 |
Veteran Member
Join Date: Jul 2000
Location: King George, VA
Posts: 1,400
The AntiChris:
Well, this discussion of what is really a peripheral issue for this thread has become extraordinarily long. I really don’t understand why people like you and tronvillain think the way you do; it seems to me to be self-evidently wrongheaded. But I’ll give it one more go.

Here is the “standard” or “orthodox” account (in the simplest type of case) of how one typically goes about deciding to do X rather than Y:

(1) You prefer (other things being equal) that a certain state of affairs, A, should hold rather than that it not hold, and this gives rise to a desire that this state of affairs should hold. You also prefer (OTBE) that another state of affairs, B, should hold, and this also gives rise to a corresponding desire.

(2) You believe that doing X will bring about A while doing Y will bring about B (and there are no other considerations that matter to you that would affect your decision).

(3) Your preference for A over not-A is stronger than your preference for B over not-B, so your desire to do X is stronger than your desire to do Y. Therefore you do X.

Now your position, as I understand it, is that this is never an accurate account (even in rough outline form) of the process leading to a decision to do X rather than Y. You hold that such a decision is never actually based on the preference for A over not-A, or on the fact that this is stronger than the preference for B over not-B.

Before proceeding, it’s worth noting how very unorthodox and counterintuitive this position is. Far from being self-evidently true as you seem to imagine, this position is considered self-evidently false by the vast majority of people. Almost everyone would say that the schema above is an accurate description of a great number of decision-making processes - that nothing of importance is left out - whereas your position is that the most important thing has been left out – namely, the actual motive for doing X. It seems to me that it would be odd indeed if the vast majority of people were so mistaken about how they make decisions that they typically leave out the most important thing – the actual motive - from their accounts, while you (who have no access to their mental processes) have managed to give an accurate account of these processes.

Now let’s look at your account of how decisions are made. You agree with (1) and (2) above, but not with (3). So what do you propose to replace it with? Well, to begin with, you note that most people expect to derive some desirable emotional “payoff” from doing X. This “payoff” consists of, among other things:

(a) Your pleasure at having the desired state of affairs come about.

(b) Your satisfaction at achieving your aim.

It might seem at first sight as though these are two ways of saying the same thing, but they aren’t. For example, suppose that your daughter Susie wants a Bouncing Betty doll more than anything else in the world. So you desire that she should get one for Christmas. You look for weeks, but they’re sold out everywhere. But come Christmas Day, what do you know! Aunt Marge found one and gives it to Susie. Thus your desire that Susie get a Bouncing Betty doll for Christmas is satisfied, but you have not achieved your aim – someone else did. Thus you get the emotional payoff (a) but not (b).
Now your position, as I understand it, is that your “real” motive for doing X rather than Y is not your desire for state of affairs A, but your desire for the pleasure that you expect to get from the fact that this state of affairs has been brought about, and possibly for the satisfaction you expect to get from bringing it about. (It’s possible that it will produce other kinds of pleasure or satisfaction as well, but nothing new would be added by considering them.) Thus you would substitute for (3) something like:

(3') Your desire for the pleasure (of types a and b and possibly others) that you expect to obtain from doing X is greater than your desire for the pleasure that you expect to obtain from doing Y. Therefore you do X.

The first point I want to make about this is that it is certainly not true that a desire for “type-b” satisfaction is always part of the motive for doing something. Very frequently one would be at least equally satisfied if someone else brought about state of affairs A. For example, a baby may be crying nearby because it has messed its diapers; you want it to be comfortable and happy (and above all, to stop crying), and you might therefore decide to change its diapers. But if the mother comes back into the room at that moment and changes them herself, you won’t be the least bit disappointed that you didn’t achieve your aim; all that will matter is that the desired state of affairs will have been brought about: the baby will be content and stop crying.

The second point is that in cases where your desire is for someone else’s good, and the pleasure you derive from achieving it is purely of type a – i.e., it consists entirely of pleasure that this good has been brought about – most people would say that your act is indeed purely altruistic. The essential difference between an altruistic act and a self-interested one is not that one derives pleasure from having the desired state of affairs come about in the latter case but not in the former. It is the nature of the desired state of affairs. Thus, in the example above, if your desire was purely that the baby be happy and content, and the pleasure you experience after changing its diapers is purely a result of the fact that it is now happy and content, your act is purely altruistic regardless of the fact that you anticipated that you would be pleased by this state of affairs. On the other hand, if your desire was primarily that the baby stop crying so that you could listen to Mozart in peace, your act was not altruistic.

But aside from this, your account of how decisions are made is wrong. The motive for doing something is not always, or even very often, solely or primarily the pleasure that you expect to derive from the desired state of affairs coming about, or from your bringing it about. In the great majority of cases the main motive by far is your desire for the state of affairs in question to come about.

To be sure that we’re clear about the difference in these two views, let’s reiterate. I say that the main motive for doing something is usually (and certainly at least sometimes) the desire that a certain state of affairs should come about; you say (as I understand it) that the main, if not the only, motive for doing something is the pleasure that you expect to experience as a result of this state of affairs coming about, or from your bringing it about. Now in ordinary cases it is difficult to decide between these views, because you do derive pleasure from having the desired state of affairs come about.
Thus it is hard to say which was the motive: your desire for the state of affairs itself, or your desire for the pleasure that you derive from it. The only real way to decide the issue is to consider unusual cases where the state of affairs does come about but you don’t know it, or cases where you come to believe that it came about but it really didn’t. [Note: At this point I’m basically following Pryor’s discussion in the paper cited earlier, except that I’m using different examples. Those interested might want to look at his examples as well.]

Let’s consider a case of the latter type. Suppose that Bob is led to believe that there is a group of people stranded somewhere, such that they can’t get away and they can’t be rescued, but it is still possible to communicate with them. His job is to diagnose illnesses and other medical conditions and suggest treatments. He does this for many years, saving a great many lives (or so he believes) and derives a great deal of satisfaction from it. But just one minute before he dies, he learns that it was all a hoax; there never were any such people, and he helped no one. How would he react to this news? Well, I say that he would probably be hugely disappointed and upset, and angry at the people who perpetrated such a cruel joke. He would feel that he had not achieved any of his goals; that he had failed completely; that his life had been utterly wasted.

But on your view, it’s hard to see why Bob would be upset. After all, his real aim was not to save lives, etc., but to experience the pleasure and satisfaction that he would derive from doing so. And he did experience this pleasure and satisfaction, through believing that he had done so. So what possible reason could he have to be upset or angry? The people who perpetrated this hoax on him actually did him a favor; he should be grateful to them. Now this conclusion seems to me to be so totally absurd that it completely refutes your theory. Obviously what Bob really wanted – his real motive for doing what he did – was to save lives and improve the health of the people he thought he was communicating with, and not to obtain the pleasure and satisfaction that could be expected from doing so. Certainly he obtained pleasure and satisfaction, but this was not his aim; his aim was to help others.

We might also look at this type of situation, but where the information about the true state of affairs comes at the beginning rather than the end. Suppose that Bob is offered an opportunity to spend his life in this way, with the understanding that he would be administered a drug that would make him forget entirely that the people in question didn’t really exist. If he declined, he would have an opportunity to lead a more ordinary life, with perhaps a few opportunities to help others, and a reasonable degree of personal happiness. Would he take it? On your theory, he would be crazy not to, since his real aim is to obtain as much pleasure and satisfaction for himself as possible, and this option would obviously be by far the best from this point of view. But I suspect that the vast majority of people would decline this generous offer, because happiness and satisfaction for themselves is not their only aim, or even their most important aim, in life.

We can also consider cases of the first kind, in which the desired state of affairs does come about but the agent doesn’t know it. Thus, suppose that Bob is presented with the following choice.
Option 1: From now on, whenever he spends $100 (or donates it to charity), the life of an innocent child will be saved, but he won’t know anything about it.

Option 2: $100 will be deposited (once and only once, right now) in his savings account. He won’t know where the money came from or why.

As before, he will forget completely about having been offered these options as soon as he chooses one. Which will he choose? Well, it’s possible that Bob is so selfish that he’d prefer the $100 to saving any number of innocent children’s lives, but I think that the vast majority of people would choose Option 1.

But on your theory, choosing the first option would be insane. After all, the only thing that Bob (or anyone else) really wants is desirable experiences for himself. And Option 2 will (presumably) allow him to have some desirable experiences that he wouldn’t otherwise have, whereas Option 1 won’t give him any at all. But on my theory choosing Option 1 is perfectly understandable. Most people really want to help others (though perhaps this is not a terribly strong desire in most cases), and by choosing Option 1 they will achieve this aim big time, even though they won’t know that they’re doing so, and therefore won’t derive any pleasure or satisfaction from it.

So it seems clear that the vast majority of people really do desire things other than desirable mental experiences for themselves, and that they are often motivated by such desires. And once one understands that, it becomes clear that people typically are motivated to do things (just as they believe they are) primarily from a desire to achieve “real-world” outcomes. In other words, my account of how decisions are typically made is correct, and yours is wrong.

[ December 03, 2002: Message edited by: bd-from-kg ]