Freethought & Rationalism Archive. The archives are read only. |
12-15-2002, 07:37 PM | #111 | ||||
Veteran Member
Join Date: Jun 2001
Location: my mind
Posts: 5,996
|
Quote:
Ok, what you just said is wrong on quite a few levels; I'm wondering where to begin. First of all, you are confusing science with philosophy. Secondly, you are confusing logic with science. You are also confusing psychology with reason. Think about this. Quote:
Quote:
Quote:
|
12-16-2002, 12:56 PM | #112 |
Regular Member
Join Date: Aug 2002
Location: USA
Posts: 310
|
quote:
-------------------------------------------------------------------------------- Originally posted by Biff: The first of which is that he is making many philosophical arguments which rest on logical necessity, when the central issue here is one of human PSYCHOLOGY. -------------------------------------------------------------------------------- Hello Biff, welcome to the MF+P of the Internet Infidels. Ok, what you just said is wrong on quite a few levels; I'm wondering where to begin. First of all, you are confusing science with philosophy. Secondly, you are confusing logic with science. You are also confusing psychology with reason. Think about this. quote: -------------------------------------------------------------------------------- Without acknowledging the fact that humans are influenced by unconscious processes, are often unaware of their own motives, and frequently do not act completely rationally, you totally miss the true answer to this question. -------------------------------------------------------------------------------- Oh, so humans act compulsively based on their "unconscious processes"? Then there is no basis for free will, and consequently there are no moral issues involved! Think about this. quote: -------------------------------------------------------------------------------- bd has continually argued that emotional payoff cannot be the root cause of altruistic acts, by mistakenly equating what other debaters have CLEARLY and REPEATEDLY labelled as unconscious processes with items that an agent would be fully aware of in the course of decision making. -------------------------------------------------------------------------------- Yes, because if an agent acts according to its "unconscious processes", then that agent is in fact not deciding anything. He is in fact a robot acting according to its preprogrammed specifications. No morality is involved then, and therefore your whole argument does not belong here at all.
quote: -------------------------------------------------------------------------------- I have had a very hard time not laughing out loud reading bd's posts as he tilts at windmills. I think it is probably best for others to let him think he has won, as this dull ass will not mend his pace with beating. -------------------------------------------------------------------------------- Arrogance will not serve you well in these forums. I suggest you keep it in check, if you have any inclination to learn anything. IMO, the level of intelligence of what is discussed here is much higher than what you pretend it is. _____ Actually, 99, it is clear that in the course of discussing the basis for moral actions, it is a fatal error to ignore the influence of psychology. In no way would any psychologist argue or claim that the presence of unconscious motives destroys "free will", the ability to act according to preference. But to deny that unconscious processes, specifically the pursuit of pleasant inner states, play any part in the process is to argue only half of the debate. It matters very little if you can devise a moral theory explaining why actions are performed if it does not correspond to the true motives for human actions. Thus, it is bd's error to claim that his moral theory is correct. In essence, in a world of perfectly rational beings, his process of empathic altruism might or might not obtain. However, in the real world, it is fruitless to try to claim that such a theory is valid. If, as you claim, moral discussions can be or SHOULD be free of any understanding of human psychology, then you might as well concede that any conclusions reached necessarily have nothing to do with human morality or how it actually functions.
In the end, what bd's opponents have tried to explain, to little avail, is that the underlying motives for moral actions CANNOT be reduced to pure rational issues. The error that bd has made is to take the perfectly valid claim that human moral action flows from conditioned emotional responses and mock it as if his opponents were claiming that agents actually consciously evaluated these unconscious emotions and acted upon them. Despite what you may think, psychology is intrinsic to and inseparable from questions of human morality, as everyone here EXCEPT bd seems to understand. Biff |
12-17-2002, 08:53 AM | #113 |
Veteran Member
Join Date: Jul 2000
Location: King George, VA
Posts: 1,400
|
Biff:
It seems to me that your comments are based on a near-complete misunderstanding not only of my posts but of pretty much everyone else’s on this thread. But since I’m such a dull ass, no doubt this is just a stupid misunderstanding on my part. I have some thoughts about some of the things you’ve said, but I’m sure that, for a brilliant, insightful intellectual such as you, nothing I might say would be worth reading. But I do have one question. When I consider someone to be a mindless windbag, I generally just walk away. Why do you go to the trouble of putting down people who, through no fault of their own, are far below your exalted intellectual status? |
12-17-2002, 12:22 PM | #114 | ||||
Veteran Member
Join Date: Jul 2000
Location: King George, VA
Posts: 1,400
|
Now I’m going to wind up the subject of whether anyone ever desires anything but future subjective experiences of his own.
Pryor introduces this question by quoting Nozick’s “experience machine” scenario: Quote:
Quote:
Pryor also comments: Quote:
Now Nozick’s “experience machine” concept is a variant of the old “brain in a vat” concept so beloved by skeptics. They love to ask: “How do you know that anything you seem to be experiencing is real? How do you know that you’re not just a brain in a vat?” When a person first considers this question and realizes that it seems to be true that there is no way that he can know that he’s not just a brain in a vat, he generally finds it quite unsettling; he doesn’t want to be just a brain in a vat. In fact, a great many philosophers have expended a great deal of effort trying to show that we somehow can know that we’re not brains in vats; that our seeming experiences of trees, dogs, and other humans really are experiences of these things. But if all we cared about were our own subjective experiences, why would this matter? The only important thing would be that we are having these experiences; whether the underlying reality is anything like what it appears to be, or even whether there is any external reality at all, wouldn’t matter to us in the least. Yet it appears that to a great many people it does matter; in fact, it seems to matter to them a great deal. So this in itself would seem to show that people (at least many of us) really do care about something other than our own experiences. In his own arguments for this conclusion, Pryor makes extensive use of the Matrix concept. (To save time I’ll assume that you’re all familiar with the movie.) He points out: Quote:
Well, no, for most of us, that isn’t all that we really care about. We don’t just want the illusion of accomplishments; we want to actually accomplish things. We don’t want the illusion that all is well with the world; we want it to actually be the case that all is well with the world. Here’s yet another scenario. Two identical twins, Ron and Don, both have a burning desire to find a cure for cancer. A kindly scientist (who happens to have a couple of experience machines on hand) gives them both (without their knowledge) the illusion of living a life in which they ultimately find a cure. It’s a hard life; in the drive to cure cancer each of them destroys his own family and ruins his health (or so it seems to them). But in the end, just before they die, they succeed. Up to this point the scenarios for the two have been completely identical. But then one of the machines malfunctions, and Ron discovers that his entire life has been an illusion – and a mostly miserable illusion at that. The only thing that made all of those personal sacrifices worthwhile was the prospect of saving a lot of lives - not the belief, for a few short moments at the very end of his life, that he had done so, but actually saving them. He’s mad as hell, disappointed, frustrated, you name it. I think that we’ll all agree that Ron did not get what he wanted; he did not achieve his aim. But what about Don? He remains blissfully unaware of the illusion right to his death. Did he get what he wanted? Did he achieve his aim? I think that we know what Ron’s answer to this question would be, and we know what Don’s answer would have been if he had ever become aware of the illusion. Who, after all, could be a better authority on whether Don really achieved his aim than Ron, who is the same as Don in all relevant respects? Yet according to the “subjective experiences are all we really care about” school of thought, Don really did get everything he really wanted; he really did achieve all of his aims.
They know better than Don or Ron, you see, what they “really” wanted. Now the “subjective experience is all we care about” school is fond of claiming that their opponents ignore the complexities of human psychology; that our position is based on abstract “ivory tower” philosophical reasoning. But in fact it’s our position which is based on actual observations of human behavior. By contrast, when presented with any scenario whatsoever, where we all know what the natural human reaction would be, and where that reaction would seem to show that we really do care about things other than our own subjective experiences, their response is always to deny the obvious; to invent fanciful, totally implausible psychological hypotheses rather than accept the simplest, most natural one. This is a clear violation of Ockham’s Razor. In fact, so far as I can tell, there is no evidence of any kind that they would accept as falsifying their theory. If so, theirs is not a scientific theory about human psychology at all; it is either an article of faith or a matter of tautological truth. If the former, there is no point in arguing about it. If the latter, it is not about the real world at all, but is merely a consequence of how they define terms – i.e., a linguistic truth, not a factual truth. And it can be “refuted” simply by saying that I don’t define my terms in that way. The way I define terms, in order for it to be true, certain things would have to be true about the real world – about the way actual human beings really think and behave. And I see no evidence that they are. At this point I rest my case on this question. In a forthcoming post I plan to return to the question of whether acting altruistically is rational by presenting the second part of my theory, which I passed over earlier. By the way, like many of you, my posts will probably be few and far between now and the end of the year because of the holiday season. |
12-17-2002, 01:28 PM | #115 |
Regular Member
Join Date: Aug 2002
Location: USA
Posts: 310
|
What is mostly amusing is that at this point, you seem to be talking to yourself, as the explanation I offered for why there is a gap in understanding here was mainly a summary of what your opposition has already made abundantly clear. Although I cannot tell for sure, it seems like your peers are now ignoring you, because you don't understand at all what is really being argued here. I'm sorry if you think that you are being profound. It reminds me of a quote from Bertrand Russell that someone here signs off with: "The first sign of an impending nervous breakdown is the conviction that one's work is terribly important." Enjoy talking to yourself.
Biff |
12-17-2002, 08:20 PM | #116 |
Veteran Member
Join Date: Jul 2000
Location: King George, VA
Posts: 1,400
|
Biff:
Grow up and learn some basic manners. |
12-17-2002, 08:55 PM | #117 | ||||
Veteran Member
Join Date: Jun 2001
Location: my mind
Posts: 5,996
|
Quote:
Basically I think you are confusing human behaviour with morality. Sure, most of our actions and "decisions" are automatic, unconscious and many times perplexing. But that is not morality. Morality involves a conscious and therefore rational decision. This usually means consciously recognizing a future greater gain by delaying immediate pleasure. For example: I choose not to get drunk tonight because I will be driving. I might drink and then drive with no consequences at all, so in fact the unpleasurable consequences are not clear (this unclear vision of the future is an additional factor that makes us moral). But it's the responsible and conscious thing to do, and the future satisfaction of being responsible and having a clear conscience is what morality is all about. It has nothing to do with unconscious mental processes. Still, I do agree with you that psychology can be very useful for understanding our behaviours, but not our morality. Morality always involves a conscious and rational process, otherwise it is not morality. Quote:
Quote:
Quote:
|
12-18-2002, 01:46 AM | #118 | |
Regular Member
Join Date: Oct 2002
Location: I am both omnipresent AND ubiquitous.
Posts: 130
|
Quote:
Why does meaning, or reality, make people happier? Why does anything make someone happier? Because it benefits the survival or improvement of the species in some way, whether directly and/or indirectly, by benefiting you. Now the system is not flawless (I don’t want to reproduce or act altruistically, at least not by its true definition), but it was the best evolution came up with so far; better than pure instinct governing our actions. So having people do actual things makes them happier than not, because it is logical for it to be so; illusions do not benefit the survival or improvement of the species. So it makes sense that living in an illusory world is such anathema to people; it is tantamount to suicide, with respect to the species. Now, imagine this situation: A person has the ability to experience bliss as a brain in a vat. This person is first offered (and given a short period of time in which he experiences (so he knows exactly what he could feel for the entire remainder of his life)) 50% as much happiness as could ever be normally (i.e., in reality, with or without the aid of drugs, et cetera) attained by any human. Tempting, but he refuses because he hates the idea of it being false. So he is offered 100%,...200%,...500%,...1000% as much happiness as anyone could ever normally experience, being given a taste of it each time. Even if he refused these obscene levels, he would cave at some point. You can’t honestly believe that he would refuse 10^(100^(1000^(10,000^(100,000^(1,000,000))))) (I put the parentheses in there for clarification; not everyone knows that 2^3^4 means the same thing as 2^(3^4).) times as much happiness as anyone could ever normally experience, for the entire remainder of his life. So, you see, raw happiness rules all choices. Anything (such as breathing, blinking, digesting, circulating blood, or what actual, definitional altruism would require) not under our emotional control is not a choice.
(I know that you can stop blinking, at least for a while, and you can also stop breathing for a while, but these are not normally things that are chosen to be done; they are involuntary.) Nothing anyone does needs to be explained by anything besides the want for increased levels of happiness. |
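An aside on the exponent-tower notation in the post above: exponentiation is conventionally right-associative, which is why 2^3^4 is read as 2^(3^4) rather than (2^3)^4, and why an unparenthesized tower is evaluated from the top down. A quick sketch in Python (chosen arbitrarily here for illustration; its `**` operator happens to follow the same convention):

```python
# Exponentiation associates to the right: a ** b ** c == a ** (b ** c).
# So an unparenthesized tower is evaluated from the top down.
right = 2 ** 3 ** 4        # parsed as 2 ** (3 ** 4) == 2 ** 81
left = (2 ** 3) ** 4       # 8 ** 4 == 4096 -- a much smaller number

assert right == 2 ** (3 ** 4)
assert right != left
print(right)  # 2417851639229258349412352 (i.e., 2 ** 81)
print(left)   # 4096
```

The difference grows explosively with tower height, which is why the parenthesization matters for the enormous figure quoted in the post.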
|
12-18-2002, 01:08 PM | #119 | |||||||||
Veteran Member
Join Date: Jul 2000
Location: King George, VA
Posts: 1,400
|
Darkblade:
Your post illustrates why it’s time to move on. This subject has been pretty much talked out; there’s little new to be said. But since this is your first post on this thread, I’ll give the basic counterarguments once again. All of these points are covered in much more detail in earlier posts if you’re really interested. Quote:
Quote:
Quote:
Quote:
In fact, what do we mean by saying that we regard something as an “ultimate end” or an “intrinsic good”, if not that the prospect of getting it makes us happy? If you do something because you believe that it will result in a certain state of affairs and the prospect of achieving this state of affairs pleases you, we say that achieving it is an ultimate end for you, or that you regard it as intrinsically good. That’s what it means to say that something is an ultimate end or an intrinsic good. To say that this situation shows that you do not regard the resulting state of affairs (or some aspect of it) as an ultimate end is simply to misunderstand what it means to say that something is an ultimate end. Let’s look at it yet another way. You say that you do X because choosing to do it pleases you. In a sense that’s true. But that doesn’t mean that this is your motive for doing X. Your motive is whatever it is about doing X that makes you pleased to do it. A sane person will be pleased to choose to do something because he believes that it will satisfy some desire, which is to say that it will achieve some ultimate end. Now this ultimate end might well be (in ordinary cases) the happiness that you expect actually doing X to produce, but that’s not the same thing as the pleasure that you experience immediately from choosing to do it. All that I’m saying is that the ultimate end that makes you pleased by the prospect of choosing to do X cannot be to experience the pleasure of choosing to do X. Otherwise you would be saying that what pleases you about choosing to do X is that choosing to do X pleases you. That just doesn’t make sense; it’s transparently circular. There has to be something about choosing to do X that pleases you, and the thing about it that pleases you cannot be that choosing to do it will please you. By the way, it doesn’t matter whether we actually consciously deliberate about the results of the available choices.
The point is simply that, if X is an intentional act – i.e., an act done with the intent to produce some result - there must be something about the expected results of doing X that pleases us – which is to say that we expect it to achieve some ultimate aim - something that we desire for its own sake - even if we aren’t consciously aware of this aim at the moment. And if it isn’t an intentional act, as 99percent has pointed out, it is not a proper subject of moral judgments; questions such as whether it was right or wrong, or altruistic or self-interested, are simply meaningless. That’s why I said some time ago that I was talking only about intentional acts. I haven’t repeated this every time I used the word “act” because it would get pretty tiresome. Quote:
Quote:
Quote:
Actually Pryor covers this very point in the article I’ve referred to many times before. Referring to the option of “plugging in” to Nozick’s “experience machine” permanently, he says: Quote:
Quote:
Now, I really want to move on to other matters. Feel free to reply, but I’m not going to respond unless you bring up some truly new points. Note: Since posting this I've learned that Pryor now has a new paper online which concentrates more on the issue discussed here, with some new material: What's So Bad About Living in the Matrix?. I strongly recommend it to anyone who's really interested in this question. |
12-18-2002, 10:23 PM | #120 | |
Regular Member
Join Date: Oct 2002
Location: I am both omnipresent AND ubiquitous.
Posts: 130
|
Quote:
I think it comes down to this: you believe in two (or more, perhaps) factors behind why people choose to do certain things: happiness, and an altruism factor as well. I, and others here, believe that altruism is merely an extrapolation of people’s desire for their own happiness, and find it easier, more logical, and simpler to explain it that way. I suppose you probably find your explanation easier, more logical, simpler, or otherwise better as well. In any case, it does not really matter. Neither of us is about to go out and murder people because of what we believe about how the human mind works. |
|