FRDB Archives

Freethought & Rationalism Archive

The archives are read only.


Old 12-15-2002, 07:37 PM   #111
Veteran Member
 
Join Date: Jun 2001
Location: my mind
Posts: 5,996
Post

Quote:
Originally posted by Biff:
The first of which is that he is making many philosophical arguments which rest on logical necessity, when the central issue here is one of human PSYCHOLOGY.
Hello Biff, welcome to the MF+P of the Internet Infidels.

Ok, what you just said is wrong on so many levels that I am wondering where to begin. First of all, you are confusing science with philosophy. Secondly, you are confusing logic with science. You are also confusing psychology with reason. Think about this.
Quote:
Without acknowledging the fact that humans are influenced by unconscious processes, are often unaware of their own motives, and frequently do not act completely rationally, you totally miss the true answer to this question.
Oh, so humans act compulsively based on their "unconscious processes"? Then there is no basis for free will and consequently there are no moral issues involved! Think about this.
Quote:
bd has continually argued that emotional payoff cannot be the root cause of altruistic acts, by mistakenly equating what other debaters have CLEARLY and REPEATEDLY labelled as unconscious processes as items that an agent would be fully aware of in the course of decision making.
Yes, because if an agent acts according to its "unconscious processes" then that agent in fact is not deciding anything. He is in fact a robot acting according to its preprogrammed specifications. No morality is involved then, and therefore your whole argument does not belong here at all.
Quote:
I have had a very hard time not laughing out loud reading bd's posts as he tilts at windmills. I think it is probably best for others to let him think he has won, as this dull ass will not mend his pace with beating.
Arrogance will not serve you well in these forums. I suggest you keep it in check, if you have any inclination to learn anything. IMO, the level of intelligence of what is discussed here is much higher than you pretend it is.
99Percent is offline  
Old 12-16-2002, 12:56 PM   #112
Regular Member
 
Join Date: Aug 2002
Location: USA
Posts: 310
Post

Actually, 99, it is clear that in the course of discussing the basis for moral actions, it is a fatal error to ignore the influence of psychology. In no way would any psychologist argue or claim that the presence of unconscious motives destroys "free will", the ability to act according to preference. But to deny that unconscious processes, specifically the pursuit of pleasant inner states, play any part in the process is to argue only half of the debate.

It matters very little if you can devise a moral theory that explains why actions are performed but does not correspond to the true motives for human actions. Thus, it is bd's error to claim that his moral theory is correct. In essence, in a world of perfectly rational beings, his process of empathic altruism might or might not obtain. However, in the real world, it is fruitless to claim that such a theory is valid.

If, as you claim, moral discussions can be or SHOULD be free of any understanding of human psychology, then you might as well concede that any conclusions reached necessarily have nothing to do with human morality or how it actually functions.

In the end, what bd's opponents have tried to explain, to little avail, is that the underlying motives for moral actions CANNOT be reduced to pure rational issues, and the error that bd has made is to take the perfectly valid claim that human moral action flows from conditioned emotional responses and mock it as if his opponents were claiming that agents actually consciously evaluated these unconscious emotions and acted upon them.

Despite what you may think, psychology is intrinsic to and inseparable from questions of human morality, as everyone here EXCEPT bd seems to understand.

Biff
Biff is offline  
Old 12-17-2002, 08:53 AM   #113
Veteran Member
 
Join Date: Jul 2000
Location: King George, VA
Posts: 1,400
Post

Biff:

It seems to me that your comments are based on a near-complete misunderstanding not only of my posts but of pretty much everyone else’s on this thread. But since I’m such a dull ass, no doubt this is just a stupid misunderstanding on my part.

I have some thoughts about some of the things you’ve said, but I’m sure that, for a brilliant, insightful intellectual such as you, nothing I might say would be worth reading.

But I do have one question. When I consider someone to be a mindless windbag, I generally just walk away. Why do you go to the trouble of putting down people who, through no fault of their own, are far below your exalted intellectual status?
bd-from-kg is offline  
Old 12-17-2002, 12:22 PM   #114
Veteran Member
 
Join Date: Jul 2000
Location: King George, VA
Posts: 1,400
Post

Now I’m going to wind up the subject of whether anyone ever desires anything but future subjective experiences of his own.

Pryor introduces this question by quoting Nozick’s “experience machine” scenario:

Quote:
Suppose there were an experience machine that would give you any experience you desired. Superduper neuropsychologists could stimulate your brain so that you would think and feel you were writing a great novel, or making a friend, or reading an interesting book. All the time you would be floating in a tank, with electrodes attached to your brain. Should you plug into this machine for life, preprogramming your life's desires?...Of course, while in the tank you won't know that you're there; you'll think it's all actually happening. Others can also plug in to have the experiences they want, so there's no need to stay unplugged to serve them. (Ignore problems such as who will service the machines if everyone plugs in.) Would you plug in? What else can matter to us, other than how our lives feel from the inside?
Pryor comments:

Quote:
Nozick's view is that most of us would choose not to plug in. He thinks there are things we value over and above what experiences we have.
And Pryor later states explicitly that he agrees with Nozick on this. [Apparently some prominent philosophers and senior faculty members at prestigious universities are dull asses too. Poor guys. Biff would have set Nozick straight in no time.]

Pryor also comments:

Quote:
Notice that Nozick is talking about the question, “What do we actually value?”, not the question, “What should we value?”
In other words, this question, which is the very one that we have been discussing here for some time, is psychological: what makes human beings tick? [Pity that I didn’t notice this before. Here I thought that it could be settled by pure logic.]

Now Nozick’s “experience machine” concept is a variant of the old “brain in a vat” concept so beloved by skeptics. They love to ask: “How do you know that anything you seem to be experiencing is real? How do you know that you’re not just a brain in a vat?” When a person first considers this question and realizes that it seems to be true that there is no way that he can know that he’s not just a brain in a vat, he generally finds it quite unsettling; he doesn’t want to be just a brain in a vat. In fact, a great many philosophers have expended a great deal of effort trying to show that we somehow can know that we’re not brains in vats; that our seeming experiences of trees, dogs, and other humans really are experiences of these things. But if all we cared about were our own subjective experiences, why would this matter? The only important thing would be that we are having these experiences; whether the underlying reality is anything like what it appears to be, or even whether there is any external reality at all, wouldn’t matter to us in the least. Yet it appears that to a great many people it does matter; in fact, it seems to matter to them a great deal. So this in itself would seem to show that people (at least many of us) really do care about something other than our own experiences.

In his own arguments for this conclusion, Pryor makes extensive use of the Matrix concept. (To save time I’ll assume that you’re all familiar with the movie.) He points out:

Quote:
It's interesting that in The Matrix, we're at war with the machines. And people who are "plugged in" are being used by the machines as batteries. When they've outlived their usefulness, they get destroyed and their organic matter is recycled. That's a pretty ugly scenario. It's part of what makes being plugged in seem so undesirable to Morpheus and Neo and other characters who understand what's going on.
But once again, if all we care about is our subjective experiences, why would anyone care about what’s “really” going on? Sure, outside the Matrix people are being destroyed, and we’re participating in their destruction. But so what? Inside the Matrix all is well; our subjective experience is of a happy, fulfilling life with many accomplishments. Isn’t that all that we really care about?

Well, no, for most of us, that isn’t all that we really care about. We don’t just want the illusion of accomplishments; we want to actually accomplish things. We don’t want the illusion that all is well with the world; we want it to actually be the case that all is well with the world.

Here’s yet another scenario. Two identical twins, Ron and Don, both have a burning desire to find a cure for cancer. A kindly scientist (who happens to have a couple of experience machines on hand) gives them both (without their knowledge) the illusion of living a life in which they ultimately find a cure. It’s a hard life; in the drive to cure cancer each of them destroys his own family and ruins his health (or so it seems to them). But in the end, just before they die, they succeed. Up to this point the scenarios for the two have been completely identical. But then one of the machines malfunctions, and Ron discovers that his entire life has been an illusion – and a mostly miserable illusion at that. The only thing that made all of those personal sacrifices worthwhile was the prospect of saving a lot of lives - not the belief, for a few short moments at the very end of his life, that he had done so, but actually saving them. He’s mad as hell, disappointed, frustrated, you name it. I think that we’ll all agree that Ron did not get what he wanted; he did not achieve his aim. But what about Don? He remains blissfully unaware of the illusion right to his death. Did he get what he wanted? Did he achieve his aim? I think that we know what Ron’s answer to this question would be, and we know what Don’s answer would have been if he had ever become aware of the illusion. Who, after all, could be a better authority on whether Don really achieved his aim than Ron, who is the same as Don in all relevant respects? Yet according to the “subjective experiences are all we really care about” school of thought, Don really did get everything he really wanted; he really did achieve all of his aims. They know better than Don or Ron, you see, what they “really” wanted.

Now the “subjective experience is all we care about” school is fond of claiming that their opponents ignore the complexities of human psychology; that our position is based on abstract “ivory tower” philosophical reasoning. But in fact it’s our position which is based on actual observations of human behavior. By contrast, when presented with any scenario whatsoever, where we all know what the natural human reaction would be, and that it would seem to show that we really do care about things other than our own subjective experiences, their response is always to deny the obvious; to invent fanciful, totally implausible psychological hypotheses rather than accept the simplest, most natural one. This is a clear violation of Ockham’s Razor. In fact, so far as I can tell, there is no evidence of any kind that they would accept as falsifying their theory. If so, theirs is not a scientific theory about human psychology at all; it is either an article of faith or a matter of tautological truth. If the former, there is no point in arguing about it. If the latter, it is not about the real world at all, but is merely a consequence of how they define terms – i.e., a linguistic truth, not a factual truth. And it can be “refuted” simply by saying that I don’t define my terms in that way. The way I define terms, in order for it to be true, certain things would have to be true about the real world – about the way actual human beings really think and behave. And I see no evidence that they are.

At this point I rest my case on this question. In a forthcoming post I plan to return to the question of whether acting altruistically is rational by presenting the second part of my theory, which I passed over earlier.

By the way, like many of you, my posts will probably be few and far between now and the end of the year because of the holiday season.
bd-from-kg is offline  
Old 12-17-2002, 01:28 PM   #115
Regular Member
 
Join Date: Aug 2002
Location: USA
Posts: 310
Post

What is mostly amusing is that at this point, you seem to be talking to yourself, as the explanation I offered for why there is a gap in understanding here was mainly a summary of what your opposition has already made abundantly clear. Although I cannot tell for sure, it seems like your peers are now ignoring you, because you don't understand at all what is really being argued here. I'm sorry if you think that you are being profound. It reminds me of a quote someone signs off with, by Bertrand Russell: "The first sign of an impending nervous breakdown is the conviction that one's work is terribly important." Enjoy talking to yourself.

Biff
Biff is offline  
Old 12-17-2002, 08:20 PM   #116
Veteran Member
 
Join Date: Jul 2000
Location: King George, VA
Posts: 1,400
Post

Biff:

Grow up and learn some basic manners.
bd-from-kg is offline  
Old 12-17-2002, 08:55 PM   #117
Veteran Member
 
Join Date: Jun 2001
Location: my mind
Posts: 5,996
Post

Quote:
Originally posted by Biff:
Actually, 99, it is clear that in the course of discussing the basis for moral actions, it is a fatal error to ignore the influence of psychology. In no way would any psychologist argue or claim that the presence of unconscious motives destroys "free will", the ability to act according to preference. But to deny that unconscious processes, specifically the pursuit of pleasant inner states, play any part in the process is to argue only half of the debate.
Apologies for the delay in my answer.

Basically I think you are confusing human behaviour with morality. Sure, most of our actions and "decisions" are automatic, unconscious and many times perplexing. But that is not morality. Morality involves a conscious and therefore rational decision, usually by consciously recognizing a future greater gain by delaying immediate pleasure. For example: I choose not to get drunk tonight because I will be driving. I might drink and then drive with no consequences at all, so in fact the unpleasurable consequences are not clear (this unclear vision of the future is an additional factor that makes us moral). But it's the responsible and conscious thing to do, and the future satisfaction of being responsible and having a clear conscience is what morality is all about. It has nothing to do with unconscious mental processes.
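
To put that tradeoff in rough arithmetic terms, here is a minimal sketch in Python (entirely hypothetical; the function and every number are invented for illustration, not anything from the post): weigh an immediate pleasure against a discounted future gain.

Code:
# Hypothetical sketch of "delaying immediate pleasure for a future greater
# gain". The discount factor stands in for the unclear vision of the future
# mentioned above; all values are invented for illustration.

def decide(immediate_pleasure, future_gain, discount):
    """Abstain when the discounted future gain outweighs the pleasure now."""
    return "abstain" if discount * future_gain > immediate_pleasure else "indulge"

# Drinking tonight is pleasant, but a clear conscience (and no chance of
# harming anyone while driving) is worth more, even heavily discounted.
print(decide(immediate_pleasure=5, future_gain=20, discount=0.5))  # -> abstain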

Still, I do agree with you that psychology can be very useful for understanding our behaviours, but not our morality. Morality always involves a conscious and rational process; otherwise it is not morality.

Quote:
It matters very little if you can devise a moral theory that explains why actions are performed that does not correspond to the true motives for human actions.
If the motives are conscious and derived with a rational thinking process then yes, a moral theory should be able to correspond to the motives of a human action.

Quote:
Thus, it is bd's error to claim that his moral theory is correct.
No, because bd-from-kg (and I too) is attempting to reason this out with logical arguments, so a moral theory should be able to be correct, or at least be able to make such a claim, whether you agree with it or not, because it is derived objectively and with the assumption that at least some acts are done with a logical reason in mind. Sure, there are many people who act impulsively, who even kill impulsively, but that does not discount the fact that there are people who do not always act impulsively.

Quote:
In essence, in a world of perfectly rational beings, perhaps or perhaps not his process of empathic altruism would obtain. However, in the real world, it is fruitless to try to claim that such a theory is valid.
You are incorrect to assume that all humans are either rational or not, or rational all the time or never.

99Percent is offline  
Old 12-18-2002, 01:46 AM   #118
Regular Member
 
Join Date: Oct 2002
Location: I am both omnipresent AND ubiquitous.
Posts: 130
Post

Quote:
Originally posted by bd-from-kg:

But once again, if all we care about is our subjective experiences, why would anyone care about what’s “really” going on? Sure, outside the Matrix people are being destroyed, and we’re participating in their destruction. But so what? Inside the Matrix all is well; our subjective experience is of a happy, fulfilling life with many accomplishments. Isn’t that all that we really care about?

Well, no, for most of us, that isn’t all that we really care about. We don’t just want the illusion of accomplishments; we want to actually accomplish things. We don’t want the illusion that all is well with the world; we want it to actually be the case that all is well with the world.
I don’t remember the claim that people would prefer the illusion of accomplishments, if it would make them happy to be under that illusion, ever being made. I, at least, do not make it. But I do believe that all choices people make are to make themselves happier (than they would be if they had not made that choice), factoring in risk and reward magnitudes and probabilities, of course. The fact that people want meaning (or reality; true accomplishment of actions) as well as raw happiness (quantity) is understandable, because meaning makes us considerably happier than lack of meaning does. It does not matter if your memory would be erased, because you would feel dismal making the choice to forsake meaning, and you would probably not be thinking about how happy you will be in the future, at least not strongly; in any case, anticipatory “beforeshocks” are not as powerful as currently experienced emotions, i.e., emotions experienced because you believe you will experience future emotions are not as potent as emotions experienced because of present factors. Pick that apart if you like, since I could not state it flawlessly, but I am sure you know what I meant.
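
One way to make the "beforeshock" idea concrete is a toy model in Python (hypothetical; the function name, the anticipation weight, and every number are invented for illustration, not taken from the post): a choice is scored by the emotion felt now plus an attenuated, probability-weighted echo of the anticipated future emotion.

Code:
# Hypothetical toy model of the "beforeshock" claim: anticipated future
# emotions count toward a choice, but at reduced strength compared with
# presently felt emotions. Every name and number here is illustrative.

def felt_value_now(present_emotion, future_emotion, probability,
                   anticipation_weight=0.1):
    """Score a choice by current feeling plus a weakened anticipatory echo."""
    return present_emotion + anticipation_weight * probability * future_emotion

# Keeping meaning: neutral now, modest real happiness likely later.
keep_meaning = felt_value_now(present_emotion=0, future_emotion=10, probability=0.9)

# Taking the illusion: huge guaranteed happiness later, but choosing to
# forsake meaning feels dismal right now.
take_illusion = felt_value_now(present_emotion=-20, future_emotion=100, probability=1.0)

print(keep_meaning, take_illusion)  # 0.9 vs -10.0: the dismal present wins out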

Why does meaning, or reality, make people happier? Why does anything make someone happier? Because it benefits the survival or improvement of the species in some way, whether directly and/or indirectly, by benefiting you. Now the system is not flawless (I don’t want to reproduce or act altruistically, at least not by its true definition), but it was the best evolution has come up with so far; better than pure instinct governing our actions. So having people do actual things makes them happier than not doing so, because it is logical for it to be so; illusions do not benefit the survival or improvement of the species. So it makes sense that living in an illusionary world is such anathema to people; it is tantamount to suicide with respect to the species.

Now, imagine this situation: A person has the ability to experience bliss as a brain in a vat. This person is first offered (and given a short period of time in which he experiences (so he knows exactly what he could feel for the entire remainder of his life)) 50% as much happiness as could ever be normally (i.e., in reality, with or without the aid of drugs, et cetera) attained by any human. Tempting, but he refuses because he hates the idea of it being false. So he is offered 100%,...200%,...500%,...1000% as much happiness as anyone could ever normally experience, being given a taste of it each time. Even if he refused these obscene levels, he would cave at some point. You can’t honestly believe that he would refuse 10^(100^(1000^(10,000^(100,000^(1,000,000))))) (I put the parentheses in there for clarification; not everyone knows that 2^3^4 means the same thing as 2^(3^4).) times as much happiness as anyone could ever normally experience, for the entire remainder of his life. So, you see, raw happiness rules all choices. Anything (such as breathing, blinking, digesting, circulating blood, or what actual, definitional altruism would require) not under our emotional control is not a choice. (I know that you can stop blinking, at least for a while, and you can also stop breathing for a while, but these are not normally things that are chosen to be done; they are involuntary.) Nothing anyone does needs to be explained by anything besides the want for increased levels of happiness.
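
The parenthetical about grouping is easy to verify; here is a minimal check in Python, whose ** operator is right-associative just like the ^ notation above:

Code:
# Verifying the grouping note: exponentiation is right-associative, so
# 2^3^4 means 2^(3^4), not (2^3)^4. Python's ** operator groups the same way.
assert 2 ** 3 ** 4 == 2 ** (3 ** 4) == 2417851639229258349412352
assert (2 ** 3) ** 4 == 4096  # left grouping gives a far smaller result
print("both checks passed")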
Darkblade is offline  
Old 12-18-2002, 01:08 PM   #119
Veteran Member
 
Join Date: Jul 2000
Location: King George, VA
Posts: 1,400
Post

Darkblade:

Your post illustrates why it’s time to move on. This subject has been pretty much talked out; there’s little new to be said. But since this is your first post on this thread, I’ll give the basic counterarguments once again. All of these points are covered in much more detail in earlier posts, if you’re really interested.

Quote:
I don’t remember the claim that people would prefer the illusion of accomplishments, if it would make them happy to be under that illusion, ever being made. I, at least, do not make it. But I do believe that all choices people make are to make themselves happier...
Here you seem to be missing the point completely and contradicting yourself into the bargain. Either all that people care about is their own happiness (or other subjective experiences) or it isn’t. If it is, by definition there is nothing to choose between having the subjective experience of accomplishing something without accomplishing it (i.e., the illusion of accomplishment) and having the very same subjective experience as a result of actually accomplishing it. Of course one would not prefer the illusion to the reality, but by the same token one would not prefer the reality to the illusion. (That’s the point of all of these “science-fictionish” examples. In real life, ordinarily the subjective experience of accomplishing something occurs if and only if one actually accomplishes it. To see whether it’s only the subjective experience or also the actual accomplishment that really matters to us, we need to look at cases where the one occurs but not the other.)

Quote:
The fact that people want meaning (or reality; true accomplishment of actions) as well as raw happiness (quantity) is understandable, because meaning makes us considerably happier than lack of meaning does.
Well, yes. That’s because we desire to actually accomplish things, not merely to enjoy the illusion that we’ve accomplished things. That’s my point. And yes, of course people also desire to be happy.

Quote:
It does not matter if your memory would be erased, because you would feel dismal making the choice to forsake meaning,
Right on. And why would we feel dismal about forsaking meaning? Obviously, because we care about the “meaning” – i.e., about actually accomplishing things – and not just about the subjective experience that ordinarily goes with accomplishing things.

Quote:
...and you would probably not be thinking about how happy you will be in the future...
Of course you will. It’s just that the prospect of this future happiness is more than counterbalanced by the knowledge that it will be based on an illusion, and that a great many people will be suffering as a result of the very choice that produced your delusional happiness. The question that you’re avoiding is, why would making this choice make you unhappy? Why wouldn’t the prospect of getting the only thing that you really care about – namely, happiness for the rest of your life – make you happy? Why would the prospect of causing misery and death to many other people make you unhappy if that’s not something you really care about?

In fact, what do we mean by saying that we regard something as an “ultimate end” or an “intrinsic good”, if not that the prospect of getting it makes us happy? If you do something because you believe that it will result in a certain state of affairs and the prospect of achieving this state of affairs pleases you, we say that achieving it is an ultimate end for you, or that you regard it as intrinsically good. That’s what it means to say that something is an ultimate end or an intrinsic good. To say that this situation shows that you do not regard the resulting state of affairs (or some aspect of it) as an ultimate end is simply to misunderstand what it means to say that something is an ultimate end.

Let’s look at it yet another way. You say that you do X because choosing to do it pleases you. In a sense that’s true. But that doesn’t mean that this is your motive for doing X. Your motive is whatever it is about doing X that makes you pleased to do it. A sane person will be pleased to choose to do something because he believes that it will satisfy some desire, which is to say that it will achieve some ultimate end. Now this ultimate end might well be (in ordinary cases) the happiness that you expect actually doing X to produce, but that’s not the same thing as the pleasure that you experience immediately from choosing to do it. All that I’m saying is that the ultimate end that makes you pleased by the prospect of choosing to do X cannot be to experience the pleasure of choosing to do X. Otherwise you would be saying that what pleases you about choosing to do X is that choosing to do X pleases you. That just doesn’t make sense; it’s transparently circular. There has to be something about choosing to do X that pleases you, and the thing about it that pleases you cannot be that choosing to do it will please you.

By the way, it doesn’t matter whether we actually consciously deliberate about the results of the available choices. The point is simply that, if X is an intentional act – i.e., an act done with the intent to produce some result - there must be something about the expected results of doing X that pleases us – which is to say that we expect it to achieve some ultimate aim - something that we desire for its own sake - even if we aren’t consciously aware of this aim at the moment. And if it isn’t an intentional act, as 99Percent has pointed out, it is not a proper subject of moral judgments; questions such as whether it was right or wrong, or altruistic or self-interested, are simply meaningless. That’s why I said some time ago that I was talking only about intentional acts. I haven’t repeated this every time I used the word “act” because it would get pretty tiresome.

Quote:
Why does meaning, or reality, make people happier? Why does anything make someone happier? Because it benefits the survival or improvement of the species in some way, whether directly and/or indirectly, by benefiting you. Now the system is not flawless ..., but it was the best evolution has come up with so far; better than pure instinct governing our actions. So having people do actual things makes them happier than not doing so, because it is logical for it to be so; illusions do not benefit the survival or improvement of the species. So it makes sense that living in an illusionary world is such anathema to people...
Amen, brother. Natural selection provides a very powerful, convincing explanation for why we prefer actually accomplishing things to merely enjoying the illusion of accomplishing things. One would expect that natural selection would have produced desires for actual real-world results, since this is what affects actual survival. If you are going to claim that it didn’t produce such desires, you need to explain why it didn’t.

Quote:
Now, imagine this situation: A person has the ability to experience bliss as a brain in a vat... You can’t honestly believe that he would refuse 10^(100^(1000^(10,000^(100,000^(1,000,000))))) times as much happiness as anyone could ever normally experience, for the entire remainder of his life.
Actually I could honestly believe that some people would refuse it on these terms, but I believe that you’re right in thinking that most people wouldn’t.

Quote:
So, you see, raw happiness rules all choices.
Non sequitur. All that this shows is that happiness is one of the things we ultimately desire. No question about that. But the fact that you found it necessary to stipulate an unimaginable degree of happiness before you felt confident that the subject would choose happiness over all else shows that you must really believe that there is something else – something that competes with happiness as the object of our ultimate desires. If happiness were truly all that we desired, even the prospect of an infinitesimally greater degree of happiness (as the subject perceives it) would be enough to tip the balance in favor of the “brain in a vat” scenario.
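
This point can be put as a small formalization (hypothetical; the function name, the authenticity term, and all numbers below are invented for illustration, not bd's wording): under a happiness-only theory the competing term is zero, so any epsilon of extra happiness should settle the choice, and needing a gigantic multiplier hints at a second, competing good.

Code:
# Hypothetical formalization: if happiness were the ONLY ultimate end,
# utility would just be happiness, and any epsilon of extra happiness
# would tip the scale. A large required margin suggests a competing good.

def prefers_vat(vat_happiness, real_happiness, authenticity_value=0.0):
    """Happiness-only theory corresponds to authenticity_value = 0."""
    return vat_happiness > real_happiness + authenticity_value

print(prefers_vat(100.001, 100.0))                         # True: epsilon is enough
print(prefers_vat(100.001, 100.0, authenticity_value=50))  # False: needs a huge margin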

Actually Pryor covers this very point in the article I’ve referred to many times before. Referring to the option of “plugging in” to Nozick’s “experience machine” permanently, he says:

Quote:
... we have to compare what we'd get by plugging into the experience machine to what we'd get if we don't plug in. I've only been arguing that we'd miss out on some things we'd value if we plugged in. I'm not saying that it would never be reasonable to plug in. In some cases, the good of being plugged in could outweigh the bad. If the real world is miserable and nasty enough, it may make sense to plug in... All I'm saying is that plugging in won't give us everything we want. Our [subjective] experiences aren't all that we value.
Later he adds:

Quote:
I hope to have persuaded you of this final point:

Third Point. We do ordinarily care about what the world is really like, over and beyond what we have evidence for thinking it's like.

I haven't been able to say, though, how much we care about this. It's hard to know what the right balance point is. How bad does the real world have to be, before it makes sense to make Cypher's choice, and plug back into the blissful experience machine? I have no answer to this question.
This is all that I've been arguing: that although we do care (very much, in fact) about our own happiness, we also care about other things. And one of the other things that most of us care about is other people's happiness. Not nearly so much as our own in most cases, to be sure, but we care about it nonetheless. This claim is neither radical nor naive.

Now. I really want to move on to other matters. Feel free to reply, but I’m not going to respond unless you bring up some truly new points.

Note: Since posting this I've learned that Pryor now has a new paper online which concentrates more on the issue discussed here, with some new material: What's So Bad About Living in the Matrix?. I strongly recommend it to anyone who's really interested in this question.
bd-from-kg is offline  
Old 12-18-2002, 10:23 PM   #120
Regular Member
 
Join Date: Oct 2002
Location: I am both omnipresent AND ubiquitous.
Posts: 130
Post

Quote:
Originally posted by bd-from-kg:

This is all that I've been arguing: that although we do care (very much, in fact) about our own happiness, we also care about other things. And one of the other things that most of us care about is other people's happiness. Not nearly so much as our own in most cases, to be sure, but we care about it nonetheless. This claim is neither radical nor naive.

Now. I really want to move on to other matters. Feel free to reply, but I’m not going to respond unless you bring up some truly new points.
I already explained why people would reject living as a brain in a vat; it would make them unhappy right now (I did explain why this was important as well). The reason I used such a large number is that I believe that, at that number, even theists who thought that they would burn in hell (if they chose to live as a brain in a vat) would subconsciously realize that that was not nearly as likely as they previously thought (i.e., realize the tangibility of the happiness of living as a brain in a vat as opposed to the intangibility of hell), and would then opt for living as a brain in a vat. I do not believe anyone could resist the amount of happiness I proposed (as you believe some could), because it was SO much.

I think it comes down to this: You believe in two (or more, perhaps) factors for why people choose to do certain things: happiness, and an altruism factor as well. I, and others here, believe that altruism is merely an extrapolation of people’s desire for their own happiness, and find it easier, more logical, and simpler to explain it that way. I suppose you probably find your explanation easier, more logical, simpler, or otherwise better as well. In any case, it does not really matter. Neither of us is about to go out and murder people because of what we believe about how the human mind works.
Darkblade is offline  
 
