FRDB Archives

Freethought & Rationalism Archive

The archives are read only.


Old 01-13-2004, 09:17 PM   #101
Veteran Member
 
Join Date: Aug 2003
Location: Berkeley, CA
Posts: 1,930
Default

Regarding sentience, I agree with Pixnaps. Certainly it only shifts the problem; but it seems to me to shift it to something more likely to be resolved in the relatively near future. It basically awaits developments in neuroscience, whereas as far as I can tell we have no idea where a solution to the problem of defining desire would come from.
Kalkin is offline  
Old 01-13-2004, 09:30 PM   #102
Ed
Veteran Member
 
Join Date: Jun 2000
Location: SC
Posts: 5,908
Default

Quote:
Originally posted by Pixnaps
Ed - once again, I find your questions to be bizarre, repetitive, and besides the point.

Originally posted by Ed
Ok, what is the objectively correct answer?

pix: As has been said (many times) before: a good desire tends to fulfill desires generally. This desire-fulfillment (or lack thereof) is a matter of objective fact.
An act can be consistent (or not) with how a person with good desires (see above) would act in that situation. This too is a matter of objective fact.

The "objectively correct answer" (for judging the moral worth of an act) is how consistent the act is with how a person with good desires would have acted. (Hopefully Alonzo will correct me if i'm wrong here).


Why is that the correct answer for judging the moral worth of the act? And who came up with that criterion?


Quote:
Ed: Yes, but we are talking about how this view would work in a society. And many desires are contradictory. In many areas there is no one unifying desire in a society.

pix: So what? Some desires may not get fulfilled in that case. What is important is that as many desires as possible get fulfilled. (Which in practice will mean fulfilling good desires, and thwarting bad ones, since both these strategies will tend to result in maximised desire-fulfillment generally).
Why is it important that as many desires as possible get fulfilled and who decided it was important?


Quote:
Ed: Hatred does not have a desire, only persons do. Hatred is just an emotion within a person. So it cannot fulfill its own desire.

pix: obviously. I meant it fulfills no desire other than itself (or if you're going to be picky, perhaps "few desires other than those which are closely linked to the hatred itself" would be a more precise answer).
Ok.

Quote:
Ed: But a majority of people in a society acting on their hatred CAN thwart many other desires while at the same time fulfilling their own desires. So this must be good according to utilitarianism, right?

pix: no, because thwarting desires is bad, and the hatred (most likely) would not fulfill as many (or as strong) desires as it thwarts. (it does more harm to its targets, than it does good to its perpetrators).
Why is thwarting desires "bad"?

Quote:
pix: Think about it... 100 nazis kill one jew and they all feel a little bit good about it (fulfilled desire to kill jews, prove their superiority, whatever). And that's the only fulfillment it achieves. (As i said before: "i have trouble imagining how it could tend to fulfil a wider desire set")

Now think about the hundreds... thousands... of desires that Jew had which are now thwarted. And those of his family who will never see him again. His employees who are now out of a job. The customers who have lost the benefits of his work. And so forth.
Are these good desires and if so why?

Quote:
pix: What if, instead, all those nazis had a desire to keep the Jew alive and prevent racial hatred? They could fulfill that desire, which gives them the same benefits (a few desires fulfilled directly) as before, but now all those other desires which are dependent on the Jew being alive... they are no longer thwarted!

You may complain that going back in time to judge the best possible desires is beside the point. But, according to Desire Utilitarianism (if i've interpreted it correctly), this is actually where (when?) moral judgements are supposed to be made. Choosing the best possible desires, is the key to morality (according to this theory).

So it seems clear that this sort of hatred will tend to be immoral (perhaps we can even say "always"?), no matter how large a majority has the potential for that desire. That very same majority could always do far better with some alternative desire.
Okay I think I understand Desire utilitarianism better, but it does not answer the question of why those things that you call "good" above ARE good. Just saying that good desires are good because they fulfill more good desires is circular.
Ed is offline  
Old 01-13-2004, 09:48 PM   #103
Regular Member
 
Join Date: Dec 2003
Location: New Zealand
Posts: 260
Default

First of all... Ed- please please please stop writing everything in bold!!! It's infuriating the hell out of me!
Just take the effort to insert an extra [/B] in the appropriate place to turn the bold off.
Or you can even remove the {B} (i use curly braces in place of square ones, so as to avoid triggering the code myself) from the start of the quote itself.
Either way. Just a little less bold would make your posts far more legible.
Thanks in advance

To address the content of your post now:

Essentially, you are asking meta-level questions, not "what does Desire Utilitarianism say?", but rather, "why should we think D.U. is accurate?"

I've just given a full answer (to pretty much the exact same questions) to another religious absolutist ("philosophical" - he's posted here a couple of times too) on some other forums.

So i hope you will understand if i do not bother to repeat myself, but rather, simply provide you with the link.

Desire Utilitarianism (at SM forums)

That takes you to the 3rd page of a thread on Desire Utilitarianism. My post at the top of the page directly addresses the sorts of questions you're raising. Feel free to go back and skim through the previous pages too if you're interested in getting a fuller understanding of the argument.
Pixnaps is offline  
Old 01-14-2004, 04:55 AM   #104
Veteran Member
 
Join Date: Mar 2002
Location: 920B Milo Circle Lafayette, CO
Posts: 3,515
Default

Pixnaps: In the philosophy of psychology, I know of some powerful arguments that suggest that 'sentience' is just as problematic as 'soul', and I am uncomfortable founding a theory on a concept that has these types of problems. The best of the best in the field of philosophy are working on this problem. I am not going to pretend that I am nearly as good as these guys. If they say that this is a problem with the concept of 'desire', I believe them.

Ed: Some of your arguments confuse ethics and language. You could ask, "Who decided that the woody thing growing in my front yard is going to be called a 'tree'?" We did not have to call it a 'tree'; we could have called it an 'arbora' or a 'megaplant' or whatever. Instead, we call it a tree.

However, the fact that we could have CALLED it something else does not make it any less real. Changing its name does not change its height, color, solidity, or tendency to drop leaves all over my yard each fall.

"What's in a name? A rose by any other name would smell as sweet." -- Shakespeare.

We could have used a word other than 'harm' to refer to the thwarting of a strong and stable desire, but that would not change the nature or the qualities of thwarting a strong and stable desire.

I am not the one who decided that the word 'value' refers to relationships between states of affairs and desires. The argument is that if you look at the way that people use value terms, this is what the word refers to. Just as if you look at the way people use the word 'tree', you will discover that they use it to refer to tall woody things like the one sitting in my front yard.

Some people make false claims about values, just as some people make false claims about trees. Somebody might think that trees house tree sprites. Somebody might think that values are rooted in a deity. Both of these claims are wrong. But a tree is still a tree. And a value is still a value.
Alonzo Fyfe is offline  
Old 01-14-2004, 06:07 AM   #105
Veteran Member
 
Join Date: Oct 2001
Location: U.S.
Posts: 2,565
Default

On the thermostat issue:

I'm not familiar with the latest philosophy of mind, nor do I think I have an answer for this apparent problem, but I do have a couple of observations to make:

1) With regards to BDI theory, while the thermostat appears to have something that meets the definition of "desire", it does not appear to have anything that meets the definition of the term "belief". In some ways, it seems this may be what separates the sentient from the non-sentient. Sentient entities have beliefs, and thus follow (in theory) the BDI model. A thermostat, however, would appear to follow a simpler DI model. One might say the lower levels of the animal kingdom would follow such a model as well.

2) Setting aside the absurd implications for desire-utilitarianism, it may not be so ludicrous to say a thermostat has a "desire". One can imagine a complex computer system that approaches artificial intelligence. Such a system might have "desires" that really do approach the common-sense definition of the term. The thermostat is merely a much simpler intelligence. It doesn't seem entirely unreasonable to say that such an intelligence is programmed with a desire.

3) Here's where I really go out on a limb: Suppose desire-utilitarianism were modified. Suppose the only desires that impact morality are those desires held by entities operating in a BDI-mode, but not those operating in a DI-mode. It seems that this may be a somewhat less ambiguous way of incorporating the concept of "sentience" into the theory.
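The BDI/DI distinction above can be sketched as a toy program (a sketch only, for illustration; the class and method names are my own invention, not taken from the BDI literature):

```python
class DIAgent:
    """Desire-Intention agent: a fixed goal wired directly to action.
    A thermostat fits this mold: no model of the world, just a
    setpoint and a trigger."""
    def __init__(self, setpoint):
        self.setpoint = setpoint  # the "desire": keep temp >= setpoint

    def act(self, sensor_reading):
        # Raw input drives action directly; nothing is "believed".
        return "heat on" if sensor_reading < self.setpoint else "heat off"


class BDIAgent:
    """Belief-Desire-Intention agent: maintains revisable beliefs
    about the world, which mediate between desire and action."""
    def __init__(self, setpoint):
        self.setpoint = setpoint
        self.beliefs = {}  # revisable model of the world

    def update_beliefs(self, sensor_reading, sensor_trusted=True):
        # Beliefs are formed from evidence and can be doubted or
        # revised; a DI agent has no analogous step.
        if sensor_trusted:
            self.beliefs["room_temp"] = sensor_reading

    def act(self):
        # Action follows from belief plus desire, not from raw input.
        believed = self.beliefs.get("room_temp")
        if believed is None:
            return "no action"  # no belief yet, so no basis for intention
        return "heat on" if believed < self.setpoint else "heat off"
```

On the modification proposed in (3), only entities with something like the `update_beliefs` layer would have morally relevant desires; the DI agent's setpoint would not count.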

It seems like one might need a further explanation of why DI-mode entities are exempt, and I don't have one right now other than my "gut-feel" that this seems acceptable. Furthermore, I sense it might still be possible to squeeze some definition of "belief" into the thermostat, though that seems even more absurd than a thermostat with "desires".

Just some thoughts though. Maybe someone with more philosophical might can do something useful with them. Or shoot them down so I don't have to worry about them.

Jamie
Jamie_L is offline  
Old 01-14-2004, 07:56 AM   #106
Contributor
 
Join Date: Dec 2002
Location: Alaska!
Posts: 14,058
Default

Quote:
Originally posted by Ed
Okay I think I understand Desire utilitarianism better, but it does not answer the question of why those things that you call "good" above ARE good. Just saying that good desires are good because they fulfill more good desires is circular.
Ed, consider the possibility that this is just what the word means. Can you even think of an instance where you could properly use the word "good" without referring to the gratification of a desire?

Alonzo's definition is based on both usefulness and consistency with the way people use language.

crc
Wiploc is offline  
Old 01-14-2004, 04:53 PM   #107
Regular Member
 
Join Date: Dec 2003
Location: New Zealand
Posts: 260
Default

Quote:
Originally posted by Jamie_L
1) With regards to BDI theory, while the thermostat appears to have something that meets the definition of "desire", it does not appear to have anything that meets the definition of the term "belief". In some ways, it seems this may be what separates the sentient from the non-sentient. Sentient entities have beliefs, and thus follow (in theory) the BDI model. A thermostat, however, would appear to follow a simpler DI model. One might say the lower levels of the animal kingdom would follow such a model as well.

<snip>

Furthermore, I sense it might still be possible to squeeze some definition of "belief" into the thermostat, though that seems even more absurd than a thermostat with "desires".
hmm, i'd always thought much the opposite actually... that it makes a lot more sense to think of machines as having beliefs than as having desires.

After all, any sort of sensory input (which machines can easily have, eg light detectors, or thermometers) provides information about the outside world, and hence forms a sort of "belief".
A thermostat might have a "belief" that the current temperature is less than 70 degrees, and this would cause it to turn on. Once its belief changes (due to updated input), it would turn off again.

Something like that, anyway.
Pixnaps is offline  
Old 01-14-2004, 09:37 PM   #108
Ed
Veteran Member
 
Join Date: Jun 2000
Location: SC
Posts: 5,908
Default

Quote:
Originally posted by Pixnaps
First of all... Ed- please please please stop writing everything in bold!!!

<snip>

Desire Utilitarianism (at SM forums)

That takes you to the 3rd page of a thread on Desire Utilitarianism. My post at the top of the page directly addresses the sorts of questions you're raising.

I think I have learned more about your view by reading that. And I think it has another serious problem. Going back to the Nazi example: since Nazis thought that they were genetically superior to jews, if they eliminated jews then, according to their understanding of evolution, future humans would evolve to be even more superior in all ways, so that desire fulfillment would be maximized far beyond the desires of the relatives of the jews that were killed, etc. So then according to desire utilitarianism it would be the good desire. Because with the elimination of inferior human beings, the highly intelligent superior humans would be able to make advanced technology to fulfill more and more desires.
Ed is offline  
Old 01-14-2004, 10:27 PM   #109
Regular Member
 
Join Date: Dec 2003
Location: New Zealand
Posts: 260
Default

Quote:
Since Nazis thought that they were genetically superior to jews, if they eliminated jews then according to their understanding of evolution future humans that evolved would be even more superior in all ways so that desire fulfillment would be maximized far beyond any of the desires of the relatives of the jews that were killed, etc. So then according to desire utilitarianism it would be the good desire. Because with the elimination of inferior human beings, the highly intelligent superior humans would be able to make advanced technology to fulfill more and more desires.
hmm, i wouldn't be so sure of that. I mean, if someone had the desire to commit genocide whenever they believed it would benefit humanity in the long run... isn't this pretty obviously gonna cause more harm than good?
Pixnaps is offline  
Old 01-15-2004, 04:47 AM   #110
Veteran Member
 
Join Date: Mar 2002
Location: 920B Milo Circle Lafayette, CO
Posts: 3,515
Default

The 'thermostat' problem exists for beliefs as well as desires.

Assume that the temperature in a room falls below 70 degrees, and the heater kicks on. This can be described in the following terms:

A desire that P is simply a disposition to make it the case that P becomes or remains true. A belief that P is a disposition to act as if P is true.

A thermostat is set for 70 degrees. Thus, it has a disposition to make it the case that 'this room is at least 70 degrees' becomes or remains true. If the thermostat believes that the temperature is at or above 70 degrees, then it does not activate the heater. If the thermostat believes that the temperature is below 70 degrees, it activates the heater.

The thermostat uses sense data primarily to determine the room's temperature. However, the thermostat can be fooled. Put a heat source near the thermostat, and it may come to believe that the room is at least 70 degrees when, in fact, the room is cooler than that.
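The dispositional account above can be written out as a toy model (a sketch of the description in this post only; the names `sense`, `believed_temp`, and `heater_on` are mine, chosen for illustration):

```python
class Thermostat:
    """Toy model: a "desire" that the room be at least `setpoint`
    degrees, and a "belief" about the temperature formed from
    sense data."""
    def __init__(self, setpoint=70):
        # Desire that P: disposition to make "room >= setpoint" true.
        self.setpoint = setpoint
        self.believed_temp = None  # belief that P: it acts as if this is true

    def sense(self, reading):
        # The "belief" tracks the local sensor, not the room itself --
        # which is exactly how the thermostat can be fooled.
        self.believed_temp = reading

    def heater_on(self):
        # It acts on what it believes, not on what is actually the case.
        return (self.believed_temp is not None
                and self.believed_temp < self.setpoint)


t = Thermostat(70)
t.sense(65)
print(t.heater_on())  # True: believed temperature is below the setpoint

# The "fooled" case: a heat source near the sensor. The room may be
# cold, but the thermostat believes it is warm enough and stays off.
t.sense(75)
print(t.heater_on())  # False
```

Note that nothing in the model distinguishes a fooled thermostat from a mistaken believer, which is just the point of the analogy.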

This ties into a lot of work being done in the philosophy of psychology. One of the central themes to look at is Searle's Chinese Room argument. Searle postulates a person sitting in a room, receiving a stream of Chinese characters handed to him through one window, using a set of rules to create a new stream of Chinese characters and sending them out another window. To somebody outside the room, it appears as if the person inside 'understands' Chinese. Searle wants to make the point that this would not count as understanding.

Note: 'understanding' is a propositional attitude in the 'belief' family.

Yet, Turing's test for artificial intelligence would say that this is sufficient. The Turing test says that any machine that acts in a way indistinguishable from an intelligent agent is an intelligent agent. Searle's Chinese Room would qualify, on this model, as a person who understands Chinese.

Daniel Dennett, as I mentioned elsewhere, is the author of the thermostat problem. However, Dennett does not see this as a problem for the philosophy of mind. He is content with the idea that thermostats have desires, but it has serious implications for desire-fulfillment ethics. (Thermostats have rights?) Dennett, understandably, has his critics, among them Stephen Stich and Hilary Putnam. Stich, in particular, criticizes Dennett based on the implications of Dennett's theory for morality. The problem with the views of these critics, however, is that their alternatives seem ad hoc and arbitrary. They don't really explain anything; they simply assert that there is a difference without accounting for the difference.

The main point is that, I do not expect to find a simple solution to the intentionality problem.

Alonzo Fyfe
Alonzo Fyfe is offline  
 

This custom BB emulates vBulletin® Version 3.8.2
Copyright ©2000 - 2015, Jelsoft Enterprises Ltd.