Freethought & Rationalism Archive. The archives are read only.
#101
Veteran Member
Join Date: Aug 2003
Location: Berkeley, CA
Posts: 1,930
Regarding sentience, I agree with Pixnaps. Certainly it only shifts the problem, but it shifts it to something more likely to be resolved in the relatively near future: it basically awaits developments in neuroscience, whereas, as far as I can tell, we have no idea where a solution to the problem of defining desire would come from.
#102
Veteran Member
Join Date: Jun 2000
Location: SC
Posts: 5,908
Quote:
Why is that the correct answer for judging the moral worth of the act? And who came up with that criterion?
#103
Regular Member
Join Date: Dec 2003
Location: New Zealand
Posts: 260
First of all... Ed- please please please stop writing everything in bold!!! It's infuriating the hell out of me!
Just take the effort to insert an extra [/B] in the appropriate place to turn the bold off. Or you can even remove the {B} (I use curly braces in place of square ones, so as to avoid triggering the code myself) from the start of the quote itself. Either way, just a little less bold would make your posts far more legible. Thanks in advance.

To address the content of your post now: essentially, you are asking meta-level questions; not "what does Desire Utilitarianism say?", but rather, "why should we think D.U. is accurate?"

I've just given a full answer (to pretty much the exact same questions) to another religious absolutist ("philosophical" - he's posted here a couple of times too) on some other forums. So I hope you will understand if I do not bother to repeat myself, but rather simply provide you with the link.

Desire Utilitarianism (at SM forums)

That takes you to the 3rd page of a thread on Desire Utilitarianism. My post at the top of the page directly addresses the sorts of questions you're raising. Feel free to go back and skim through the previous pages too if you're interested in getting a fuller understanding of the argument.
#104
Veteran Member
Join Date: Mar 2002
Location: 920B Milo Circle
Lafayette, CO
Posts: 3,515
Pixnaps: In the philosophy of psychology, I know of some powerful arguments that suggest that 'sentience' is just as problematic as 'soul', and I am uncomfortable founding a theory on a concept that has these types of problems. The best of the best in the field of philosophy are working on this problem. I am not going to pretend that I am nearly as good as these guys; if they say that this is a problem with the concept of 'desire', I believe them.
Ed: Some of your arguments confuse ethics and language. You could ask, "Who decided that the woody thing growing in my front yard is going to be called a 'tree'?" We did not have to call it a 'tree'; we could have called it an 'arbora' or a 'megaplant' or whatever. Instead, we call it a tree. However, the fact that we could have CALLED it something else does not make it any less real. Changing its name does not change its height, color, solidity, or tendency to drop leaves all over my yard each fall.

"What's in a name? A rose by any other name would smell as sweet." -- Shakespeare.

We could have used a word other than 'harm' to refer to the thwarting of a strong and stable desire, but that would not change the nature or the qualities of thwarting a strong and stable desire.

I am not the one who decided that the word 'value' refers to relationships between states of affairs and desires. The argument is that if you look at the way that people use value terms, this is what the word refers to. Just as if you look at the way people use the word 'tree', you will discover that they use it to refer to tall woody things like the one sitting in my front yard.

Some people make false claims about values, just as some people make false claims about trees. Somebody might think that trees house tree sprites. Somebody might think that values are rooted in a deity. Both of these claims are wrong. But a tree is still a tree. And a value is still a value.
#105
Veteran Member
Join Date: Oct 2001
Location: U.S.
Posts: 2,565
On the thermostat issue:
I'm not familiar with the latest philosophy of mind, nor do I think I have an answer for this apparent problem, but I do have a couple of observations to make:

1) With regard to BDI theory, while the thermostat appears to have something that meets the definition of "desire", it does not appear to have anything that meets the definition of the term "belief". In some ways, it seems this may be what separates the sentient from the non-sentient. Sentient entities have beliefs, and thus follow (in theory) the BDI model. A thermostat, however, would appear to follow a simpler DI model. One might say the lower levels of the animal kingdom would follow such a model as well.

2) Other than the absurd impact on desire-utilitarianism, it may not be so ludicrous to say a thermostat has a "desire". One can imagine a complex computer system that approaches artificial intelligence. Such a system might have "desires" that really do approach the common-sense definition of the term. The thermostat is merely a much simpler intelligence. It doesn't seem entirely unreasonable to say that such an intelligence is programmed with a desire.

3) Here's where I really go out on a limb: suppose desire-utilitarianism were modified so that the only desires that impact morality are those held by entities operating in BDI mode, but not those operating in DI mode. This may be a somewhat less ambiguous way of incorporating the concept of "sentience" into the theory. It seems like one might need a further explanation of why DI-mode entities are exempt, and I don't have one right now other than my gut feeling that this seems acceptable. Furthermore, I sense it might still be possible to squeeze some definition of "belief" into the thermostat, though that seems even more absurd than a thermostat with "desires".

Just some thoughts, though. Maybe someone with more philosophical might can do something useful with them. Or shoot them down so I don't have to worry about them.
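The DI/BDI contrast in point (1) can be sketched in a few lines of code. This is purely a hypothetical illustration: the class names and the bare-bones "belief" slot are my own invention for this post, not taken from any actual BDI framework.

```python
class DIThermostat:
    """DI model: sensor input maps directly to action, no stored belief."""

    def __init__(self, setpoint=70):
        self.setpoint = setpoint  # the "desire": keep the room at >= 70 degrees

    def act(self, sensed_temp):
        # The raw reading drives the action; nothing is represented in between.
        return "heat on" if sensed_temp < self.setpoint else "heat off"


class BDIAgent:
    """BDI model: action is mediated by a revisable belief about the world."""

    def __init__(self, setpoint=70):
        self.setpoint = setpoint   # the "desire"
        self.belief_temp = None    # the "belief" (none until evidence arrives)

    def perceive(self, sensed_temp):
        # Beliefs are updated from evidence; a richer agent could also revise
        # them in light of other beliefs (e.g. "the sensor is miscalibrated").
        self.belief_temp = sensed_temp

    def act(self):
        if self.belief_temp is None:
            return "no action"  # no belief, so no intention is formed
        return "heat on" if self.belief_temp < self.setpoint else "heat off"
```

On this toy picture, the DI device has nothing that could be true or false independently of its input, while the BDI agent's `belief_temp` can (as discussed later in the thread) come apart from the actual room temperature.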
Jamie
#106
Contributor
Join Date: Dec 2002
Location: Alaska!
Posts: 14,058
Quote:
Alonzo's definition is based on both usefulness and consistency with the way people use language.

crc
#107
Regular Member
Join Date: Dec 2003
Location: New Zealand
Posts: 260
Quote:
After all, any sort of sensory input (which machines can easily have, e.g. light detectors or thermometers) provides information about the outside world, and hence forms a sort of "belief". A thermostat might have a "belief" that the current temperature is less than 70 degrees, and this would cause it to turn on. Once its belief changes (due to updated input), it would turn off again. Something like that, anyway.
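That picture of a thermostat forming "beliefs" from sensor input can be put into a minimal, hypothetical sketch. All names here are made up for illustration; this is not drawn from any real control library.

```python
class Thermostat:
    """Toy thermostat whose action is driven by a sensor-derived 'belief'."""

    def __init__(self, setpoint=70):
        self.setpoint = setpoint           # the "desire": room at >= 70 degrees
        self.believes_too_cold = False     # the "belief", formed from sensor input
        self.heater_on = False

    def sense(self, reading):
        # Sensory input updates the "belief" about the outside world...
        self.believes_too_cold = reading < self.setpoint
        # ...and the action simply follows the belief.
        self.heater_on = self.believes_too_cold


t = Thermostat()
t.sense(65)   # "believes" the room is below 70, so the heater turns on
t.sense(72)   # belief updated by new input, so the heater turns off
```

Whether `believes_too_cold` really deserves the name "belief" is, of course, exactly the question under dispute in this thread.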
#108
Veteran Member
Join Date: Jun 2000
Location: SC
Posts: 5,908
Quote:
I think I have learned more about your view by reading that. And I think it has another serious problem.

Going back to the Nazi example: since the Nazis thought that they were genetically superior to the Jews, then by their understanding of evolution, eliminating the Jews would mean that future humans would evolve to be even more superior in all ways, so that desire fulfillment would be maximized far beyond any of the desires of the relatives of the Jews who were killed, etc. So then, according to desire utilitarianism, it would be a good desire: with the elimination of inferior human beings, the highly intelligent superior humans would be able to make advanced technology to fulfill more and more desires.
#109
Regular Member
Join Date: Dec 2003
Location: New Zealand
Posts: 260
Quote:
#110
Veteran Member
Join Date: Mar 2002
Location: 920B Milo Circle
Lafayette, CO
Posts: 3,515
The 'thermostat' problem exists for beliefs as well as desires.
Assume that the temperature in a room falls below 70 degrees, and the heater kicks on. This can be described in the following terms:

A desire that P is simply a disposition to make it the case that P becomes or remains true. A belief that P is a disposition to act as if P is true.

A thermostat is set for 70 degrees. Thus, it has a disposition to make it the case that 'this room is at least 70 degrees' becomes or remains true. If the thermostat believes that the temperature is at or above 70 degrees, then it does not activate the heater. If the thermostat believes that the temperature is below 70 degrees, it activates the heater.

The thermostat uses sense data primarily to determine the room's temperature. However, the thermostat can be fooled. Put a heat source near the thermostat, and it may come to believe that the room is at least 70 degrees when, in fact, the room is cooler than that.

This ties into a lot of work being done in the philosophy of psychology. One of the central themes to look at is Searle's Chinese Room argument. Searle postulates a person sitting in a room, receiving a stream of Chinese characters handed to him through one window, using a set of rules to create a new stream of Chinese characters, and sending them out another window. To somebody outside the room, it appears as if the person inside 'understands' Chinese. Searle wants to make the point that this would not count as understanding. (Note: 'understanding' is a propositional attitude in the 'belief' family.)

Yet Turing's test for artificial intelligence would say that this is sufficient. The Turing test says that any machine that acts in a way indistinguishable from an intelligent agent is an intelligent agent. Searle's Chinese Room would qualify, on this model, as a person who understands Chinese.

Daniel Dennett, as I mentioned elsewhere, is the author of the thermostat problem. However, Dennett does not see this as a problem for the philosophy of mind.
He is content with the idea that thermostats have desires, but it has serious implications for desire-fulfillment ethics. (Thermostats have rights?)

Dennett, understandably, has his critics, among them Stephen Stich and Hilary Putnam. Stich, in particular, criticizes Dennett based on the implications of Dennett's theory for morality. The problem with the views of these critics, however, is that their alternatives seem ad hoc and arbitrary. They don't really explain anything; they simply assert that there is a difference without accounting for the difference.

The main point is that I do not expect to find a simple solution to the intentionality problem.

Alonzo Fyfe