Freethought & Rationalism Archive. The archives are read only.
09-11-2002, 05:59 PM | #111 |
Veteran Member
Join Date: Jul 2002
Location: Overland Park, Kansas
Posts: 1,336
Greetings:
It's true; dictionaries are seldom written by philosophers. (Sorry, but it's true.) Dictionaries show the current, common usage of a word (and there are often several current usages, and often they contradict one another)--what the 'average' person means when he or she uses a given word. Now, do you trust your epistemology to those folks? (Didn't think so...) Keith.
09-12-2002, 12:15 PM | #112 |
Regular Member
Join Date: Jan 2001
Location: not so required
Posts: 228
Patience, bd-from-kg.
Would you like for me to reply to your most recent post, your previous two posts, or all three? Not all of us have the abundant reserves of time that you apparently do. Kip
09-12-2002, 09:33 PM | #113 |
Veteran Member
Join Date: Jul 2000
Location: King George, VA
Posts: 1,400
Quote:
09-14-2002, 04:33 PM | #114 |
Regular Member
Join Date: Jan 2001
Location: not so required
Posts: 228
bd:
Let's begin with some core ideas that we can both reference throughout the discussion:

1. The Possible/Conceivable Distinction [PCD]. This includes the distinction between possible and logically possible.

2. The Cause/Correlation Distinction [CCD]. This refers to Leibniz's clocks and the Humean Problem of Induction.

3. The LFW (Freedom of Spontaneity)/Freedom of Action Distinction [FFD]. This is the PCD applied to the idea of free will.

4. The Appeal to the Impossible [AI]. This is, after you are done criticizing all of the ideas I present, the only argument you make for any other idea of moral responsibility. The essence of the appeal is: P. The requirements for X can never be satisfied. C. There must be other requirements for X. In this discussion, X is moral responsibility.

5. The Appeal to Internal Causation [AIC]. This is the appeal to the idea that physical laws operate internally, as well as externally, and therefore we are, nevertheless, responsible. The AIC is always met with the reply that, at some point, we had no bodies, and therefore the determining force must ultimately have been external, not internal.

Now, allow me to respond to your posts. Quote:
Quote:
To be honest, these ideas are inspired by Wolfram's book A New Kind of Science and the idea that the universe is a sort of deterministic, discrete computer (a small cellular-automaton sketch at the end of this post illustrates the kind of rule-based determinism I have in mind). But all of this is conjecture and I do not pretend to know whether the universe is deterministic, indeterminate, or mixed. Quote:
"that weasal method of sucking the meaning out of words, and then presenting the empty shells in an attempt to palm them off them off as giving the Christian faith a new and another interpretation." I suspect that compatibilists may not be entirely innocent of the same. Quote:
Quote:
Quote:
Quote:
Quote:
Quote:
However, right after that you write: Quote:
At this point, at your request, I returned to your original argument to reconsider. Upon inspection I discovered that, besides the obvious objection I raised (if by actions you meant "other" actions), I also disagree with the logic of your argument (and not only the premises). At the risk of oversimplification, your argument seems to be:

P1. A libertarian's actions are caused (by the libertarian).
P2. LFW entails that a person's actions have no cause.
C. LFW is false.

My reply is simply to deny P2. Libertarians do maintain, without contradiction, that all of their actions are caused: caused by the libertarians themselves. The libertarian only claims that his choices are unconstrained. You seem to have anticipated this objection by saying: Quote:
Now you mention the cause/responsibility example: Quote:
Quote:
Quote:
Quote:
You illustrate how I contradict myself: Quote:
Later, you say: Quote:
Quote:
For example, we praise puppies but no one holds a puppy morally responsible. No one thinks that a puppy has free will. And if a puppy attacked a baby, we would punish the puppy (perhaps even execute the puppy) as a deterrent to prevent more attacks. But we would not blame or condemn the puppy. The same holds for robots and toddlers. Quote:
So, allow me to distance myself from that claim, which must have seemed quite absurd. Also, I was reading your use of the word "preference" from a subjectivist perspective. Now that I understand you meant the preference of an "objective moral system", obviously your uses of "should" and "preference" (in 6 and 7) are equivalent (almost by definition). Finally, about the distinction between consequentialism and deontological theories: I am not a moral philosopher and do not pretend to understand these ideas (or even the consequences of my own position) as well as you do. I subscribe to neither theory (indeed we have yet to establish any system of either morality or responsibility). The morality of an action cannot be a function of only its consequences, because how do you measure the consequences? Would you measure the consequences in terms of their consequences, and so on ad infinitum? Likewise, I do not subscribe to deontological theories because, obviously, if a woman is raped, the rape is wrong at least partly because of the pain and suffering the rape causes. If the rape caused everyone to be happy and healthy, we might reconsider. The truth, I suspect, is some mix. If your position (as you say) is consequentialism, at some point you must measure consequences not in terms of further consequences but in terms of something else. Hume is far more eloquent than I could be: Quote:
This position is neither consequentialism nor a deontological theory, but some mix (as best this amateur philosopher can tell). At some point, I feel, you must admit some metaphysical "ought-to-be-doneness", and consequentialism then becomes a function not only of consequences but also of how well those consequences approach that "ought-to-be-doneness". Quote:
[ September 14, 2002: Message edited by: Kip ]
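Below is the cellular-automaton sketch referred to earlier in this post. It is an illustration added for concreteness, not code from the original thread: the rule number, grid width, and seed are arbitrary choices, and nothing here settles whether the actual universe works this way. It only makes the "deterministic, discrete computer" conjecture concrete: every cell is updated by a fixed rule, so the same seed always produces exactly the same history.

```python
RULE = 30  # any elementary rule 0-255 would do; 30 is an arbitrary choice

def step(cells, rule=RULE):
    """Apply one deterministic update to a row of 0/1 cells (wrapping at the edges)."""
    n = len(cells)
    new = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        neighborhood = (left << 2) | (center << 1) | right  # value 0..7
        new.append((rule >> neighborhood) & 1)               # look up that bit of the rule
    return new

# Start from a single "on" cell; replaying from the same seed always yields
# the same pattern, which is the sense of determinism at issue here.
row = [0] * 31
row[15] = 1
for _ in range(15):
    print("".join(".#"[c] for c in row))
    row = step(row)
```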
09-14-2002, 05:28 PM | #115 |
Regular Member
Join Date: Jan 2001
Location: not so required
Posts: 228
bd:
Your previous post began quite well. Although I note that you have become less patient! I agree with your summary of the dispute and everything you say until: Quote:
Quote:
Quote:
Quote:
Quote:
The reason, I submit, is that humans are sufficiently complex to establish the illusion that humans are not mechanical, whereas robots cannot maintain such a deception. We understand every single part of how an Intel chip processes data, but many territories of the mind remain uncharted. The notion of vitalism or Platonic souls still resides in the back of the public consciousness. I maintain that the sole reason we exempt robots from responsibility is that they are mechanical and that we cannot deny robots are mechanical. Or do you suggest an alternative distinction? And why would that distinction be relevant?

Wouldn't it be easier to say "You damn computer, I blame you for that mistake, how could you be so dumb?" That would be so much easier than admitting that the computer's creator, or even worse, the user, has done something wrong which we must repair. But in the case of humans, whom would we blame? Most of the time, if a human malfunctions, say by going on a killing spree, we cannot repair the human. We cannot open his skull and reprogram his brain. So we take the easy way out and simply blame the human.

Now to entertain the real question: would we hold Data responsible? I am not sure. There are strong reasons for both claims. On the one hand, humans are machines themselves, and so as technology progressed, we would cross the range of complexity until reaching a creation just as complex as (and soon more complex than) humans themselves. That would be compelling. However, no matter how complex the creation became, the robots would always be distinguished by being a creation. In the back of every human's mind, we would remember that the robot is not some natural, mysterious product of evolution but an artificial human construct, as are houses and televisions. So, although robots will become sufficiently complex (we could even talk of granting robots that exceed humans in complexity and intelligence "more" free will than we possess ourselves), we may nevertheless hold reservations because we are reminded of the robot's artificial nature (whether that issue should be, rather than would be, relevant is debatable). Quote:
Quote:
Quote:
At this point, I am only sure that we are sure of very little. If you wish to suggest some condition for moral responsibility or reply to selections from my posts, I would love to continue. But I feel we are digressing rather than progressing. [ September 14, 2002: Message edited by: Kip ]
09-16-2002, 06:53 PM | #116 |
Senior Member
Join Date: Sep 2002
Location: San Marcos
Posts: 551
I think to answer this one has to look into the very nature of morality itself. To me, morality is a matter of looking at consequences and character judgement, so there is no quarrel with the determinist position in my moral theory. However, my moral theory would actually be usurped by the free will/randomist position, because one could not judge character then: character would never in any sense be constant, as it would at any moment be open to radical change for no causal reason whatsoever. What sense does it make to praise a Nobel Prize winner when such people appear at random, only perhaps to become bloodthirsty bigots the next moment for no predictable reason whatsoever?
In my theory, deciding whether or not a given person is responsible for something is for the most part based on whether the given consequence happened as a result of that person's character or of some external accident beyond that person's intention or ability to manipulate. For example, if you blame someone for sleeping in and being late, it is the laziness of the person being condemned, a certain intrinsic trait. However, a person whose car broke down cannot be blamed, because that's not a character flaw but a matter of external circumstance. Now, do I not realize that these people had no choice in regard to whether they were of a certain character or not? Of course I do, but that's not relevant. What is relevant is their character now. I think that other factors, apart from free will, determine whether an organism can be seen as morally responsible. Traits like intention.

What puzzles me, though, is why a determinist would even argue against humans holding each other responsible, blaming, praising, etc. Because weren't such humans determined to punish and blame others?

Oh yes, btw, I am not a compatibilist, as I in no sense believe in free will, a concept I see as somewhat religious and randomist, i.e. the idea that human minds are somehow free of causality. I believe that people can reason and cognitively select using their frontal lobes, but such things are completely within causal laws. I believe that morality has nothing to do with free will. I likewise do not adhere to the is/ought dichotomy.

[ September 16, 2002: Message edited by: Primal ]
09-17-2002, 01:12 PM | #117 |
Veteran Member
Join Date: Jul 2000
Location: King George, VA
Posts: 1,400
Kip:
Once again there seems to be little point in replying to your posts in detail since they consistently reflect a near-total lack of understanding of what I've been trying to say. In this post I'm going to go through my analysis of causality and what it means to "do" something, in some detail, and point out the implications for LFW, PAP, and "ultimate responsibility". In the end I hope that you will at least come to an understanding (if you haven't already) that the issues that are troubling you have nothing to do with determinism or free will. In reality they spring from a total misconception of the nature of morality. I'll discuss my ideas about morality in a following post.

1. The nature of causation.

What do we mean when we say that A causes B? A typical answer given by professional philosophers (in fact the wording here is taken directly from Brand Blanshard) is that A is so connected to B that given A, B must occur. The problem, of course, lies in that word "must". What does it mean to say that B "must" occur?

One interpretation that has been offered is that it means that A has in fact always been followed by B. But on reflection it's clear that this is not what is meant. When we say, for example, that a large steel ball hitting an ordinary window at high speed will cause it to shatter, we do not mean merely that windows hit by large steel balls have, as a matter of fact, always shattered in the past. For one thing, causal statements sometimes involve conditions that have never in fact occurred. For another, they often refer to possible future events. (An example of both: "If a football were to fall into the sun, it would be destroyed.")

So perhaps it means that A not only has always been followed by B, but that it always will, as a matter of fact, be followed by B? (This is what you call "perfect correlation".) But this doesn't work either. Among other problems, the statement may refer to an event that not only never has, but almost certainly never will, occur. (Example: "If the Earth were to fall into Sirius, it would be destroyed.") Another is that two events may, as a matter of fact, be perfectly correlated in this way without any causality being involved. For example, suppose that a certain sporting event - the Quidditch world championship, say - occurs every four years. But a presidential election also occurs every four years. And we can easily imagine that, by sheer chance, the first Quidditch world championship occurred just after the first presidential election, and the last will occur right after the last such election. So we have a case where A (a presidential election) is always followed, and always will be followed, by B (a Quidditch championship), but there is no causal connection between the two.

Clearly, when we say that A causes B we intend to say not only that A has always been followed, and will always be followed, by B, but that if A were to occur (or to have occurred) at any specific place and time, B would follow (or have followed). But the "would" here is just as problematic as the "must" in the original statement, and in just the same way. In both cases the term "must" or "would" is expressing some kind of necessity. That is, in each case we are saying "If A, necessarily B". Or alternatively, "It is not possible that A should occur, but not be followed by B". But what is the nature of the "necessity" or "possibility" referred to in these statements? Clearly it is not logical necessity. If B were logically necessary given A, we would not speak of causality at all.
For example, we do not say that an object's being square causes it to be rectangular. Conversely, we do say that a solid steel ball hitting an ordinary window at high speed will cause it to shatter, even though it is clearly logically possible that it will go clean through the window without affecting it in any way.

So statements like "If A, necessarily B" or "It is not possible that A should occur but not be followed by B" must be saying that there is something about this world that makes it "necessary" that the second event would happen if the first one did. In a sense we are not making a statement about this world at all, but about other possible worlds. Thus, the statement "If this solid steel ball had hit that window at high speed, the window would have shattered" can be interpreted as meaning that in all possible worlds like this one in the relevant way, but in which (unlike in this world) the ball does hit that window, the window does shatter. (Or if you prefer, we can substitute "a ball like this one in the relevant respects" and "a window like that one in the relevant respects".)

Thus to say that A causes B is to say that there is some property of this world such that not only in this world, but in any possible world that has this same property, A is necessarily followed by B. And the property in question cannot be merely the fact that A is in fact always followed by B, because in this case the statement would reduce to the trivial tautology that in any possible world where A is always followed by B, A is always followed by B. Rather, it must be something that "underlies" the actual events in the world, constraining them to adhere to a certain pattern. The set of such properties is what I mean by an "underlying structure". At this point it should be clear why it is correct to say both that "A causes B" means that given A, B must occur, and to say that it means that if an event of type A were to occur (or had occurred) at a given place and time it would be followed (or would have been followed) by an event of type B.

Note: It seems to be useful to broaden the notion of a "cause" to include conditions or states of things as well as events. At any rate, no confusion seems to arise from doing so, so I will sometimes refer to conditions or states as causes in what follows. In particular, I will sometimes say that X "causes" Y when X is a thing (such as a person or a mind). This can be taken to mean that X's current state, or events internal to X, or both, cause Y.

2. The nature of action, or what it means to "do" something.

Now let's consider what it means to say that X did Y. Surely it is obvious that it means that X caused Y to happen. This is perfectly straightforward in the case of nonsentient things. For example, to say that a tree knocked over a telephone pole is to say that the tree caused the telephone pole to fall. Or, when I say that the microwave heated up my lunch, I mean that it caused the lunch to become hot. The same is true of animals. For example, when I say that my dog retrieved the ball, I mean that he caused the ball to return to me. And finally, it is true of people. When we say that John killed Robert, we mean that John caused Robert's death. However, in the case of people (or even animals) there is an ambiguity that does not exist in the case of inanimate objects. Thus, when we say that John killed Robert, we could mean either of two very different things.
For example, suppose that the two are standing at the edge of the Grand Canyon and John pushes Robert over the edge. In this case we would say without hesitation that John killed Robert. But suppose instead that a gust of wind catches John and forces him into Robert, causing Robert to fall to his death. We would be inclined to say that in a sense John killed Robert, but that he "really" didn't. This is because when we talk about "so-and-so," we can be referring to his body or to his mind. Thus, in our example, Robert was killed as a result of John's body moving in a certain way, but since this motion was not caused by John's mind, we are inclined to say that John didn't "really" kill him.

Here's another example: suppose that John is knocked unconscious and driven to Chicago. We would be inclined to say that he "went" to Chicago, and would probably not be inclined to say that he "really" didn't. The difference between this and the first example, of course, is that killing someone has moral significance, whereas going to Chicago per se doesn't. Thus in the first case we take care to distinguish between a death caused by John's mind and one that is merely caused by his body when it is not "under the control" of his mind.

Note that both meanings still refer to causation. In the one case we are talking about physical causation: the person's body causes the event in question. In the second we are talking about mental causation: the event is caused by the person's mind. But in either case we are talking about causation. Also, note that the second sense is stronger than the first, not weaker, as one might be inclined to think at first sight. The only way that a person's mind can cause events in the "real world" is by causing movements of his body. Thus mental causation always entails physical causation. But since physical causation has no moral significance in itself, I am going to use only the second sense. That is, when I say that a person "did" something, I will always mean that the event was caused by the person's mind.

Since the claim that "X did Y" means, or at least entails, "X caused Y", is crucial to the argument, it would be wise to stop and review this for a moment. Is it possible that, when we say that "X did Y", we sometimes mean something other than "X's body caused Y to occur" or "X's mind caused Y to occur"? I honestly can't think of any other meaning that people might plausibly have in mind when they say this. When we say that X did Y, are we not saying that X is responsible for Y's occurring? And we are certainly not saying that he is morally responsible, since we quite often say that someone did something, but is not morally responsible for it. What other sense is there of being responsible for an event besides being causally responsible? Or we can look at it another way. When we say that X did Y, are we not saying that there is a connection between X and Y? And we are certainly not saying merely that there is some kind of correlation between the two - that it happens to be the case that whenever X is around under certain circumstances, an event like Y always occurs. No, when we say that X does these things, we are saying that there is a real connection between X and the "Y-like" events, not a mere correlation. And what other kind of connection could we be referring to but a causal connection?

Now let's turn to your comments (which you apparently consider to be crucial) about what happens when an agent "repeatedly returns to the exact same situation".
You suggest that it is meaningful to say that he might choose the same option every time without any causation being involved: Quote:
So when we talk about "rolling back the tape" to see if anything different might happen, we are really talking about looking at other possible worlds in which the events up to this point were exactly the same as events in this world. We're asking, "Does something different happen at this point in some of these worlds than what happened in this world?"

Obviously the answer to the question, if we put no restrictions on the "possible worlds" to be considered, is "yes". It is logically possible that almost anything happened at that point given what came before, and there are possible worlds corresponding to each of these logical possibilities. But the vast majority of these possible worlds are utterly chaotic from this point forward (even though they were quite orderly up to that point). We're not really interested in all possible worlds, not even all possible worlds where the events up to this point are exactly the same as in ours. We're only interested in the ones that are "like" ours in the relevant ways. Specifically, they must be alike in the sense of having the same underlying structure. That is, any uniform regularities (and hence any causal relationships) that hold in our world must hold in the other possible worlds that we're interested in. Thus the question that we really want to ask is, "Does something different happen at this point in time in any of these possible worlds than what happened in ours?"

In the case of something that someone did, we are now in a position to answer this question definitively: no, nothing different happens at this place and time in any of these possible worlds. The reason is simple. Let the event in question be Y, and assume that X did Y. To say that X did Y means that X caused Y to occur. And to say that X caused Y to occur is to say that there is some property of this world such that not only in this world, but in any possible world that has this property, this state of X and/or events internal to X are necessarily followed by Y. But for reasons explained earlier, we have restricted our attention to worlds that do have this property - i.e., possible worlds in which the relevant causal relationships hold. Therefore Y occurs in all of these possible worlds. In other words, if we could "roll back the tape" and observe what happens from that point in any other possible world that is sufficiently "like" ours to make the question meaningful, we would observe Y. (A small deterministic-replay sketch at the end of this post puts this point in miniature.)

Note: Nothing whatever has been said about determinism in the above analysis. Its validity does not depend on whether the world is deterministic or not.

3. Libertarian Free Will (LFW), the Principle of Alternative Possibility (PAP), and the Principle of Ultimate Responsibility (PUR).

A. It follows immediately from the analysis above that, whenever we can truly say that X did Y, X could not have done otherwise in any sense that advocates of LFW would consider relevant (i.e., a sense incompatible with determinism). Thus the concept of LFW is logically incoherent.

B. The PAP says that no one is morally responsible for an action unless he could have acted differently. In the libertarian interpretation this means that the action cannot be caused. But we have seen that to say that someone did something entails that it was caused. So the PAP (on the libertarian interpretation) says that no one is morally responsible for doing anything that he can truly be said to have done. This is so absurd that it cannot plausibly be held to be a moral intuition or "self-evident truth".
C. Let me repeat Kane's definition of the PUR: Quote:
(I know that you don't need this demonstration, Kip, but there may be some following this thread who do.)

Suppose that an agent performs an act Y as a result of the "character and motives" (C&M) that he has now. According to the PUR, in order to be (morally) responsible for Y, these C&M must be the result of "choices or actions voluntarily performed" (VCA) in the past. But as we have seen, to say that he "performed" these choices or actions is to say that some aspects of his prior mental state caused them. Thus to say that the C&M that the agent has now are the result of VCA in the past is to say only that they are the result of the C&M that he had at some time(s) in the past. And if this is so, surely in order to be ultimately responsible for his actions he must be responsible, by virtue of still earlier VCA, for having the C&M that led him to perform the original set of VCA. But this clearly leads to an infinite regress, which is impossible because the agent has only existed for a finite time. Thus it is logically impossible for an agent to be ultimately responsible for his actions.

All of this should be blindingly obvious to anyone who has taken the trouble to think about it. A being who came into existence at a definite time in the past cannot be "ultimately responsible" in this sense; his behavior must clearly be the product of some combination of "nature and nurture," heredity and environment, original and acquired characteristics. There is simply no way to "bootstrap" yourself into existence. At some point you simply find yourself existing, with whatever qualities you have. Whatever qualities you have from that point on are the product of the interaction of these primordial qualities and whatever "happens to you". Even if some uncaused events (whether internal or external) are a part of this mix, they're just part of what "happens to you"; you are clearly not "ultimately responsible" for the results of uncaused events.

So this whole way of thinking about ethical questions collapses in the end into total incoherence. At this point we have the choice of abandoning morality altogether, or finding a new way of thinking about it.

[ September 17, 2002: Message edited by: bd-from-kg ]
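Here is the deterministic-replay sketch referred to above. It is an illustration added for concreteness, not code from bd-from-kg's post, and it assumes (purely for the sake of the example) an arbitrary fixed transition rule standing in for the "underlying structure": if the rule and the history up to a point are held fixed, every replay of the tape produces the same later events.

```python
def transition(state):
    # Stands in for the "underlying structure": a fixed rule mapping each
    # state to the next. Any deterministic rule would serve equally well.
    return (1103515245 * state + 12345) % (2 ** 31)

def run(initial_state, steps):
    """Produce the history of events that follows from a given starting point."""
    history = [initial_state]
    for _ in range(steps):
        history.append(transition(history[-1]))
    return history

first_run = run(initial_state=42, steps=10)
replay    = run(initial_state=42, steps=10)   # "roll back the tape" and watch again

# Same rule + same prior events => the same event at every later point.
assert replay == first_run
```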
09-17-2002, 09:46 PM | #118 |
Junior Member
Join Date: Aug 2002
Location: San Diego
Posts: 15
Hello,
I have read (as much as I can) of this thread and I just want to say that I totally agree with Kip. It seems to me (and I could be wrong) that people are arguing for the notion of moral accountability because the consequences of denying it appear to be moral anarchy. I believe that this is not the case, as has been shown in this thread. I also believe that the robot example is clearly persuasive. I wonder if part of the disagreement we have on this issue is in how we are looking at this problem. When I think about choice, I do so purely from the vantage of an external observer. In strictly observing another person, it is clear to me that they are a state machine and whatever they did could not have been otherwise. I believe that the naturalism.org site explains Kip's position brilliantly (as I believe Kip did!): <a href="http://www.naturalism.org/resource.htm#Writings" target="_blank">naturalism.org</a> [ September 17, 2002: Message edited by: Marcion ]
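A minimal sketch of the "state machine" point above, added for illustration rather than taken from Marcion's post; the states, stimuli, and responses are invented. Given the same internal state and the same input, the machine's response and next state are fixed, so in that sense it "could not have done otherwise".

```python
TRANSITIONS = {
    # (current state, stimulus) -> (response, next state)
    ("calm",      "insult"):     ("snaps back", "irritated"),
    ("calm",      "compliment"): ("smiles",     "calm"),
    ("irritated", "insult"):     ("walks away", "calm"),
    ("irritated", "compliment"): ("softens",    "calm"),
}

def act(state, stimulus):
    """Deterministic: identical (state, stimulus) always yields the identical result."""
    return TRANSITIONS[(state, stimulus)]

state = "calm"
for stimulus in ["insult", "insult", "compliment"]:
    response, state = act(state, stimulus)
    print(f"{stimulus} -> {response}")
# Replaying the same stimuli from the same starting state reproduces
# exactly the same sequence of responses, every time.
```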
09-18-2002, 12:17 PM | #119 |
Senior Member
Join Date: Sep 2002
Location: San Marcos
Posts: 551
I think a lot of the issue centers around how we view morality. If we define morality as something that requires, by definition, free will and choice, then of course determinism will be incompatible with morality. However, if we view morality as something else, such as a universal law, a reflection of our self-interests, or a utilitarian and/or character judgement, then the existence or nonexistence of free will becomes irrelevant.
In the robot example, I would be willing to hold the robot morally responsible, and hence worthy of destruction or punishment, depending on whether it could feel pain or not, and for the most part on the make-up of the machine. This is because my moral standards do not require free will 'a priori' to be implemented. My moral code only requires the possession of certain traits. Call this 'absurd' if you like, but keep in mind that pure incredulity makes for a weak argument.

Oh yes, I also think it very necessary to distinguish between the different meanings of 'could' in such a discussion: between the word 'could' used to designate epistemic ignorance and expectations, and the word 'could' meant to designate actual physical randomness. When I as a determinist say something like "if you go into the ocean you could get attacked by a shark", I am not saying such an event is random in the existential sense. I mean to say that, given what I know of oceans, sharks, and past human/shark encounters, combined with my ignorance of future events, a shark attack may happen. I cannot say you will be attacked, because I don't know; I cannot say you will not be attacked, for the same reason. But given that I don't know all the factors in the ocean at any given time, and shark attacks have happened in the past, I would have to say it can happen. Hence the distinction between "can" in the sense of prediction and "can" in the sense of the actually random.

Now, in what sense does it mean anything to say "one could have done otherwise"? I think it means that, were I placed back in the state of ignorance I was in before the event in question, I could not have fully predicted the outcome. What this entails for morality is perhaps questionable, but the word 'could' obviously can be used in different senses, and this should be recognized at the very least. Lastly, 'can' also means that a given thing is able or warranted to do something; hence there are three meanings. For example, one 'can' digest certain foods, meaning it is expected that one is able to digest those foods.

[ September 18, 2002: Message edited by: Primal ]
09-18-2002, 12:52 PM | #120 |
Veteran Member
Join Date: Aug 2002
Location: Chicago
Posts: 1,485
Again, holding entities morally responsible is strictly utilitarian. If holding those entities responsible will protect society from further harm, they are held responsible. If holding them responsible will not protect society, they are not held responsible. This is a perfect reason for holding individuals morally responsible (or just responsible if you like) even though their actions may be deterministic.
Here are some of the utilitarian purposes of punishment for immoral acts.

1. Deterrent Input - Since the brains of individuals cause actions which are based on inputs from the environment, the knowledge that one could be punished if caught would be an input to the decision-making process.

2. Classical Conditioning - By providing an unpleasant stimulus for immoral behavior, the structure of the state machine could be changed so that the individual would seek to avoid the immoral behavior.

3. Protection of Society - Removal of the offender from society protects society from further immoral acts by the individual.

4. Removal from the Gene Pool - Execution prevents the individual from reproducing. If the immoral behavior was the result of genetics, this societal selection reduces the chance that the behavior will continue in succeeding generations.

By examining the four utilitarian reasons for punishment listed above, it should be clear that holding an individual morally responsible is perfectly compatible with determinism. Some social animals punish group members for the same reasons (I know chimps and wolves do - and I assume there are others). And we hold pets morally responsible for some of their actions (for reasons 2, 3, and possibly 4). If punishing a robot would make sense for one of the above reasons, you can bet we would punish it.
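A minimal sketch of point 1, "Deterrent Input", added for illustration; the numbers and the decision rule are invented, not taken from the post. The prospect of punishment is simply one more input to a deterministic decision process, so changing that input changes the behaviour the process produces, with no appeal to free will.

```python
def choose(benefit_of_act, chance_of_being_caught, severity_of_punishment):
    """A deterministic decision process: act only if the benefit outweighs
    the expected cost of punishment."""
    expected_cost = chance_of_being_caught * severity_of_punishment
    return "act" if benefit_of_act > expected_cost else "refrain"

# Same agent, same benefit; only the deterrent input differs.
print(choose(benefit_of_act=10, chance_of_being_caught=0.0, severity_of_punishment=50))  # act
print(choose(benefit_of_act=10, chance_of_being_caught=0.5, severity_of_punishment=50))  # refrain
```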