FRDB Archives

Freethought & Rationalism Archive

The archives are read only.


Old 09-11-2002, 05:59 PM   #111
Veteran Member
 
Join Date: Jul 2002
Location: Overland Park, Kansas
Posts: 1,336
Post

Greetings:

It's true; dictionaries are seldom written by philosophers.

(Sorry, but it's true.)

Dictionaries show the current, common usage of a word (and there are often several current usages, and often they contradict one another)--what the 'average' person means when he or she uses a given word.

Now, do you trust your epistemology to those folks?

(Didn't think so...)

Keith.
Keith Russell is offline  
Old 09-12-2002, 12:15 PM   #112
Kip
Regular Member
 
Join Date: Jan 2001
Location: not so required
Posts: 228
Post

Patience, bd-from-kg.

Would you like for me to reply to your most recent post, your previous two posts, or all three?

Not all of us have the abundant reserves of time that you apparently do.

Kip
Kip is offline  
Old 09-12-2002, 09:33 PM   #113
Veteran Member
 
Join Date: Jul 2000
Location: King George, VA
Posts: 1,400
Post

Quote:
Originally posted by Kip:
Would you like for me to reply to your most recent post, your previous two posts, or all three?
All three, of course. You might try consolidating replies rather than answering point by point, but that's your call. I'll wait.
bd-from-kg is offline  
Old 09-14-2002, 04:33 PM   #114
Kip
Regular Member
 
Join Date: Jan 2001
Location: not so required
Posts: 228
Post

bd:

Let's begin with some core ideas that we can both reference throughout the discussion:

1. The Possible/Conceivable Distinction [PCD]

This includes the distinction between possible and logically possible.

2. The Cause/Correlation Distinction [CCD]

This refers to Leibniz's clocks and the Humean Problem of Induction.

3. The LFW (Freedom of Spontaneity)/Freedom of Action Distinction [FFD]

This is the PCD applied to the idea of free will.

4. The Appeal to the Impossible [AI]

This is, after you are done criticizing all of the ideas I present, the only argument you make for any other idea of moral responsibility. The essence of the appeal is:

P. The requirements for X can never be satisfied
C. There must be other requirements for X.

In this discussion, X is moral responsibility.

5. Appeal to Internal Causation [AIC]

This is the appeal to the idea that physical laws operate internally, as well as externally, and therefore we are, nevertheless, responsible. The AIC is always met with the reply that, at some point, we had no bodies, and therefore the determining force must ultimately have been external, not internal.

Now, allow me to respond to your posts.

Quote:
In this context philosophers use the term “possible worlds” to mean “logically possible worlds”. The point is that a set of propositions is consistent if and only if it has a “model” – i.e., a possible world in which all of the propositions are true. Thus if someone claims that such-and-such must be the case, it is often useful to point out that there are possible worlds in which it is not the case, and therefore it is false that it must be the case.
I understand your meaning now. This is the PCD. By possible, you mean "logically possible", which is (to the best of my knowledge) the same as "conceivable" (and quite different from possible). I will remember to assume that you use "possible" in this sense from now on.
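The model-theoretic point in the passage you quote (a set of propositions is consistent iff it has a model) can be sketched in a few lines of Python. This is purely illustrative and my own construction: a brute-force search over truth assignments stands in for the "possible worlds".

```python
from itertools import product

def consistent(formulas, atoms):
    """True iff some assignment of the atoms (a 'possible world')
    makes every formula in the set true, i.e. the set has a model."""
    for values in product([False, True], repeat=len(atoms)):
        world = dict(zip(atoms, values))
        if all(f(world) for f in formulas):
            return True   # found a possible world: the set has a model
    return False

# "It must rain" is refuted by exhibiting a world where it doesn't:
print(consistent([lambda w: not w["rain"]], ["rain"]))          # True
# An inconsistent set has no model in any world:
print(consistent([lambda w: w["p"], lambda w: not w["p"]], ["p"]))  # False
```

This is exactly the move bd describes: to refute "such-and-such must be the case", exhibit one possible world where it is not.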

Quote:
What you can and cannot imagine is irrelevant. Your ability to conceptualize the universe is not a necessary condition for its existence. Since nondeterministic worlds are possible and it is clearly impossible to determine whether this world is deterministic or not by observation, what are your grounds are for “strongly doubting” that it is not?
For one, the trend towards deterministic explanations is rather suspicious. Second, how would an indeterminate universe generate randomness if not according to some rule? Would we not reject the idea of an indeterminate universe for the same reason that we reject LFW? To say that I can conceive of an unpredictable world is not to say that I can conceive of an indeterminate world (although perhaps I can).

To be honest, these ideas are inspired by Wolfram's book A New Kind of Science and the idea that the universe is a sort of deterministic, discrete computer. But all of this is conjecture and I do not pretend to know whether the universe is deterministic, indeterminate, or mixed.
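For what it's worth, Wolfram's "deterministic, discrete computer" can be illustrated with an elementary cellular automaton. A minimal sketch in Python (the rule number, row width, and step count are my own choices, purely illustrative): the update rule is fully deterministic, yet Rule 30's output looks random, which is the sense in which a world could be unpredictable without being indeterminate.

```python
def step(cells, rule=30):
    """Apply one deterministic update to a row of 0/1 cells (wrapping ends)."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (center << 1) | right  # neighborhood as a 3-bit number
        out.append((rule >> index) & 1)              # look up the new state in the rule
    return out

row = [0] * 15 + [1] + [0] * 15   # a single live cell in the middle
for _ in range(8):
    row = step(row)
# Re-running from the same start always yields the same rows:
# determinism, without predictability-by-inspection.
```

The same starting row always produces the same history, yet no obvious shortcut predicts a distant row short of running the rule itself.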

Quote:
Yes. Since you agree that LWF is logically incoherent, I’m at a loss as to why or how you would challenge this statement. LWF covers pretty much any conception of free will that is incompatible with determinism. And if all such conceptions are logically incoherent, it follows that all logically coherent conceptions of free will are compatible with determinism.
I was not questioning your logic so much as raising an eyebrow at the liberties you take in defining the term "free will". You appear to be "picking and choosing" whichever definition pleases you most and satisfies your need for moral responsibility. This is an example of Persuasive Definition. Walter Lippmann quoted a Christian fundamentalist describing Persuasive Definition as:

"that weasel method of sucking the meaning out of words, and then presenting the empty shells in an attempt to palm them off as giving the Christian faith a new and another interpretation."

I suspect that compatibilists may not be entirely innocent of the same.

Quote:
On the contrary, it is compatible with determinism. In fact, the person who wrote this definition obviously intended to make it clear that it was compatible with determinism. Why do you think he listed the causal factors that would make a choice “unfree” in the sense he had in mind if any causal factors would make it unfree? He clearly intended to include choices that are determined by the agent’s personality, character, desires, etc. as “free”.
I agree that the author may have intended a compatibilist definition (although I find the definition to be appropriate for libertarians too). The question, of course, is whether the agent's constitution, despite the author's opinion, is truly "free".

Quote:
Dictionary definitions are hardly useful guides to such questions. In this context “produce” is just a synonym for “cause”. In fact, the American Heritage dictionary (from which your definition may have come) defines “cause” [as a noun] as “producer of an effect” and [as a verb] as “to be the cause of” – i.e., to be the producer of an effect, or to produce an effect. But when we look up “produce”, the relevant definition is “to cause to occur or exist”. In other words, to “cause” is to “produce” and to “produce” is to “cause”. Isn’t that helpful? Along the same lines, I once had a dictionary that defined “pagan” as “heathen” and “heathen” as “pagan”. The purpose of a dictionary is not to provide an analysis of every word, but to give someone who is unfamiliar with a word an idea of what it means.

If we set out to define or analyze the notion of causality logically, we soon find that it means one thing and one thing only: to say that A causes B means that given A, B must occur. The only possible evidence that events like A cause events like B is that we find that events like A are always followed by events like B. Any further connection between them exists only in our imaginations. And any such connection (of two kinds of events being regularly conjoined in this way) is always regarded as evidence of a causal relationship. Thus from an operational point of view, to say that A causes B means at least this, and only this.

(However, this is not quite as simple as it looks at first sight, because of that crucial word “must”. In other words, when we say that A causes B, we are not saying merely that A is always followed by B, but that it must always be followed by B. The implications of this are discussed a little later.)
Excellent point about the definitions! I needed the reminder that definitions can be tautologies (indeed the entire dictionary is but one big tautology). However, having read your definitions again, I realize that the entire controversy is, as you note parenthetically, about the word “must”. That is a very important word and I agree that, granted the word “must”, this is a decent definition of determinism. The controversy arose, however, because your use of the word “must” is not entirely consistent. For example, later in your post (which I printed, read, and underlined as appropriate) you define a cause as:

Quote:
It means that if an event of type A were to occur (or had occurred) at a given place and time it would be followed (or would have been followed) by an event of type B.
So, "would" or "must", which is it? A definition of cause or determinism that references only "would" is too weak, and a definition that references "must" is not compatible with LFW. That is all I claim, so perhaps we have only been arguing over words.
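The two readings can be put in standard modal notation (a sketch of my own; the counterfactual symbol is Lewis's, and I am glossing bd's definitions, not quoting them):

```latex
% "Must": in every logically possible world where A occurs, B follows.
\Box\,(A \rightarrow B)

% "Would": in the closest possible worlds where A occurs, B follows
% (Lewis's counterfactual conditional).
A \mathrel{\Box\!\!\rightarrow} B
```

Only the first licenses "B had to happen"; the second is consistent with B failing in some remoter possible world, which is why a "would"-only definition leaves room for LFW.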

Quote:
Yes. It seems to be impossible in principle to establish that there is even such a thing as a “cause,” much less that A “caused” B in any specific case. In other words, any statements about a supposed “underlying structure” are not factual; they are explanatory hypotheses or conceptual frameworks. But once you start talking about causes, or determinism, you are committed to the position that counterfactual conditionals are meaningful.
This is the CCD. We are discussing (what I would not have expected to be relevant to a discussion of moral responsibility) the Problem of Induction. Now, I grant, for the sake of argument and practical sense, that all physical laws may in fact be causes (and not only correlations). However, once we hypothesize about an agent’s repeated return to the exact same situation, I do not feel the same Principle of Induction is warranted. The libertarian may claim, without contradiction, that the agent is only freely willing the same decision eternally. His actions are not determined, because there is no “must”, only “would”. This is the distinction I have been mentioning repeatedly.

Quote:
Once again you’ve lost me completely. How can you possibly imagine that LFW does not assert that we sometimes make choices that could have been different than they were given what came before? This is fundamental to the idea of libertarian free will.
I may have misread what you wrote. You write:

Quote:
The only possible point that I can see here is that perhaps you are saying that LFW asserts only that we have the ability to make choices that are not determined by what came before, but doesn’t actually assert that anyone ever does make such a choice! But I doubt that there has ever been an advocate of LFW who believed that no one ever actually has made, or ever will make, a free choice, even though we have the ability to do so. So even if it’s true that technically LFW doesn’t entail that anyone ever makes a free choice, this seems to be pointless nitpicking at best.
That is exactly my position. I do not understand how that is nitpicking, though. From the claim that many libertarians subscribe to this further idea (that libertarians actually would choose otherwise if returned to the same exact situation), it does not follow that LFW entails this idea, any more than it follows, from the claim that most atheists believe in purple dinosaurs, that atheism entails a belief in purple dinosaurs. I assure you that I, for one incompatibilist, do not subscribe to this further idea about what libertarians “would” do.
However, right after that you write:

Quote:
B says nothing about what the agent would do if presented with the same choice again and again. (This is a meaningless question anyway. No one is ever presented with the exact same choice twice, much less over and over again. If nothing else, conditions are different the “second time around” in at least one crucial way: the agent has faced this choice before, whereas he hadn’t the first time.) It says only that some of the actions that A claims to exist – i.e., actions that are not determined by what came before – are choices. If no such actions were choices, LFW would have no content. For example, if an action were caused by a random quantum event in the brain, no reasonable person would say that it was a choice in any meaningful sense, and therefore that it was an instance of the exercise of LFW. What B says is that some of the actions that are not determined by what came before are not of this kind; that they can meaningfully be regarded as choices rather than as the result of things that “just happened” to the agent.
This suggests to me that I have misread your B. Do the “actions” mentioned in B refer to all the actions a libertarian commits, or only the “other” actions? I had taken B to mean only the latter, but I may have misread your post. If you claim only the former, I do not dispute your claim.

At this point, at your request, I returned to your original argument to reconsider. Upon inspection I discover that, besides the obvious objection I raised (if by actions you meant “other” actions), I also disagree with the logic of your argument (and not only the premises). At the risk of oversimplification, your argument seems to be:

P1. A libertarian’s actions are caused (by the libertarian).
P2. LFW entails that a person's actions have no cause.
C. LFW is false.
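Put formally (the predicate names are my own rendering, not bd's), the schema is:

```latex
\begin{aligned}
&P1.\quad \forall a\,[\mathrm{Caused}(a)]\\
&P2.\quad \mathrm{LFW} \rightarrow \exists a\,[\lnot\mathrm{Caused}(a)]\\
&C.\ \ \quad \lnot\mathrm{LFW}
\end{aligned}
```

The conclusion follows from the premises by modus tollens, so everything turns on whether both premises are true.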

My reply is to simply deny P2. Libertarians do maintain, without contradiction, that all of their actions are caused: caused by the libertarians themselves. The libertarian only claims that his choices are unconstrained. You seem to have anticipated this objection by saying:

Quote:
But, you might say, perhaps when we say that X chose to do Y, we don’t mean that X could have done something other than Y right at the moment he did it, but that the act of choosing to do Y could have been different than it was. But this has the same problem. If the act of choosing to do Y was a random event, it wasn’t really X who did the choosing. In fact, no one did the choosing; it “just happened”.
Again, you assume the cause/random dichotomy, which the libertarian may deny (although I do not). By using the word “random” instead of “uncaused” you imply that unconstrained behavior must be unpredictable and noisy. But the libertarian would maintain that there is no constraint on unconstrained behavior, and that such behavior could just as well be orderly and somewhat predictable.

Now you mention the cause/responsibility example:

Quote:
All I can say is that this is a novel and creative use of the term “cause”. By this standard, if my neighbor had an accident caused by the fact that his brakes failed, I caused his death by virtue of the fact that I failed to test his brakes that morning and warn him that they were about to fail. Or in the same spirit, if my subordinate Smith dies when the plane he takes to Seattle crashes, I caused his death by virtue of the fact that I failed to call him at the last minute to order him to go to Atlanta instead. I’m sorry, but to say these deaths are consequences, in a causal sense, of my inaction is ridiculous.
Again, we are getting way ahead of ourselves (has anyone established why someone would or would not be morally responsible yet?), but the obvious reply to this is that causation is a necessary, but not sufficient, condition for moral responsibility. An agent would not only need to cause, or fail to prevent, a disaster, but also possess knowledge of a probable threat. This knowledge is what distinguishes your bridge example from the car brake example. However, I do not yet subscribe to any idea of moral responsibility, and am only presenting the popular notions.

Quote:
Amazing. You manage to read “a great many people” as “most people,” then as “everyone”. A great many people live in San Francisco. Oddly enough, it does not follow that most people live in San Francisco, much less that everyone does. Please try to avoid this kind of pointless hyperbole.
Forgive the hyperbole and I will try to refrain from such exclamations. In my defense, you reference the “law” as well as “many” and later in your post you say “a great many (probably most) people believe all of them”, so perhaps I did not too grossly misrepresent your position?

Quote:
Anyway, as you’ve pointed out many times, many people are not consistent in their beliefs. It seems to clear to me that many people believe that:

(1) Some people are certain (at least some of the time) to commit a crime such as stealing an old lady’s purse if it seems clear that they will benefit significantly with no risk.

(2) Such people are morally responsible for such actions.

(3) A person cannot be morally responsible for an action if he could not (in your sense) have done otherwise.
Sorry, but I challenge this “revised” claim as well (at least your 1). I do not think the notion that anyone is ever certain to do anything is popular. Rather, the notion that a person is likely, even almost certain, is quite popular. But certainty itself is a strong word. People are sufficiently unpredictable, and LFW is sufficiently popular, that this claim of “certainty” would not itself be popular. Even a chronic thief would only be said to “certainly” steal a purse as an exaggerated manner of speaking. Of course, we are both providing unsupported assertions, and furthermore, these assertions are beside the point (who cares what most people think?).

Quote:
On the contrary, taking small steps and laying each step out for inspection smacks of a careful, responsible argument. I show exactly how I get from point A to point B. This is the first time I’ve ever been criticized, much less accused of intellectual dishonesty, for doing so.
Please excuse me once again. You are quite right that articulating each step of logic is a virtue and not a vice. And yet I cannot help feeling that your argument was somehow misleading. Your comparison to Euclid is telling, however, and upon reflection I suspect that the distinction between mathematics and moral philosophy is the cause of my concern. The notions of mathematics are fixed and certainty is easily found, but the language of moral philosophy is not so cooperative (Socrates spent his whole life asking what “the just” is, a word that you use twice in your demonstration, and the words "should" and "answerable" are equally quarrelsome). But in hindsight I should not chastise you for your method.

You illustrate how I contradict myself:

Quote:
But at step 6 we have already reached an equivalence between “Smith is morally responsible for killing Jones” and “Smith should be punished for killing Jones”. So unless you can offer some actual reasons for doubting that the first five steps are valid, you must agree that if anyone should ever be punished for anything, people are sometimes morally responsible for their actions. And you have agreed many times that people should sometimes be punished for their actions.
Very good! I thank you for drawing attention to my inconsistency, and in hindsight I question not only the equivalence of your 6 and 7, but also of your 5 and 6. For, as you remind me, deterrents are quite necessary even without blame, and I would include "deserve" in the moral domain. The word "should", however, can refer to "this deterrent should be enforced" as much as "this morally responsible person should be punished". So, although the resemblance between 5 and 6 is remarkable, I must insist that the two are not exactly equivalent.

Later, you say:

Quote:
I don’t understand why you recognize that we must punish certain kinds of actions to deter people from doing them, yet think it would be a good idea to abolish the “cherished deterrent” of blame. It would seem that it’s all right to put a noose around the murderer’s neck and hang him by the neck until dead, but we mustn’t tell him “that was a bad thing that you did”!
Yes, I said that blame is a cherished deterrent, cherished by others, for surely I do not endorse the use of blame as a deterrent, any more than I endorse the use of cutting off hands to prevent theft. That an action is a deterrent is not a sufficient justification for carrying it out, for obviously, many deterrents would go much too far. Consider the village where people are regularly burned as witches - aren't the citizens of this village more likely to be well-behaved than those of, say, New York City? So must we start burning witches? Furthermore, I do not subscribe to your last statement at all; we would both hang the man and tell him what he did was bad (if what he did was bad). We would only not hold him morally responsible for the crime, we would not "blame" him, any more than we blame the river that floods our house, even though we dam the river (and so we must stop the man too).

Quote:
What do you think it means to say that someone is morally responsible, if not that he is subject to such a system (or more precisely, that he should be subject to it)?
I think that to hold someone morally responsible entails that he or she should be subject to such a system (being responsible entails rewards and punishments) but the reverse is not necessarily true. The two would only be equivalent if you only subjected someone to such a system if he or she were morally responsible. But that “only” is not granted, because we often provide punishments and rewards without granting moral responsibility.

For example, we praise puppies, but no one holds a puppy morally responsible. No one thinks that a puppy has free will. And if a puppy attacked a baby, we would punish the puppy (perhaps even execute the puppy) as a deterrent to prevent more attacks. But we would not blame or condemn the puppy. The same holds for robots and toddlers.

Quote:
The last statement seems to be based on the belief that I think that the “rightness” of an action depends on its effects on the agent. I can’t imagine how you could have gotten this impression. Surely you didn’t think that I was claiming that “Smith should be punished” is logically equivalent to “Smith would find the effects of punishing him preferable to the effects of not punishing him?” Obviously by “preferable” consequences I meant preferable on the whole, taking into account the interests of everyone affected, either directly or indirectly, either immediately or in the distant future.
I totally reverse my position that “effects should not be considered”. That was an overstatement that I meant to edit before your reply. The truth is that some effects should not be relevant to the question of whether an act is moral, for example, whether or not I am blamed or praised for raping a woman, the rape should nevertheless be condemned. Of course, it does not follow that all effects, such as the effect of the rape upon the woman, should not be considered.

So, allow me to distance myself from that claim which must have seemed quite absurd. Also, I was reading your use of the word “preference” from a subjectivist perspective. Now that I understand you meant the preference of an “objective moral system”, obviously your use of “should” and “preference” (in 6 and 7) are equivalent (almost by definition).

Finally, about the distinction between consequentialism and deontological theories: I am not a moral philosopher and do not pretend to understand these ideas (or even the consequences of my own position) as well as you do. I subscribe to neither theory (indeed we have yet to establish any system of either morality or responsibility). The morality of an action cannot be a function of only its consequences, because how do you measure the consequences? Would you measure the consequences in terms of their consequences, and so on ad infinitum? Likewise, I do not subscribe to deontological theories because, obviously, if a woman is raped, the rape is wrong at least partly because of the pain and suffering the rape causes. If the rape caused everyone to be happy and healthy, we might reconsider.

The truth, I suspect, is some mix. If your position (as you say) is consequentialism, at some point you must measure consequences not in terms of further consequences but something else. Hume is far more eloquent than I could be:

Quote:
V. It appears evident that the ultimate ends of human actions can never, in any case, be accounted for by reason, but recommend themselves entirely to the sentiments and affections of mankind, without any dependance on the intellectual faculties. Ask a man why he uses exercise; he will answer, because he desires to keep his health. If you then enquire, why he desires health, he will readily reply, because sickness is painful. If you push your enquiries farther, and desire a reason why he hates pain, it is impossible he can ever give any. This is an ultimate end, and is never referred to any other object.

Perhaps to your second question, why he desires health, he may also reply, that it is necessary for the exercise of his calling. If you ask, why he is anxious on that head, he will answer, because he desires to get money. If you demand Why? It is the instrument of pleasure, says he. And beyond this it is an absurdity to ask for a reason. It is impossible there can be a progress in infinitum; and that one thing can always be a reason why another is desired. Something must be desirable on its own account, and because of its immediate accord or agreement with human sentiment and affection.
So, consequentialism is a recursive function that really only delays the inevitable. For you must admit that, eventually, some consequence is measured only by its property of “ought-to-be-doneness” that you so dread. This may be pleasure or happiness or, to be more sophisticated, genetic proliferation. Modern biology suggests that even happiness only serves to further the survival of our genes. But even genetic proliferation would only be an end unto itself because eventually your means run dry.
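The regress can be caricatured in a few lines (the names and structure here are my own, purely illustrative): if the value of an act is computed only from the value of its consequences, the recursion needs a base case, something "desirable on its own account", or it never bottoms out.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Act:
    name: str
    intrinsic_value: Optional[float] = None   # Hume's "ultimate end", if any
    consequences: List["Act"] = field(default_factory=list)

def value(act, depth=0, limit=100):
    """Value of an act, defined recursively via its consequences."""
    if act.intrinsic_value is not None:       # the base case: valued on its own account
        return act.intrinsic_value
    if depth >= limit:                        # consequences all the way down
        raise RecursionError("no ultimate end found")
    return sum(value(c, depth + 1, limit) for c in act.consequences)

# Hume's chain: exercise -> health -> pleasure, which is an end in itself.
pleasure = Act("pleasure", intrinsic_value=1.0)
health = Act("health", consequences=[pleasure])
exercise = Act("exercise", consequences=[health])
# value(exercise) terminates only because "pleasure" carries intrinsic value.
```

Evaluation halts exactly where Hume says the questioning must halt: at something desired for its own sake.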

This position is neither consequentialism nor a deontological theory, but some mix (as best as this amateur philosopher can tell). At some point, I feel, you must admit some metaphysical “ought-to-be-doneness” and consequentialism can become a function of not only consequences but also how well those consequences approach some “ought-to-be-doneness”.

Quote:
So unless you are a theist, I don’t see how you can avoid accepting consequentialism in the end as the only rational foundation of moral judgments. And in that case, your objection to the transition from 6 to 7 disappears
I neither subscribe to your consequentialism (as defined above) nor to the God hypothesis. And my objection remains (as well as the objection you deemed too trivial to address and my new objection to the equivalence of 5 and 6).

[ September 14, 2002: Message edited by: Kip ]
Kip is offline  
Old 09-14-2002, 05:28 PM   #115
Kip
Regular Member
 
Join Date: Jan 2001
Location: not so required
Posts: 228
Post

bd:

Your previous post began quite well. Although I note that you have become less patient! I agree with your summary of the dispute and everything you say until:

Quote:
So far as I can see, your rejection of B is based solely on a supposed moral intuition that UR is a condition of moral responsibility.
This is mistaken, however. My rejection of B is based upon the failure to establish any such system that holds people responsible for their constitutions. You say that:

Quote:
"Certainly it is impossible to produce a demonstrative argument showing that A or B is “correct.”"
That is the reason I am an amoralist: not because I subscribe to the PAP and find that it is never satisfied, but because I do not subscribe to any system of moral responsibility at all.

Quote:
Kip said: "... a thing is responsible for actions if the actions are done
autonomously, as a source, without "guidance" and "on one's own"."

But this criterion can easily be satisfied by a madman, a baby, or a turtle. Surely there is something more to being morally responsible than this?
Thank you for correcting me. Autonomous choice is a necessary but not sufficient condition for moral responsibility (according to the popular notion). I do not pretend to define the other conditions that, together with it, would be sufficient for moral responsibility.

Quote:
So what’s your point? It seems that you have rejected in advance any possible grounds for holding any entity responsible for its actions. This rules out any meaningful discussion of such matters.
My rejection is only "in advance" in the sense that the conclusion of a deductive argument is implied by the premises. However, if you deny the premises, we could discuss other alternatives to the PAP. Of course establishing any of them will be difficult (impossible) so perhaps there cannot be any meaningful discussion of morality. Hume suggested all such metaphysical notions were "sophistry" and "illusion".

Quote:
Anyway, this approach to the question of moral responsibility is perverse. Rather than saying “We hold humans responsible, but not (existing) machines; what does this tell us about the conditions required for moral responsibility?” you say “We don’t hold machines responsible; since there is obviously no relevant difference between machines and humans, isn’t is obvious that we shouldn’t hold humans responsible either?” You don’t seem to be willing to even entertain the possibility that machines that were sufficiently like humans in the relevant ways should be held responsible. But why is this so obviously unreasonable?
I can entertain that possibility. I think Data from Star Trek is the textbook example (and even Data is gratuitously "robotic" and emotionless). However, the question is not so much whether or not humans would hold Data responsible (as opposed to saying "Data is malfunctioning") as to whether we SHOULD. In particular, the question is WHY we exempt robots from moral responsibility but not humans.

The reason, I submit, is that humans are sufficiently complex to sustain the illusion that humans are not mechanical, whereas robots cannot maintain such a deception. We understand every single part of how an Intel chip processes data, but many territories of the mind remain uncharted. The notion of vitalism, or of Platonic souls, still resides in the back of the public consciousness. I maintain that the sole reason we exempt robots from responsibility is that they are mechanical and that we cannot deny robots are mechanical. Or do you suggest an alternative distinction? And why would that distinction be relevant?

Wouldn't it be easier to say "You damn computer, I blame you for that mistake, how could you be so dumb?" That would be so much easier than admitting that the computer's creator, or even worse, the user, has done something wrong which we must repair. But in the case of humans, whom would we blame? Most of the time, if a human malfunctions, say by going on a killing spree, we cannot repair the human. We cannot open his skull and reprogram his brain. So we take the easy way out and simply blame the human.

Now to entertain the real question: would we hold Data responsible? I am not sure. There are strong reasons for both claims. On the one hand, humans are machines themselves, and so as technology progressed, we would cross along the range of complexity until reaching a creation just as complex (and soon to be more complex) than humans themselves. That would be compelling. However, no matter how complex the creation became, the robots would always be distinguished by being a creation. In the back of every human's mind, we would remember that the robot is not some natural, mysterious product of evolution but an artificial human construct, as are houses and televisions. So, although robots will become sufficiently complex (we could even talk of granting robots that exceed humans in complexity and intelligence "more" free will than we possess ourselves), we may nevertheless hold reservations because we are reminded of the robot's artificial nature (whether or not that issue should be, rather than would be, relevant is debatable).

Quote:
Have you read what I wrote on the subject of causation? I went to great lengths to distinguish causation from correlation. What do you think was the point of talking about “underlying structure” and “possible worlds”? Unless you show some minimal understanding of what I’ve already said there is little point in discussing this subject further.
To be honest, I have read your discussion of causation and determinism several times, and I do not quite understand what you have said. I freely confess that I do not know what you meant by "possible worlds" or "underlying structures" (if anyone else on this message board did, please say so). The fault, surely, is my own, for if you have failed to teach me your meaning, it is not through lack of effort. What I do know is that you attempted to "operationalize" the idea of cause and, I suspected, provided a weaker definition than the standard one.

Quote:
First, as usual you’re using totally inappropriate language to create a misleading impression. When Susie chooses strawberry over chocolate because she prefers strawberry, she is not being forced to choose strawberry; she is choosing strawberry because she prefers it. This is the very opposite of being forced. When the cause of a choice lies within oneself it’s absurd to say that this cause “forces” you to make that choice, because the cause is you. You’re saying, in effect, that Suzie is forcing herself to choose strawberry. Do you imagine that somewhere inside her, the “real Susie” is saying “Please, please, not that! Don’t make me choose strawberry!”? If not, please stop using terms like “force” in this context.
I understand your point, but I think the use of the word "force" is enlightening rather than misleading. The problem, I suspect, is one of personal identity. I am close to simply denying, as Hume did, that there are selves, only bundles of sensations. Indeed, the claim that I am defined by where my skin ends and the air begins seems rather arbitrary, especially considering the effect that the air and environment have upon me. If we are to be strict about our determinism, would we be more consistent if, instead of referring to discrete persons, we simply referred to the universe? Would that clarify this problem of Susie internally choosing according to a constitution which was externally determined?

Quote:
Perhaps you have some satisfactory answer to this. Perhaps you really have some intelligible meaning of “Perfect, Autonomous Correlation” in mind that I haven’t thought of. If so, perhaps you would like to share it with us. But my experience has been that, although advocates of LFW always claim to have a coherent concept in mind, they never seem to be able to communicate it to the rest of us.
By perfect, autonomous correlation I was referring to the idea of Leibniz's clocks agreeing because of "their own exactitude". According to your definition (which references "would" and not "must"), one clock must be the "cause" of the other clock's behavior. I was simply distinguishing between cause and correlation. I may, however, have misunderstood your entire definition of cause and determinism (depending upon whether you mention "must" or "would"), in which case this talk of "perfect correlation" (referring to the hypothetical many occurrences of a libertarian meeting the exact same situation again) would be beside the point.

At this point, I am only sure that we are sure of very little. If you wish to suggest some condition for moral responsibility or reply to selections from my posts I would love to continue. But I feel we are digressing rather than progressing.

[ September 14, 2002: Message edited by: Kip ]</p>
Kip is offline  
Old 09-16-2002, 06:53 PM   #116
Senior Member
 
Join Date: Sep 2002
Location: San Marcos
Posts: 551
Post

I think to answer this one has to look into the very nature of morality itself. To me, morality is a matter of looking at consequences and character judgement, so there are no quarrels with the determinist position in my moral theory. However, my moral theory would actually be usurped by the free will/randomist position, because one could not judge character then: character would never in any sense be constant, as it would at any moment be open to radical change for no causal reason whatsoever. What sense does it make to praise a Nobel Prize winner if winners appear at random, only perhaps to become bloodthirsty bigots the next moment for no predictable reason whatsoever?

In my theory, deciding whether a given person is responsible for something is for the most part based on whether the given consequence happened as a result of that person's character or as a result of some external accident, beyond that person's intention or ability to manipulate. For example, if you blame someone for sleeping in and being late, it is the laziness of the person being condemned, a certain intrinsic trait. However, a person whose car broke down cannot be blamed, because that is not a character flaw but a matter of external circumstance.

Now, do I not realize that these people had no choice as to whether they were of a certain character? Of course I do, but that's not relevant. What is relevant is their character now.

I think that other factors determine whether an organism can be seen as morally responsible apart from free will. Traits like intention.

What puzzles me, though, is why a determinist would even argue against humans holding each other responsible, blaming, praising, etc. Weren't such humans determined to punish and blame others?

Oh yes, btw, I am not a compatibilist, as I in no sense believe in free will, a concept I see as somewhat religious and randomist, i.e. the idea that human minds are somehow free of causality. I believe that people can reason and cognitively select using their frontal lobes, but such things are completely within causal laws. I believe that morality has nothing to do with free will. I likewise do not adhere to the is/ought dichotomy.

[ September 16, 2002: Message edited by: Primal ]</p>
Primal is offline  
Old 09-17-2002, 01:12 PM   #117
Veteran Member
 
Join Date: Jul 2000
Location: King George, VA
Posts: 1,400
Post

Kip:

Once again there seems to be little point in replying to your posts in detail since they consistently reflect a near-total lack of understanding of what I’ve been trying to say. In this post I’m going to go through my analysis of causality and what it means to “do” something, in some detail, and point out the implications for LFW, PAP, and “ultimate responsibility”.

In the end I hope that you will at least come to an understanding (if you haven’t already) that the issues that are troubling you have nothing to do with determinism or free will. In reality they spring from a total misconception of the nature of morality.

I’ll discuss my ideas about morality in a following post.

1. The nature of causation.

What do we mean when we say that A causes B? A typical answer given by professional philosophers (in fact the wording here is taken directly from Brand Blanshard) is that A is so connected to B that given A, B must occur. The problem, of course, lies in that word “must”. What does it mean to say that B “must” occur?

One interpretation that has been offered is that it means that A has in fact always been followed by B. But on reflection it’s clear that this is not what is meant. When we say, for example, that a large steel ball hitting an ordinary window at high speed will cause it to shatter, we do not mean merely that windows hit by large steel balls have, as a matter of fact, always shattered in the past. For one thing, causal statements sometimes involve conditions that have never in fact occurred. For another, they often refer to possible future events. (An example of both: “If a football were to fall into the sun, it would be destroyed.”) So perhaps it means that A not only has always been followed by B, but that it always will, as a matter of fact, be followed by B? (This is what you call “perfect correlation”.) But this doesn’t work either. Among other problems, the statement may refer to an event that not only never has, but almost certainly never will, occur. (Example: “If the Earth were to fall into Sirius, it would be destroyed.”) Another is that two events may, as a matter of fact, be perfectly correlated in this way without any causality being involved. For example, suppose that a certain sporting event - the Quidditch world championship, say - occurs every four years. But a presidential election also occurs every four years. And we can easily imagine that, by sheer chance, the first Quidditch world championship occurred just after the first presidential election, and the last will occur right after the last such election. So we have a case where A (a presidential election) is always followed, and always will be followed, by B (a Quidditch championship), but there is no causal connection between the two.

Clearly, when we say that A causes B we intend to say not only that A has always been followed, and will always be followed, by B, but that if A were to occur (or to have occurred) at any specific place and time, B would follow (or have followed). But the “would” here is just as problematic as the “must” in the original statement, and in just the same way. In both cases the term “must” or “would” is expressing some kind of necessity. That is, in each case we are saying “If A, necessarily B”. Or alternatively, “It is not possible that A should occur, but not be followed by B”. But what is the nature of the “necessity” or “possibility” referred to in these statements? Clearly it is not logical necessity. If B were logically necessary given A, we would not speak of causality at all. For example, we do not say that an object’s being square causes it to be rectangular. Conversely, we do say that a solid steel ball hitting an ordinary window at high speed will cause it to shatter, even though it is clearly logically possible that it will go clean through the window without affecting it in any way.

So statements like “If A, necessarily B” or “It is not possible that A should occur but not be followed by B” must be saying that there is something about this world that makes it “necessary” that the second event would happen if the first one did. In a sense we are not making a statement about this world at all, but about other possible worlds. Thus, the statement “If this solid steel ball had hit that window at high speed, the window would have shattered,” can be interpreted as meaning that in all possible worlds like this one in the relevant way, but in which (unlike in this world) the ball does hit that window, the window does shatter. (Or if you prefer, we can substitute “a ball like this one in the relevant respects” and “a window like that one in the relevant respects”.)

Thus to say that A causes B is to say that there is some property of this world such that not only in this world, but in any possible world that has this same property, A is necessarily followed by B. And the property in question cannot be merely the fact that A is in fact always followed by B, because in this case the statement would reduce to the trivial tautology that in any possible world where A is always followed by B, A is always followed by B. Rather, it must be something that “underlies” the actual events in the world, constraining them to adhere to a certain pattern. The set of such properties is what I mean by an “underlying structure”.

At this point it should be clear why it is correct to say both that “A causes B” means that given A, B must occur, and to say that it means that if an event of type A were to occur (or had occurred) at a given place and time it would be followed (or would have been followed) by an event of type B.

Note: It seems to be useful to broaden the notion of a “cause” to include conditions or states of things as well as events. At any rate, no confusion seems to arise from doing so, so I will sometimes refer to conditions or states as causes in what follows. In particular, I will sometimes say that X “causes” Y when X is a thing (such as a person or a mind). This can be taken to mean that X’s current state, or events internal to X, or both, cause Y.

2. The nature of action, or what it means to “do” something.

Now let’s consider what it means to say that X did Y. Surely it is obvious that it means that X caused Y to happen.

This is perfectly straightforward in the case of nonsentient things. For example, to say that a tree knocked over a telephone pole is to say that the tree caused the telephone pole to fall. Or, when I say that the microwave heated up my lunch, I mean that it caused the lunch to become hot. The same is true of animals. For example, when I say that my dog retrieved the ball, I mean that he caused the ball to return to me. And finally, it is true of people. When we say that John killed Robert, we mean that John caused Robert’s death.

However, in the case of people (or even animals) there is an ambiguity that does not exist in the case of inanimate objects. Thus, when we say that John killed Robert, we could mean either of two very different things. For example, suppose that the two are standing at the edge of the Grand Canyon, and John pushes Robert over the edge. In this case we would say without hesitation that John killed Robert. But suppose instead that a gust of wind catches John and forces him into Robert, causing Robert to fall to his death. We would be inclined to say that in a sense John killed Robert, but that he “really” didn’t. This is because when we talk about “so-and-so,” we can be referring to his body or to his mind. Thus, in our example, Robert was killed as a result of John’s body moving in a certain way, but since this motion was not caused by John’s mind, we are inclined to say that John didn’t “really” kill him.

Here’s another example: suppose that John is knocked unconscious and driven to Chicago. We would be inclined to say that he “went” to Chicago, and would probably not be inclined to say that he “really” didn’t. The difference between this and the first example, of course, is that killing someone has moral significance, whereas going to Chicago per se doesn’t. Thus in the first case we are careful to distinguish between a death caused by John’s mind and one that is merely caused by his body when it is not “under the control” of his mind.

Note that both meanings still refer to causation. In the one case we are talking about physical causation: the person’s body causes the event in question. In the second we are talking about mental causation: the event is caused by the person’s mind. But in either case we are talking about causation. Also, note that the second sense is stronger than the first, not weaker, as one might be inclined to think at first sight. The only way that a person’s mind can cause events in the “real world” is by causing movements of his body. Thus mental causation always entails physical causation.

But since physical causation has no moral significance in itself, I am going to use only the second sense. That is, when I say that a person “did” something, I will always mean that the event was caused by the person’s mind.

Since the claim that "X did Y" means, or at least entails, "X caused Y" is crucial to the argument, it would be wise to stop and review this for a moment. Is it possible that, when we say that “X did Y”, we sometimes mean something other than “X’s body caused Y to occur” or “X’s mind caused Y to occur”? I honestly can’t think of any other meaning that people might plausibly have in mind when they say this. When we say that X did Y, are we not saying that X is responsible for Y’s occurring? And we are certainly not saying that he is morally responsible, since we quite often say that someone did something, but is not morally responsible for it. What other sense is there of being responsible for an event besides being causally responsible? Or we can look at it another way. When we say that X did Y, are we not saying that there is a connection between X and Y? And we are certainly not saying merely that there is some kind of correlation between the two – that it happens to be the case that whenever X is around under certain circumstances, an event like Y always occurs. No, when we say that X does these things, we are saying that there is a real connection between X and the “Y-like” events, not a mere correlation. And what other kind of connection could we be referring to but a causal connection?

Now let’s turn to your comments (which you apparently consider to be crucial) about what happens when an agent “repeatedly returns to the exact same situation”. You suggest that it is meaningful to say that he might choose the same option every time without any causation being involved:

Quote:
The libertarian may claim, without contradiction, that the agent is only freely willing the same decision eternally. His actions are not determined, because there is no “must”, only “would”.
This is meaningless as it stands, because (as I’ve pointed out many times) an agent never does “return to the exact same situation” – not even twice, much less “repeatedly.” What you seem to have in mind is that we can imagine “rolling the tape back” to the same point in time repeatedly and observing what the agent does each time. But this is still meaningless. We can no more “roll back the tape” than the agent can “return to the exact same situation”. But even if we could “roll back the tape”, obviously we would just observe the same events if we then proceeded to simply “play the tape forward”. In this world, one thing and only one thing occurred at this place and time, and that’s what we’ll observe no matter how many times we go back in time to observe events at that place and time again. This would be just as true in an utterly chaotic world as in a deterministic one (or anything in between). If we want to be able even to talk about the possibility of something else happening than what did happen, we must necessarily talk about other possible worlds. It is only in other possible worlds that anything else might happen than what did happen.

So when we talk about “rolling back the tape” to see if anything different might happen, we are really talking about looking at other possible worlds in which the events up to this point were exactly the same as events in this world. We’re asking, “Does something different happen at this point in some of these worlds than what happened in this world?” Obviously the answer to the question, if we put no restrictions on the “possible worlds” to be considered, is “yes”. It is logically possible that almost anything happened at that point given what came before, and there are possible worlds corresponding to each of these logical possibilities. But the vast majority of these possible worlds are utterly chaotic from this point forward (even though they were quite orderly up to that point). We’re not really interested in all possible worlds, not even all possible worlds where the events up to this point are exactly the same as in ours. We’re only interested in the ones that are “like” ours in the relevant ways. Specifically, they must be alike in the sense of having the same underlying structure. That is, any uniform regularities (and hence any causal relationships) that hold in our world must hold in the other possible worlds that we’re interested in. Thus the question that we really want to ask is, “Does something different happen at this point in time in any of these possible worlds than what happened in ours?”

In the case of something that someone did, we are now in a position to answer this question definitively: no, nothing different happens at this place and time in any of these possible worlds. The reason is simple. Let the event in question be Y, and assume that X did Y. To say that X did Y means that X caused Y to occur. And to say that X caused Y to occur is to say that there is some property of this world such that not only in this world, but in any possible world that has this property, this state of X and/or events internal to X are necessarily followed by Y. But for reasons explained earlier, we have restricted our attention to worlds that do have this property – i.e., possible worlds in which the relevant causal relationships hold. Therefore Y occurs in all of these possible worlds. In other words, if we could “roll back the tape” and observe what happens from that point in any other possible world that is sufficiently “like” ours to make the question meaningful, we will observe Y.

Note: Nothing whatever has been said about determinism in the above analysis. Its validity does not depend on whether the world is deterministic or not.
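The "rolling back the tape" argument above can be loosely illustrated with a toy simulation (a sketch under the assumption that a world's underlying structure plus its prior history can be modeled as a seeded random generator; the function and its names are invented for illustration, not part of the original posts):

```python
import random

def run_world(seed, steps=5):
    """Replay the 'tape' of a world: the same underlying structure plus
    the same prior history (modeled here as a seed) always yields the
    same sequence of events, even though each step is drawn 'randomly'."""
    rng = random.Random(seed)
    return [rng.choice(["left", "right"]) for _ in range(steps)]

# Rolling back the tape of *this* world and replaying it: identical events.
assert run_world(seed=42) == run_world(seed=42)

# Only a *different* possible world (a different underlying state)
# can contain different events at the same point in its history.
print(run_world(seed=42), run_world(seed=7))
```

The point of the sketch: even with "randomness" inside each step, replaying the same world reproduces the same events; talk of something else happening is talk about a different possible world.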

3. Libertarian Free Will (LFW), the Principle of Alternative Possibility (PAP), and the Principle of Ultimate Responsibility (PUR).

A. It follows immediately from the analysis above that, whenever we can truly say that X did Y, X could not have done otherwise in any sense that advocates of LFW would consider relevant (i.e., a sense incompatible with determinism). Thus the concept of LFW is logically incoherent.

B. The PAP says that no one is morally responsible for an action unless he could have acted differently. In the libertarian interpretation this means that the action cannot be caused. But we have seen that to say that someone did something entails that it was caused. So the PAP (on the libertarian interpretation) says that no one is morally responsible for doing anything that he can truly be said to have done. This is so absurd that it cannot be plausibly held to be a moral intuition or “self-evident truth”.

C. Let me repeat Kane’s definition of the PUR:

Quote:
The basic idea is this: to be ultimately responsible for an action, an agent must be responsible for anything that is a sufficient reason (condition, cause or motive) for the action's occurring. If, for example, a choice issues from, and can be sufficiently explained by, an agent's character and motives (together with background conditions), then to be ultimately responsible for the choice, the agent must be at least in part responsible by virtue of choices or actions voluntarily performed in the past for having the character and motives he or she now has.
One would have thought that it was self-evident that this principle is logically incoherent in the sense that the condition it gives for moral responsibility cannot be satisfied, even in principle, in any possible world. But for those to whom this is not self-evident, we are now in a position to describe the nature of the incoherence more clearly.

(I know that you don’t need this demonstration, Kip, but there may be some following this thread who do.)

Suppose that an agent performs an act Y as a result of the “character and motives” (C&M) that he has now. According to the PUR, in order to be (morally) responsible for Y, this C&M must be the result of “choices or actions voluntarily performed” (VCA) in the past. But as we have seen, to say that he “performed” these choices or actions is to say that some aspects of his prior mental state caused them. Thus to say that the C&M that the agent has now are the result of VCA in the past is to say only that they are the result of the C&M that he had at some time(s) in the past. And if this is so, surely in order to be ultimately responsible for his actions he must be responsible, by virtue of still earlier VCA, for having the C&M that led him to perform the original set of VCA.

But this clearly leads to an infinite regress, which is impossible because of the fact that the agent has only existed for a finite time. Thus it is logically impossible for an agent to be ultimately responsible for his actions.

All of this should be blindingly obvious to anyone who has taken the trouble to think about it. A being who came into existence at a definite time in the past cannot be “ultimately responsible” in this sense; his behavior must clearly be the product of some combination of “nature and nurture,” heredity and environment, original and acquired characteristics. There is simply no way to “bootstrap” yourself into existence. At some point you simply find yourself existing, with whatever qualities you have. Whatever qualities you have from that point on are the product of the interaction of these primordial qualities and whatever “happens to you”. Even if some uncaused events (whether internal or external) are a part of this mix, they’re just part of what “happens to you”; you are clearly not “ultimately responsible” for the results of uncaused events.

So this whole way of thinking about ethical questions collapses in the end into total incoherence. At this point we have the choice of abandoning or rejecting morality altogether, or finding a new way of thinking about it.

[ September 17, 2002: Message edited by: bd-from-kg ]</p>
bd-from-kg is offline  
Old 09-17-2002, 09:46 PM   #118
Junior Member
 
Join Date: Aug 2002
Location: San Diego
Posts: 15
Thumbs up

Hello,

I have read (as much as I can) of this post and I just want to say that I totally agree with Kip.
It seems to me (and I could be wrong) that people are arguing for the notion of moral accountability because the consequences of abandoning it appear to be moral anarchy. I believe that this is not the case, as has been shown in this thread.

I also believe that the robot example is clearly persuasive.

I wonder if part of the disagreement we have on this issue is in how we are looking at this problem. When I think about choice I do so purely from the vantage of an external observer. In strictly observing another person, it is clear to me that they are a state machine and whatever they did could not have been otherwise.

I believe that the naturalism.org site explains Kip's position brilliantly (as I believe Kip did!) <a href="http://www.naturalism.org/resource.htm#Writings" target="_blank">naturalism.org</a>

[ September 17, 2002: Message edited by: Marcion ]</p>
Marcion is offline  
Old 09-18-2002, 12:17 PM   #119
Senior Member
 
Join Date: Sep 2002
Location: San Marcos
Posts: 551
Post

I think a lot of the issue centers around how we view morality. If we define morality as something that requires, by definition, free will and choice, then of course determinism will be incompatible with morality. However, if we view morality as something else, such as a universal law, a reflection of our self-interests, utilitarian calculation and/or character judgement, then the existence/nonexistence of free will becomes irrelevant.

In the Robot example, I would be willing to hold the robot morally responsible and hence worthy of destruction or punishment, depending on whether it could feel pain or not, and for the most part on the make-up of the machine. This is because my moral standards do not require free will 'a priori' to be implemented. My moral code only requires the possession of certain traits. Call this 'absurd' if you like, but keep in mind that pure incredulity makes for a weak argument.

Oh yes, I also think it very necessary to distinguish between the different meanings of 'could' in such discussions: between the word 'could' used to designate epistemic ignorance and expectations, and the word 'could' meant to designate actual physical randomness.

When I as a determinist say something like "if you go into the ocean you could get attacked by a shark," I am not saying such an event is random in the existential sense. I mean that, given what I know of oceans, sharks, and past human/shark encounters, combined with my ignorance of future events, a shark attack may happen. I cannot say you will be attacked, because I don't know; I cannot say you will not be attacked, for the same reason. But given that I don't know all the factors in the ocean at any given time, and that shark attacks have happened in the past, I would have to say it can happen. Hence the distinction between "can" in the sense of prediction and "can" in the sense of the actually random. Now in what sense does it mean to say "one could have done otherwise"? I think it means that, were I placed back in the state of ignorance preceding the event in question, I could not have fully predicted the outcome. What this entails for morality is perhaps questionable, but the word 'could' obviously can be used in different senses, and this should be recognized at the very least. Lastly, 'could' can also mean that a given thing is able or warranted, hence three meanings in all. For example, one 'can' digest certain foods, meaning it is expected that one is able to digest those foods.
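Primal's epistemic sense of "could" can be sketched in a few lines of toy code (the shark-attack rule here is invented purely for illustration): a process can be fully deterministic while several outcomes remain consistent with the predictor's limited knowledge.

```python
def shark_attack(ocean_state):
    """A fully deterministic rule: whether an attack occurs is fixed by
    the state of the ocean -- but the swimmer doesn't know that state."""
    return ocean_state % 7 == 0

# What the predictor can say without knowing ocean_state: both outcomes
# are consistent with their ignorance, so "you could get attacked."
possible_outcomes = {shark_attack(s) for s in range(100)}
assert possible_outcomes == {True, False}

# Yet for any *actual* ocean_state, only one outcome can occur.
assert shark_attack(14) is True
assert shark_attack(15) is False
```

The epistemic "could" lives in the set of outcomes compatible with the predictor's knowledge, not in any randomness of the process itself.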

[ September 18, 2002: Message edited by: Primal ]</p>
Primal is offline  
Old 09-18-2002, 12:52 PM   #120
K
Veteran Member
 
Join Date: Aug 2002
Location: Chicago
Posts: 1,485
Post

Again, holding entities morally responsible is strictly utilitarian. If holding those entities responsible will protect society from further harm, they are held responsible. If holding them responsible will not protect society, they are not held responsible. This is a perfect reason for holding individuals morally responsible (or just responsible if you like) even though their actions may be deterministic.

Here are some of the utilitarian purposes of punishment for immoral acts.

1. Deterrent Input - Since the brains of individuals cause action based on inputs from the environment, the knowledge that one could be punished if caught would be an input to the decision-making process.

2. Classical Conditioning - By providing an unpleasant stimulus for immoral behavior, the structure of the state machine could be changed so that the individual would seek to avoid the immoral behavior.

3. Protection of Society - Removal of the offender from society protects society from further immoral acts by the individual.

4. Removal from Gene Pool - Execution prevents the individual from reproducing. If the immoral behavior was the result of genetics, this societal selection reduces the chance that the behavior will continue in succeeding generations.


By examining the four utilitarian reasons for punishment listed above, it should be clear that holding an individual morally responsible is perfectly compatible with determinism. Some social animals punish group members for the same reasons (I know chimps and wolves do - and I assume there are others). And we hold pets morally responsible for some of their actions (for reasons 2, 3, and possibly 4). If punishing a robot would make sense for one of the above reasons, you can bet we would punish it.
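K's first two reasons can be sketched as a toy deterministic agent (the class and its weights are invented for illustration): the choice is a pure function of state and inputs, yet deterrence and conditioning still change behavior.

```python
class DeterministicAgent:
    """A toy 'state machine' agent: its choice is fully determined by
    its current state plus inputs, yet punishment still changes future
    behavior -- illustrating reasons 1 and 2 above."""

    def __init__(self, temptation=1.0):
        self.temptation = temptation   # drive toward the immoral act
        self.learned_aversion = 0.0    # built up by conditioning

    def chooses_immoral_act(self, expected_penalty=0.0):
        # Deterrent input: the known penalty enters the decision function.
        return self.temptation - self.learned_aversion > expected_penalty

    def punish(self, severity=1.0):
        # Classical conditioning: punishment reshapes the state machine.
        self.learned_aversion += severity

agent = DeterministicAgent()
assert agent.chooses_immoral_act(expected_penalty=0.0)      # offends
assert not agent.chooses_immoral_act(expected_penalty=2.0)  # deterred
agent.punish(severity=2.0)
assert not agent.chooses_immoral_act(expected_penalty=0.0)  # conditioned
```

Nothing in the agent is random, yet holding it "responsible" (punishing it) demonstrably reduces future offenses, which is the utilitarian point.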
K is offline  
 


This custom BB emulates vBulletin® Version 3.8.2
Copyright ©2000 - 2015, Jelsoft Enterprises Ltd.