FRDB Archives

Freethought & Rationalism Archive

The archives are read only.


Old 09-10-2002, 11:12 AM   #101
K
Veteran Member
 
Join Date: Aug 2002
Location: Chicago
Posts: 1,485
Post

Keith:

That's exactly what I mean. I believe that if two absolutely identical brains were given the exact same inputs for the entirety of their existence, they would be in exactly the same state at any given time. If you don't believe this to be the case, what do you think would make the states different between the two?
K is offline  
Old 09-10-2002, 11:18 AM   #102
Keith Russell
Veteran Member
 
Join Date: Jul 2002
Location: Overland Park, Kansas
Posts: 1,336
Post

K:

I don't believe that there could be two 'absolutely identical' brains: a thing is only 'absolutely identical' to itself.

Keith.
Keith Russell is offline  
Old 09-10-2002, 11:38 AM   #103
K
Veteran Member
 
Join Date: Aug 2002
Location: Chicago
Posts: 1,485
Post

Keith:

It was a thought experiment. I don't believe there could be two identical brains either. I also believe it's even less likely that they could be given the same exact inputs for their entire existences.

The big question is how the brain operates. Does it behave like other physical machines in the universe where the input + the current state yields the output? Or is it somehow tuned to translate the desires of a non-naturalistic will? I'm not trying to limit the options to two choices. I'd be perfectly willing to entertain other ideas of how a brain functions. I realize that we don't know entirely how brains work, but do we have any reason to believe that they don't follow the causality rules of the rest of the macroscopic universe?
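K's picture of the brain, where "the input + the current state yields the output", is just the definition of a deterministic state machine. A minimal sketch of that idea (the transition rule below is arbitrary and purely illustrative, not a model of any actual brain): two identical machines fed the same inputs end up in the same state, which is exactly the claim in the thought experiment above.

```python
class DeterministicMachine:
    """Output and next state are pure functions of (current state, input)."""

    def __init__(self, state=0):
        self.state = state

    def step(self, inp):
        # Arbitrary illustrative transition rule; nothing brain-like intended.
        self.state = (self.state * 31 + inp) % 1000
        return self.state % 7  # the machine's "output"

a = DeterministicMachine()
b = DeterministicMachine()
inputs = [3, 1, 4, 1, 5, 9, 2, 6]

# Identical machines + identical input history => identical outputs and states.
outputs_a = [a.step(i) for i in inputs]
outputs_b = [b.step(i) for i in inputs]
print(outputs_a == outputs_b, a.state == b.state)
```

On this picture, any difference between the two machines' final states could only come from a difference in initial state or in inputs, which is the question K poses to Keith.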
K is offline  
Old 09-10-2002, 01:05 PM   #104
galiel
Banned
 
Join Date: Sep 2001
Location: Eastern Massachusetts
Posts: 1,677
Post

This is one of those questions that brings up the issues I have with certain practices of both philosophy and law.

Being moral does not depend upon some theoretically seamless, perfect conception.

As proof, I am simply a moral person, despite having little interest in this debate and no particular opinion about its outcome.

Just as science, unlike faith, does not require an answer to everything in order to work, so, it seems to me, being morally good and ethical and principled does not require a meticulous philosophical answer to matters of free will, determinism, etc., etc.

Not trying to dampen the enthusiasm for the debate, just making a point.
galiel is offline  
Old 09-10-2002, 01:19 PM   #105
bd-from-kg
Veteran Member
 
Join Date: Jul 2000
Location: King George, VA
Posts: 1,400
Post

Quote:
Originally posted by Keith Russell:
... it seems like ... both parties agree that there exists something which could be called 'free will', but we either aren't sure exactly what it is, or ... we can't agree that 'free will' is the proper term to use to describe it.
Correct, but this is hardly a trivial dispute over words. The concept of free will is intimately bound up with the concept of moral responsibility. The principle that Kip is appealing to is really the tautology: “One is not morally responsible for an act unless one did it of one’s own free will”. The sticking point, of course, is what it means to say that one acted of one’s own free will. Thus the question of what we mean by “free will” is really a question of what actions we are morally responsible for.

Typically everyone agrees that there is such a thing as free will, because everyone (more or less) agrees that people are sometimes morally responsible for their actions. The problem is that the LFW advocates say that one is morally responsible for an act only when one acted of one’s own free will in the LFW sense, whereas compatibilists typically reply (as I do) that this must be wrong because it would imply that no one is ever morally responsible for their actions, since LFW does not exist. LFW advocates often reply by saying that the claim that LFW does not exist is a wicked doctrine because it implies that morality itself does not exist. It’s hardly surprising that such arguments often produce more heat than light!

Quote:
Originally posted by K:
It really boils down to whether or not someone believes that the output of the brain is reducible to its current state (based on biology and all past experiences) and the current inputs. Or whether one believes there is something else contributing to the decision. I believe the former.
No, that is not what it boils down to. It doesn’t matter what the constituents of the “self” are, or through what mechanisms the state of the self is translated into action.

Let’s say for the sake of argument that there is “something else” to the “self” (i.e., the entity who is said to act, and about which we are asking whether he/she/it is “morally responsible” for the act) besides a “brain state” - consciousness, soul, whatever. The question is still: did the “self” choose to do X rather than Y? If so, by virtue of what it means to “choose”, the state of the “self” at that moment caused act X rather than act Y. If something else caused X to be done rather than Y (or if there was no cause), the “self” did not choose X.

But to say that the self caused act X is to say that the self could not have done otherwise at that moment. And according to Kip, that means that the self is not morally responsible for doing X.

[I developed this argument more fully in my Sept. 6 post in Section 3: On libertarian free will. I would recommend that anyone who wants to reply to it read this more complete exposition first.]

This is the unresolvable paradox of the concept of libertarian free will: if A chose to do X, then by virtue of what it means to say that A chose, A could not (in Kip’s sense of “could”) have done otherwise, and hence is not morally responsible for doing X. There’s no getting around this. The PAP, as Kip (and advocates of LFW generally) interpret it, implies that no one is ever morally responsible for an action. Either one didn’t choose to do it, and hence is not responsible (one is not responsible for what one did not choose) or one did choose to do it, and hence is not responsible (one is not responsible for an act if one could not have done otherwise).
bd-from-kg is offline  
Old 09-10-2002, 01:31 PM   #106
K
Veteran Member
 
Join Date: Aug 2002
Location: Chicago
Posts: 1,485
Post

bd-from-kg:

I believe the problem stems from the first premise - not the state machine nature of the brain. We get bogged down trying not to hold people responsible for acts they had no control over. I think a better way to look at it is that we hold people morally responsible when doing so will help prevent detrimental behavior in the future. Therefore, if someone slips and accidentally knocks someone over, they are not morally responsible unless their slip was caused by their own negligence. Holding them morally responsible won't reduce the odds of a similar act happening in the future. However, if someone's brain state causes them to knock somebody over, we do hold them morally responsible. Doing so has the possibility of curbing such actions in the future.
K is offline  
Old 09-10-2002, 03:51 PM   #107
Keith Russell
Veteran Member
 
Join Date: Jul 2002
Location: Overland Park, Kansas
Posts: 1,336
Post

Greetings:

People who are mentally ill are generally conceded not to be 'responsible' (legally, if not morally) for their actions. They are viewed as being unable to participate in their own defense, should they be accused of crimes. If convicted of a crime, they are generally not given as harsh a sentence as people who are judged 'sane' at the time they committed their crime(s).

But we still don't allow people who are 'not responsible for their actions' to run around loose, if strong evidence exists that these folks are a danger to themselves and/or others.

If a person is removed from society and permanently placed in a mental hospital, given a life sentence with no possibility of parole, or summarily executed, there really isn't any difference to me. I am, indefinitely, protected from this person.

So, are there any real ramifications to me if a person does something out of choice, or because 'the voices' said to, or because 'their brain was wired that way'?

Maybe there are some people who claim that we cannot, should not, punish (or even incarcerate) people, if human beings don't have 'free will'.

I haven't met them yet...

Keith.
Keith Russell is offline  
Old 09-10-2002, 04:39 PM   #108
K
Veteran Member
 
Join Date: Aug 2002
Location: Chicago
Posts: 1,485
Post

Keith:

I agree totally. Free will is a non-issue in the crime / punishment / societal protection scheme.
K is offline  
Old 09-11-2002, 08:46 AM   #109
Kip
Regular Member
 
Join Date: Jan 2001
Location: not so required
Posts: 228
Post

bd:

I am glad that you have the patience to entertain such a slow learner. I am reading An Introduction To Western Philosophy by Antony Flew and he summarizes the dispute quite well (from page 269 of my hardcover):

Quote:
So if anyone urges that he cannot properly be held responsible because he did not choose his own original desires, then this must be taken as an attack on the (or a) whole concept or pseudo-concept of responsibility rather than a protest that as a matter of contingent fact the preconditions for the application of that notion are not satisfied. The upshot is that we have to choose between:

A. abandoning such a notion of responsibility as a pseudo-concept, on the grounds that it presupposes the logical absurdity of a choice without desires;

B. or else admitting that it may be entirely proper to hold people responsible for what they do even where their tastes and dispositions are not the outcome of their own original choices.
This is the original dispute and despite the breadth of our exchange I dare say that we have not progressed one inch towards answering this question. Moreover, my strong suspicion is that, in the absence of any sacred text or mathematics to settle the dispute between preferences towards A or B, any resolution is impossible. Thus, I am tempted to conclude, a priori, that actions are not moral or immoral, but simply amoral and that we may as well follow the advice of Hume and commit all works of moral philosophy "to the flames" for they "can contain nothing but sophistry and illusion."

Now I will post some ideas that I have. I will reply to (both of) your posts later.

Moral Responsibility
When I reflect upon the idea of moral responsibility, I wonder whether the word "moral" is redundant. "Moral" concerns notions of "right" and "wrong"; but what does "responsible" signify? I immediately appeal to the dictionary, which says that:

1. Liable to be required to give account, as of one's actions or of the discharge of a duty or trust.
2. Involving personal accountability or ability to act without guidance or superior authority: a responsible position within the firm.
3. Being a source or cause.
4. Able to make moral or rational decisions on one's own and therefore answerable for one's behavior

And further that "account for" means:

To constitute the governing or primary factor in

The essential items to notice, I think, are the emphases placed upon autonomy and "source". Thus a thing is responsible for actions if the actions are done autonomously, as a source, without "guidance" and "on one's own". The fundamental question (summarized by Flew above) is whether a person's natural inclinations and character constitute "guidance" in a sense that would remove responsibility. Is a person who acts according to the character he is given acting "on his own"? Thus, the question of responsibility is also a question of personal identity: are preferences "external", mere "guides" branded upon a metaphysical "person", or does personal identity already include preferences?

I must say that I am inclined toward the latter view: that a person is only the sum of his preferences, and that there is no metaphysical person that is given preferences. And yet we do not hold artificial machines (as opposed to biological machines), which are also the sum of their preferences and do not choose their character, responsible. Indeed, I submit that we do not hold them responsible precisely because they are robotic (and I invite suggestions of alternative distinctions), determined by characters that the robot did not, and could not, choose. So I suspect that this definition of responsibility may be somehow lacking, not sufficiently "metaphysical", and I could possibly agree that determinism is compatible with this dictionary "responsibility".

Indeed, you write that:

Quote:
The principle that Kip is appealing to is really the tautology: “One is not morally responsible for an act unless one did it of one’s own free will”.
And while I am inclined to agree with you that the idea of LFW is essential to my idea of moral responsibility, I think that this is similar to, but distinct from, a tautology. The tautology would be to define moral responsibility and LFW in terms of each other. But I have provided a definition of LFW unique unto itself: action without constraint. Here is the dictionary definition I am also using:

"The power of making free choices that are unconstrained by external circumstances or by an agency such as fate or divine will."

So, upon reflection, I am inclined to say that an agent is responsible for whatever action he or she freely wills. This is to distinguish the human and divine "metaphysical" domain from the domain of puppies, ill people, and robots (which we do not grant responsibility).

Perfect, Autonomous Correlation

The phenomenon that we frequently allude to is that of "constant conjunction", as Hume would say, or (with less ugliness) perfect correlation. I feel that you repeatedly fail to distinguish between cause and correlation, and then utilize this failure to label as determined that which could be entirely causeless, and thus to define determinism in a weaker sense which is compatible with LFW. This motive is also found in your mentioning probabilities and statistics, which, by definition, only measure actual correlations and make no claims about causality whatsoever.

For example, if I threw a die twice and "2" came up both times, there was a 2 / 2 = 100% chance that the die would come up 2, and thus the result of 2 was "determined". Obviously, however, the result was completely random. The problem with your statistical definition is that "determined" occurrences can be produced not only by causation but also by perfect, autonomous correlation. You may object that you were not referring to such a small sample, but that difference is only one of number, not of kind. For I may just as well extend the length of time to ETERNITY and extend the possible dice results to every possible state of the universe, and the principle still holds. The problem of perfect, autonomous correlation creeps back into your definition each time.

You may say that this is a preposterous and trivial objection. I would agree with you that in reality no such events actually occur, but hypothetically, such demands are being made of LFW, especially when we consider "rolling the tape back" and playing the same event many times. You wish to accuse LFW of a failure to meet these demands, but the phenomenon of perfect correlation is exactly how the libertarian can meet them. Indeed, perfect correlation is fundamental and necessary to any concept of LFW because, although the libertarian may wish to have the power to freely choose, he also needs the power to make the same choice eternally, or else he must admit intolerable inconsistency. You threaten the libertarian with this inconsistency, but perfect correlation is precisely what allows him to make the same choice eternally.

Leibniz gave this classic illustration of perfect correlation:

Quote:
Imagine two clocks or watches which agree perfectly with each other. That can come about in three ways. The first is the mutual influence of one clock upon the other; the second is the care of a man who looks after them; the third is their own exactitude.
The problem with this part of your argument is that you identify the same phenomenon, perfect correlation, assume the "causal" explanation, and forbid the other explanation of "their own exactitude" (perfect and autonomous correlation).

In particular, I do not want to be met with this reply again:

Quote:
Kip said: "Free will ... is perfectly compatible with a zero possibility. The agent need only "freely choose" some possibility 100% of the time."

I agree completely. This is what’s known as “compatibilism”. I’m glad to see that you’ve finally seen the light. Since the only thing that determinism says about human choices is that one choice has 100% probability and all others have zero probability, if zero probability of making any choice but one is “perfectly compatible” with free will, then determinism is compatible with free will.
Perhaps I did not explicitly distinguish between LFW and other compatibilist redefinitions but, obviously, if no cause forces a person to choose (according to LFW) that cannot be compatible with determinism (that references causes). Although I freely grant that other, more controversial, definitions of either free will or determinism may be compatible.

More later.
Kip is offline  
Old 09-11-2002, 04:16 PM   #110
Veteran Member
 
Join Date: Jul 2000
Location: King George, VA
Posts: 1,400
Post

Kip:

1. Ultimate responsibility

The idea that you’ve been appealing to more and more in your campaign to absolve everyone of all moral responsibility is not PAP, but the principle of “Ultimate Responsibility”. Robert Kane defines it as follows in his paper <a href="http://www.ucl.ac.uk/~uctytho/dfwVariousKane.html" target="_blank">Reflections on Free Will, Determinism and Indeterminism</a>:

Quote:
The basic idea is this: to be ultimately responsible for an action, an agent must be responsible for anything that is a sufficient reason (condition, cause or motive) for the action's occurring. If, for example, a choice issues from, and can be sufficiently explained by, an agent's character and motives (together with background conditions), then to be ultimately responsible for the choice, the agent must be at least in part responsible by virtue of choices or actions voluntarily performed in the past for having the character and motives he or she now has.
In your quotation from Flew he is clearly referring to this concept. He says:

Quote:
... we have to choose between:

A. abandoning such a notion of responsibility as a pseudo-concept, on the grounds that it presupposes the logical absurdity of a choice without desires;

B. or else admitting that it may be entirely proper to hold people responsible for what they do even where their tastes and dispositions are not the outcome of their own original choices.
What Flew is saying here is that, if one is only morally responsible for an act if one is ultimately responsible for it, then no one is morally responsible for anything, since the notion of UR (like LFW) is logically incoherent.

You comment:

Quote:
This is the original dispute and despite the breadth of our exchange I dare say that we have not progressed one inch towards answering this question. Moreover, my strong suspicion is that, in the absence of any sacred text or mathematics to settle the dispute between preferences towards A or B, any resolution is impossible.
Well, it’s not the original dispute, but it does seem that it’s now the main dispute.

Certainly it is impossible to produce a demonstrative argument showing that A or B is “correct,” and I have made no attempt to do so. But, like Flew, I have tried to demonstrate that these are indeed the only alternatives.

As for your apparent rejection of B (thereby accepting A by default), it seems to me that your thinking is confused. So far as I can see, your rejection of B is based solely on a supposed moral intuition that UR is a condition of moral responsibility. I don’t share this intuition; I don’t even see why you think it plausible, much less why you put so much stock in it. But in any case there is another, more fundamental moral intuition: that some people are sometimes morally responsible for their actions. It appears that you are prepared to abandon this moral intuition on the grounds that it conflicts with the former one. It would seem to me to be far more reasonable to abandon the former on the grounds that it conflicts with the latter. It seems to me to be downright perverse to conclude on the basis of a moral intuition that there is no such thing as morality.

Of course, if you think of morality as a set of “objective truths” independent of all human desires and preference, that exist in some mystical realm beyond the reach of either logic or experience, then you should conclude that it doesn’t exist, because it doesn’t.

2. Moral responsibility

Although, as Flew notes, your acceptance of the UR condition makes all talk about moral responsibility nonsensical, you continue to talk about it anyway. Your next move is to (yet again) cite a dictionary definition. You really need to abandon this obsession with dictionary definitions; they are often completely useless in philosophy.

Here your reliance on the dictionary leads you to the absurd conclusion that:

Quote:
... a thing is responsible for actions if the actions are done
autonomously, as a source, without "guidance" and "on one's own".
But this criterion can easily be satisfied by a madman, a baby, or a turtle. Surely there is something more to being morally responsible than this?

Next, you actually propose as a serious question:

Quote:
Is a person who acts according to the character he is given acting "on his own"?
Well, duh. What do you think it means to be acting on one’s own if acting to satisfy one’s own desires and preferences in conformity to one’s nature and character doesn’t qualify?

And this:

Quote:
... are preferences "external", "guides" branded upon a metaphysical "person" or does personal identity already include preferences?
Well, if one’s own preferences are “external,” what, pray, would qualify as “internal”? And if one’s personal identity doesn’t include such things as one’s preferences, what sorts of things does it include?

Finally, you bring up yet again the subject of robots:

Quote:
And yet, we do not hold artificial machines (as opposed to biological machines), which also are the sum of the preferences and do not choose their character, responsible.
Actually it’s not hard to explain why this is so. But of course you’ll reject any such explanation on the grounds that (based on Flew’s A) we shouldn’t hold biological machines responsible for their actions either. So what’s your point? It seems that you have rejected in advance any possible grounds for holding any entity responsible for its actions. This rules out any meaningful discussion of such matters.

Anyway, this approach to the question of moral responsibility is perverse. Rather than saying “We hold humans responsible, but not (existing) machines; what does this tell us about the conditions required for moral responsibility?” you say “We don’t hold machines responsible; since there is obviously no relevant difference between machines and humans, isn’t it obvious that we shouldn’t hold humans responsible either?” You don’t seem to be willing to even entertain the possibility that machines that were sufficiently like humans in the relevant ways should be held responsible. But why is this so obviously unreasonable?

3. “Perfect, Autonomous Correlation”

You say:

Quote:
I feel that you repeatedly fail to distinguish between cause and correlation...
Have you read what I wrote on the subject of causation? I went to great lengths to distinguish causation from correlation. What do you think was the point of talking about “underlying structure” and “possible worlds”? Unless you show some minimal understanding of what I’ve already said there is little point in discussing this subject further.

Later, along the same lines, you say:

Quote:
The problem with this part of your argument is that you identify the same phenomenon, perfect correlation, and assume the "causal" explanation and forbid the other explanation of "their own exactitude" (perfect and autonomous correlation).
This is a grotesque misrepresentation of what I’ve said. Please read what I have actually written about causation. I pointed out explicitly that it’s quite possible for A-type events and B-type events to be perfectly correlated in this world without being causally related.

Quote:
For example, if I threw a dice twice and "2" came up both times, there was a 2 / 2 = 100% chance ...
Stop right there. Have you read what I wrote on the subject of probability? I pointed out that all references to probability could be “translated out” of what I had said (and explained carefully how this could be done). So if you don’t like my “statistical” comments, you already know how to eliminate them. And once again, if you read what I wrote, you’ll find that I would not say in this case that there was a 100% probability. I do not subscribe to the “frequency interpretation” of probabilities. I defined as clearly as I know how what I mean by “probability zero” and “probability one” (which are the only probabilities that actually enter into this discussion).

Quote:
You threaten the libertarian with this inconsistency but perfect correlation is precisely what allows him to make the same choice eternally.
I’m not at all clear as to what inconsistency you’re referring to, but it appears that this claim is based on your near-total misunderstanding of my extensive comments about causality, probability, and determinism. Anyway, I never “threatened” anyone with anything. Please try responding to my actual statements.

Quote:
Perhaps I did not explicitly distinguish between LFW and other compatibilist redefinitions but, obviously, if no cause forces a person to choose (according to LFW) that cannot be compatible with determinism (that references causes).
First, as usual you’re using totally inappropriate language to create a misleading impression. When Susie chooses strawberry over chocolate because she prefers strawberry, she is not being forced to choose strawberry; she is choosing strawberry because she prefers it. This is the very opposite of being forced. When the cause of a choice lies within oneself it’s absurd to say that this cause “forces” you to make that choice, because the cause is you. You’re saying, in effect, that Susie is forcing herself to choose strawberry. Do you imagine that somewhere inside her, the “real Susie” is saying “Please, please, not that! Don’t make me choose strawberry!”? If not, please stop using terms like “force” in this context.

Second, as I argued at some length in my Sept. 6 post, if (as you put it) no cause forces the agent to choose (or more properly, if the choice has no cause) he cannot properly be said to choose at all. In order for it to be a choice, it must have a cause, and this cause must reside in the agent. [Please don’t respond to this statement; respond to the argument to this effect given earlier.]

Now let’s go back to what seems to be the critical point here:

Quote:
... perfect correlation is fundamental and necessary to any concept of LFW because, although the libertarian may wish to have the power to freely choose, he also needs the power to make the same choice eternally, or else he must admit intolerable inconsistency.
Let’s try to parse this. The “perfect correlation” that you refer to is obviously not a perfect correlation between the choices that the agent actually makes when presented with the exact same choice while in the exact same state many, many times. There are no such “choices”; any such situation will always occur at most once. So of course the agent will always make the same choice on 100% of the relevant occasions (there being only one such occasion).

So what events is this “perfect correlation” a correlation between? What choices are these that, if different, would involve an “intolerable inconsistency”? What occasions (plural) are these in which the agent “must have the power to freely choose”, yet must also have “the power to make the same choice” in every case?

Isn’t it obvious that you’re talking about different possible worlds here? Worlds that are exactly like this one up to some critical point preceding the act in question? It appears that you’re saying that in all of the relevant possible worlds, the agent makes the same choice as in this one, yet that it is possible that he could make some other one. But what could this mean? What sense does it make to say that it is possible that a certain event will occur under certain conditions, but that in all possible worlds which satisfy these conditions the event does not occur?

Perhaps you have some satisfactory answer to this. Perhaps you really have some intelligible meaning of “Perfect, Autonomous Correlation” in mind that I haven’t thought of. If so, perhaps you would like to share it with us. But my experience has been that, although advocates of LFW always claim to have a coherent concept in mind, they never seem to be able to communicate it to the rest of us.

[ September 12, 2002: Message edited by: bd-from-kg ]
bd-from-kg is offline  
 
