FRDB Archives

Freethought & Rationalism Archive

The archives are read only.


Old 02-08-2003, 08:26 PM   #181
Veteran Member
 
Join Date: Jul 2000
Location: King George, VA
Posts: 1,400
Default

rainbow walking, K, and others:

It seems that Kenny is right: the notion that no belief which cannot be inferred from evidence can be rationally justified seems to be an article of faith for many here. Instead of going over and over the arguments as to whether this or that belief can or cannot be justified by evidence or is rationally justified, it may be instructive to see why many philosophers consider this idea to be untenable.

One of the best explanations of this at an introductory level that I know of is the one by Larry Sanger in “Larry’s Text”, which is part of the online Wikipedia. The relevant article, from which the excerpt below is taken, can be found [url=http://www.wikipedia.org/wiki/The_regress_argument_in_epistemology]here[/url].

Quote:
All of the foregoing has prepared us to come to grips with a very important argument in epistemology, called the regress argument or the infinite regress argument. It goes like this:

(1) Suppose that the belief that Q is justified by the belief that P; so P justifies Q.

(2) But if P justifies Q, then P is justified.

(3) So P is justified.

(4) But if P is justified, then it must be justified by some other belief; and that belief must be justified by some other belief; and so on. There is a chain of justifying beliefs. And then there are three possibilities: (a) the chain goes on forever; (b) the chain loops around on itself, forming a circle; or (c) the chain begins with a belief that is justified, but which is not justified by another belief.

(5) Possibility (a) (called regressism) is obviously incorrect.

(6) Possibility (b) (called coherentism) is incorrect (for various reasons, explained below).

(7) Possibility (c) (called foundationalism) is the only possibility left and must be correct.

(8) Therefore, there must be some beliefs that are justified, but which are not justified by other beliefs: these are called basic beliefs. All other beliefs are justified by basic beliefs.

So the regress argument, as I've presented it here (and as it is usually presented) is an argument for a theory of justification called foundationalism. But I don't want to talk about foundationalism yet. Let's go back over this argument carefully, first.

Now, we've already explained steps (1) through (3). It advances that principle we talked about before, that if a belief is justified by another belief, then the justifying belief itself must be justified. Now let's look at premise (4). This is the really important step of the argument: "If P is justified, then it must be justified by some other belief; and that belief must be justified by some other belief; and so on. There is a chain of justifying beliefs. And then there are three possibilities: (a) the chain goes on forever; (b) the chain loops around on itself, forming a circle; or (c) the chain begins with a belief that is justified, but which is not justified by another belief."

Let me try to put this premise in entirely different words. Pick a belief of yours - any old belief; call it the belief that Q. Suppose you say that Q is justified. OK, then there's something that justifies it. Suppose you want to say it is justified by another belief, P. But in that case, you'd have to know what justifies P. And what justifies that. And so on. You can't have an infinite regress of justifying beliefs -- you don't have an infinite number of beliefs supporting any given belief of yours! So there's another choice; you could say that the belief justifies itself somehow; either directly, so that Q justifies Q; or indirectly, so that Q justifies R, S, and so forth, down to Z say, and Z then justifies Q. The whole set of beliefs then forms a circular chain of justification. That's at least a possibility. And there is one other possibility, namely, that Q is justified not by another belief, but by something else, something that isn't a belief. We'll have to see what that something else might be later. Anyway, all this is in explanation of what premise (4) says.

Now to premises (5) and (6). These two premises eliminate options (a) and (b), respectively, leaving us with option (c). To begin with premise (5), which eliminates option (a). Premise (5) says: "Possibility (a) (called regressism)" -- which is the possibility that the chain goes on forever -- "is obviously incorrect." In other words, it is obviously wrong to say that a chain of justification goes on forever. I could but will not elaborate; virtually no philosophers take it seriously. I think you should be able to see generally why this is so obviously wrong, though. I mean, for one thing, we don't have an infinite number of beliefs.

Now for premise (6), which eliminates option (b). Premise (6) says: "Possibility (b) (called coherentism)" -- which is the possibility that the chain loops around on itself, forming a circle -- "is incorrect." Now, coherentism is taken much more seriously than regressism. So I'm going to discuss coherentism at greater length.

Remember that the word "coherentism" can mean either a theory of truth or of justification. Here is a definition of the coherence theory of justification, or coherentism for short:

A belief is justified if and only if it is part of a coherent system of mutually supporting beliefs (i.e., beliefs that support each other)...

But there have been a lot of powerful objections to coherentism. I want to review three of the objections.

The first objection is that coherentism seems to imply that circular justification is just fine. There's nothing wrong with saying that P supports P, ultimately. But that seems wrong. Think of it like this. If you say, for example, that P justifies Q, and Q justifies R, and R justifies P, then it seems like you're saying that you could argue: P, therefore Q; Q, therefore R; R, therefore P. If all that is true, then we could argue: P, therefore P. Well, that's obviously a fallacious sort of argument; it's called "begging the question" or "arguing in a circle." But coherentism seems to imply that such an argument is just fine -- that there's nothing wrong with it. But there is something wrong with it; therefore, coherentism must be rejected.
Sanger goes on to explain further very cogent objections to coherentism, but it seems to me that this one objection is sufficient, since it is completely fatal. It’s rare to be able to absolutely refute a philosophical theory even once; three times looks like overkill to me. And anyway, it doesn’t seem that anyone here is advocating coherentism.
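(To put the circularity objection in symbols – my own shorthand, not Sanger’s – write $X \Rightarrow Y$ for “the belief that X justifies the belief that Y”. Then the circular chain is

\[ P \Rightarrow Q, \qquad Q \Rightarrow R, \qquad R \Rightarrow P, \]

and stringing the links together yields $P \Rightarrow P$: the belief ends up resting on itself, which is just the fallacy of arguing in a circle.)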

So we’ve now eliminated logical possibilities (a) and (b), leaving (c) as the only logical possibility still standing. And this leads directly to (8):

Quote:
There must be some beliefs that are justified, but which are not justified by other beliefs: these are called basic beliefs. All other beliefs are justified by basic beliefs.
This is actually stronger than saying that some beliefs are justified, but are not justified by evidence, because saying that a belief is justified by evidence involves at least two beliefs: a belief that the evidence is true, and a belief that the evidence actually justifies the belief in question.

Now an obvious answer to all this is roughly as follows: “Yes, there are beliefs that are justified but not justified by other beliefs. But the only such beliefs are beliefs to the effect that I am right now having such-and-such a mental experience. All other justified beliefs must be justified solely from beliefs of this sort.” But a moment’s thought shows that there are few, if any, valid conclusions that can be drawn solely from such beliefs. (If you don’t believe this, try it. All you’ll get is stuff like this: From the fact that I am having mental experience M1, I can conclude that either I am having mental experience M1 or P is true (where P is any proposition). Or, from the fact that I am having M1 and the fact that I am having M2, I can validly conclude that “I am having M1 and M2” is true. Or, from the fact that I am having M1 I can validly conclude that there is at least one mental experience that I am having. Pretty exciting stuff, eh?) If you want to get beyond this sort of thing you simply must accept some premises about what it is valid to infer from such beliefs. (Of course these inferences will typically only be probable.) But in terms of having rationally justified beliefs, this doesn’t help unless these additional premises are themselves rationally justified, since conclusions based on premises that are not rationally justified are not thereby rationally justified.
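(For the record, the three “exciting” inferences above come out as follows in standard notation – writing $E_1$ for “I am having mental experience M1”, $E_2$ for “I am having mental experience M2”, and $P$ for an arbitrary proposition:

\[ E_1 \vdash E_1 \lor P \quad\text{(disjunction introduction)} \]
\[ E_1,\ E_2 \vdash E_1 \land E_2 \quad\text{(conjunction introduction)} \]
\[ E_1 \vdash \exists x\,(\text{I am having mental experience } x) \quad\text{(existential generalization)} \]

Nothing in this list takes you a single step beyond the experiences themselves.)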

That’s where things like belief in the principle of induction, the general reliability of one’s memory, and Ockham’s Razor come in. If you don’t accept the first two, what do you propose to use in their place to get from beliefs about what you are experiencing right now to statements about what you have experienced or will experience? And if you don’t accept something like Ockham’s Razor, how do you propose to get from statements about your mental experiences to statements about the “real world” ?

So you have two choices: either embrace radical skepticism and say that there are no (nontrivial) rationally justified beliefs at all, or accept that there are rationally justified beliefs other than beliefs about what one is now experiencing, and that among these must be beliefs about what can be validly inferred from those experiences.

Kenny:

I was hoping to have a reply to your first Jan 31 post ready last night, but I didn’t make it, and I was away all day today. It should be ready (finally) tomorrow.
bd-from-kg is offline  
Old 02-08-2003, 08:32 PM   #182
Veteran Member
 
Join Date: Jul 2000
Location: King George, VA
Posts: 1,400
Default

K:

You say:

(1) There is no evidence that any tool that is useful today will be useful in the future.

You also apparently believe:

(2) No belief which cannot be inferred from evidence can be rationally justified.

But from these premises it follows that

(3) No belief that any tool that is useful today will be useful in the future is rationally justified.

And it follows rigorously from this (combined with one or two other pretty uncontroversial assumptions) that:

(4) No belief about the future is rationally justified.

Do you really believe this?
bd-from-kg is offline  
Old 02-08-2003, 09:28 PM   #183
K
Veteran Member
 
Join Date: Aug 2002
Location: Chicago
Posts: 1,485
Default

bd-from-kg:

You got me there. I got carried away and very sloppy. Let me rephrase a little.

There is no evidence that a tool that is useful today will be useful in the future. There is evidence to suggest that it is useful to make the assumption that a tool that is useful today will be useful in the future (until evidence shows otherwise).

You've convinced me. I'm willing to concede the point that there are rational beliefs that have no evidence - but only where there is evidence that shows that holding those beliefs aids in creating a functional model of the environment.

I guess that leads to the question, does evidence of the utility of a belief constitute evidence for the belief itself? That probably all boils down to precise definitions of evidence, belief, and utility.
K is offline  
Old 02-09-2003, 11:19 AM   #184
Veteran Member
 
Join Date: Jul 2000
Location: King George, VA
Posts: 1,400
Default

Kenny:

Here finally is a reply to your first Jan 31 post. (Well, most of it anyway. I’ll talk about the stuff about belief in other minds in yet another post, but this should be mercifully short.)

1. On my description of rational justification

Quote:
Here you define ‘functioning effectively’ as ‘being able to make choices that have a good chance of bringing about desired results’ but I am curious as to what “desired results” you are referring to
This reference to “desired results” seems to have opened the door to a number of misconceptions. Perhaps it would have been better to say that being able to function effectively (in the sense relevant to the “rational project” ) means being able to predict the results of various possible courses of action (significantly better than we could do by making random guesses). It has nothing to do with achieving any specific goal. Only strategies which are aimed at the goal of acquiring true beliefs qualify as part of “rationality itself” – at least in the sense relevant to “rational justification”. A belief that is adopted because one thinks that it will contribute to some other goal is not rationally justified in this sense, although it might be pragmatically justified. (Another way of making this point is to distinguish between being “epistemically justified”, or “within one’s epistemic rights” in believing something, and being “pragmatically justified”.) To say that a belief is rationally justified because it serves a goal that you might have, but which a being that obviously qualifies as “rational” might well not have, is an abuse of language. A belief is rationally justified (roughly speaking) if it can be arrived at by following a “rational strategy” and the person who has it got it via a procedure that can be reasonably interpreted as following this strategy.

My description of rational justification is not intended to be stipulative, but descriptive. In other words, my intent is not to “make up” a new meaning of “rationally justified” to serve my purposes, but to analyze the concept that most people have in mind when they talk about whether a belief is rationally justified. It seems clear to me that they do not mean that there is reason to think that the belief will serve some end other than maximizing one’s true beliefs. I take it as self-evident that any rational being will have the end of maximizing his true beliefs and minimizing his false ones. Perhaps I was a bit sloppy in explaining why this is so, since it’s so self-evident, but this minor sloppiness is not an excuse for trying to import all kinds of reasons for believing things into the notion of rational justification that clearly don’t belong there – reasons that, in fact, are fundamentally opposed, in spirit and effect, to the end of maximizing true beliefs.

Quote:
... there might be a considerable amount of tension between the dual goals of maximizing true beliefs and minimizing false ones in as many worlds as possible.
True. Perhaps the best way of dealing with this is to say that a strategy is rational if it’s aimed at maximizing true beliefs and minimizing false ones and no other strategy is “clearly better” in the sense that it can be expected to produce more true beliefs and fewer false ones. This means, of course, that there is more than one “rational strategy”. As to which one to follow, this would seem to be a truly pragmatic decision – i.e., it depends on circumstances. For example, it may be that one doesn’t have enough information to decide that either P or not-P is true with more than (say) 60% probability. Ordinarily the rational choice is to believe neither P nor not-P. But it’s often necessary to act now, and the choice depends critically on whether P or not-P is true. In that case it’s rational to believe whichever of P and not-P is more likely.

Quote:
Second, there is need of some further qualifications, it would seem, on the condition “in as many worlds as possible.” The God of classical monotheism, for instance, is understood to be a necessary being.
The only sense that I can make of the claim that God is a necessary being is that the statement “God exists” is a tautology. If that is what you mean, and you can see clearly that this statement is a tautology, you are perfectly within your epistemic rights to consider it a “properly basic belief”. I see no harm in considering beliefs in tautologies that one can “see” are true (even Gödel’s incompleteness theorem, or the independence of the continuum hypothesis) to be properly basic beliefs. I don’t see any point in trying to distinguish (as Kant did) between tautologies that are “self-evident” and ones that aren’t.

If what you mean by calling God a necessary being is something other than that “God exists” is a tautology, I haven’t the faintest idea what you’re talking about.

Quote:
Third, you further restrict the range of possible worlds to which the rational strategy is to be tailored to those worlds in which “there is a reasonable possibility of functioning effectively”
The rational strategy isn’t “tailored” to such worlds. There is no strategy that would work in other worlds. If there are no discoverable patterns or regularities, we might occasionally happen to act in ways that achieved our aims, but depending on luck cannot be called a “strategy”.

Quote:
...as I pointed out, it is possible to accomplish this goal without believing in things such as the reliability of inductive reasoning, the reliability of memory, etc. One could be an agnostic about all such things ...
Actually it’s possible in principle to accomplish any goal that can be achieved by following the rational strategy in this way. But as I commented in a previous post, I don’t think that this is really possible for humans. We cannot consistently act “as if” something is true without believing, or coming to believe, that it is true.

Quote:
Even if beliefs such as belief in the principle of induction or belief in the reliability of memory are necessary for functioning effectively, however, there are many other types of beliefs which we would like to consider properly basic which would seem excluded from this analysis.
Let’s see.

Quote:
One such belief would be the belief that reality is as it appears to be.
Well, in one sense it’s meaningless even to ask whether reality is as it appears to be.

For example, does a juicy steak “really” taste the way we “think” it tastes? What could a question like that mean? What possible state of affairs would be represented by the statement “This steak doesn’t really taste the way it seems to me to taste”? Or, does that blue ball really have a “blue” appearance? What does that mean? What would it mean to say that although it looks blue to everyone who isn’t color-blind, it doesn’t “really” have a blue appearance? The same problem exists for every question of this nature. These are pseudo-questions.

But judging from your example of a “matrix-type simulation”, what you have in mind is the possibility that the conceptual framework, or ontology, that we’ve created to account for our experiences is radically incomplete – that the “reality” it corresponds to is embedded in a much larger reality. But the belief that “reality is as it appears to be” in this sense is just a special case of the “default” belief that “real-world” entities that we have no evidence for don’t exist. And the justification for this is simply Ockham’s Razor, which is one of the basic “principles of rational action”. It doesn’t need some additional “pragmatic” justification.

Quote:
Perhaps, however, you mean more by the phrase ‘desired results.’ Perhaps you mean for ‘desired results’ to include not just getting by, but also to have a reasonable chance of gaining a significant understanding of the reality in which one lives.
I think what you’re getting at is that we desire to “understand reality”, not just because this will help us to attain other ends, but as an end in itself; an ultimate end or “intrinsic good”. I would certainly agree with that.

2. On the rationality of believing in God

Although I hope I’ve made it clear that the kinds of argument that you advance here for believing in God do not constitute “rational justification” for this belief (at least in any ordinary sense), it’s worth examining whether they are valid pragmatic justifications.

I feel compelled to say at the very outset that I regard all such arguments with scorn. I regard them as representing a willful rejection of rationality; an abandonment of intellectual integrity. I think that it is downright wicked to advocate believing things because doing so will serve your purposes rather than because your best judgment is that they are true. I’ll give reasons for this attitude later. But in truth I don’t need reasons. My commitment to the disinterested search for truth is at least as deep as your commitment to believing in God.

Anyway, let’s look at your arguments.

Quote:
Certainly, if God exists, then missing out on believing that fact means missing out on believing something very deep and significant and important about reality.
And if God doesn’t exist, believing that He does means missing out on believing something very deep and significant about reality. What’s your point?

Quote:
Furthermore, whether one believes in God or does not believe in God has a significant impact on the beliefs that one has about everything else.
Yup. And this is supposed to be an argument for adopting this belief without demanding any evidence?

Quote:
So, if I find myself with a strong inclination to believe that God exists, why shouldn’t I go ahead and believe it?
Well, my answer would be: because this is not a rational reason for thinking that it’s true. But since (based on your comments here) you seem to have abandoned intellectual integrity to the extent of believing or disbelieving things based on what’s in it for you rather than on your best rational judgment as to whether it’s true, perhaps you won’t find this reason very cogent.

Quote:
After all, if God does exist then it is likely that He might have placed such an inclination in me and if I don’t believe in God in response, I might drastically undermine my goal of coming to a significant understanding of the reality in which I live.
And if God doesn’t exist it is likely that this inclination is either the result of a personal idiosyncrasy or an accidental byproduct of some tendencies produced by natural selection because of their survival-enhancing effects in other contexts, and if you believe in God in response, you might drastically undermine your goal of coming to a significant understanding of the reality in which you live.

Quote:
Of course you might argue that believing in God entails the same risk, but the risk is at least as great either way, so why shouldn’t I believe as I am inclined in this matter?
Rationality. Intellectual integrity. Self-respect.

Anyway, the risk isn’t “equally great either way”. The “riskiness” of each choice depends not only on the expected loss if you’re wrong, but on the likelihood of being right – i.e., the a priori probability that God exists.
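(To put the point in rough decision-theoretic terms – a sketch only, with $p$ standing for the a priori probability that God exists and $L_B$, $L_D$ for the losses incurred by believing wrongly and by disbelieving wrongly:

\[ \text{Risk(believe)} = (1-p)\,L_B, \qquad \text{Risk(disbelieve)} = p\,L_D. \]

Even if the two losses were comparable, the two risks are equal only if $p$ is about 1/2, so the a priori probability cannot simply be left out of the comparison.)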

Quote:
Furthermore, if ‘desired results’ includes more than just mere survival and getting by with a reasonable amount of pleasure and comfort, what limits are to be placed on what should be included?
“Desired results” was a placeholder in my description of what “rationality” means, like X and Y in “If X is true and X implies Y, then Y is true.” It didn’t refer to any specific outcomes.

Quote:
Finally, from a Christian perspective, our ‘desired results’ are themselves skewed by the reality of sin. What we want for ourselves is not what we should want (and given the prevalence of greed and oppression in this world, I do not find this at all difficult to believe)...
Neither do I. In fact, up to this point I agree with you completely. I think that what we should desire is what we would desire if we had the most complete possible knowledge and understanding, and that this is indeed probably radically different from what we do desire. In fact, I think that our desires would be far more altruistic if we had enough knowledge and understanding.

Quote:
Even without a belief in Christianity, it seems plausible that there might be moral norms for what we ought to desire.
Once again I agree, although I suspect that we’d disagree about what it means for something to be a “moral norm”.

Quote:
And, all my observations of the world confirm to me that the affections of most human beings are horribly skewed toward the wrong things. Thus, it seems that “functioning effectively” likely involves something else besides being able to fulfill one’s desires – it also requires having oneself orientated toward the right desires.
Agreed, although here you’re using “function effectively” in a different sense than I was using it in my description of “rationality”. There I was talking about being able to fulfill whatever desires one has, which was just another way of saying “being able to achieve whatever one chooses to try to achieve” (so far as knowledge can help one do this). You’re using it to mean something like “being in a position to achieve what one would choose to achieve if one had enough knowledge and understanding”. I don’t see any real incompatibility here as long as we keep in mind that we’re talking about different things.

Quote:
And, as I said, according to Christianity, the only way we can have our desires reoriented to the right things is by means of God’s grace which enters our lives by means of faith (i.e. trust) in God.
But here we must part company, not only because I don’t believe in Christianity, but because you’re once again arguing that we should abandon intellectual integrity and believe something, not because our best judgment is that it’s true, but because believing it might get us something that we want.

3. The nature of the fallacy in these arguments

The rest of your post just keeps making the same kind of argument over and over again, so it’s more efficient to examine what’s wrong with this kind of argument rather than going over each instance in detail.

Each of these arguments goes like this:

I desire D.
If P is true, I can get D only by believing P.
Therefore it’s rational for me to believe P, regardless of whether there’s any evidence that P is true.

The first thing to note about this argument is that it’s a purely pragmatic argument rather than (like the arguments for accepting the principle of induction, etc.) an existential one. An existential argument says that I have no choice this side of madness but to accept the proposition in question; a pragmatic argument says that if I get really lucky I’ll get something I want if I accept it. The existential arguments establish the essential conditions for rationality; the pragmatic arguments advocate abandoning rationality in hopes of achieving some specific desired goal.

Your basic theme seems to be that if we accept existential arguments for accepting certain beliefs, we must, if we are to be consistent, accept pragmatic arguments for believing things as well. But surely it’s clear that accepting things that are essential prerequisites for rationality doesn’t commit us to abandoning rationality.

Second, accepting this type of argument as valid would commit us to accepting a lot of other arguments along the same lines which I think that you would be inclined to reject. Some examples:

I desire to be able to control my fate.
If my fate is strongly intertwined with the stars and planets, I can control my fate only by believing in astrology.
Therefore it is rational for me to believe in astrology regardless of whether there’s any evidence that it’s true.

I desire to live forever in perfect happiness.
If religion X is true, I can do so only by accepting religion X.
Therefore it is rational for me to accept religion X regardless of whether there’s any evidence that it’s true.

I desire that the rest of mankind live in peace and harmony for all time.
If what this nice Martian is telling me is true, I can achieve this by giving him permission to torture me mercilessly until I’m dead.
Therefore it’s rational for me to give him permission to torture me mercilessly until I’m dead regardless of whether there’s any evidence that what he’s telling me is true.

To give up your intellectual integrity in order to believe things for such reasons is to knowingly and deliberately enter the intellectual ghetto. It is to give up everything worth having in order to devote your life (or death) to delusions and fantasies.

Third, it’s a really bad idea to start thinking in terms of “what’s in it for me” when deciding what to believe. When you do this for one belief, it radically undermines your commitment to intellectual integrity with respect to forming beliefs in general.

This point was expressed eloquently in W. K. Clifford’s classic essay The Ethics of Belief:

Quote:
Every time we let ourselves believe for unworthy reasons, we weaken our powers of self-control, of doubting, of judicially and fairly weighing evidence. We all suffer severely enough from the maintenance and support of false beliefs and the fatally wrong actions which they lead to, and the evil born when one such belief is entertained is great and wide. But a greater and wider evil arises when the credulous character is maintained and supported, when a habit of believing for unworthy reasons is fostered and made permanent. If I steal money from any person, there may be no harm done from the mere transfer of possession; he may not feel the loss, or it may prevent him from using the money badly. But I cannot help doing this great wrong towards Man, that I make myself dishonest. What hurts society is not that it should lose its property, but that it should become a den of thieves, for then it must cease to be society. This is why we ought not to do evil, that good may come; for at any rate this great evil has come, that we have done evil and are made wicked thereby. In like manner, if I let myself believe anything on insufficient evidence, there may be no great harm done by the mere belief; it may be true after all, or I may never have occasion to exhibit it in outward acts. But I cannot help doing this great wrong towards Man, that I make myself credulous. The danger to society is not merely that it should believe wrong things, though that is great enough; but that it should become credulous, and lose the habit of testing things and inquiring into them; for then it must sink back into savagery.

The harm which is done by credulity in a man is not confined to the fostering of a credulous character in others, and consequent support of false beliefs. Habitual want of care about what I believe leads to habitual want of care in others about the truth of what is told to me. Men speak the truth of one another when each reveres the truth in his own mind and in the other's mind; but how shall my friend revere the truth in my mind when I myself am careless about it, when I believe things because I want to believe them, and because they are comforting and pleasant? Will he not learn to cry, "Peace," to me, when there is no peace? By such a course I shall surround myself with a thick atmosphere of falsehood and fraud, and in that I must live. It may matter little to me, in my cloud-castle of sweet illusions and darling lies; but it matters much to Man that I have made my neighbours ready to deceive. The credulous man is father to the liar and the cheat; he lives in the bosom of this his family, and it is no marvel if he should become even as they are. So closely are our duties knit together, that whoso shall keep the whole law, and yet offend in one point, he is guilty of all.

To sum up: it is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence.
4. Further objections to “pragmatic” arguments for believing in God

Finally, there are special problems with the specific arguments of this type that you offer, which can be illustrated by considering your final argument for belief in God:

Quote:
Suppose I desire a meaningful personal relationship with my Creator or with the Ultimate Reality which underlies the universe. Such would only be possible if my Creator or the Ultimate Reality were in some sense personal, so why should I not believe that it is personal – why shouldn’t I believe in God?
The fatal flaw here is that you’re proposing to have some sort of dealings with a supernatural being – one that cannot be seen, or heard, or felt, or touched. How are you going to make contact with it? Say that you observe something that you interpret as a physical manifestation of God; how do you know that it was caused by a being with the characteristics you attribute to God? In fact, forget about attributes like omnipotence and omniscience; the real point is: how do you know that the being in question is benevolent? How can you know what his purposes are; whether his “values” are even remotely compatible with yours? How do you know that having a “personal relationship” with it (even if such a thing happened, by an amazing chance, to be possible) would serve any purposes of yours?

One possible answer might be that you know that God exists (via some version of the ontological argument, say), and that His nature is such that He wouldn’t allow any other being to take advantage of your desire to have a relationship with God to lure you into a personal relationship with it. But for this to work you must already know that God exists before you start.

Similarly, you suggest:

Quote:
Suppose my desire is to ... come to a deep understanding of the purpose and destiny of the universe ...
But even assuming that you can somehow communicate with God and know that you’re doing so, how can you know that you’re learning anything about the “purpose and destiny of the universe” (supposing that this phrase even means anything)? In other words, how can you tell whether God is telling you the truth? Because God, being benevolent, wouldn’t lie? But how do you know that being benevolent entails never lying? Don’t you have to know somehow that truthfulness is “intrinsically good” before you can know that? And how can you know that truthfulness is intrinsically good? Well, presumably by using your cognitive faculties (or some other faculties given to you by God). But unless you know by some other means that God is truthful, how can you know that He gave you cognitive faculties designed to predispose you to true beliefs about such things (or about anything else for that matter)?

In fact, if God values truth so highly (especially about really important stuff like the ultimate purpose and destiny of the universe), why didn’t He just implant this knowledge in all of us from the start? If He desires that we know that He exists, why doesn’t He just tell us so? Why does He remain hidden? Why does He leave us in ignorance?

A standard answer to this is that God may have other, unknown purposes that cause Him to act as He does. Fine, but unless we assume that being truthful with us puny humans is His highest purpose of all, we then have no way of knowing that He wouldn’t lie to us about anything and everything, to further these unknown purposes.

In short, unless we already know a great deal about the “ultimate purpose and meaning of the universe”, and this knowledge does not derive ultimately from God, we have no rational grounds for believing anything that God tells us, or anything that we are predisposed to believe because God designed us that way.

Also, it’s possible that if we choose to believe in God because we believe that this belief might be instrumental in achieving some personal goals (such as having a personal relationship or acquiring understanding), God will turn away from us on the grounds that our purposes are unworthy.

Finally, it’s quite possible that what God really wants is for us to use the rationality that He endowed us with, to believe only what we have sufficient evidence for (aside from “metaphysical axioms” such as the principle of induction, for which no evidence is possible because they are the “rules” for evaluating and interpreting evidence). Perhaps He will favor only those who resist the temptation to believe things for “pragmatic reasons”. Perhaps he doesn’t want us to believe in Him until the appropriate time; perhaps to believe in God (at this time anyway) is to oppose the “ultimate purpose of the universe”.

It seems to me that there is no real answer to objections of this sort.
bd-from-kg is offline  
Old 02-09-2003, 04:03 PM   #185
Veteran Member
 
Join Date: Jul 2000
Location: King George, VA
Posts: 1,400
Default

Kenny:

OK, here’s the last installment of my comments on your Jan 31 posts: the stuff about the belief in other minds.

I should note that Plantinga has written a lot of stuff on this subject, none of which I’ve read. So you have me at a distinct disadvantage. In effect I’m going against one of the foremost philosophers of the day with nothing but my own, almost “off-the-cuff” thoughts on the subject. Anyway, here goes.

There seem to be four basic issues here:

(1) Is there strong empirical evidence that human beings have desires, beliefs, purposes, motives, etc.?

(2) Is this the same thing as saying that humans have minds, or is this an additional claim?

(3) If it’s an additional claim, is there good empirical evidence for it?

(4) Can the belief that other humans have minds be accounted for in straightforward naturalistic terms, or does it require a supernatural explanation?

Let’s take these one at a time.

(1) Is there strong empirical evidence that human beings have desires, beliefs, purposes, motives, etc.?

The answer to this seems to me to be so obviously “yes” that I’m tempted to just go on to the next point. But one or two of your remarks suggest that you don’t agree. In particular, you say:

Quote:
We don’t need to say that other beings function “just as if they were controlled by minds.” ... All we need to say is that certain types of stimuli for certain types of entities correlates to certain types of behaviors. To say more is to go beyond what the evidence warrants.
Just the same, I’m inclined to avoid arguing about this point. I think that most likely we’re just misinterpreting one another, and that we don’t really disagree on this.

Perhaps it will help if I say that what I mean by having desires, beliefs, purposes, etc. doesn’t entail (or at least it’s not at all obvious to me that it entails) consciousness. A robot who is sufficiently complex that its behavior is best understood in terms of desires, purposes, etc., would have these things by definition as I use these terms.

(2) Is this the same thing as saying that humans have minds, or is this an additional claim?

The physicalist claim, as I understand it, is that saying that an individual has a mind is identical to saying that he has such things as desires, beliefs, purposes, etc. (or more precisely, that the second statement logically entails the first). Of course it’s not obvious that the first statement is entailed by the second; in fact it seems obvious to most people that it isn’t; that saying that someone has a mind is saying something in addition to this. But the fact that it’s not obvious doesn’t mean that it isn’t true. For example, it’s not obvious that “the sum of the squares on the two shorter sides of a triangle equals the square on the longest side” says nothing more than “this is a right triangle”, but it’s true.
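(Spelled out: by the Pythagorean theorem together with its converse, for a triangle with sides $a \le b \le c$,

\[ a^2 + b^2 = c^2 \iff \text{the angle opposite } c \text{ is a right angle,} \]

so the two statements are true of exactly the same triangles, even though the equivalence is anything but obvious on its face.)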

Just the same, I don’t really understand the physicalist position. It’s true that it can be defended very successfully, but it seems to me that it can only be believed if one ignores the obvious fact of one’s own consciousness.

So I’m not prepared to take a position on this question.

(3) If it’s an additional claim, is there good empirical evidence for it?

Here we definitely disagree. I think there is good empirical evidence that other human beings have minds.

To see why, let’s imagine that I discover life on another planet which is clearly unrelated to human life (i.e., it didn’t descend from it or vice-versa). I find one particular species especially interesting because of a curious exhilarating sound that its members make from time to time. I examine a great many of them externally – i.e., without dissecting them – and find that all of them are identical in terms of basic architecture and structure; they have very similar genetic material, etc. Moreover, I observe that they all interbreed and thus share essentially the same genetic makeup, and that this genetic makeup controls their development from the very beginning – from the creation of the single cell from which everything else develops. From all of this I come to expect that all of them will be essentially similar internally as well. Eventually I begin to dissect them, and notice right away that they are indeed essentially similar internally as well with respect to the first few things I check: their digestive organs are structurally identical, as are their organs of sight, of hearing, etc. Finally I examine one of them closely and locate the structure responsible for that curious sound that I noticed early on. Unfortunately I have to leave the planet immediately without having a chance to examine any more specimens. Just the same, on the basis of this one specimen, I conclude that all of the members of this species (except possibly for a few malformed ones) have this structure, and that it’s what produces this sound in all cases.

Is my belief rationally justified? I say that it is. The fact that I only observed this particular structure in one specimen is just a small part of the picture. The fact that I observed that all of the individuals of this species are very much alike in all important respects that I was able to observe is an extremely relevant fact as well. The fact that I understand why they’re so similar, and can predict on this basis that they should be similar in all important respects, is also highly relevant. It’s not as though the observation of this one feature of this specimen is the only observation ever made of any member of this species. Taking all of the relevant evidence into account, it seems to me that Ockham’s Razor dictates that pending further evidence, I must assume that all members of the species have this feature. Otherwise I’d have to explain why this particular specimen is different in this respect from the others, and why I happened to select this “outlier”.

(4) Can the belief that other humans have minds be accounted for in straightforward naturalistic terms, or does it require a supernatural explanation?

It seems very clear to me that this belief can be accounted for naturalistically. The simplest explanation is that, when we try to understand another person’s behavior, the most natural thing in the world is to imagine what we would do under similar conditions. If this doesn’t work, we try imagining what we’d do if we had certain traits (a quick temper, or a low IQ, for example) that the person in question has and we don’t. In other words, we don’t build a conceptual framework for understanding how other people behave “from scratch”; we use ourselves as a “template” – a starting point. But of course, this “template” incorporates the fact of our consciousness. Imagining how we would behave in someone else’s place almost unavoidably means imagining that person as having consciousness. And we’ve been doing this from very early childhood, and it has been enormously successful. The overwhelming success of this model tends to produce an almost unshakable belief that it “corresponds to reality”.

(Studies of cognitive development in young children confirm this picture of how we come to understand other people’s behavior. I don’t have time to go into this here; if you’re interested, you can read the relevant literature.)

So it seems that there is indeed a straightforward naturalistic explanation of why we have such a very strong belief that other people have minds, and there is a solid rational justification for this belief. There’s no justification for regarding this belief as a manifestation of divine design or as evidence of God’s existence.

Some final notes. You say:

Quote:
Furthermore, given how strongly we all believe in other minds (in fact, believe that we know that there are other minds), I would consider any analysis that leads to the conclusion that no one knows that there are other minds to thereby have been reduced to an absurdity.
1. It is false that we all believe that we know that there are other minds, and I can prove it: I do not believe that we know that there are other minds. QED. So on this point you are decisively refuted.

2. How exactly do you get from “we all believe that we know that there are other minds” to “any analysis that leads to the conclusion that no one knows there are other minds has thereby been reduced to an absurdity”?
bd-from-kg is offline  
Old 02-10-2003, 08:51 AM   #186
Senior Member
 
Join Date: Jul 2000
Location: South Bend IN
Posts: 564
Default

Quote:
Well, my answer would be: because this is not a rational reason for thinking that it’s true. But since (based on your comments here) you seem to have abandoned intellectual integrity to the extent of believing or disbelieving things based on what’s in it for you rather than on your best rational judgment as to whether it’s true, perhaps you won’t find this reason very cogent.
Hello bd-from-kg,

I will get to the rest of your posts as soon as I can (but, again, it may be a while). But, I did want to point out that this and other comments on your part represent a radical misunderstanding of my position (perhaps as a result of poor communication on my part?). I do not hold to a pragmatic view of rational justification. I do not think one would be rational to believe something merely because believing it would benefit her or in some way aid that one in fulfilling her desires. I thought that I had made that clear in my previous post when I said that pragmatic types of justification were completely irrelevant to rational justification and when I told luvluv that I do not see rational justification in terms of practical utility. Furthermore, as I already stated, I believe that any strategy of forming beliefs based on our current human desires is likely to lead to all sorts of irrational beliefs because human affections are skewed, by sin, to the wrong desires.

My whole long discussion on pragmatic justifications for God’s existence was simply meant to show that, even on (what I took to be) your analysis of rational justification (which seemed to me to be a sort of pragmatic view), belief in God still might be construed as properly basic. I then went on to explain why I thought such a pragmatic view of justification was wrong. I never intended to advocate such a view and I thought that I had done a sufficient job in making that clear (though perhaps not). Your most recent posts help to clear up your views of rational justification, and I will adjust my responses accordingly.

God Bless,
Kenny
Kenny is offline  
Old 02-10-2003, 04:02 PM   #187
Senior Member
 
Join Date: Jul 2000
Location: South Bend IN
Posts: 564
Default

Quote:
Originally posted by rainbow walking
Actually I did give that impression by my use of "evidence". To rephrase let me ask if you have any type of sound argument to support a contention that our cognitive faculties were created as opposed to evolved?
I regard that as a false dichotomy. The God of Christianity is the sovereign Lord over all natural processes and every single detail of creation. It doesn’t matter, with respect to my argument, whether God created us via directing some “natural” process or whether he created us by means of special creation.

Quote:
I don't know Kenny, you made the assertion part and parcel of the defense of your argument, thus incorporating another premise into the mix. A sound argument requires true premises...yes?
That’s true, but my argument does not depend on accepting the premise that God exists. My argument is basically “If God exists then there is a reasonably high objective probability that belief in God is properly basic with respect to warrant for many of its adherents, and so the question of whether theism is rational cannot be divorced from the question of whether theism is true.” My argument need not make any commitment to whether or not God does in fact exist in order to establish the above conclusion.

Quote:
So you would posit an argument that our cognitive faculties were designed to develop naturally?
Well, I’m quite skeptical of the whole natural/supernatural distinction to begin with, since I have a very high view of God’s providence and see the whole of nature as being directed by God’s sovereign rule.

Quote:
Yes, you've made that claim. Beliefs that cannot be inferred from evidence, especially in the absence of inductive qualifiers, remain just beliefs from which true premises cannot be formed or inferred.
I’m not sure what this means. True premises can be inferred even from false beliefs; though such inferences will generally not convey any warrant to those premises. If you mean by this that no knowledge can be derived by means of inference from beliefs for which there is no evidence, then I’ve already shown that such a claim leads to radical skepticism, since such beliefs include belief in the reliability of induction, belief in the reliability of memory, belief in the reliability of the senses, etc.

Quote:
Perhaps in the formulation of hypotheticals, yes, but in the substantiation of these formulations, especially where those substantiations are reaching for so high a standard as "warranting" there is no properly basic manner to accomplish this without resorting to "evidence" of some kind.
Well, that’s what this whole debate is about. Merely asserting the debated proposition won’t fly.

Quote:
Also, if He wanted to make knowledge of Himself readily available, then it is likely that he would have made knowledge of His own existence properly basic.

And if he hasn't? Are you prepared to acknowledge this as a defeater?
A defeater for what? If God has not made knowledge of his own existence properly basic and one could somehow prove that, then that would certainly undercut a properly basic defense for the rationality of theism, but I haven’t seen any such proof.

God Bless,
Kenny
Kenny is offline  
Old 02-10-2003, 04:22 PM   #188
Veteran Member
 
Join Date: Jul 2000
Location: King George, VA
Posts: 1,400
Default

Kenny:

Quote:
I do not hold to a pragmatic view of rational justification. I do not think one would be rational to believe something merely because believing it would benefit her or in some way aid that one in fulfilling her desires.
OK. I thought that you were making a kind of “second-order rationality” argument going something like this: Belief in God is probably warranted (hence rationally justified) if God exists, but disbelief in God is probably warranted (hence rationally justified) if He doesn’t. So at this level we have deadlock. Under these circumstances, since there’s no objective basis for deciding whether to believe or disbelieve, it’s rational to base this decision on pragmatic considerations.

At any rate, this sort of argument is encountered very frequently, so perhaps it was worthwhile to lay out the objections to it. I’m glad to learn that you haven’t bought into this kind of thinking.

I have more to say about our “Martian gamma-ray mind-control” lunatic, but I’m taking a mini-vacation from this stuff today to tend to other pressing matters.
bd-from-kg is offline  
Old 02-11-2003, 08:02 PM   #189
Veteran Member
 
Join Date: Jul 2000
Location: King George, VA
Posts: 1,400
Default

Kenny:

In this post I want to explore the concept of “rational justification”.

It seems to me that the following is plainly an essential, fundamental feature of what is ordinarily meant by rational justification, which I propose to call the Fundamental Principle of Rational Justification:

FPRJ: If X and Y (in different worlds, perhaps) are both presented with exactly similar evidence and arrive at exactly similar beliefs in the same way, either both beliefs are rationally justified or neither of them is.
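(A compact way to write this – my own formalization, with $E$, $B$ and $F$ standing for a believer’s total evidence, the belief in question, and the process by which it was formed, and $\mathrm{RJ}$ read as “is rationally justified”:

\[ (E_X = E_Y) \land (B_X = B_Y) \land (F_X = F_Y) \;\Longrightarrow\; \big(\mathrm{RJ}(B_X) \leftrightarrow \mathrm{RJ}(B_Y)\big). \]

Nothing on the left-hand side mentions whether the belief is true or how X and Y came by their cognitive faculties, which is what makes the two implications discussed below immediate.)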

If you disagree with this principle, all that I can say is that you mean something fundamentally different by “rational justification” than I do. In that case, of course, it may well be the case that belief in God is “rationally justified” in terms of what you mean by “rational justification”, but not in terms of what I mean.

Now let’s look at two implications of the FPRJ:

(1) RJ is internal. That is, it doesn’t depend on whether the belief in question is true, or on how it is related to the state of affairs that makes it true or false. All that matters is how it is related to the evidence that one has.

(2) RJ is objective. Thus, if X and Y are presented with the same evidence and reach the same conclusion via the same thought process, either both conclusions are rationally justified or they aren’t. It doesn’t matter whether, say, this thought process is one that X was “designed” to have whereas in Y that same thought process represents an aberration from the way he was “designed” to think.

Now let’s look at your definition of rational justification (or more precisely, of what you call an “internally rational” belief, since as I said earlier I think the term “externally rational” is an oxymoron, or at any rate has nothing to do with rational justification.) You say:

Quote:
If a being’s cognitive faculties are part of a well designed plan aimed at the production of true beliefs in a particular type of cognitive environment, then that being could be said to be internally rational so long as its cognitive faculties are functioning as they were designed to function.
To see exactly what’s wrong with this, let’s take another concrete example of an RN being. Matt is an entertainer (in a technologically advanced society). One day he has a great idea for a new show. He designs Jack (a creature who looks rather like Howdy Doody but is actually much more complex) so that his beliefs about the past and present are based on his memory and his current observations (both of which are perfectly reliable, at least in the type of environment he’s designed to function in). But his beliefs about the future are based on algorithms along the lines of “Susie’s name begins with S; S is the 19th letter of the alphabet; so Susie will be the 19th to arrive on stage.” However, Matt (who has total control over Jack’s environment) is introducing a new twist to amuse the audience: all of Jack’s beliefs about the future will turn out to be true because Matt will make them so. Note that Jack’s beliefs are never based on the Principle of Induction. In fact, he doesn’t even notice whether all of his beliefs about the future turn out to be true; the idea of comparing his predictions to what actually happens is simply not part of his cognitive apparatus.

Now Jack has a twin, Jacob, who functions in the exact same way. But Matt has decided that it will be more amusing if all of Jacob’s beliefs about the future turn out to be false, and so he sees to it that they are. (To avoid conflicts, he puts them in separate shows, so that they never form the same belief about the same future event.)

Now obviously Jack’s beliefs are RJ if and only if Jacob’s are. (In fact, to make this tighter we can set up situations where they form exactly similar beliefs based on exactly similar evidence.) But clearly Jacob’s beliefs are not RJ, and therefore neither are Jack’s. Yet according to your criterion Jack’s beliefs are RJ: his cognitive faculties are part of a well designed plan aimed at the production of true beliefs in a particular type of cognitive environment, and those cognitive faculties are functioning as they were designed to function. (Note: You didn’t mention “defeaters”, but perhaps you understood the absence of defeaters to be implied by the context. However, even so, it’s not clear what would constitute a “defeater” here. And surely the first belief formed by Jack in this way couldn’t possibly have any defeaters – or at least none that Jack could know about.)

Thus your definition of RJ violates the FPRJ. In fact, it is neither internal nor objective. According to this definition, whether a belief is RJ depends on how the agent acquired his cognitive faculties, and whether his beliefs were acquired as a result of the “proper” functioning of those faculties – both of which are external criteria. Moreover, a belief can be RJ even if it was acquired by means that would immediately be recognized by any ordinary person (i.e., anyone who hasn’t been unduly influenced by Plantinga) as completely irrational, and in fact would be irrational if we used them. To put it another way, if I formed a belief about the future in exactly the same way that Jack did, based on exactly the same evidence, my belief would obviously not be RJ (by your criterion), but Jack’s would be – which is a violation of the FPRJ.

The strangeness of your conception of rational justification is reflected not only in your comments about the possibility that belief in things like the Great Pumpkin and voodoo might be “properly basic” (and hence RJ) in possible worlds where there is no more evidence for them than there is in this one (aside from the weak evidence provided by widespread belief), but also in your curious position on our believer (let’s call him Fred) in Martian mind-controlling gamma rays.

Before discussing your comments about Fred I should mention that I left out an important point: Fred’s belief that his mind is being controlled by gamma rays just “popped out of nowhere”: it had no apparent source. He has no “justification” for this belief at all in any ordinary sense; he just believes it. Nevertheless, this belief contains within itself an account of why it’s warranted (according to Plantinga’s criterion) if it’s true. I don’t see why this should change any of your answers though: they seem to be logically entailed by your definition of rational justification.

Quote:
I think the question of whether the “madman’s” (whether he really is mad, in this example, is unclear) beliefs are warranted is ambiguous.
Ambiguous! Fred’s beliefs are just as rationally justified as ours – at least as far as we can tell? Maybe we should let this guy out of the loony bin!

Quote:
If the Martians altered the original design plan ... the madman would have a different design plan than us, and would not make for a suitable comparison.
What forces you to this position is that your definition of rational justification is not objective. But does this really make sense? Most people (including me) would say that if Fred’s “design plan” has been altered in such a way as to make him strongly disposed to believe without evidence that his mind was being controlled by Martian gamma rays, it has thereby been rendered irrational, and he has thereby been rendered insane. Design plans can be compared; some of them are designed to produce rational thought processes and others aren’t. Only the former can produce rationally justified beliefs.

But what is it exactly that allows us to say that Fred’s beliefs are irrational? If it were simply that we have good evidential reasons to believe that there are no such things as Martians or mind-controlling gamma rays, these facts would constitute defeaters for his belief. But I say that his belief would be plainly irrational even in the absence of any such defeaters. Here’s why:

All that Fred actually knows is that he has a strong disposition to believe in mind-controlling gamma rays. There are a great many possible explanations for this. Even ignoring the obvious ones like hypnosis or psychosis, it could have been caused by mind-controlling beta rays trained on him by aliens from Arcturus. Or he could have been given a mind-control drug by Russian agents. The possibilities are endless. All of these possibilities are at least as reasonable as the “Martian mind-controlling gamma rays” hypothesis. Thus, while it may be the case that his cognitive faculties are part of a well designed plan aimed at the production of true beliefs in the type of cognitive environment in which they were designed to function (namely if the Martian gamma-ray hypothesis is correct), there is a far larger number of at least equally plausible hypotheses that explain his disposition to believe in the gamma rays just as well, but which do not entail that his CF’s are part of a well designed plan, etc. So, while it’s possible that his belief is warranted, there is no rational reason (based on what he knows) to believe that it is. The hypothesis that implies that this belief is warranted is very far from being the most parsimonious one consistent with the facts. And therefore his belief is not rationally justified.
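(To put the parsimony point in rough numerical terms – the hypotheses and the equal weighting below are made up purely for illustration, and the real list of rival explanations is of course open-ended – the reasoning runs something like this:)

Code:
# A toy illustration (with made-up numbers) of the parsimony point: many
# rival hypotheses explain Fred's disposition equally well, but only one
# of them entails that his belief is warranted in Plantinga's sense.
rival_hypotheses = {
    "Martian mind-controlling gamma rays": True,   # entails warrant
    "Arcturan mind-controlling beta rays": False,
    "Russian mind-control drug": False,
    "hypnosis": False,
    "psychosis": False,
}
# Treat each hypothesis as equally good at explaining what Fred knows.
weight = 1.0 / len(rival_hypotheses)
p_warranted = sum(weight for entails_warrant in rival_hypotheses.values() if entails_warrant)
print(p_warranted)  # 0.2 here, and lower still as more rivals are added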

That’s why, as I commented some time ago, Plantinga’s criterion for “warranted belief” does not entail that the belief in question is rationally justified in any reasonable sense. It can only be made to entail this by adopting a definition of rational justification that violates the FPRJ, which is to say, one that is not even remotely close to what most people mean by rational justification.

Once we understand that Plantinga’s criterion for “warrant” does not entail rational justification, it is immediately clear that it won’t do, for reasons I explained some time ago. But even if it were a satisfactory criterion of “warrant”, the fact that it doesn’t entail RJ makes it completely irrelevant to whether a belief is rationally justified.
bd-from-kg is offline  
Old 02-13-2003, 09:10 AM   #190
Senior Member
 
Join Date: Jul 2000
Location: South Bend IN
Posts: 564
Default

Hello Bd,

It may be a while until I am able to get to my full response, but, since your most recent post brought up some fundamental issues, I thought I would address it in the meantime in order to clear the way for my other responses.

Quote:
Originally posted by bd-from-kg
Kenny:

In this post I want to explore the concept of “rational justification”.

It seems to me that the following is plainly an essential, fundamental feature of what is ordinarily meant by rational justification, which I propose to call the Fundamental Principle of Rational Justification:

FPRJ: If X and Y (in different worlds, perhaps) are both presented with exactly similar evidence and arrive at exactly similar beliefs in the same way, either both beliefs are rationally justified or neither of them is.
You’re right in thinking that I reject FPRJ. I see FPRJ as fundamentally tied to the internalist/classical foundationalist paradigm which I, along with Quine and other epistemologists in the externalist/naturalized epistemology tradition (to switch from Kuhn’s paradigm terminology to the terminology of Lakatos), regard as a failed research program. If FPRJ is intuitively felt to be a fundamental principle of rationality by many in our culture, it is because this failed research program still has a heavy impact on contemporary thought. In the end, however, this research program has led to nothing but skepticism over and over again throughout its various incarnations and has been a complete and utter failure in giving an adequate account of human knowledge. If externalism seems counterintuitive and radical, it is merely because it is calling, IMHO, for (to switch back to Kuhn) a paradigm shift with respect to a formerly dominant mode of Western thought.

But, to soften the intuitive blow, I think there are examples where FPRJ seems false:

Suppose in another possible world there exist humanoid type creatures who live on a planet very similar to earth and have a very similar history and range of cultures to ours. These beings are like us in almost every respect except in a few interesting ways. First of all, these beings have extremely unreliable memories. Though the phenomenological aspects of memory in these beings are exactly similar to the phenomenological aspects of memory which we experience, the beliefs that would tend to be produced by these beings’ memory faculties are largely false. Second, and fortunately for these beings, they have an additional clairvoyant form of sensory perception which gives them direct and vivid perceptual access to every moment in their own personal pasts. Since the beliefs formed by this clairvoyant faculty are really just another form of sensory perception, they are properly basic for these beings just as beliefs formed on the basis of our sensory perception are for us. Furthermore, as part of these beings’ design plan, these clairvoyant beliefs automatically override any beliefs that would have otherwise resulted from their memory faculties. This adaptation has allowed these beings to survive and thrive for countless generations, and through it they have developed science and technology to a degree comparable to our own stage of development, in spite of their unreliable memory faculties. Now, of course, some philosophers among these beings have raised questions about why they tend to trust this clairvoyant sense over and against memory, but doing so is simply so natural and so obvious to these beings that no one seriously doubts the utility of doing so (similar to the manner in which we might raise questions about the utility of inductive reasoning or the existence of other minds or whether we can tell if we’re awake or dreaming). It is a curious philosophical puzzle for these beings, but nothing more. Memory is simply an anomaly to these beings; they place no confidence in it.

Jim is one of these beings and he happens to find himself staying in a hotel while on a business trip. Jim’s memory seems to be telling him that his name is Bob, that he has a wife named Jill, and that he lives out in the country in a place called “Ohio” (apparent memories like this are treated by these beings much as we might treat wild dreams). However, Jim directly perceives, through his clairvoyant sense, that his name is Jim, he is a bachelor and a city dweller, and he has never heard of any place called Ohio (who has?).

Bob, on the other hand, does live out in the country in Ohio and does have a wife named Jill. Bob lives in a world, like ours, where there is no such clairvoyant sense as the one employed by Jim. Unfortunately, while in a hotel exactly identical to the one Jim found himself in, Bob finds himself suffering from a severe cognitive malfunction caused by a brain tumor. This tumor causes Bob to hallucinate that he has the same clairvoyant sense that Jim does. Furthermore, Bob comes to believe about himself all the same things that Jim believes about himself. In fact, Bob believes that he is Jim in the very same situation that was described above.

I would say that, in this case, Jim’s beliefs were rationally justified whereas Bob’s were not. But if that is so, then FPRJ is false. Now there are really only two plausible ways out of this as far as I can see. First, one could claim that Bob was rationally justified in holding the beliefs that he did. To concede this, however, is also to concede that beliefs formed by people as the result of psychotic episodes, hallucinatory experiences, paranoid schizophrenia, Alzheimer’s, etc. are also rationally justified. This is clearly not desirable.

Second, one could claim that Jim’s beliefs were not rationally justified. But I don’t see any plausible reason to do so that would not also make many of our own beliefs which seem transparently rationally justified come out as not rationally justified after all. It is true that Jim trusted his clairvoyant sense over his memory, but that is no different from the numerous occasions when we ourselves override or correct memory beliefs on the basis of conflicting sensory information (since Jim’s clairvoyance is simply another sensory mechanism for him). In fact, often there seems to be a complex interaction between our senses, memory, and other sources of belief, in which the beliefs that would have come from one source are overridden by another.

You might say that Jim was not rationally justified because, for all he knew, he might actually have been Bob. But I don’t see how that is supposed to work either. Suppose Bob’s brain tumor gets worse and makes Bob completely insane so that in some possible world Bob is in the corner of a mental hospital muttering to himself, completely deranged and detached from any sense of reality. At the moment, however, Bob’s hallucinations are causing him to believe that his name is Kenny and that he is sitting in front of the computer typing a post for an internet discussion board concerning epistemology – in fact, at the moment, Bob has all the same beliefs that I do. Does the mere logical possibility that someone like Bob exists in such a situation override the rational justification that I have for my beliefs?

So, it seems clear to me that Jim was in fact rationally justified, Bob was not, and that this shows us that FPRJ is false.

Quote:
If you disagree with this principle, all that I can say is that you mean something fundamentally different by “rational justification” than I do. In that case, of course, it may well be the case that belief in God is “rationally justified” in terms of what you mean by “rational justification”, but not in terms of what I mean.
Well, one can define a term like ‘rationality’ any way one wants, but the real question is whether such a definition is meaningful in light of all that we might want to say about rationality at a fundamental level. For instance, at the beginning of this thread, some defined ‘rationality’ as ‘forming beliefs according to the evidence’, but we have seen how such a definition ultimately excludes all but trivial beliefs from being classified as rational, and hence such a definition is clearly not desirable. Likewise, I think that what you define as FPRJ conflicts with more fundamental things that one would like to say about rationality – specifically that rational justification is an essential component of knowledge – because it seems clear, at least to me, that there are genuine cases of knowledge in which FPRJ is violated if warrant is taken to entail rational justification.

Quote:
To see exactly what’s wrong with this, let’s take another concrete example of an RN being. Matt is an entertainer (in a technologically advanced society). One day he has a great idea for a new show. He designs Jack (a creature who looks rather like Howdy Doody but is actually much more complex) so that his beliefs about the past and present are based on his memory and his current observations (both of which are perfectly reliable, at least in the type of environment he’s designed to function in). But his beliefs about the future are based on algorithms along the lines of “Susie’s name begins with S; S is the 19th letter of the alphabet; so Susie will be the 19th to arrive on stage.” However, Matt (who has total control over Jack’s environment) is introducing a new twist to amuse the audience: all of Jack’s beliefs about the future will turn out to be true because Matt will make them so. Note that Jack’s beliefs are never based on the Principle of Induction. In fact, he doesn’t even notice whether all of his beliefs about the future turn out to be true; the idea of comparing his predictions to what actually happens is simply not part of his cognitive apparatus.
It’s not clear to me, in this example, whether Jack’s beliefs about the future are warranted. Specifically, it is not clear whether Jack’s cognitive faculties are part of a well designed plan aimed at the production of true beliefs. Unlike other cognitive beings who form their beliefs in such a way as to conform to their environment, Jack’s environment is being externally manipulated to conform to Jack’s beliefs. It’s not at all clear that Jack’s faculties were designed to produce true beliefs in that environment, since in most relevantly nearby possible worlds, all of Jack’s beliefs about the future turn out to be false – only through careful manipulation of Jack’s environment do they turn out to be true. I’d say that this makes for another sort of strange ‘outlier’ type case where it is ambiguous as to whether Jack’s cognitive faculties meet the criteria for warrant.

Quote:
Now Jack has a twin, Jacob, who functions in the exact same way. But Matt has decided that it will be more amusing if all of Jacob’s beliefs about the future turn out to be false, and so he sees to it that they are. (To avoid conflicts, he puts them in separate shows, so that they never form the same belief about the same future event.)
Well, of course Jacob’s beliefs are not warranted, but it is not clear that Jack’s are either.

Quote:
Moreover, a belief can be RJ even if it was acquired by means that would immediately be recognized by any ordinary person (i.e., anyone who hasn’t been unduly influenced by Plantinga) as completely irrational, and in fact would be irrational if we used them.
Well, you might as well blame Quine, Goldman, or heck, even Thomas Reid, for the “bad” influence – Plantinga isn’t the only externalist on the block. Likewise, I think you have been unduly influenced (whether directly or indirectly) by the likes of Rene Descartes and John Locke. And yes, since I reject FPRJ, it is possible that the rational belief forming mechanisms of one being would be irrational if employed by another. I fully and freely accept that consequence.

As for Fred…

Quote:
Ambiguous! Fred’s beliefs are just as rationally justified as ours – at least as far as we can tell? Maybe we should let this guy out of the loony bin!

No, as far as we can tell, Fred is a lunatic. Every estimate of the probabilities, on the background information that we would have in such a situation, would indicate to us that the most probable explanation for Fred’s beliefs is that Fred is insane. We should leave him locked up (if he is a danger to himself or others) and try to help him. The remote logical possibility that Fred’s beliefs are warranted does not change that.

Quote:
What forces you to this position is that your definition of rational justification is not objective.
I’m not sure what you mean by ‘objective’ here. My definition of rational justification is objective in the sense that any two agents with sufficient knowledge of Fred’s total circumstances (which would vastly exceed the knowledge which is possible for us) could agree concerning whether Fred’s beliefs were rationally justified or not.

Quote:
Design plans can be compared; some of them are designed to produce rational thought processes and others aren’t. Only the former can produce rationally justified beliefs.
They can be compared to see if they tend to reliably produce true beliefs in their respective cognitive environments. Perhaps some are more efficient than others in that respect.

Quote:
All that Fred actually knows is that he has a strong disposition to believe in mind-controlling gamma rays. There are a great many possible explanations for this. Even ignoring the obvious ones like hypnosis or psychosis, it could have been caused by mind-controlling beta rays trained on him by aliens from Arcturus. Or he could have been given a mind-control drug by Russian agents. The possibilities are endless. All of these possibilities are at least as reasonable as the “Martian mind-controlling gamma rays” hypothesis.
With suitable adjustment, all of these possibilities could be made to apply to even our most mundane sensory and memory beliefs – what’s the point?

Quote:
Thus, while it may be the case that his cognitive faculties are part of a well designed plan aimed at the production of true beliefs in the type of cognitive environment in which they were designed to function (namely if the Martian gamma-ray hypothesis is correct), there is a far larger number of at least equally plausible hypotheses that explain his disposition to believe in the gamma rays just as well, but which do not entail that his CF’s are part of a well designed plan, etc. So, while it’s possible that his belief is warranted, there is no rational reason (based on what he knows) to believe that it is. The hypothesis that implies that this belief is warranted is very far from being the most parsimonious one consistent with the facts. And therefore his belief is not rationally justified.
This argument could be easily adjusted to argue against the rational status of our sensory and memory beliefs. I’m not convinced that the existence of an external world replete with complex structures such as galaxies, molecules, and subatomic particles with very strange properties is necessarily the most parsimonious hypothesis to explain our immediate experiences either (but I will say more on this in my other responses).

God Bless,
Kenny
Kenny is offline  
 
