FRDB Archives

Freethought & Rationalism Archive

The archives are read only.


Old 01-31-2003, 11:17 AM   #121
Veteran Member
 
Join Date: Jul 2000
Location: King George, VA
Posts: 1,400
Default

Theli:

As I said before, I’m not interested in how you personally have chosen to use words like “true”, “truth”, “objective” and “subjective”. Much of what you say is true, but there are different ways to say it depending on how one chooses to define the terms involved. I personally don’t find it useful to use the term “subjective” in such a way that all “truth” is subjective. For example, consider:

(1) Mozart was born in Salzburg in 1756.
(2) Mozart’s fortieth symphony is superior to anything composed by Salieri.

I prefer to use “objective” and “subjective” in such a way that I can say that the first statement is objectively true whereas the second is subjectively true. It seems to me that there is an important difference between the senses in which these statements are “true”, and I find it useful to use the terms “objective” and “subjective” to represent it. But if you find it more useful to use them in some other way, so that both statements are subjectively true, no problem. In the same way, if you prefer to use “truthful” in such a way that you can say that “the model most consistent with our observations is the most truthful one”, fine. I don’t use it that way. I use it in such a way that I can say “the model that corresponds most closely with reality is the most truthful one”. To each his own. Just be careful not to use these words in different senses in the same argument. This is called the “fallacy of equivocation”. It’s very easy to slip into this fallacy when you’ve adopted nonstandard terminology.

Quote:
Could you please show me this circle in my argument?
I’ve already done so, several times. As I said, getting you to see this circularity seems to be a hopeless undertaking, but I’ll give it one more try. The following passage is as good an illustration as any.

Quote:
... the idea of our memories being consistent gets proven every minute of my life. Every time I click on the powerbutton on my computer it starts, just as my memories tells me that it should.
Well, how do you know that? How do you know that every time you’ve clicked on the powerbutton on your computer it has started? Why, your memory tells you that it has! Your memory also tells you that on each occasion (after the first) it has told you that this had happened on all the previous occasions: you remember remembering that and using it as a basis for predicting that the same thing would happen the next time. In effect your memory is testifying on its own behalf: it’s telling you, “You can trust me, because I’ve been trustworthy in the past.” But what evidence do you have that it’s telling you the truth about having been trustworthy in the past? None whatsoever, except for the testimony of your memory. But unless you assume that your memory is generally reliable, this is no evidence at all.

Look. Suppose that you woke up one day with amnesia, and the first person you saw was a guy who claimed that you owed him $10,000. When you question his claim, he says, “But you can trust me. I always tell you the truth!” When you question this statement in turn he replies, “But I’ve told you lots of things in the past, and they’ve always turned out to be true. How much more evidence do you need?” And when you question that statement, he says, “How can you doubt me? I always tell you the truth!” Would you be convinced and give him the $10,000? If so, I have a bridge to sell you.
bd-from-kg is offline  
Old 01-31-2003, 01:46 PM   #122
Senior Member
 
Join Date: Jul 2000
Location: South Bend IN
Posts: 564
Default

Hello bd-from-kg,

Your last post to me was indeed very interesting and thought-provoking. I do not believe that you succeeded in refuting Plantinga’s characterization of warrant, however. Furthermore, I do not believe that your characterization of rationality is either necessary or sufficient for an analysis of warrant, and it follows from this that your characterization of ‘rationality’, while perhaps reflecting a certain noble ideal, is not necessary for an analysis of human rationality either. Finally, it is not clear to me, given your comments concerning the ‘rational program’, that belief in God is not a belief which could be construed as properly basic – at least, excluding belief in God as properly basic while allowing beliefs such as belief that reality is as it appears to be, or belief in the existence of other minds, as properly basic would seem to me to constitute an instance of special pleading without further clarification and refinement of your ideas. It is this last point I wish to concentrate on in my first set of comments regarding what you have written. In my second set of comments I will defend Plantinga’s analysis from the charges you have raised against it.

Again, I apologize for the extreme length, but I do not think it can be avoided.

Part I: Your Characterization of Rationality and Properly Basic Belief in God

Quote:
Correct, or close enough anyway. Rationality consists of a commitment to following a strategy designed to optimize one’s ability to function effectively – i.e., to fulfill one’s desires.
I think there is significant vagueness in the phrases “optimize one’s ability to function effectively” and “fulfill one’s desires” and that this vagueness has a significant impact, on your analysis, for the question as to whether or not it is rational to believe in God in a properly basic manner. I will say more about this as my comments develop.

Quote:
The “epistemological” part of this is to follow a strategy best calculated to produce true beliefs (and avoid forming false ones). Obviously this means a strategy that produces true beliefs, etc. in as many worlds as possible (since we don’t know which one we are in) - in particular in those in which there is a reasonable possibility of functioning effectively.
Here I think it is worth making a few preliminary observations. First, it is worth noting that there might be a considerable amount of tension between the dual goals of maximizing true beliefs and minimizing false ones in as many worlds as possible. For instance, one could be extremely cautious in forming any beliefs whatsoever except those which are absolutely necessary to get along. And to “get along” one might not need to believe much of anything – one could be an agnostic concerning just about everything from the reliability of the senses, to the existence of other minds, to the reliability of inductive reasoning, to the reliability of memory – so long as one conditions one’s behavior as if (because one has no choice but to do so) certain facts about the world which would be necessary to have a reasonable chance of getting along were true (while simultaneously withholding judgment concerning whether or not these facts are true). By following such a strategy, one might avoid holding a number of false beliefs, but one might also miss out on holding a great number of true beliefs. On the other hand, the less stringent one becomes in one’s standards of beliefs, the more likely one is to obtain true beliefs but also the more likely one is to obtain false ones.

Second, there is need of some further qualifications, it would seem, on the condition “in as many worlds as possible.” The God of classical monotheism, for instance, is understood to be a necessary being. In other words, if classical monotheism is true, a God-like individual exists in all possible worlds. But, if that’s true, then a strategy that includes believing in God and taking that belief to be properly basic will yield a true belief in every single possible world. I’m not arguing that this would be a good reason to believe in God, but am pointing it out to show that you need some further restriction on your criteria besides following a strategy that is likely to maximize true beliefs (while avoiding false ones) in as many worlds as possible.

Third, you further restrict the range of possible worlds to which the rational strategy is to be tailored to those worlds in which “there is a reasonable possibility of functioning effectively” (and this is important because, perhaps, in the vast majority of possible worlds, there is no reasonable chance of functioning effectively). So, I suppose this restriction might make certain beliefs like belief in induction properly basic automatically (thereby making it a belief that our agnostic friend in the above example might as well just believe in). However, your notion of being able to “function effectively” is sufficiently vague so as to raise questions about many other types of beliefs which are often construed as properly basic. This leads me to my comments on your definition of “function effectively” below.

Quote:
I’m not sure that the phrase “function effectively” adequately captures what I’m trying to get at, so let me try to expand on this a bit. Functioning effectively, in the most basic sense (which is what I’m talking about here) means being able to make choices that have a good chance of bringing about desired results.
Here you define ‘functioning effectively’ as ‘being able to make choices that have a good chance of bringing about desired results’ but I am curious as to what “desired results” you are referring to or whether there are any normative standards which delineate what these desired results should be (at least with respect to functioning rationally).

Perhaps what you have in mind by the phrase ‘desired results’ is our desire to survive and get by with a reasonable amount of pleasure and comfort. But, as I pointed out, it is possible to accomplish this goal without believing in things such as the reliability of inductive reasoning, the reliability of memory, etc. One could be an agnostic about all such things and merely behave as if they were true, yet you seem to affirm that such beliefs are properly basic. But, perhaps this is where the “those worlds in which there is a reasonable chance of functioning effectively” stipulation comes in. Beliefs such as belief in the reliability of induction and the reliability of memory are beliefs that are true in all worlds in which there is a reasonable chance of functioning effectively, so they ought to be considered to be properly basic – one might as well just believe them.

Even if beliefs such as belief in the principle of induction or belief in the reliability of memory are necessary for functioning effectively, however, there are many other types of beliefs which we would like to consider properly basic which would seem excluded from this analysis. One such belief would be the belief that reality is as it appears to be. One could be an agnostic, for instance, about whether the world is as it appears to be or whether one is in a matrix-type simulation. So long as the reality in which one finds one’s self operates according to predictable rules and there is the possibility of figuring out how to navigate through and manipulate those rules to one’s advantage, there would be a reasonable chance of “functioning effectively” -- as long as we are understanding the phrase ‘desired results’ to refer to being able to survive and get by with a reasonable amount of pleasure and comfort -- whether one’s reality is as it seems or not.

Perhaps, however, you mean more by the phrase ‘desired results.’ Perhaps you mean for ‘desired results’ to include not just getting by, but also to have a reasonable chance of gaining a significant understanding of the reality in which one lives. Certainly that would seem like a rather noble goal, and that goal would not be readily obtainable unless reality is in some sense as it appears to be. But, it is not clear to me that taking belief in God as properly basic would be excluded from this goal (if there were a successful version of the transcendental argument for the existence of God, it would be necessary, but that’s a whole other topic).

Certainly, if God exists, then missing out on believing that fact means missing out on believing something very deep and significant and important about reality. Furthermore, whether one believes in God or does not believe in God has a significant impact on the beliefs that one has about everything else. It makes a big difference in one’s beliefs whether one believes that the contents of the universe are as they are because of the will of a personal, loving, good being or whether those contents are just the result of the mindless shufflings of impersonal forces acting on matter. Such a difference is at least as significant as the difference between the belief that reality is as it appears to be and the belief that one is living in the matrix. Both belief systems might come to an agreement about certain superficial facts (just as they would in the belief system that says reality is as it appears to be and the one that says it is a matrix type simulation), but any deeper beliefs about those facts and their significance will be completely different. So, if I find myself with a strong inclination to believe that God exists, why shouldn’t I go ahead and believe it? After all, if God does exist then it is likely that He might have placed such an inclination in me, and if I don’t believe in God in response, I might drastically undermine my goal of coming to a significant understanding of the reality in which I live. Of course you might argue that believing in God entails the same risk, but the risk is at least as great either way, so why shouldn’t I believe as I am inclined in this matter?

Furthermore, if ‘desired results’ includes more than just mere survival and getting by with a reasonable amount of pleasure and comfort, what limits are to be placed on what should be included? Suppose my desire is to align myself with the ultimate purpose and meaning of the universe (if there is such) or to engage in activities which have some sort of eternally enduring value (if it is possible to do so), or come to a deep understanding of the purpose and destiny of the universe. Suppose I am convinced that the only way such desires could be fulfilled is if God exists and if I believe that God exists. Why shouldn’t I just go ahead and believe in God?

Finally, from a Christian perspective, our ‘desired results’ are themselves skewed by the reality of sin. What we want for ourselves is not what we should want (and given the prevalence of greed and oppression in this world, I do not find this at all difficult to believe), and we require God’s grace to reorient us to those things which we ought to desire. Even without a belief in Christianity, it seems plausible that there might be moral norms for what we ought to desire. And, all my observations of the world confirm to me that the affections of most human beings are horribly skewed toward the wrong things. Thus, it seems that “functioning effectively” likely involves something else besides being able to fulfill one’s desires – it also requires having oneself oriented toward the right desires. And, as I said, according to Christianity, the only way we can have our desires reoriented to the right things is by means of God’s grace, which enters our lives by means of faith (i.e. trust) in God. Thus, we see the entanglement of the de facto question and the de jure question once again with respect to Christianity. If Christianity is true, then we cannot “function effectively” without having faith in God.

Before I move on to the next part of my essay, I would like to also address your comments on belief in other minds in the context of the above discussion. I think that there are very deep relationships between the problem of other minds and the objections that atheists often raise towards the view that theism is rational, and that some of these relationships show up in your comments. So, I think it is worth spending some time here. It should be noted from the outset that I consider belief in other minds (or at least belief that the tendency to infer their existence based on certain sets of criteria is reliable) to be a properly basic belief that need not be inferred from evidence. Furthermore, given how strongly we all believe in other minds (in fact, believe that we know that there are other minds), I would consider any analysis that leads to the conclusion that no one knows that there are other minds to thereby have been reduced to an absurdity.

Quote:
Finally, let’s consider the belief in the existence of other minds. It seems to me that this is simply a powerful unifying hypothesis that helps us to make sense of our experiences. Thus it’s no more “irrational” than the belief that the house I left this morning will still be there when I return tonight - or more generally, my belief in the persistence of physical objects.
There is one major difference between the hypothesis that physical objects persist and the hypothesis that other minds exist – the former can be properly concluded on the basis of inductive evidence while the latter cannot. Experience tells us that objects that exist at one point in time, all else being equal, are generally capable of being found at another point in time (of course there are complications – objects can be destroyed, inexplicably lost, some objects (such as certain subatomic particles) spontaneously decay or even spontaneously pop in and out of existence, etc. – but the hypothesis of the persistence of physical objects -- when surrounded by a suitable network of auxiliary hypotheses -- is repeatedly confirmed by experience). We have a number of illustrations of the persistence of physical objects (at least in terms of their being “findable” – whether they exist when no one’s looking is another question and is perhaps closer to the question of other minds), but such is not the case with other minds.

As far as minds are concerned, each of us is experientially aware of one and only one example. We observe correlations between certain sets of external stimuli, certain sets of personal behaviors, and certain sets of mental phenomena in the case of our own person. We have a strong tendency to infer that such correlations hold in others who appear to be like ourselves in certain relevant ways. However, such an inference is not appropriate on the regular canons of inductive reasoning. We do observe similar correlations between certain sets of external stimuli and certain sets of behaviors in others that are similar to the correlations we observe in ourselves, but we definitely do not observe these sets of stimuli and sets of behavior to correlate to mental states in others in a way that is similar to the correlations that we observe in ourselves. To inductively infer that such correlations hold in others, then, is to make an inductive generalization from one and only one observed case to a universal proposition, but such a generalization is entirely out of place – like assuming that all trees grow apples because one has observed one tree that grows apples – in the proper canons of evidence.

Now, you refer to belief in other minds as “a powerful hypothesis that helps us make sense of our experience.” If this is a claim that the existence of other minds is somehow justified via something like inference to the best explanation, then I think the criticism is essentially the same as the one above – we only have one observed case upon which to base our explanatory hypotheses. If this claim is more modest and is simply the claim that belief in other minds is a useful way for us to make predictions about the behaviors of others, then I would argue that such an hypothesis is entirely superfluous to that end. All we need for predictive purposes is a good map of stimuli/behavior correlations, and such is easily obtained from experience without speculating about hidden unobservable correlations with ethereal phenomena (i.e. mental events) to which we have no empirical access. Furthermore, this claim doesn’t seem to square well with what most of us actually believe about other minds. We are confident that other minds exist – certain, in fact (at least as much as we are of any other contingent fact), but such a confidence would not be justified if belief in other minds were simply taken as a good working assumption for making predictions.

Quote:
Of course you might object that we can get along without it; that we could just think of other people (and animals) as complex machines that function just as if they were “controlled” by minds.
Actually, we don’t need to think in any such way to get along without believing in other minds. We don’t need to say that other beings function “just as if they were controlled by minds.” In fact, many philosophers and scientists who see mental phenomena as just a sort of artifact of deterministic bio-physical processes (a view which I do not share, BTW) would deny that anyone acts as if he or she were controlled by a mind (because minds, on this view, have no causal efficacy). All we need to say is that certain types of stimuli for certain types of entities correlate to certain types of behaviors. To say more is to go beyond what the evidence warrants.

Quote:
It may be that thinking of other people as having minds is the most natural (for us) way of conceptualizing and understanding their behavior, and that there are other hypotheses that would “work” just as well, and are just as simple (if not simpler) in some sense, but that they’re highly uncongenial to the human brain. But even if so, why should we worry about them? We have a way of understanding other people that is highly congenial and intuitive. As long as it works, why think about discarding it in favor of something else that we’d have trouble “getting our minds around”?
I do think that belief in other minds is the most natural way for us to understand the behavior of others (because we are hardwired to do so in such a manner). We believe that other people have minds, not because the evidence warrants such a conclusion or that we need such a hypothesis to make successful predictions, but because that is how we are predisposed to believe. But, if such is the case, it would seem that such belief is the perfect example of what your analysis would deem a “nonrational belief.” We don’t need it to function effectively; we can’t infer it from the evidence; we can make successful predictions without it, etc. The only reason we believe it is because that is how we are predisposed to believe.

Quote:
The point of hypotheses of this sort about the “real world” is that they further the “rational project” I mentioned earlier – that is, they help us predict the effects of our actions. Hypotheses that we have trouble working with would interfere with this project rather than helping it along.
As I already pointed out, we could easily make predictions about the actions of others without believing in other minds. We don’t need such a belief for the “rational project” as you have described it. Perhaps human beings are not capable of consistently thinking about the behaviors of others in any other way, however. Does that make such a belief rational? Well, elsewhere you have argued that just because a being is not capable of believing other than it does that does not mean its beliefs are rational if they do not conform to the principles of rationality which you have set forth, and as we have seen, belief in other minds does not conform.

But, perhaps there is a way out here. Perhaps we could broaden the scope of “desired results” included in the notion of “functioning effectively” to include the possibility of meaningful relationships with others. Such relationships are only possible in worlds where other minds exist and where we are capable of recognizing them, so such beliefs would thereby be included by this set of criteria. Once again, however, this sort of broadening opens up the question of what boundaries ought to be placed on what should be included in the relevant set (with respect to functioning effectively) of desired results. Suppose I desire a meaningful personal relationship with my Creator or with the Ultimate Reality which underlies the universe. Such would only be possible if my Creator or the Ultimate Reality were in some sense personal, so why should I not believe that it is personal – why shouldn’t I believe in God?

Now, on to the second part…
Kenny is offline  
Old 01-31-2003, 01:49 PM   #123
Senior Member
 
Join Date: Jul 2000
Location: South Bend IN
Posts: 564
Default

Part II: Warrant: Plantinga’s Analysis and Yours

First, I want to look at your characterizations of rationality from the perspective of what it means for a belief to have warrant. Specifically, I want to focus on the questions of whether your analysis is necessary and whether it is sufficient for a characterization of warrant.

Given what you have said about the Gettier problem, I think you will agree with me that your analysis is not sufficient for a characterization of warrant. Gettier-type examples show that it is possible for one to meet appropriate standards of rational justification and still not obtain warrant for one’s belief. But, I think your account fails to be sufficient in an even deeper manner than the manner in which Gettier-type examples illustrate.

Essentially, your justification for holding beliefs such as belief in induction and belief in the principle of other minds is a pragmatic justification. You describe this as follows:

Quote:
First, we’re developing a strategy that involves assumptions that are justified on the grounds that they are essential to what I call the “rational project”: the project of figuring out how to function effectively (if possible). Second, this strategy is “world-independent”. It doesn’t matter what world we happen to find ourselves in; the strategy I’m talking about is optimal in the sense that it will tend to produce good results in any world where it is possible to get good results, and although some other strategy might work better if we happen to live in a world to which it is “tuned”, there’s no reason to believe a priori that we inhabit such a world.
Along these lines, you say elsewhere “we have no way of knowing a priori that we are in a world where such processes tend to produce true beliefs.” In other words, we don’t know that we are in a world where the principle of induction holds or that memory is reliable (and to say that we don’t know these things is to say that these beliefs lack warrant), but we are justified in believing these things on the pragmatic grounds that we must believe them to have any hope of functioning effectively. In short, your analysis turns out to be a sort of epistemological version of Pascal’s wager. Now, such pragmatic justifications are fine in some contexts, but such justifications are not the sort that are really relevant for rational justification.

Say, for instance, that Joe is being chased through the woods by a psychotic serial axe murderer. Joe is just barely staying ahead of the axe murderer and finds himself approaching a very deep ravine. Joe must jump the ravine to escape being chopped to pieces by the axe murderer, but if Joe fails to make it across the ravine in his jump, Joe will plummet to his death. Joe has no rational means of determining whether he can make the jump or not, though he has no choice but to try. Furthermore, if Joe does not convince himself that he can jump the ravine, he will hesitate and will definitely not have enough momentum to make the jump. Now, in this particular set of circumstances, Joe has no idea whether he is in a world where it is possible for him to jump the ravine or not, but only in those worlds where it is possible is it also possible for Joe to continue “functioning effectively.” Furthermore, Joe must believe that he is in a world where it is possible for him to jump the ravine in order to continue “functioning effectively.” So, Joe goes ahead and believes that he can make the jump and makes his attempt (does he succeed or fail – find out next week). Now, certainly there are many senses in which Joe could be said to be “justified” in believing as he does, but does that mean that Joe is rationally justified? In fact, it does not. Joe still does not know whether he can jump the ravine. The basis on which Joe believes that he can jump it has no relevance whatsoever to Joe’s knowledge concerning the matter.

Likewise, you have provided a pragmatic justification for believing in the principle of induction, the reliability of memory, etc., but at the end of the day, on your account, we still do not know whether these beliefs are true. In fact, the type of justification you present has no relevance whatsoever to the type of justification involved in knowing things. And since we really have no knowledge of these basic beliefs, we really have no knowledge of any of the beliefs that are based on them. So it seems that, in the end, your analysis bottoms out into a radical skepticism.

On Plantinga’s externalist type of analysis, however, we do know that we are in a possible world where the principle of induction holds, where our memories are reliable, etc. These beliefs are warranted (and therefore, if they are true, known) because they are formed as a result of the proper functioning of our cognitive faculties, which are aimed at the production of true beliefs in the type of environment where those faculties were designed to function, and there are no sufficient defeaters for these beliefs. Now, we may not know that we know that such beliefs are true, but why should knowing that we know something be a requirement for saying that a belief is warranted? To say such would only throw us into an unprofitable infinite regress. We can’t get behind our cognitive faculties and the nature of the world to independently check out whether or not our beliefs are warranted, but what does that matter to whether or not our beliefs are in fact warranted?

Before I say more about Plantinga’s analysis, however, I want to explore the question of whether or not your characterization of rationality is necessary for an account of warrant (even though it is not sufficient). Here my above comments still have some relevance, because if I am correct in my evaluation that the type of justification you describe is irrelevant to the type of justification involved in knowledge, it is difficult to see how it could be a necessary condition for knowledge. Beyond that, however, it seems clear to me that your characterization of rationality is not necessary for warrant, given that beings who do not make use of the type of rationality you describe still know things.

Most people (even very intelligent ones) have never given such matters any deep thought. Most people believe in the principle of induction and the reliability of memory, not because they’ve thought through the matter and have come up with the type of justifications you offer in your analysis, but simply because they are predisposed to believe such things. Yet, it would seem absurd to say that the beliefs of such persons based in inductive reasoning and reliance upon memory are not warranted (which would further entail that they do not know much of anything). This becomes even more acute when we consider the fact that beings which are incapable of thinking through the types of justifications you offer still know things. Infants, small children, and mentally handicapped individuals obviously still know things about the world. In fact, it seems apparent (as anyone who has had pets or closely studied animal behavior could attest) that even many of the higher forms of non-human animals have warranted beliefs about the world.

So, it seems to me that your analysis is neither necessary nor sufficient for an analysis of knowledge -- that it is, in fact, irrelevant to such an analysis. But what about Plantinga’s analysis? Can it withstand the criticisms that you have made of it?

First, I think a basic confusion may have resulted from an earlier comment I made and that some clarification is in order. Earlier, I said:

Quote:
I would argue that if the belief forming processes of RN beings have a high objective probability of producing true beliefs in the cognitive environment in which they find themselves, then they are rational, at least with respect to that particular cognitive environment.
from which you seem to have inferred that I was saying that to “have a high objective probability of producing true beliefs in the cognitive environment in which [one finds oneself]” is a sufficient condition for warrant and rationality. However, I was not claiming that such a condition is sufficient. Rather, I was saying this against the background of our discussion, where it was stipulated that the belief forming mechanisms of these beings were properly functioning and that they were part of a design plan aimed at the production of true beliefs in the type of environment in which they were designed to function. The “high objective probability of producing true beliefs” condition is the “well” condition placed in front of “design plan” in Plantinga’s characterization of warrant.

I would not claim (nor would Plantinga) that mere reliability of one’s cognitive faculties in a particular environment is sufficient for warrant. Clearly, there are many sorts of processes which are only accidentally reliable. A thermometer stuck on 70 degrees, for instance, may be 100% reliable in an environment in which it is always 70 degrees, but clearly the thermometer is not functioning properly nor does its report of temperature have any causal relationship to its environment. Likewise, belief forming mechanisms might be accidentally reliable though they are either not the result of proper function or not the result of a well designed plan aimed at the production of true beliefs. It is this latter condition that is relevant to your ‘c(n) beings’ counter example.

If the belief forming mechanisms of a being are part of a well designed plan, but that design plan is not aimed at the production of true beliefs, then the beliefs of such a being formed on the basis of such mechanisms are not warranted, even if they are reliable. I already alluded to such a possibility in my essay when I said:

Quote:
Then again, perhaps they are not part of a well designed plan aimed at the production of true beliefs. Perhaps my tendency to believe in other minds and the principle of induction is merely some sort of evolutionary adaptation aimed solely at my survival and indifferent to whether or not it produces true beliefs. In that case, my beliefs in these areas would not have warrant.
Such would be true whether the above processes were reliable or not. So how do we determine whether the design plan is aimed at the production of true beliefs (and not merely indifferent to the aim of producing true beliefs)? As a first approximation (I think there probably needs to be some refinements to this, but it works for now), I would say we can analyze this issue by asking the counterfactual question: “If the sort of beliefs the cognitive faculties in question were designed to produce had turned out to be largely false, would the designer (whether a conscious being or an unconscious process) still have furnished those cognitive faculties with the tendency to produce such beliefs?” If the answer to the above question is yes, then those cognitive faculties which are being referred to are part of a design plan which is indifferent to, and not aimed at, the production of true beliefs. As a result, beliefs formed on the basis of such cognitive processes lack warrant, regardless of whether or not they are reliable.

Now, how does this impact your ‘c(n) being’ counter example? You made the following statements about the design plan of the c(n) beings:

Quote:
But the aliens didn’t “program” C(10,001) to come to believe this because it’s true. They couldn’t care less what happens to the Earth; they have their own reasons for wanting T(10,001) killed. Just the same, they happen to know that it is true…

But the reasons that C(10,001) has the beliefs that he does about T(10,001) have nothing at all to do with the states of affairs that make them true. They certainly don’t have the “right kind” of relationship, because they don’t have any relationship.
It is clear from these descriptions that C(10,001)’s cognitive faculties in this respect are part of a design plan which is completely indifferent to whether or not it results in the production of true beliefs. Consequently, C(10,001)’s beliefs in this respect fail to fulfill the “part of a well designed plan aimed at the production of true beliefs” condition in Plantinga’s analysis and therefore fail to meet Plantinga’s conditions for warrant. So, your counterexample does not apply.

Quote:
Presumably if you believe that the fact that someone’s belief forming processes have a high objective probability of producing true beliefs in the cognitive environment in which they find themselves makes those beliefs rational, you also believe that the fact that someone’s belief forming processes have a high objective probability of producing false beliefs in the cognitive environment in which they find themselves makes those beliefs irrational.
Not really. I think it is helpful to distinguish between internal rationality and external rationality. If a being’s cognitive faculties are part of a well designed plan aimed at the production of true beliefs in a particular type of cognitive environment, then that being could be said to be internally rational so long as its cognitive faculties are functioning as they were designed to function. However, this being’s beliefs might fail to be externally rational, even if its faculties are functioning properly, if there is something amiss in that being’s external cognitive environment. This does mean, however, that internal rationality is design-plan relative and may vary from being to being, and that external rationality is environment relative and may also vary from being to being.

Quote:
Are you familiar with the movie The Matrix?
Of course! Science fiction, philosophy, religion, kick-butt action sequences, cool special effects – what more could one ask for in a movie?

Quote:
OK, now let’s say that Peter really did live in late-twentieth-century America, whereas Paul thinks that he’s living in it because he’s plugged into the Matrix. Both of them have (or had) a number of identical beliefs about America based on their experiences. However, while the vast majority of Peter’s beliefs were true, almost all of Paul’s are false. According to your concept of rational justification, Peter’s beliefs were rational, but Paul’s are not. Yet this flies in the face of what we ordinarily take to be rational justification for a belief.
I would say that what we usually have in mind when we speak of ‘rationality’ is internal rationality. Paul’s beliefs are internally rational, but not externally rational.

God Bless,
Kenny
Kenny is offline  
Old 01-31-2003, 02:48 PM   #124
Veteran Member
 
Join Date: Jan 2002
Location: Sweden
Posts: 2,567
Default

bd-from-kg...

Quote:
I personally don’t find it useful to use the term” subjective” in such a way that all “truth” is subjective. For example, consider:
Which definition of the word one uses should reflect the point one is trying to convey, but I don't see how yours can be more useful than mine on this topic. That is, observations, concepts, evidence and reality.
If we were to use a flat dictionary-definition we would get:

Proceeding from or taking place in a person's mind rather than the external world.
Particular to a given person; personal
Existing only in the mind; illusory.


The example you gave me seems to point at opinions and facts. Saying that opinions are subjective and facts are objective. But from what I understand they are of the same nature and originate from the same source. From observing an object you can recognize certain patterns and sensations related to that object. Let's for example take a rose; from observing the rose we extract 2 attributes, red and beautiful. And by your own reasoning, red is an objective fact, while beautiful is a subjective opinion held by you.
Now, what I don't understand is what (except from how we define those concepts) is the difference between those attributes.
They are both drawn from sensory input.
They are both consistent, you draw the same conclusion each time you look at the rose.
Neither of the attributes is inherent in the object itself; each is in fact a representation (concept) in your mind.
The way I see it is that the opinion vs. fact distinction that we mostly use is very elusive and unclear. As I stated before, the more consistent information a model/claim provides, the more truth it has. And also, the more "objective" it gets.
If you were to find a rock on a road and call it "big", it's unclear whether you are stating a fact or an opinion. So, am I correct in understanding that your definition of objective is based on how descriptive and precise a claim is?

Quote:
I use it in such a way that I can say "the model that corresponds most closely with reality is the most truthful one"
This would be a very good definition if it was possible.
We do not know the nature of reality outside of our sensory input and logic. So, reality for us is whatever our observations provide.

Quote:
Just be careful not to use these words in different senses in the same argument. This is called the “fallacy of equivocation”.
Yes, I know. When I have used a different definition, I have tried my best to make it clear. Words are a very good tool for miscommunication.
About standard: from whatever written definition I have read, mine seems consistent enough. Although it's not the one usually spoken.

Memory as evidence?

There seems to be a problem here too. I'm not arguing that my memory is true because I remember that it is true; that would be circular. What I am saying is that it is true because it is consistent with what I am observing at the moment, and that it shows itself to be an effective tool.

Quote:
it’s telling you, “You can trust me, because I’ve been trustworthy in the past.”
I don't think my memory would argue for its own truthfulness in that manner (assuming that it could argue). It consists of several similar fragments of information, some of which are fully testable in the moment. Information that I do test each time I wake up, eat breakfast and go to work, even without realizing it. If there is information in my long term memory that seems inconsistent, then I do have a reason to question that information.

Quote:
None whatsoever, except for the testimony of your memory.
I think a strawman is forming here; you seem to stray a bit too far into the analogy of memory as a person trying to convince me. You might find arguments that support your analogy while not your claim.

Thank you for taking the time to reply.
Theli is offline  
Old 01-31-2003, 02:51 PM   #125
Veteran Member
 
Join Date: Jan 2002
Location: Sweden
Posts: 2,567
Default

Kenny has been busy.
Theli is offline  
Old 01-31-2003, 03:36 PM   #126
Veteran Member
 
Join Date: Jul 2000
Location: King George, VA
Posts: 1,400
Default

Kenny:

Thanks for the replies. Naturally it will take me a while to put together an answer, just as it did you. I may, however, have some short comments about what you didn't respond to a bit sooner.
bd-from-kg is offline  
Old 01-31-2003, 06:46 PM   #127
Veteran Member
 
Join Date: Jul 2000
Location: King George, VA
Posts: 1,400
Default

Kuyper:

Quote:
To say that an omnipotent being or a being with all the claimed attributes of God is beyond all human experience runs exactly counter to the claim of theistic belief in general and Christian belief in particular. The claim of the Christian (and I'm in that camp) is that we humans can and do experience God every day. His attributes are not considered impossible to imagine at all.
Well, yes. Those who claim to have witnessed an unusual phenomenon (i.e., who are making the very claim in question) can always say: “But the phenomenon has indeed been experienced by humans (namely by us), so what are you talking about? How can you say that it’s beyond human experience?” This is true whether the phenomenon is telepathy, alien abductions, ghosts, demonic possession, God, or what-have-you. Obviously the specific claims in question can’t be counted as part of “human experience” here.

My exposition of Bayes’ Theorem makes this a little clearer. It distinguishes between “k”, the background information, which by definition excludes anything that might be considered evidence for the hypothesis “h”, and the evidence “e” for the phenomenon in question. In other words, the “a priori” likelihood of h is to be estimated based on what we know aside from the evidence for h. (A clarification: Bayes’ formula can be applied recursively, with P(h|k) being “reset” to P(h|e&k) from the previous “round” each time, “k” being redefined to include the “old” evidence, and “e” being redefined to be the “new” evidence. In this case “k” doesn’t exclude all relevant evidence after the first round. But this gives the same result as including all relevant evidence in one go, and the latter procedure is much simpler to think and talk about.)
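The recursive procedure described in the parenthetical clarification can be sketched numerically. This is only an illustration with made-up probabilities (nothing here comes from the discussion itself), and it assumes the two pieces of evidence are conditionally independent given h — the assumption under which the round-by-round updates and the one-shot calculation agree:

```python
# A minimal sketch of recursive Bayesian updating, with invented numbers.

def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Return P(h|e&k) from P(h|k) and the likelihoods of e under h and not-h."""
    numerator = p_e_given_h * prior
    evidence = numerator + p_e_given_not_h * (1 - prior)
    return numerator / evidence

# Two pieces of evidence, assumed conditionally independent given h.
prior = 0.01
likelihoods = [(0.9, 0.2), (0.8, 0.1)]  # (P(e|h), P(e|~h)) for each piece

# Recursive route: "reset" the prior to the previous round's posterior each time.
posterior_seq = prior
for p_eh, p_enh in likelihoods:
    posterior_seq = bayes_update(posterior_seq, p_eh, p_enh)

# One-shot route: treat the conjunction e1 & e2 as a single body of evidence.
p_all_h = 0.9 * 0.8
p_all_not_h = 0.2 * 0.1
posterior_batch = bayes_update(prior, p_all_h, p_all_not_h)

# Both routes give the same answer, as the parenthetical clarification says.
print(posterior_seq, posterior_batch)  # both ≈ 0.2667
```

The equality of the two routes is exactly the point of the clarification: including all relevant evidence in one go is mathematically equivalent to applying the formula recursively, just simpler to talk about.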

Quote:
The objection, then, doesn't seem grounded in whether or not concepts of God are beyond all human experience; rather, the objection seems to be grounded on whether or not such claimed experiences are, in fact, experiences of God. That is a very different issue.
No, it isn’t a different issue; it’s the same issue. And this brings us to a very fruitful way of thinking about this kind of question.

When we say that talking giraffes are farther “outside human experience” than families of ten-foot giants, what do we mean exactly? After all, while no one has seen a talking giraffe, no one has seen a family of ten-foot giants either. (At least I’m assuming this is true; if not I need to change the example a bit.) Both are completely outside all human experience. But they’re not equally outside all human experience, in a sense that I’ll try to explain.

We all try to understand our experiences (and try to predict the future, including the effects of our own possible actions) by creating a conceptual framework for interpreting them. We hypothesize that certain entities exist and interact in certain more or less predictable ways which account for the vast bulk of what we experience. I call this kind of scheme an “ontology”. Now this “baseline ontology” is expanded on in various ways to account for more unusual phenomena, and to include things that we have never personally experienced but have reason to believe other people have (by talking, reading, etc.) [This is really still part of our project of understanding our own experiences. For example, we believe that there are elephants in Africa to account for our finding references to elephants in Africa in seeming reputable sources.]

We’re naturally very conservative about modifying our existing ontology. New facts (i.e., new experiences, including the experience of hearing or reading about experiences other people are alleged to have had) are dealt with by modifying our ontology as little as possible. Thus, when my friend says that some ordinary neighbors have moved in next door, the simplest modification is to add in the existence of these new neighbors; this doesn’t disturb the existing scheme in any significant way. But if he says that a family of ten-foot giants has moved in, I have the choice of modifying my ontology so as to accommodate the existence of families of ten-foot giants (a pretty major undertaking, but feasible; there’s nothing that I know about humans that excludes the possibility of their being ten feet tall), or modifying it by changing my estimate of my friend’s veracity (much simpler in most cases). Of course, if I get a lot more evidence that ten-foot giants have moved in next door, a modification that accommodates the existence of ten-foot giants will soon become much simpler than the numerous modifications that would be needed to accommodate all the new evidence in other ways. On the other hand, if my friend says that a family of talking giraffes has moved in, the modifications to my ontology needed to accommodate the actual existence of talking giraffes are extremely extensive: I have very strong reasons for believing that giraffes don’t have vocal cords that can support speech, and that their brains are incapable of mastering human language. So I’ll be far more reluctant to accept talking giraffes than ten-foot giants. In fact, I’ll believe that I’m the victim of an extremely elaborate hoax before I’ll believe in talking giraffes.

This is what Ockham’s Razor, or the principle of parsimony, is really all about: how much modification of one’s existing ontology is required to interpret a new experience in a certain way, and how much is required to interpret it in a different way? Pick the interpretation that involves the least modification. Strongly resist accepting any interpretation that would require a really major overhaul. Or in other words, extraordinary claims require extraordinary evidence.

Now adding an omnipotent, omniscient, omnibenevolent being to one’s ontology is about as drastic an overhaul as anything that could be imagined. So we are rationally justified in resisting doing so unless there is no alternative – that is, unless there is no other modification (more precisely, no less drastic modification, but then it’s hard to imagine anything that would not be less drastic) that could accommodate this evidence. And, of course, there are lots of straightforward, naturalistic ways to interpret all of this evidence; it can be accommodated with only minor modifications of our “baseline ontology”. So there is no rational reason whatsoever to accept the “God hypothesis” in preference to other, far more parsimonious hypotheses.

Relating this back to Bayes’ theorem, the principle of parsimony demands that we set the a priori probability P(h) of a hypothesis lower in proportion to how much fundamental modification of our existing ontology (which can be thought of for this purpose as the “background information”) would be required to accommodate it.
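One way to see what a low prior costs a hypothesis is to count how many independent pieces of evidence it takes to pull the posterior above even odds. The priors and Bayes factor below are invented purely for illustration (they do not come from the post), but the arithmetic is standard: on a log-odds scale, each independent piece of evidence with Bayes factor B adds log10(B) to the log-odds of h.

```python
import math

def updates_needed(prior, bayes_factor, target=0.5):
    """Count identical, independent evidence-pieces (each with the given
    Bayes factor) needed to push P(h) from the prior past the target."""
    odds = prior / (1 - prior)
    target_odds = target / (1 - target)
    return math.ceil(math.log10(target_odds / odds) / math.log10(bayes_factor))

# A hypothesis needing only a minor ontology tweak (illustrative prior 1/100)
# versus one demanding a drastic overhaul (illustrative prior 1/10^9),
# with each piece of evidence ten times likelier under h than under not-h:
print(updates_needed(1e-2, 10))  # minor modification
print(updates_needed(1e-9, 10))  # drastic overhaul
```

The drastic-overhaul hypothesis needs several times as many such evidence-pieces, which is the quantitative face of “extraordinary claims require extraordinary evidence.”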

Quote:
But the theistic claim (or the Christian claim, at least) is that such belief is perfectly ordinary; that it is, in fact, what we were designed for. That is, I think, the essence of Alvin Plantinga's argument ...
But this doesn’t help. Accepting the hypothesis that we were designed to believe in God involves almost as drastic a modification of our existing ontology as accepting the hypothesis that God exists. And even if we accepted this hypothesis, we would have to accept the further hypothesis that it was God who designed us that way. Even the hypothesis that advanced aliens designed us to believe in God requires a far less drastic modification of our existing ontology than the hypothesis that an omnipotent, omniscient, omnibenevolent being did so.
bd-from-kg is offline  
Old 01-31-2003, 07:31 PM   #128
Veteran Member
 
Join Date: Jul 2000
Location: King George, VA
Posts: 1,400
Default

Theli:

As I said before, I’m not really interested in your nonstandard usages of “objective”, “subjective”, “truthful”, etc. This is all way off-topic anyway.

As for what you call the “memory as evidence” issue, I thought that it would be hopeless to get you to see the circularity of your argument. I’m even more convinced after your latest reply. I’m afraid we’ll just have to agree to disagree.
bd-from-kg is offline  
Old 02-01-2003, 04:26 AM   #129
Junior Member
 
Join Date: Jan 2003
Location: South Bend, IN
Posts: 47
Default

bd-from-kg,

Thanks for your response. I'll need some time to respond back. I'm in a busy time with work right now so don't always have the time I'd like for these dialogues.

Thanks
K
Kuyper is offline  
Old 02-01-2003, 09:12 AM   #130
Senior Member
 
Join Date: Jul 2000
Location: South Bend IN
Posts: 564
Default

bd-from-kg,

I don’t want to spend a great deal of time on this because the main discussion is already taxing most of my time resources, but I felt that I could not let this pass without some comment.

Your claim that theism is drastically less parsimonious than naturalism is simply false. In fact, theism is at least as parsimonious as naturalism if not more so. Theism is not an ordinary sort of existential claim, such as the claim that trees or aliens or unicorns exist. Theism is a claim concerning the fundamental nature of reality – essentially, it is the claim that reality is fundamentally personal at its deepest level (i.e. at the level of the ground of all being). There is nothing at all odd or bizarre about such a claim.

We are familiar with both personal and impersonal types of explanations. For instance, we typically explain the behavior of things such as subatomic particles in terms of impersonal forces and the like, but we typically explain things like works of art, architectural structures, internet posts, and the like, in terms of personal causes such as motivation, intention, emotion, etc., on the part of their creators. The question is which types of explanations are more fundamental. Theism regards personal type explanations to be the most fundamental whereas metaphysical naturalism regards impersonal type explanations to be the most fundamental. All else being equal, I see no reason why the notion that impersonal explanations are somehow more fundamental is drastically more plausible or more probable than the notion that personal explanations are somehow more fundamental. In fact, it seems more natural for us to explain things in personal terms (hence, even in many of my physics classes, my professors would say something like “the electron wants to be in the lowest energy state” even though we all really know that an electron is incapable of “wanting” anything), and it is personal type phenomena of which we are most directly aware in terms of our experience (since we ourselves are personal beings).

In terms of ontological parsimony, since theism says that all of reality is fundamentally to be explained in terms of a single personal ground of being, it is either more parsimonious than metaphysical naturalism (if M.N. is taken to mean that the universe is ultimately to be explained in terms of a large number of independent impersonal entities and forces acting between them) or just as parsimonious (if M.N. is taken to mean that all of reality is ultimately to be explained in terms of a single impersonal underlying principle or impersonal ground of all being). As far as claims such as God is omniscient, omnipotent, omni-benevolent, etc., are concerned, it is actually far more parsimonious to understand God -- so long as we are regarding God as personal -- in such a way.

It is true that we do not have any direct experience with properties such as omniscience, omnipotence, and omni-benevolence, but we do have experience of such things as power, wisdom, and goodness as properties shared by personal beings, and we have experience of these things coming in varying degrees. Since we are understanding God to be the ground of all being -- the necessary prerequisite for and the source of all else that exists – we should expect there to be nothing outside of God which places any inherent limits on any of God’s positive attributes. If any of God’s positive attributes were to be less than maximal in degree, then it would seem that we would have to account for why they obtained to a lesser degree and why they obtained to the degree they did. But, then God would no longer be the most fundamental being or explanatory principle in our ontology and we would have to postulate something else that is more fundamental than God, so it is more parsimonious to regard God’s positive attributes as being maximal in degree. This whole analysis is buttressed by many of the contemporary versions of the ontological argument which (excluding the possibility premise) end in the conclusion that if the existence of an unlimited or maximally great being is logically possible then such a being must exist necessarily. In other words, an unlimited being may be self-explaining in a way that finite existents are not.

Now, I don’t necessarily expect you to buy into all of the above reasoning – perhaps your intuitions point in different directions -- but that doesn’t matter. The real point is that such reasoning holds a great deal of intuitive appeal for a great number of people. The notion that the ground or source of all being is in some sense unlimited or infinite is a basic intuition shared, not just by the three main monotheistic religions, but one shared almost universally across numerous human cultures and numerous religious traditions. The notion that God is omniscient, omnipotent, omni-benevolent, etc., is simply that basic human intuition played out in the context of a belief that the ground of all being is of a personal nature.

God Bless,
Kenny
Kenny is offline  
 
