FRDB Archives

Freethought & Rationalism Archive

The archives are read only.


Old 01-17-2002, 07:37 PM   #41
Regular Member
 
Join Date: Jul 2001
Location: Florida, USA
Posts: 363

Damn, people. It's not that hard.

Plantinga's list of possibilities is missing one very important option: beliefs directly influence behavior but are not determined by natural selection. Rather, they are determined by learning and reasoning skills.

Isn't that the way it's done? I don't recall having any innate beliefs about how to behave when I see a lion. Near as I can tell, I thought that they were just big, cuddly kitties until I was old enough to know better. I had absolutely no preconceived beliefs about lions, which may not have been beneficial, but I had something far more powerful: logic and the ability to learn. My beliefs are not based on any internal process, but rather on input data and reasoning. There are a few innate directives: flee from danger, find a favorable female mate, find food when hungry and water when thirsty, sleep when tired. However, what constitutes danger, a favorable mate, food, water and a suitable place to sleep is left to my own devices.

Experience has shown us that human reasoning is effective. The results of modern physics and mathematics should be more than enough to confirm this result. The fact that previously unknown physical phenomena can be precisely predicted by little more than human reasoning, a pencil and some paper should be proof positive that human logic is accurate to a very high degree.

Should we expect this from a naturalistic framework? Yes. First of all, logic is not a difficult phenomenon to understand and use effectively. All that is needed is an accurate grasp of the basic rules and an ability to apply them correctly. Inability to correctly apply fundamental rules of logic would be detrimental to a creature's survival. Therefore, it is reasonable to assume that accurate reasoning skills would be fairly common in a species that relied heavily upon them.

Most other animals rely heavily on instinct. The reason for this is that their brains are not well developed, so it is necessary for survival to have some drive for basic functions necessary for reproduction. If animals do not understand that they need to reproduce, they might neglect to do so, which is not a condition that evolution favors. However, the larger and more developed brain in Homo sapiens means that we are capable of more complex feats of reasoning and abstraction. Therefore, instinct would be unnecessary, neglecting of course the reproductive drive, otherwise we might simply lose interest and die out. Still, an at least elementary reliance on logic is found in other animals, with the animals most closely related to us most prominently displaying the trait. Chimpanzees, for example, are able to make simple tools. Cats and dogs understand elementary cause and effect.

The ability to think is a far more advantageous trait than simply having beliefs that have luckily adapted to a specific situation. Sure, some weird belief about what snakes, trees or lions do to you might prove to be slightly more advantageous than a general curiosity, but what if a human is thrown into an environment with which he is unfamiliar? Say he meets up with a giant fireball. Will he simply be incinerated because his ancestors had not fortuitously mutated the successful adaptation which allowed them to falsely believe that a giant fireball means a bomb shelter building contest, thus saving themselves? Or might it be better for him if his reasoning says, "Hey, that big red thing doesn't look friendly and it's way too hot. I'm out of here." I think that it's obvious which is the most beneficial behavior.

Furthermore, my statement about how humans acquire beliefs is far more consistent with evidence and reality than the ludicrous examples put forth by scilvr and Plantinga. The sheer odds of having every particular experience coded for in advance purely by RM&NS are absurd. Also, it is not consistent with the real-world data about how humans actually handle data. Humans learn, so it doesn't take a master of the obvious to figure out that we must have evolved that ability. Considering how far it's gotten us, I think that it is a favorable trait.

Peace out.
Wizardry is offline  
Old 01-18-2002, 01:50 PM   #42
Veteran Member
 
Join Date: Jun 2000
Location: Greensboro, NC, U.S.A.
Posts: 2,597

It seems to me that there is an additional objection possible against Plantinga's argument: to essentially reject the probabilistic approach. This objection is similar to the "Maximal Warrant" objection he discusses in his paper, but his answer to that objection would not apply here.

What does Plantinga mean when he states that P(R/N&E) is low? I understand this to be a claim that the probability of beginning with unicellular, non-cognitive life and ending up with humans possessed of reliable cognitive faculties by way of naturalistic evolutionary mechanisms is relatively low. For argument's sake, let's grant that this is true.

However, does that really provide us with any relevant reason to believe that, in fact, our cognitive abilities are not or could not be reliable? I'm not sure that it does. The fact is that for any process with multiple possible outcomes, the probability of any given outcome may be extremely low, but that does not change the fact that one of these outcomes will obtain.

To illustrate by analogy, let's say that a local church decides to raffle off a new car. The raffle is extremely popular, and 100 million tickets are sold (it was a big church; maybe Coral Ridge Presbyterian). Now, the odds of any individual raffle ticket being a winner might be vanishingly small, but someone is going to win the raffle.

Does the fact that the initial probability of winning was very, very small provide any reason for the holder of the winning ticket to doubt that she has, in fact, won?

Perhaps there will be some initial shock; a momentary questioning. However, when shown the evidence (perhaps the stub of her ticket), will she continue to question? I think it's clear that she would not; such a response might lead us to question whether or not she was responding appropriately (i.e., "she's crazy!").
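The raffle point is ordinary conditional probability: a vanishingly small prior for any particular ticket is perfectly compatible with certainty once the evidence (the stub) is in hand. A minimal sketch, using hypothetical numbers not drawn from the original post:

```python
import random

TICKETS = 100_000_000  # tickets sold in the hypothetical church raffle

prior = 1 / TICKETS                 # chance that any *specific* ticket wins
winner = random.randrange(TICKETS)  # yet exactly one outcome always obtains

# Given the evidence (the winning stub in hand), the probability that the
# holder actually won is 1, no matter how small the prior probability was.
posterior_given_stub = 1.0

print(f"prior = {prior:.0e}, posterior given the stub = {posterior_given_stub}")
```

The analogy to P(R/N&E) is that a low prior for the development of reliable faculties says nothing, by itself, against the evidence that they did in fact develop.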

Now, back to the argument: even given the initially low probability of the development of reliable cognitive faculties, do we actually have any reason to believe that we didn't "win the raffle?" It seems to me that the "ticket stub" is right here in front of us. Our faculties certainly give us every appearance of being reliable. We don't normally walk into obstacles that we failed to realize were there, eat poisonous foods that we believed to be safe or cease eating because we failed to realize that food was essential, etc. We may occasionally make mistakes, but those are abnormal occurrences, due to incorrect information or improper function (mental illness).

It therefore seems to me that, absent any good reason to believe that our cognitive faculties aren't reliable, we have good a priori reason to believe that they are, and hence the initial probability of their actual development is moot.

It also seems to me that Plantinga uses "belief" as though that were representative of all of our cognitive faculties. However, I think that in doing so he ignores the relation of perception to belief. His argument seems to depend upon the non-existence of any necessary relationship between the two, but I can't immediately see how this is true. However, I need to collect my thoughts on that before posting more.

Regards,

Bill Snedden

[ January 18, 2002: Message edited by: Bill Snedden ]
Bill Snedden is offline  
Old 01-18-2002, 07:18 PM   #43
Regular Member
 
Join Date: Jan 2002
Location: California
Posts: 118

Scilvr,
It seems to me that the very existence of this discussion would challenge the idea that any worldview can reliably lead to "true" beliefs. Different people can often use reason on a given set of facts and end up with different beliefs. So whether the ability to reason comes from God or from naturalistic processes doesn't seem to guarantee that you will arrive at "true" beliefs.

It seems to me that this is more consistent with MN derived reason than with God derived reason. As you correctly pointed out, natural selection would only work based on the behavior and not on any underlying or resultant beliefs. Therefore MN could certainly result in incorrect beliefs so long as they don't compromise the behaviors that lead to survival.

With God-given reason, one might reasonably expect it to be infallible, and therefore it would always result in "true" beliefs. If it doesn't, then it brings a few questions to my mind. Why would God not endow us with an infallible reasoning system? If it's not 100% reliable, how do we know how reliable it is?

I also am puzzled by the use of the terms "rational" and "irrational" as applied to naturalistic processes. Is this intended to convey thinking vs. non thinking?

My Websters defines rational like this:

1. agreeable to reason; reasonable; sensible: a rational plan for limiting traffic congestion.

Based on this definition, it certainly appears to me that the process of natural selection is "rational".

Thanks for your time.

Steve
SteveD is offline  
Old 01-20-2002, 11:14 AM   #44
Banned
 
Join Date: Jan 2001
Location: Florida
Posts: 376

I haven't read every post in this thread, so forgive me if I am merely repeating something someone else said.

To touch on what SteveD said, I would have to somewhat agree with the argument and say that it is the case that evolution has not endowed us with the ability to reason correctly. That people believe in all kinds of stupid crap (ghosts, alien visitors, Jesus, etc.) is quite obvious.

The vast majority of these beliefs are, for the most part, inconsequential to our survival, but may actually be an outcome of our survival strategy. You see, we humans teach each other things, and accept the practices and beliefs of our society/tribe. This allowed us to keep good tool designs, and if someone improved the design, it would allow us to spread it around more easily. We believe what others tell us because it is a great way to spread knowledge (just look at the state of human knowledge today), and it also probably makes us more appealing socially.

I think this is exactly what we should expect if we are the product of evolution. Our reasoning abilities are not perfect in the least, and indeed, should not be fully trusted.

But if the Christian god made us, why are we so prone to error? Why do people become atheists, Buddhists, Scientologists, etc? Why do we believe in ghosts, alien visitors, psychics, tarot cards, etc? No matter what world-view is right, the vast majority of people don’t accept it. Naturalists can say it is because of many complicated psychological and sociological reasons, irrational thinking processes, etc. What possible excuse does a Christian (or nearly every other theist for that matter) have to explain all the irrationality around them, assuming their world-view is right? Did God purposely make all these faults in human thinking? Do we need to be stupid to have free-will? Did the Tree of Knowledge make us dumber?

Any comments or criticisms welcome.
Someone7 is offline  
Old 01-20-2002, 01:14 PM   #45
Veteran Member
 
Join Date: Jul 2000
Location: King George, VA
Posts: 1,400

scilvr:

At this point it’s clear that you are relying heavily on Plantinga’s analysis. So if we are going to make any progress in this discussion, we are going to have to analyze carefully what Plantinga is saying about some crucial points. Specifically, what exactly does he mean when he suggests that (1) our behaviors may not be caused by our beliefs, or (2) our behaviors may be caused by our beliefs, but not by their “content”?

Since this is a complicated subject in itself, I’m going to devote this entire post to it. I’ll reply directly to your post of Jan. 17 later.

But before getting into this analysis, I want to illustrate how utterly counterintuitive both of these suggestions are with an everyday example.

The other day I went into town to shop. First, I made a list of the things I wanted to buy because I believed it would help me remember what to get and ultimately would result in my getting them. Next I got out my garage key because I believed that it would open the garage door. Then I got into the car because I believed that it was capable of getting me to town. I followed a particular route because I believed that it would take me to Wal-Mart. I went to Wal-Mart because I believed that I would find there certain items that I wanted. And so it went, as I went from store to store, buying various things because I believed that they would be useful. Then I headed to a restaurant because I believed that I was hungry and that I could get a meal there that I would enjoy. Finally I headed back to where I had started because I believed that my house would still be in the same place that I left it.

I could add hundreds, if not thousands, of other ways in which I acted the way I did because of various beliefs, but by now the point should be reasonably clear.

I submit that you cannot even begin to give a remotely plausible account of my behavior yesterday without referring, explicitly or implicitly, to the “contents” of the beliefs that I listed (and lots of other ones). But any serious account of the relationship between beliefs and behavior is going to have to be able to give a plausible account of my behavior yesterday, and that means that it is going to have to admit the existence of some important relationship between our behavior and the contents of our beliefs.

Now let’s see what Plantinga means when he suggests that it is reasonably possible that beliefs do not affect behavior. Actually he’s talking about the theory called epiphenomenalism, the idea that mental activities result from and accompany certain brain processes, but have no effect on them (or on anything else). This is a defensible theory, but it doesn’t really affect the original point. To see this, let’s assume that epiphenomenalism is true. Then any given conscious belief B is an epiphenomenon of some aspect B′ of the brain’s current activity. The idea is that the behaviors that we ordinarily ascribe to B are “really” caused by B′. But in that case B′ acts in every way like a belief, so we might as well call it a belief. To keep things straight, let’s call B′ a p-belief (p for physical) and B an m-belief (m for mental). Epiphenomenalists do not deny that p-beliefs cause behavior; they just deny that m-beliefs do.

Of course, the beliefs we’re really interested in when we ask whether our cognitive faculties reliably formulate true beliefs are our m-beliefs. So let’s look at the relationship between m-beliefs and p-beliefs. As an example, suppose I’m playing chess and I have an m-belief that moving my bishop to king’s knight six will allow me to force mate in four moves. According to the theory we are considering, this is an epiphenomenon of a corresponding p-belief. Now I can predict confidently that having this p-belief (in conjunction with my desire to win the game) will be followed by my moving the bishop to king’s knight six. Thus, although my p-belief is a complex brain state (and so perhaps cannot be properly described as a "belief" in any ordinary sense), its actual effects are exactly those that I ordinarily attribute to the corresponding m-belief. And the same kind of relationship holds between all of my m-beliefs and the corresponding p-beliefs.

Such a relationship can be described by saying that my m-beliefs are interpretations or descriptions of my p-beliefs.

So on this view, while it is perhaps not strictly correct to say that my m-beliefs affect my behavior, it is correct to say that the p-beliefs corresponding to my m-beliefs, of which my m-beliefs are descriptions, affect my behavior. Thus when I say, for example, that I selected the red key to put into the front door lock because of my belief that this is the key to that lock, I mean that this action is caused by the p-belief which is interpreted or described by the corresponding m-belief that this is the right key.

Needless to say, once epiphenomenalism is correctly understood, it is clear that it has no relevance whatever to the question of whether natural selection has a high probability of producing cognitive faculties that reliably formulate true beliefs, or whether a naturalist might have good grounds for believing that “reason” is a reliable guide to truth. Plantinga’s claim that “on N&E and this first possibility ... the probability of R [the proposition that our cognitive faculties are reliable] will be rather low” is simply false.

Now let’s look at Plantinga’s even more puzzling suggestion that our behavior might be caused by our beliefs, but not by their “content”. To understand what he’s talking about here we’d better go directly to the source. In Naturalism Defeated Plantinga says:

Quote:
On a naturalist or anyway a materialist way of thinking, a belief could perhaps be something like a long-term pattern of neural activity, a long-term neuronal event. This event will have properties of at least two different kinds. On the one hand, there are its electrochemical properties: the number of neurons involved in the belief, the connections between them, their firing thresholds, the rate and strength at which they fire, the way in which these change over time and in response to other neural activity, and so on. Call these syntactical properties of the belief. On the other hand, however, if the belief is really a belief, it will be the belief that p for some proposition p. Perhaps it is the belief that there once was a brewery where the Metropolitan Opera House now stands. This proposition, we might say, is the content of the belief in question. So in addition to its syntactical properties, a belief will also have semantical properties--for example, the property of being the belief that there once was a brewery where the Metropolitan Opera House now stands... And the second possibility is that belief is indeed causally efficacious with respect to behavior, but by virtue of the syntactic properties of a belief, not its semantic properties.
So this possibility, in contrast to the first one, is based on the theory of materialism, according to which consciousness (if it can meaningfully be said to exist at all) consists entirely of material things and their relationships to one another. According to this theory all beliefs are p-beliefs; either there are no such things as m-beliefs or they are identical to p-beliefs. So what Plantinga is saying is that behaviors depend only on “syntactical” properties of our p-beliefs and not on their “semantic” properties.

Now for this to make any sense at all, what Plantinga calls “syntactical” properties must include far more than gross properties such as numbers of neurons, firing thresholds, etc.; it must include the exact details of the “pattern of neural activity” in question. Only by including the exact details is it possible to distinguish between, say, a pattern that corresponds to a belief that the plane you have tickets for is scheduled to leave at 8 A.M. next Tuesday and one that corresponds to a belief that it is scheduled to leave at 9. But these patterns might result in quite different behavior (especially if, come Tuesday morning, you find yourself in a position where you would be in danger of being late for an 8:00 plane but are in plenty of time for a 9:00 one). So the syntactical properties will have to include an astronomical number of minute details about all aspects of the brain that represent the “belief” in question. But how in the world can such a complex pattern of neural activity be meaningfully said to be a belief that a plane is leaving next Tuesday at 8 A.M.?

A computer analogy might be helpful here. This time, suppose that I’m once again playing chess, but against a computer. This time it “decides” to “move” the bishop to king’s knight 6 because it “believes”, based on its calculations, that this move (and no other) will force mate in four moves. But of course the computer doesn’t (in the ordinary sense) “believe” anything; it is not “trying” to win; in fact, it has no idea that it is playing a game of chess at all. And in a sense it isn’t. At the lowest level of analysis all that’s happening in the computer is that a great number of elementary particles are moving and interacting with one another in accordance with the laws of physics. A higher level of analysis would refer to RAM, hard drives, CPU’s, and the details of the way they implement the program code. Only at the next level of interpretation would we begin to refer to chess, to selecting “moves” based on criteria designed to maximize the computer’s chances of “winning”, etc. A perfectly good description of what’s going on inside the computer is possible in terms of either of the two lower levels. It is perfectly correct to say that the computer’s “actions” are caused by the operations of the basic laws of physics, or that they are “caused” by running that particular computer code (with the specified inputs) on a computer with that particular structure. And neither of these causal explanations brings in the notion of “chess” or “winning”. However, since its behavior is exactly the behavior that we would ordinarily say would be “caused” by a belief that moving the bishop to king’s knight six will force mate in four moves, it is reasonable to describe its state as involving a p-belief to this effect. But this is an interpretation or description of the computer’s internal state, and so of course it plays no causal role, strictly speaking, in the computer’s behavior. 
So in this sense it’s perfectly correct to say that the computer’s behavior of moving the bishop to king’s knight six is caused only by the “syntactical” properties of its “belief”, and not by the fact that the “content” of this belief is that this move will force mate in four moves.

But in another sense it is perfectly valid to say that the computer’s “moving” the bishop to king’s knight six was caused by the fact that it “believed” that this move would lead to forced mate in four moves. This is simply a higher-level interpretation or description of what’s going on than the other two; this in no way makes it less valid. In fact, for most practical purposes it is a much better explanation, because it identifies the crucial aspects of the computer’s internal state that caused it to do what it did, and thus makes its “actions” far more comprehensible.
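The levels-of-description point can be sketched in a few lines of code. This is a hypothetical toy, not an actual chess engine: the function below "chooses" a move by a purely syntactic operation (comparing numbers), yet it is equally valid to describe it at a higher level as playing the move it "believes" will force mate.

```python
def choose_move(scored_moves):
    # Low-level description: return the key whose numeric value is largest.
    # High-level description: play the move "believed" to be best.
    return max(scored_moves, key=scored_moves.get)

# Made-up evaluations; "Bg6" stands in for the mate-in-four move.
evaluations = {"Bg6": 1000, "Qh5": 40, "Nf3": 10}
print(choose_move(evaluations))  # prints "Bg6"
```

Both causal stories pick out the same behavior: "it compared integers and returned the largest key" and "it believed Bg6 forces mate"; the semantic description is simply the higher-level, and usually more useful, one.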

In fact, we resort to this kind of causal explanation all the time. For example, we say that Mrs. Brown’s death was “caused” by pneumonia; we do not say that she died because of complex biochemical processes (which we might then proceed to describe in detail for each cell) that were initiated by the introduction of a number of complex unicellular organisms (which we might also describe in detail) into her body. One might say that, strictly speaking, the fact that she died of pneumonia is merely an interpretation or description of these events, and not the events themselves, and that, since no one ever died from an interpretation or description, Mrs. Brown didn’t “really” die of pneumonia after all. Or to borrow Plantinga’s terminology, we might say that it was the syntactic properties of the disease (i.e., the detailed biochemical processes that constituted it) that caused her death, and not the “semantic” properties, such as the fact that it was pneumonia and not, say, a common cold. But none of this makes it invalid to say that she died because it was pneumonia and not a common cold, or that her death was caused by pneumonia.

So when Plantinga says that our behaviors may be caused by our beliefs, but not by their “content”, he is really suggesting that low-level causal interpretations of our behaviors are the only valid ones. But by now it should be clear that this is wrongheaded. It’s true that, in terms of the lower-level interpretations, our behaviors really are caused by the syntactical properties of our p-beliefs and not the “semantic” property that they are beliefs that certain propositions are true. But at the (almost always more appropriate) higher level of interpretation it is perfectly valid to say that our behaviors are caused by these “semantic” properties – i.e., by the “content” of our beliefs.

We are finally in a position to evaluate Plantinga’s assertion regarding this second possibility:

Quote:
On this view, as on the last, P(R/N&E) (specified to those creatures) will be low. The reason is that truth or falsehood, of course, are among the semantic properties of a belief, not its syntactic properties.
By this point it is clear that Plantinga is again mistaken. Like epiphenomenalism, materialism offers no grounds whatever to doubt the reliability of our cognitive faculties.

[ January 20, 2002: Message edited by: bd-from-kg ]
bd-from-kg is offline  
Old 01-20-2002, 02:55 PM   #46
Veteran Member
 
Join Date: Jul 2000
Location: King George, VA
Posts: 1,400

scilvr:

Now for my comments on your last post. Unfortunately these will be relatively disconnected, since I don’t have time to tie them all together and show how they’re related to the original subject. Anyone interested is advised to refer back to this previous post.

Quote:
In Chapter twelve of Warrant and Proper Function Plantinga lists five possibilities:
I was commenting on the essay Naturalism Defeated, not on Warrant and Proper Function.

Quote:
Plantinga argues that either (1) the probability that our cognitive faculties are reliable (R) given naturalism (N) and evolution (E) is low (less than 1/2), or (2) we are in no position to say what the probability is ...
Yes, I left out that detail since it doesn’t affect my critique in any way. My point is that P(R|N&E) is neither low nor inscrutable, but quite high.

Quote:
It could be that beliefs are effects of behavior, or are effects of proximate causes that cause behavior.
So what? Beliefs can both cause behavior and be caused by behavior or be effects of things that cause behavior. In fact, it’s very likely that all of these things are true.

Quote:
One way that this situation could come about is through pleiotropy, where a single gene codes for more than one trait. It could be that there are genes that code for traits essential to survival that also just happen to code for consciousness and belief, where the latter don't play any causal role in behavior.
I’m not sure that it’s even meaningful to talk about genes that code for “consciousness”. What experiments or observations do you propose to test for whether a given gene codes for consciousness? If you can propose no such experiment or observation, I submit that you have no more idea than I do what you’re talking about here.

As for genes that just happen to “code for beliefs”, it has been pointed out many times on this thread alone that genes don’t code for beliefs. They code for cognitive function. Since cognitive function that tends to make accurate predictions is strongly conducive to survival, genes that code for it will tend to be strongly favored by natural selection. But cognitive function that tends to make accurate predictions will also tend to produce true beliefs, for reasons discussed earlier.

Quote:
bd:
Obviously our beliefs are not, on the whole, maladaptive.

scilvr:
Again, it's not so obvious to me. It is possible that a system or trait that is maladaptive becomes fixed in a population. Take sickle-cell anemia for example...
Cognitive function is very generic. Despite your and Plantinga’s silly examples, correct cognitive function will promote survival in practically all circumstances, whereas defective function will be highly detrimental to survival in practically all circumstances. So correct cognitive function will be selected for very strongly. And correct cognitive function is a very subtle property from a physical standpoint; it depends on fine details of brain structure. It’s difficult to imagine how a brain which is slightly defective (but still reasonably functional) could differ from a perfectly functional one in a way that could make a significant difference to survival for reasons unrelated to cognitive function.

Also, compromise solutions like the sickle-cell anemia gene tend to be short-term. In time natural selection finds more sophisticated solutions that deal with the problem without compromising survival-promoting mechanisms.

Quote:
So, Plantinga argues that it could be that a creature's beliefs are an "energy-expensive distraction, causing these creatures to engage in survival enhancing behavior, all right, but in a way less efficient and economic than if the causal connections by-passed belief altogether."
OK, let’s consider another realistic example.

My wife starts to have serious pains in her abdomen. I call the doctor’s office because I believe that it might be serious enough to require medical attention. After listening to my description of the symptoms, he forms the belief that she probably has appendicitis, because he believes that the stuff he learned in medical school is correct. He gives me his diagnosis and recommends that I get her to an emergency room as soon as possible because he believes that her life is in danger and believes that I’ll probably take his advice. On hearing this I call for an ambulance, telling them what the doctor told me, because I believe that this will induce them to send an ambulance which will take her to the hospital. They send the ambulance, based on my directions, because they believe me and believe that the directions I gave to my house are accurate. When we get to the hospital, we are admitted immediately because the staff believes that this could be a life-threatening emergency. After some tests they decide that it is and schedule immediate surgery because they believe that it could save her life. The surgeon does his thing based on many beliefs that he acquired from medical school and experience with such cases. As it turns out, it really was appendicitis and she would have died if she hadn’t received the proper treatment in time.

Although this example is more dramatic than most, it is similar to everyday life in that it involves complex interactions between a number of people who are acting cooperatively to achieve a common end, and each of whose actions is based on a whole complex of beliefs.

Now please explain to me how everyone involved in this episode might have come to engage in this survival-enhancing behavior through causal mechanisms that “bypass belief altogether”.

Quote:
Or, as with sickle-cell, it could be that beliefs are maladaptive, but the genes that encode for a creatures belief forming system also encodes for some other highly adaptive system or trait.
Since the ability and tendency to acquire true beliefs and act on them is itself a highly adaptive trait, it is pointless to speculate about whether the genes that encode for the belief-forming system that we actually have also encode for some other highly adaptive trait. Occam’s Razor.

Quote:
Well, if image-desire combinations play a causal role in behavior, then we may now have a good argument for the conclusion that the probability of our visual faculties being reliable given N&E is low or inscrutable. This certainly doesn't help the naturalist.
I don’t know. I would have said that the fact that an argument yields absurd conclusions is a pretty good reason to consider it suspect. Or are you really prepared to argue that having an eye that produces accurate visual images does not provide a marked advantage in the struggle for survival? Do you really have doubts as to whether, say, hawks that saw things that weren’t there or failed to see things that were would be at a disadvantage relative to hawks who saw the things, and only the things, that were there?

Quote:
It seems to me that if the differences in predictions are subtle enough, there's no way of being confident that natural selection will favor the true belief.
Again, natural selection does not operate on beliefs. And a “false” belief that yields almost the same predictions (or almost equally accurate ones) is very nearly as true as the “true” one. But in any case, so what? Your point seems to be merely that natural selection operates more strongly on large differences than on small ones. We already knew that.

Quote:
Also, it seems to me that in some cases, a false belief would be more adaptive.
This gets tiresome. One can imagine situations where being slower would be more adaptive for a deer. So what? A trait will be selected for if it usually promotes survival and selected against if it is usually detrimental.

Quote:
For example...
No matter how many examples you come up with, you aren’t going to convince anyone this side of sanity that defective cognitive function that tends to lead to false predictions about the real world is adaptive. This is a waste of time.

Quote:
It seems that this gives the naturalist reason to doubt that naturalism is true, no?
Of course. High-level, abstract beliefs that have little or no effect on survival are the ones that one should be most skeptical of and examine most closely. I hope that you’re in the habit of doing this.

[ January 20, 2002: Message edited by: bd-from-kg ]
bd-from-kg is offline  
Old 01-21-2002, 11:27 AM   #47
Veteran Member
 
Join Date: Jun 2000
Location: Greensboro, NC, U.S.A.
Posts: 2,597
Cool

Because it may be of interest to this discussion, let me add <a href="http://philosophy.wisc.edu/fitelson/PLANT/PLANT.html" target="_blank">this link</a> that I found in the II library whilst researching something totally different.

The linked article is a direct response to Plantinga's argument by Branden Fitelson and Elliott Sober, in which they take issue with his use of Bayesian probability as well as raise some of the same issues as posters in this forum.

They also briefly lay out the objection I had mentioned but not yet developed (and upon which bd and others have also touched): namely, that Plantinga subsumes several factors under the heading of "R" that should likely be considered separately, for a number of reasons.

Regards,

Bill Snedden
Bill Snedden is offline  
Old 01-21-2002, 03:06 PM   #48
Senior Member
 
Join Date: Jun 2001
Location: Australia
Posts: 759
Post

Quote:
Originally posted by scilvr:
<strong>

If it was in fact the case that we did have creatures that had minds geared towards, and capable of, formulating true beliefs, I fail to see how this would be expected given naturalism and Darwinian evolution. If anything, we should expect minds that formulate survival enhancing beliefs, not true ones.</strong>
But in many cases true beliefs are survival-enhancing ones!

For example - "If I smack that bear in the nose, it will kill me"

It is obvious to me that only a creature with a mind that can determine truth, combined with reasonably accurate sensory input, could survive. This is not to say that errors would not creep in, as long as they were not errors that led to an incapacity to breed (like smacking a bear in the nose).
David Gould is offline  
Old 01-21-2002, 09:16 PM   #49
Contributor
 
Join Date: Jul 2001
Location: Florida
Posts: 15,796
Post

bd-from-kg writes:

Quote:
The crux of this argument is the “Rule” that “no thought is valid if it can be fully explained as the result of irrational causes”. It can’t be denied that (at least until the idea of evolution came along) everyone did believe this, and in fact considered it to be self-evident. But it seems reasonable to ask why everyone believed it. It would seem that there are two possible answers: (1) it is an innate belief; a fundamental intuition, or (2) it is based on experience.
But haven't you subtly changed the argument here? The question is, "Where does reason come from? Can it derive from irrational processes?" If it derives from irrational processes, how can we know this? Only by the application of our reason. But then we have to assume that our reasoning is valid in the first place. But how can we assume our reasoning is valid if it derives from an irrational process?

[ January 21, 2002: Message edited by: boneyard bill ]
boneyard bill is offline  
Old 01-22-2002, 03:01 AM   #50
Regular Member
 
Join Date: Jul 2001
Posts: 172
Post

boneyard bill:

You wrote:

Quote:
But how can we assume our reasoning is valid if it derives from an irrational process?
The naturalist would agree that not just any blind process would yield a reliable cognitive system. However, the argument so far has been that a Darwinian process would likely yield a cognitive system that would produce mostly true beliefs. The reasoning is simply that any organism with largely false beliefs would be unable to find food and mates, and also unable to avoid predators.

Plantinga quotes Quine in this regard:

Quote:
Creatures inveterately wrong in their inductions have a pathetic but praiseworthy tendency to die before reproducing their kind.
Transworldly Depraved is offline  
 
