FRDB Archives

Freethought & Rationalism Archive

The archives are read only.


Old 06-07-2002, 10:09 PM   #1
Veteran Member
 
Join Date: Mar 2002
Location: anywhere
Posts: 1,976
Dembski's 2nd Reply to Wein

<a href="http://www.iscid.org/papers/Dembski_WeinsFantasy_060702.pdf" target="_blank">Here</a>.
Principia is offline  
Old 06-08-2002, 01:29 AM   #2
Contributor
 
Join Date: Jul 2000
Location: Lebanon, OR, USA
Posts: 16,829

I read it, and I'm not very impressed. His proposed way of recognizing "specified complexity" is to demonstrate that no non-designed process could have produced something. But his comment about the craters of the Moon amounts to "we now know the answer", as compared to what Kepler had known.

But as our knowledge increases, seeming examples of specified complexity could be shown to be the unspecified sort of complexity. For example, Dembski might think that spiderwebs represent specified complexity -- complexity specified by tiny 8-legged specifiers.

However, as I'd mentioned in another thread, Thiemo Krink's work suggests that spiders can build their webs by following some simple algorithms -- which would make spiderwebs the unspecified kind of complexity instead of the specified kind.

So unless Dembski can point out some more positive way of recognizing specified complexity, its domain will be doomed to shrink as knowledge expands.
lpetrich is offline  
Old 06-08-2002, 01:56 PM   #3
Regular Member
 
Join Date: Oct 2000
Location: Orlando, FL
Posts: 385

Dembski should stick to reviewing Simpsons episodes. <a href="http://www.arn.org/docs/dembski1129.htm" target="_blank"> The Simpsons w/Gould</a>
Peregrine is offline  
Old 06-08-2002, 06:08 PM   #4
Veteran Member
 
Join Date: Jul 2001
Location: Orion Arm of the Milky Way Galaxy
Posts: 3,092

Quote:
Originally posted by Scientiae:
<strong><a href="http://www.iscid.org/papers/Dembski_WeinsFantasy_060702.pdf" target="_blank">Here</a>.</strong>
Wein has really hit him where it hurts. There is simply no greater evidence of that than this piece of garbage that Dembski has written. Notice how he utterly avoids actually addressing Wein's substantive criticisms.

And he claims endorsements from a Nobel Prize winner and a "senior member" of the National Academy of Sciences -- only he does not give us their names. Give me a break! Neither of these people has any need to hide their names, since they could get senior-level positions anywhere in their fields.
Valentine Pontifex is offline  
Old 06-09-2002, 11:15 AM   #5
Moderator - Science Discussions
 
Join Date: Feb 2001
Location: Providence, RI, USA
Posts: 9,908

There's an ARN thread about this <a href="http://www.arn.org/cgi-bin/ubb/ultimatebb.cgi?ubb=get_topic;f=13;t=000097" target="_blank">here</a>, with some additional recent Dembski criticism <a href="http://www.arn.org/cgi-bin/ubb/ultimatebb.cgi?ubb=get_topic;f=13;t=000098" target="_blank">here</a>. This was my response to Dembski's latest article:

It seems to me that Dembski's responses have tended to stay at a philosophical level--i.e., does it make sense to infer design by the basic method of eliminating natural hypotheses--while most of Wein's critiques focus on the technical details of Dembski's arguments, which Dembski avoids addressing in detail. Perhaps we should start a list of basic technical problems Dembski has failed to address. Here are some I can think of off the top of my head:

1. What is the basis for his idea that fitness landscapes/natural laws can themselves exhibit "specified complexity"? According to what Wein calls the "chance-elimination method", the amount of specified information is computed according to the probability that a specified event would occur relative to known natural laws. Therefore it would be meaningless to talk about those laws themselves exhibiting specified complexity, unless you know of meta-laws that give various possible laws of nature different probabilities. Wein has surmised that Dembski has a separate "uniform probability method" which automatically computes specified complexity assuming all events in the phase space are equally probable (and I note that other people, such as jazzraptor, have independently interpreted his fitness-landscape arguments in the same way). But in his original response Dembski was pretty evasive about whether he actually has such a uniform-probability method (I talked about this more in <a href="http://www.arn.org/cgi-bin/ubb/ultimatebb.cgi?ubb=get_topic;f=13;t=000062" target="_blank">Wein's response to Dembski up on talkorigins</a>). So does he or doesn't he? None of Dembski's appeals to the NFL theorem would make much sense if he didn't, since the NFL theorem assumes all fitness landscapes are equally probable. In fact, if Dembski has no way of computing specified complexity besides the chance-elimination method, very little of chapter 4 of NFL would make sense either.
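(To put Wein's distinction in symbols--my paraphrase, not notation Dembski himself commits to: the chance-elimination method measures the information in an event E against a concrete chance hypothesis H, while a uniform-probability method would treat every point of the phase space Omega as equally likely:

$$ I(E) \;=\; -\log_2 P(E \mid H) \qquad \text{versus} \qquad I(E) \;=\; -\log_2 \frac{|E|}{|\Omega|} $$

Only the first is well-defined without extra assumptions; the second requires privileging the uniform measure on Omega.)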

Note also that in his latest response Dembski writes:

Quote:
The monolith in Stanley Kubrick's 2001: A Space Odyssey (a homogeneous rectangular solid) exhibits specified complexity. The sphericity of the stars does not.
This comment only makes sense if you assume the relevant probability distribution used in computing specified complexity should be the one induced by natural laws. Under a uniform probability distribution on the set of all possible configurations of particles within a certain volume of space, a spherical star exhibits vastly more than 500 bits of specified information. So why does he claim that spherical stars do not contain specified complexity, while at the same time claiming that if complicated multicellular organisms are a probable outcome of natural laws operating for billions of years, that just shows that specified complexity was being "smuggled in" through the fitness landscapes? Should critics trust William Dembski's spidey-sense about what contains specified complexity and what doesn't, or does he have some set of well-defined criteria that he's just not telling us?
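(A quick sanity check on the numbers--my arithmetic, not Dembski's: "500 bits" and his "universal probability bound" of 1 in 10^150 are essentially the same threshold, which is why the two get used interchangeably.)

Code:
# 500 bits of specified information corresponds to an event of
# probability 2^-500, which sits just below the 10^-150 bound.
print(2.0 ** -500)    # ~3.05e-151
print(10.0 ** -150)   # 1e-150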

2. Dembski apparently sees his "Law of Conservation of Information" as a great achievement--he offers it as a candidate for a "fourth law of thermodynamics"--but it seems somewhat trivial on closer analysis. See Wein's discussion <a href="http://www.talkorigins.org/design/faqs/nfl/replynfl.html#s5p1" target="_blank">here</a>. Is Wein correct that it is basically just a restatement of the idea that events which have a low probability under all relevant chance hypotheses must be due to design? In a long <a href="http://www.arn.org/cgi-bin/ubb/ultimatebb.cgi?ubb=get_topic;f=1;t=001416;p=3" target="_blank">discussion</a> with jazzraptor a while ago, I claimed that the LCI basically just says that events which are extremely improbable relative to a set of initial conditions and the laws acting on those conditions are, in fact, highly unlikely to occur naturally (i.e., without some intelligence coming in and violating those laws). Jazz at the time said "your interpretation of LCI is indeed tautology. LCI is useless and trivial when formulated in the way that you misunderstand it." So here we have an IDist agreeing that if Wein's and my interpretation is correct, LCI is pretty trivial. So, is it or isn't it?

3. What exactly is the point of "specified complexity" in the first place? As Wein points out, it seems to be an unnecessary middleman--instead of:

all relevant chance hypotheses eliminated --> specified complexity --> design

...one could simply say:

all relevant chance hypotheses eliminated --> design

Even if one thinks there is some philosophical merit in his ideas, it seems to me that all his attempts at technical formalism tend to confuse the issues or make hand-wavey arguments sound much more rigorous than they actually are. In his most recent response Dembski conceded that the notion of "complexity" used in calculating specificational resources (note that this is entirely different from the 'complexity' in 'specified complexity'), which in turn are used to compute probability bounds for inferring design (such as the 'universal probability bound' of 10^150), is basically just an intuitive notion which is impossible to formalize. Yet in NFL (p. 76) he makes the function quantifying "complexity" sound much more formal:

Quote:
The tractability condition employs a complexity measure phi that characterizes the complexity of patterns relative to S's background knowledge and abilities as a cognizer to perceive and generate patterns. Such a measure is objectively given (relative to S) and determined up to monotonic transformations.
And in section 2.4, p. 61, he used "phi" as a symbol for the well-defined notion of Kolmogorov complexity, which is widely used in algorithmic information theory. Anyone who wasn't reading extremely carefully would probably get the impression that his "complexity measure phi" was some sort of formal mathematical function, but it turns out that phi is basically totally subjective and reduces to an observer saying "hmm, this pattern seems a lot more complicated than that one."
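(For contrast, Kolmogorov complexity has a perfectly precise definition: the length of the shortest program p that makes a fixed universal machine U output the string x,

$$ K_U(x) \;=\; \min\{\, |p| : U(p) = x \,\}. $$

Nothing like that precision survives in Dembski's phi.)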

If I'm right that Dembski has no formal method for choosing probability distributions independent of actual probabilities induced by laws of nature, then this same sort of critique would apply to all his arguments about the NFL theorem and fitness landscapes, which sound fairly technical but would basically reduce to the intuition that there's something fishy about the idea that natural laws could be conducive to the evolution of complicated life forms without some intelligent being overseeing them. Likewise basically all his arguments about "complex specified information" (including such things as the 'law of conservation of information'), which to many IDist observers seem to be complicated technical arguments involving new results from information theory and statistics, would seem to turn out to hide pretty simple intuitive ideas about improbable events being due to design.

This is the main beef that people who are well versed in math seem to have with Dembski's ideas--not so much that they think the core ideas are complete nonsense, but that they think that Dembski is making trivial statements using a mess of convoluted mathematical terminology that make people favorably inclined towards ID think his ideas are much more sophisticated and technical than they actually are. So Dembski's responses, which have concentrated almost exclusively on the philosophical underpinnings, really seem to miss the point.
Jesse is offline  
Old 06-09-2002, 11:39 AM   #6
Veteran Member
 
Join Date: Aug 2001
Location: Los Angeles
Posts: 1,427

This is getting way over my head.

The IDists have succeeded on one point at least: it all sounds a lot more intellectual and scientific than using quotes from Genesis.
bluefugue is offline  
Old 06-09-2002, 05:58 PM   #7
Veteran Member
 
Join Date: Dec 2002
Location: Gatorville, Florida
Posts: 4,334

You know, I'm not at all certain that I understand the measurement system that Dembski is proposing (see <a href="http://www.iscid.org/papers/Dembski_WeinsFantasy_060702.pdf" target="_blank">HERE</a>), but I am struck by the following thought: suppose, for just a moment, we take Dembski's ideas seriously. And then, just suppose we take my recent essay, <a href="http://www.infidels.org/library/modern/bill_schultz/ID-not.html" target="_blank">Were Humans Intelligently Designed? Science Says No!</a> as a roadmap for looking at the human genome as a complex thing to be analyzed.

In reading Dembski's paper, at one point he uses an example from cryptography. He argues that the encrypted text "NFUIJOLT JU JT MJLF B XFBTFM" is more properly translated as "METHINKS IT IS LIKE A WEASEL" rather than "PROGRESS IS AN IDEA I ESTEEM" because the former translation is a simple Caesar-cypher while the latter translation could only be accepted if a one-time pad were being used. Thus, the "intelligent design" is seen to be the simplest explanation. The other possibility would be, according to Dembski, mere random characters and not in any way "designed."
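(For what it's worth, the ciphertext in his example really is a one-position Caesar shift; here is a minimal Python sketch--my own illustration, not anything from Dembski's paper--that verifies the decryption:)

Code:
def caesar_shift(text, shift):
    """Shift each letter by `shift` positions; leave spaces alone."""
    return "".join(
        chr((ord(c) - ord("A") + shift) % 26 + ord("A")) if c.isalpha() else c
        for c in text
    )

print(caesar_shift("NFUIJOLT JU JT MJLF B XFBTFM", -1))
# -> METHINKS IT IS LIKE A WEASEL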

Anyway, it seems to me that the more we know about the human genome and its workings, the more it frankly appears to be the second sort of cypher-text rather than the first. The end result, in my view, is that somebody smart could take this argument of Dembski's and turn it against him by showing that, according to Dembski's own criteria, the human genome is a product of random processes rather than a product of intelligent design.

At least, it looks that way to me from a quick read of Dembski's rantings. Jesse, your comments, please, since you seem to understand Dembski's points better than I do...

== Bill
Bill is offline  
Old 06-09-2002, 07:54 PM   #8
Moderator - Science Discussions
 
Join Date: Feb 2001
Location: Providence, RI, USA
Posts: 9,908

Quote:
In reading Dembski's paper, at one point he uses an example from cryptography. He argues that the encrypted text "NFUIJOLT JU JT MJLF B XFBTFM" is more properly translated as "METHINKS IT IS LIKE A WEASEL" rather than "PROGRESS IS AN IDEA I ESTEEM" because the former translation is a simple Caesar-cypher while the latter translation could only be accepted if a one-time pad were being used. Thus, the "intelligent design" is seen to be the simplest explanation. The other possibility would be, according to Dembski, mere random characters and not in any way "designed."
Actually Dembski was addressing a fairly obscure point in the section you're mentioning. First of all, he has the idea of "specified complexity," which just means an event that matches some preexisting pattern (specification) and is also very improbable. If the event is improbable enough according to all naturalistic hypotheses, he says we should conclude design.

But how to decide what "improbable enough" is? Here he introduces the notion of a probability bound based on "replicational resources" and "specificational resources". The replicational resources refer to the number of trials you have to hit a specification--if you flip a coin 10 times, the chance that your sequence will match the specification "10 heads in a row" is pretty small, but if you flip it a million times, the chances are quite good that you'll hit that specification at least once. That's what the "replicational resources" are supposed to quantify. Meanwhile, the more patterns (specifications) you have in your mind to begin with, the more likely it is that a given sequence of 10 coin flips will match at least one of them--it could match "10 heads", "10 tails", "alternating heads and tails", "heads only on prime flips," etc. That's what the "specificational resources" are meant to keep track of--the number of specifications you have to choose from.
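(A toy calculation--my illustration, not Dembski's--shows how fast replicational resources eat up improbability. In Python, the chance of matching "10 heads in a row" at least once as the number of independent 10-flip trials grows:)

Code:
# One 10-flip trial matches "10 heads in a row" with probability 1/1024.
p = 0.5 ** 10
for trials in (1, 100, 10_000, 1_000_000):
    print(trials, 1 - (1 - p) ** trials)
# climbs from ~0.001 at one trial to essentially 1 by a million trials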

Finally, to add to all this, he introduces the notion of a "complexity measure" phi on the set of all specifications. Basically the idea is that an extremely simple pattern is more significant than some extremely convoluted pattern, since there are a lot more ways of finding a convoluted pattern in some set of events than simple ones. If I roll some dice and get the sequence 125244563 I could come up with something like "12 is the month of my grandmother's birthday, 244 is the size of my high school class + my grade school class, 4 is the number of goldfish I have, etc.", but this seems less significant than if 125-24-4563 happens to be my exact social security number.

So, Dembski says that when figuring out the specificational resources, we should only look at the set of all specifications with "complexity" smaller or equal to the "complexity" of the particular specification the event actually matched. Note that this notion of "complexity" is totally unrelated to the complexity in "specified complexity", where complexity is just a codeword for improbability.

Once you have both the specificational resources and the replicational resources figured out (it's a totally subjective process, but never mind), then you can figure out the probability bound. If your specified event had a probability smaller than that, you're justified in concluding design. Dembski also figures out the maximum number of specifications that could be dreamed up in the entire history of the universe, and uses that to get a "universal probability bound" of 1 in 10^150. If any specified event occurs with a probability less than this, he says we can safely conclude design. In most of his arguments he simply ignores the issue of specificational and replicational resources and just sticks with this universal probability bound, so for the most part you can ignore this stuff too and still understand what he's saying.
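(If I recall his derivation correctly, that universal bound comes from multiplying three cosmological estimates--about 10^80 elementary particles in the observable universe, about 10^45 Planck-time state changes per second, and about 10^25 seconds of cosmic history:

$$ 10^{80} \times 10^{45} \times 10^{25} \;=\; 10^{150} $$

--so any specified event less probable than 1 in 10^150 is supposed to be beyond the reach of chance anywhere, ever.)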

But the whole thing about the simple cipher vs. the complicated one is an exception, since there he was trying to illustrate his notion of a "complexity measure" on the set of all specifications, which is needed to compute the specificational resources. I don't think it would relate much to the question of whether DNA can be seen as a cipher, and if so how complicated it is. Dembski is usually interested in the probability that a particular structure in the phenotype will evolve, not that a particular sequence of DNA will evolve.

[ June 09, 2002: Message edited by: Jesse ]
Jesse is offline  
Old 06-10-2002, 04:35 PM   #9
Veteran Member
 
Join Date: Dec 2002
Location: Gatorville, Florida
Posts: 4,334

For those who entered this discussion late, here are some interesting links:
  • <a href="http://www.talkorigins.org/design/faqs/nfl/" target="_blank">Wein's actual review</a> (about 37,000 words worth) of Dembski's book, No Free Lunch
  • <a href="http://www.designinference.com/documents/05.02.resp_to_wein.htm" target="_blank">Dembski's first reply to Wein</a>, Obsessively Criticized but Scarcely Refuted: A Response to Richard Wein
  • <a href="http://www.talkorigins.org/design/faqs/nfl/replynfl.html" target="_blank">Wein's rebuttal to Dembski's Response</a> Response? What Response?
  • <a href="http://www.iscid.org/papers/Dembski_WeinsFantasy_060702.pdf" target="_blank">Dembski's final rebuttal</a>, The Fantasy Life of Richard Wein: A Response To A Response
  • <a href="http://www.math.uwaterloo.ca/~shallit/nflr3.pdf" target="_blank">Jeffrey Shallit's separate and distinct trashing of Dembski's No Free Lunch</a>
There are many other links on this topic on the main page of <a href="http://www.talkorigins.org/design/faqs/nfl/" target="_blank">Wein's review</a> (see the box in the top-right corner).

== Bill
Bill is offline  
Old 06-10-2002, 04:51 PM   #10
Veteran Member
 
Join Date: Dec 2002
Location: Gatorville, Florida
Posts: 4,334

I like this paragraph of <a href="http://x" target="_blank">Wein's conclusion</a>:
Quote:
Specified complexity (CSI) is not a marker of intelligent design. If specified complexity is determined according to the uniform-probability interpretation, then natural processes are perfectly capable of generating it. If it is determined by the chance-elimination method, then specified complexity is just a disguise for the god-of-the-gaps argument.
Yes, indeed! The whole business of Intelligent Design amounts to nothing more than a god-of-the-gaps argument. I couldn't have said it better myself.

== Bill
Bill is offline  
 
