FRDB Archives

Freethought & Rationalism Archive

The archives are read only.


Go Back   FRDB Archives > Archives > IIDB ARCHIVE: 200X-2003, PD 2007 > IIDB Philosophical Forums (PRIOR TO JUN-2003)
Old 02-24-2003, 03:51 PM   #1
Regular Member
 
Join Date: Feb 2003
Location: Amman, Jordan
Posts: 258
Default No Free Lunch

William Dembski and Michael Behe are the two leading figures in the so-called Intelligent Design (ID) movement. They claim that neo-Darwinism cannot explain the origin and subsequent evolution of life, which can only be explained by ID.
Dembski published his latest book, "No Free Lunch," in which he argues that the information in nature cannot arise de novo: there has to be a designer in order for information to exist; it does not just emerge.
He also uses the concept of irreducible complexity for certain structures in biological organisms and argues that they could not have arisen through multiple steps acted upon by natural selection.
Personally, I find the mathematical arguments elegant but fundamentally flawed. The word "information" as used by Dembski and Behe is defined according to Shannon's information theory, which itself invokes design. It is like a self-reinforcing argument. The basic assumptions they make have nothing to support them.
Would anyone like to discuss the No Free Lunch hypothesis?
MyKell is offline  
Old 02-24-2003, 04:21 PM   #2
Veteran Member
 
Join Date: Jul 2002
Location: East Coast. Australia.
Posts: 5,455
Default

The no free lunch hypothesis is a thought experiment that contradicts the well-known results of real-life experiments and naturally occurring examples. This is one of the calling cards of a quack scientist.
Doubting Didymus is offline  
Old 02-24-2003, 04:32 PM   #3
Veteran Member
 
Join Date: Mar 2002
Location: anywhere
Posts: 1,976
Default

Quote:
MyKell: The use of the word information by Dembski and Behe is defined according to Shannon's information theory, which itself invokes design.
Not quite. Shannon's information metric is itself only a descriptor; it makes no claim about whether the process being described was designed. The random snow from a bad channel on your TV screen has "information" (and, paradoxically, it has maximal Shannon information).

But in general you're right. Dembski's thesis is an argument from ignorance. You've read this Talk.Origins FAQ by Richard Wein, right?

EDIT: fixed link
Principia is offline  
Old 02-24-2003, 06:05 PM   #4
Regular Member
 
Join Date: Feb 2003
Location: Amman, Jordan
Posts: 258
Default

Quote:
Originally posted by Principia
The random snow on a bad channel in your TV screen has "information" (and paradoxically it has maximal Shannon information).

I thought that, according to Shannon's information theory, information was the reduction of uncertainty among a set of possible alternatives. Therefore, the snow on a bad channel on my TV screen would have minimal information relative to both the observer and the TV.
The way Dembski and Behe use this definition to invoke design is by saying that some force must do the reducing of uncertainty. Or at least that was my understanding.
Thanks a lot for the link you sent me; this is exactly what I was looking for.
MyKell is offline  
Old 02-24-2003, 06:41 PM   #5
Veteran Member
 
Join Date: Mar 2002
Location: anywhere
Posts: 1,976
Default

MyKell,

Shannon's information metric is typically defined as:
Code:
H = - sum p_i * log p_i, for all elements i = 1 to N
It can be mathematically demonstrated (e.g., using Lagrange multipliers) that this term, H, is maximized if and only if all the probabilities are identical -- that is, when p_i = 1/N for all elements i. It is minimized when one of the elements has probability 1 and the rest have probability 0. So a process which, for example, produces only one output has minimal information. And random snow, in which every pixel intensity presumably has equal probability, would have maximal information.
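The two extremes can be checked numerically. Here is a minimal Python sketch (the function name is my own, not anything from the literature) computing H in bits:

```python
import math

def shannon_entropy(probs):
    # H = -sum(p_i * log2(p_i)); zero-probability terms contribute nothing.
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Uniform distribution over N = 4 outcomes ("random snow"): maximal, log2(4) = 2 bits.
print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))  # -> 2.0

# A process that always produces the same output: minimal, 0 bits.
print(shannon_entropy([1.0, 0.0, 0.0, 0.0]))  # -> 0.0
```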
Principia is offline  
Old 02-24-2003, 06:53 PM   #6
Regular Member
 
Join Date: Feb 2003
Location: Amman, Jordan
Posts: 258
Default

Thanks for the note, that cleared up some of the confusion.
I think I was confused about how information content relates to its complexity. Obviously, they are not related. The chaotic pattern of snow on a TV screen has maximum information content but very low complexity, right?
MyKell is offline  
Old 02-24-2003, 07:04 PM   #7
Veteran Member
 
Join Date: Mar 2002
Location: anywhere
Posts: 1,976
Default

This is a difficult question, since the quantification of complexity is a field of research unto itself. Complexity in certain senses (e.g., Kolmogorov-Chaitin) measures the "size" of the minimal process required to generate a given output. But these measures are notoriously difficult to compute. I'd suggest doing a Google search on the topic, or perhaps one of the more versed lurkers will pop in (RBH or lpetrich, you there?).
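One practical stand-in for Kolmogorov-Chaitin complexity (a rough upper bound only, and my own illustration rather than anything from Dembski) is the length of a compressed encoding of the output:

```python
import random
import zlib

def compressed_size(data: bytes) -> int:
    # DEFLATE output length: a crude upper-bound proxy for algorithmic complexity.
    return len(zlib.compress(data, 9))

random.seed(0)
noise = bytes(random.randrange(256) for _ in range(1000))  # like TV snow
pattern = b"ab" * 500                                      # regular string, same length

# Noise is nearly incompressible; the regular pattern shrinks dramatically.
print(compressed_size(noise) > compressed_size(pattern))  # -> True
```

Note the contrast with Shannon's measure: the same random snow that maximizes Shannon entropy is also maximally hard to compress, so "complexity" comes out very differently depending on which formalism you pick.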

The main twist Dumbski offers to this field is the notion of "specified" complexity. And the problem he is trying to circumvent, imo, is the common [though mistaken] conflation of information measures and semantic content. So, according to him, design is real if it is complex (which to him merely means that it is improbable to occur by chance and natural laws) and if it is specified (i.e., that it has "meaning" in some subjective sense). You're probably better versed in Dumbskian than I am at this point, since you recently finished reading NFL. So if I got Dumbski wrong, I don't care.
Principia is offline  
Old 02-24-2003, 09:13 PM   #8
RBH
Contributor
 
Join Date: Aug 2002
Location: Ohio
Posts: 15,407
Default Complexity Measures

IIRC, in The Design Inference Dembski started out using "specified improbability," but shifted to "specified complexity" with the same meaning as improbability. I wish he had stuck with the improbability locution, but "improbable" ain't as sexy as "complex," and he couldn't (unjustifiably) piggyback on the connotations of "complex" if he used "improbable."

For some background on complexity measures, see the links suggested here. They're down close to the bottom of the page.

RBH
RBH is offline  
Old 02-25-2003, 05:34 AM   #9
Contributor
 
Join Date: Feb 2002
Location: With 10,000 lakes who needs a coast?
Posts: 10,762
Default

Kent Hovind could testify that not only can you get a free lunch, but you don't have to pay taxes on it.
Godless Dave is offline  
Old 02-25-2003, 08:48 AM   #10
Veteran Member
 
Join Date: Jan 2001
Location: Santa Fe, NM
Posts: 2,362
Default

Quote:
Originally posted by Doubting Didymus
The no free lunch hypothesis is a thought experiment that contradicts the well known results of real life experiments and naturally occuring examples. This is one of the calling cards of a quack scientist.
Not true. The NFL theorem is a theorem that establishes some counter-intuitive (but entirely true) results about black-box optimizers.

Dumbski plays fast and loose with the interpretation of the theorem (for example, tacking on the assertion that it somehow makes evolutionary algorithms infeasible), but that's not the theorem's fault.
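For a concrete (if toy) illustration of what the theorem does say: average any two non-repeating black-box search strategies over *all* possible functions on a domain, and their performance is identical. A minimal Python sketch of this idea (my own construction, not Wolpert and Macready's formulation), using all 16 functions from a 4-point domain to {0, 1}:

```python
from itertools import product

def queries_to_find_best(f, order):
    # Black-box queries until an output of 1 (the best possible value) is seen;
    # cap at the domain size if the function never produces a 1.
    for step, x in enumerate(order, start=1):
        if f[x] == 1:
            return step
    return len(order)

# All 2^4 = 16 functions f: {0,1,2,3} -> {0,1}, encoded as tuples of outputs.
all_functions = list(product([0, 1], repeat=4))

strategy_a = [0, 1, 2, 3]  # left-to-right sweep
strategy_b = [3, 1, 0, 2]  # an arbitrary different ordering

avg_a = sum(queries_to_find_best(f, strategy_a) for f in all_functions) / len(all_functions)
avg_b = sum(queries_to_find_best(f, strategy_b) for f in all_functions) / len(all_functions)
print(avg_a, avg_b)  # -> 1.875 1.875: identical averages, as NFL predicts
```

The point, of course, is that biological fitness landscapes are not uniformly sampled from "all possible functions," which is exactly where Dembski's application of the theorem goes wrong.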
Undercurrent is offline  
 
