Wednesday, November 18, 2009

Paper Rebutting Dembski Finally Out

Back on my previous sabbatical, in 2001-2, I spent a couple of months reading William Dembski's book, No Free Lunch, which he was kind enough to send me. I chose to do that for a number of reasons: first, I was interested to see if his claims about a mathematical refutation of Darwinism were true; second, a sabbatical is the time to tackle some unusual project you don't usually have time for; and third, I have an interest in pseudoscience and pseudomathematics. Reading it led to some fun discussions with Wesley Elsberry, and we eventually produced a long, 54-page refutation of many of Dembski's claims.

But then, what to do with it? I had heard Dembski and Ruse were co-editing a volume, so I briefly entertained the idea of submitting it for inclusion there. But I worried that Dembski would refuse it because the paper was sharply critical of his work, and after talking to Ruse I had second thoughts and decided to look for another venue. We chose a journal whose subject matter included biology and philosophy, but the paper was eventually rejected -- not because of the quality of the paper, but because the referees felt that spending 54 pages to debunk what they perceived as anti-evolution crackpottery was not a good use of their journal's space.

Finally, we were invited to submit the paper to a special issue of the journal Synthese, and we did so. The paper went through multiple rounds of refereeing, with the referees suggesting that more and more be cut. Now that it has finally appeared, it is down to a measly 34 pages. Luckily the long version is still available online.

If you can't read the Synthese version because you don't have a subscription, just write me and I'll be happy to send you a copy.

This is the longest interval I've ever had between finishing a paper (2002) and the time it appeared (2009). And it's likely to be my only paper in a philosophy journal. I predict that the intelligent design community will continue to ignore all the criticisms (which have been available to them for years) and continue to pretend that CSI is actually a coherently-defined entity, and that the "law of conservation of information" holds. I predict lots of breast-beating, and excuses for not addressing our criticisms, but no response that deals forthrightly with all the errors we found in Dembski's work.

22 comments:

DiEb said...

I'd love to read the condensed version, too. "I predict lots of breast-beating, and excuses for not addressing our criticisms, but no response that deals forthrightly with all the errors we found in Dembski's work." - indeed, exactly my experience.

Frank said...

"I predict that the intelligent design community will continue to ignore all the criticisms ... but no response that deals forthrightly with all the errors we found in Dembski's work"

My prediction is slightly different. In my experience, no rebuttal is rebuttal-proof. What will happen is that Dembski, or a cohort, will find some less-than-excellent portions of your paper, attack those, and all their readers will think that he's dismissed the challenge.

Ewan said...

Professor William A Dembski has been called the Isaac Newton of information theory. I trust what he has to say.

http://www.youtube.com/watch?v=TtsjjeIdxdo

http://www.youtube.com/watch?v=uZ4gAbc4z0E

Jeffrey Shallit said...

Ewan:

You should certainly trust everything that William Dembski has to say. After all, he is a leading scientist and mathematician.

andrew said...

Congratulations! Loved the original, and look forward to reading the condensed version.

Blake Stacey said...

Isaac Newton was an angsty teenager who grew up to revolutionize science and fight crime. Dembski has the angst part down, maybe. . . .

paul01 said...

Numbr3s!

Anonymous said...

7.1.2 mentions triangular ice crystals.

Do you have some information on these?

TomS

Jeffrey Shallit said...

Tom:

http://tinyurl.com/5op2v

dete said...

Seems to me there is an interesting flaw in section 10.1 (I'm looking at the talkreason link, if the numbering is different between versions). I don't feel it invalidates the rest of the paper, and it sure as hell doesn't validate Dembski, but it raises an interesting point.

As I understand it "random data" has very high Kolmogorov complexity. (Indeed, this seems to be the primary motivation for Dembski to introduce CSI; he wants a formal measure of information which represents what people informally refer to as "meaning". Clearly random data isn't it.) And yet in section 10.1, you very cavalierly introduce a function that operates randomly.

I don't need to tell you that, so far as we know, randomness isn't "computable". Indeed, if you look at descriptions of random number generators (such as Schneier's Yarrow), they talk about randomness in the same sorts of terms used in information theory: Such-and-such value has n bits of randomness.

In short:
- Random data IS information
- Your function is random
- Your function must have a hidden input which is the source of the randomness
- You are "smuggling in CSI"

Jeffrey Shallit said...

Dete:

No, I think your objection is without merit. We get randomness "for free" from quantum mechanics -- for example, from radioactive decay. There is at least one site on the internet that produces truly random numbers from radioactive decay; I think it is even referenced in one version of our paper.

Richard Emmanuel Jones said...

I interviewed him the other day - he seemed alright.

http://richardemmanueljones.blogspot.com/2009/11/beneath-sheets-of-cnwch-y-craig-above.html

dete said...

Just because you get the input via hardware doesn't mean it's not an input!

I've been thinking a bit more about this since I posted last night, and I think that Dembski might be right. Right in a narrow sense, while being so very, very wrong in the broader sense.

If you consider randomness information, then it may well be the case that you can't create information from nothing.

This statement (if true) obviously isn't very devastating to evolution since nature has no shortage of random inputs. But in a purely theoretical sense, it seems possible and even likely -- and pretty interesting...

Jeffrey Shallit said...

Dete:

Dembski never says, to the best of my knowledge, that "you can't create randomness from nothing".

It's true that deterministic processes are worse at creating information than random ones. But even deterministic processes can create information. For example, consider the transformation on strings that maps a string x to the string xx. Iterating this transformation can be proved to generate strings of arbitrarily large Kolmogorov complexity.
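The doubling map is trivial to state in code; here is a minimal Python sketch. (The code only demonstrates the transformation itself -- the claim about growing Kolmogorov complexity is a theorem, not something a program can verify, since Kolmogorov complexity is uncomputable.)

```python
def double(x: str) -> str:
    """The deterministic transformation mapping a string x to xx."""
    return x + x

# Iterating from a one-symbol seed: the length of the n-th iterate
# is 2^n, and the Kolmogorov complexity of the iterates grows without
# bound (roughly like log n, since n itself must be encoded to
# reconstruct the string).
s = "a"
for _ in range(5):
    s = double(s)
print(len(s))  # 32
```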

Dembski has added exactly nothing to our knowledge of randomness.

Alex said...

"But even deterministic processes can create information. "

/"useful"/ information?

Jeffrey Shallit said...

Alex: define "useful" information.

Alex said...

It's tricky to define. It's like defining pornography. It might be a case of "you know it when you see it," but I'm not sure. http://en.wikipedia.org/wiki/I_know_it_when_I_see_it
But of course there are many things in science that are also tricky to define.

dete said...

Re: Dembski on randomness

Oh, heavens no: Dembski knows that finding natural sources of randomness isn't a problem. That's why he goes to so much trouble to create a supposed measure of "information" which specifically excludes randomness. He fails of course, but it's clear he doesn't really understand what he's talking about.

Re: Information from deterministic functions

I don't think I agree, since the added information is in the applied function. Consider this: I can create a deterministic function, f(x) defined over {0, 1}:

f(0): 0
f(1): Complete works of Shakespeare

Now, clearly "the complete works of Shakespeare" has a greater information content than the binary digit "1". But the extra information was encoded in the function ("smuggled in" to hear Dembski tell it).

So, for all deterministic functions f and all values x in the domain of f, if there were a valid information metric M defined for an encoding of f and for values in the range of f:

M(f) + M(x) >= M(f(x))

Which I believe is true, and which is a key component of Dembski's argument.

As I've said before, this is hardly a nail in evolution's coffin since I believe randomness to BE information and randomness is in no shortage in the real world (you know... where evolution happened).

Jeffrey Shallit said...

Re: M(f) + M(x) >= M(f(x))

Nope - this is wrong. Consult any text on Kolmogorov complexity. All you know is that M(f) + M(x) is within O(1) of M(f(x)), but it might be greater, less, or equal.
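One direction of the gap is easy to illustrate. The sketch below uses compressed length as a crude, computable stand-in for Kolmogorov complexity (true Kolmogorov complexity is uncomputable, so this is only suggestive): a constant function throws away all the information in its input, so M(f) + M(x) can vastly exceed M(f(x)).

```python
import os
import zlib

def approx_K(b: bytes) -> int:
    """Crude computable proxy for Kolmogorov complexity:
    length of the zlib-compressed string. Only suggestive --
    the true quantity is uncomputable."""
    return len(zlib.compress(b, 9))

x = os.urandom(1000)      # effectively incompressible input: large M(x)
f = lambda b: b"0"        # constant function: f(x) is trivial for any x

# The function discards all the information in x, so the proxy for
# M(f(x)) is tiny while the proxy for M(x) alone is already large.
print(approx_K(x) > approx_K(f(x)))  # True
```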

dete said...

Interesting and unexpected! I guess I have some reading to do... :-)

Anonymous said...

I'm sorry but you can't contradict Dembski, even if he has a flaw here or there, intelligent design is evident, you scientists have had to change lots of statements as things become more advanced, so whether you like it or not, Intelligent Design is present and you just don't want to see it.

Jeffrey Shallit said...

"I'm sorry but you can't contradict Dembski, even if he has a flaw here or there, intelligent design is evident, you scientists have had to change lots of statements as things become more advanced, so whether you like it or not, Intelligent Design is present and you just don't want to see it."

This sounds an awful lot like "My team is better! So there!"

If you have any criticisms of my paper, why not present them, instead of performing mindless posturing?