Science often depends on experiments, and experiments are notoriously prone to error. Even if the experiment's results are correct, the conclusions may be wrong. And even if the experiment and conclusions are correct, they may represent only part of the truth. Sometimes scientists are simply wrong, and they need to admit it. Mathematicians don't do experiments, but a similar obligation falls on them.
Most mathematicians and scientists recognize this obligation. In 1989, for example, the mathematician I. J. Good published a corrigendum to one of his previous papers. This wouldn't be noteworthy except that the paper he was correcting was published in 1941, nearly 50 years before.
Unfortunately, it doesn't always work that way. A classic case is that of René Blondlot (1849-1930), a French physicist who believed he had discovered a new kind of radiation, which he called "N-rays" in honor of Nancy, his native city. You can read about this case in Walter Gratzer's book The Undergrowth of Science, and I am following Gratzer's account here.
N-rays, Blondlot said, had all sorts of unusual properties. They could go through paper, wood, quartz, and mica, but not water or rock salt. They were emitted by animal and plant tissue. When N-rays hit the human eye, people could see better. Some other physicists confirmed his results, and found other strange properties of the rays. But Blondlot's results were often based on highly subjective data, such as judging by eye whether a line of glowing phosphor had grown brighter or dimmer.
However, others failed to confirm his results, notably German physicists such as Heinrich Rubens, Otto Lummer, and Paul Drude, and English physicists such as Lord Rayleigh, Lord Kelvin, and William Crookes. Nationality began to influence the controversy: one Blondlot supporter later claimed the Germans could not detect the effect of the rays because "their sensibilities were inferior and were further blunted by their brutish diet of beer and sauerkraut" (Gratzer, p. 20).
Ultimately, N-rays met their scientific death when R. W. Wood, a Johns Hopkins physicist, visited Blondlot's laboratory. In a 1904 letter published in Nature, he made a devastating revelation. While observing an experiment in a dark room in which the N-rays were refracted by an aluminum prism, Wood surreptitiously removed the prism from the apparatus. The French experimenters, unaware of Wood's actions, went on describing exactly the same phenomena as before.
But Blondlot and many of his supporters did not concede. One French scientist, Turpain, attempted to reproduce Blondlot's results, failed, and submitted his results to Comptes rendus for publication. They were rejected by the editor, who said, "Your results can be explained simply by supposing that your eyes are insufficiently sensitive to appreciate the phenomena." (Gratzer, p. 21) Turpain replied, "If N-rays can only be observed by rare privileged individuals then they no longer belong to the domain of experiment." (ibid) When Blondlot died in 1930, his posthumous papers showed he continued to believe in and experiment on N-rays for many years after Wood's debunking.
The case of N-rays is a spectacular example of a scientist's failure to admit he was wrong. Even in this case, though, science's self-correction won the day. After Wood's debunking, N-rays vanished from the scientific literature.
Blondlot's tale is a cautionary one. By contrast, I offer a case where the proper behavior was displayed, from Richard Dawkins' 1996 Richard Dimbleby lecture:
A formative influence on my undergraduate self was the response of a respected elder statesman of the Oxford Zoology Department when an American visitor had just publicly disproved his favourite theory. The old man strode to the front of the lecture hall, shook the American warmly by the hand and declared in ringing, emotional tones: "My dear fellow, I wish to thank you. I have been wrong these fifteen years." And we clapped our hands red.
Admitting you are wrong is a basic part of the mathematical and scientific ethic. In the days of the Internet, mea culpas can be more public and more effectively distributed than ever before. For both of my published books, for example, I maintain public errata pages. The errata and addenda for my 1996 book now take up ten pages!
But not everyone agrees. Take two controversial books published in the last few years: A New Kind of Science by Stephen Wolfram, and No Free Lunch, by William Dembski.
Wolfram's book was published in 2002. Roughly speaking, the main thesis is that even very simple interactions give rise to complex phenomena that are hard to predict. Over 1280 pages, this thesis is developed and applied to many different areas, including mathematics, physics, economics, and biology; it is touted as a genuine revolution in science. Critics, generally speaking, have not been kind. (See here for a compendium of many different reviews.) I find the book interesting, with many fascinating digressions. The pictures are nice. But the importance of the main thesis is wildly overstated, and Wolfram never really gives a formal definition of "complex" that would satisfy a mathematician or physicist; rather, he relies on an informal definition of complexity based on appearance to the human visual system. If he did use a more formal definition -- let's say Kolmogorov complexity -- then his claims become incoherent, trivial, or wrong.
Dembski's book was also published in 2002. Dembski defines a new kind of complexity, which he calls "specified complexity" or "complex specified information". He then discusses properties of this measure, which he claims satisfies a law called "The Law of Conservation of Information", and concludes that specified complexity cannot be generated by natural causes. He then finds specified complexity in biological structures such as the [sic] bacterial flagellum, and concludes the flagellum cannot have arisen through natural causes. Needless to say, most reviewers have not been kind to Dembski either. (See here for some reviews.) Like Wolfram's, Dembski's definition of complexity suffers from subjectivity, as it depends critically on the knowledge base of the observer. When one tries to make the definition more precise, Dembski's claims become incoherent, trivial, or wrong.
The analogy between the work of Wolfram and Dembski is imperfect. Wolfram, for example, has a genuine record of achievement, winning a MacArthur "genius" grant, and creating the software system Mathematica. Many of his papers continue to be cited by scientists and mathematicians. By contrast, Dembski, though possessing numerous degrees, has had a negligible impact on mathematics and science.
Nevertheless, there is one thing they have in common, and this brings me back to the start of this entry. Neither Wolfram nor Dembski has seen fit to make available errata pages for their books. In this, they fail in their intellectual obligation. Back in October 2002, I sent Wolfram a list of various errata in his book. Eventually one of his assistants acknowledged the errors, but Wolfram has never made them public. (Some of them can be seen here, due to the persistence of Evangelos Georgiadis.) Considering the extensive web presence for A New Kind of Science, surely an errata page is not asking too much.
Similarly, in April 2002 I sent Dembski a review in which I pointed out many mistakes in No Free Lunch, but Dembski has publicly acknowledged only one of these errors, and it took him three years to do so. (My review later appeared in the journal BioSystems.)
Both Wolfram and Dembski seem to be taking a page from Canadian feminist Nellie McClung, who reportedly said, "Never retract, never explain, never apologize -- get the thing done and let them howl!" This might be a good motto for a social activist, but for a scientist or mathematician it is a dereliction of duty. If you want to be taken seriously when you're right, it's a good idea to be upfront about it when you're wrong.
22 comments:
Probability zero:
Shallit writes an article titled "Not admitting when you are a stalker"
LOL
Bob Park's Voodoo Science: The Road from Foolishness to Fraud has as a main theme the idea that pseudoscientists can't admit that they are wrong.
Good use of the N-ray example. I think it's a classic example of the power of beliefs dressed up in mathematical or scientific language.
Probability Zero: DaveScot writes a comment that isn't drivel.
Probability that I will be checking back here often to see what you're writing, One.
Hi! My algo prof last term (A.L.) mentioned that her husband wrote a paper on adding a coin to the Canadian coin system that would reduce the amount of change we carry around. I took note of that and decided to look it up when I had some free time. Now's the time. A quick Google search brought your UW page up, and from there I followed a link to your niece's blog, the first post of which informed me that you, in turn, started your own blog. Furthermore, it instructed the reader to go to your blog and leave a comment. Which is what I'm doing. :)
Now I shall get back to looking for the '18 cent coin' paper.
Cheers,
Anton.
I'm afraid this article falls into the "cute but wrong" category, at least in its analysis of Wolfram.
For a start, the claim that Wolfram relies on visual complexity is false.
Rather, Wolfram relies on the experimental fact that a large range of (computational) perceptual processes tend to - on average - agree on what is complex and what is not. See chapter 10 of the NKS book. This is not a minor result.
Furthermore, it is untrue that Wolfram's basic claims make no sense even if one operates under Kolmogorov complexity...
The first basic claim is that the world of simple programs can be systematically studied for science and mined for technology. This is exclusively an experimental issue with no connection to the theory of Kolmogorov complexity.
The second basic claim is that natural phenomena can be modeled with very simple programs. Again, no connection.
The third basic claim is that the process of doing science should be reformed to take into account experimental facts about computation. Here there is friction, and it is leading to some fruitful thinking -- for instance, the recent book Metamath by Chaitin (a co-inventor of said Kolmogorov-Chaitin complexity theory).
So at least on the level of intellectual content I think this comparison is misleading... and less interesting than the obvious comparison that can be made: That the core experimental results of NKS directly contradict Dembski's thesis.
Hi Jeffrey,
You won't get any argument from me when it comes to Wolfram and Dembski, but I don't think Dawkins is the right person to quote when it comes to admitting that you're wrong.
Actually, when Dave Scot isn't in his braying jackass mode, I find he has some interesting things to say. I let his comment through in the hope that he'll try to behave himself in the future, and provide some real content instead of insults.
I like Ed Brayton, and find him a very interesting commenter on a number of subjects. I don't, however, find his case against Dawkins very convincing. Read his piece and the comments, and you'll see that there is some genuine doubt about the controversy on both sides. I know from personal experience that events of even a few months ago can disappear or get changed in one's memory, and I think a lot of the controversy can be resolved by simply allowing for the possibility that Dawkins' memory was faulty. No dishonesty necessary.
I think Dawkins' eventual answer to the question about information content was unsatisfactory. He should have simply said that random mutations increase information content (the way it is understood by mathematicians and computer scientists) and be done with it.
Hi Jeffrey,
"No dishonesty necessary."
But your post wasn't about dishonesty but about admitting when you're wrong.
As for Matzke trying to find unanswered questions to raise about the incident, I don't think much of it. Sure, if Brayton had watched the tape for a different reason and only now, 6-7 years later, remembered that Dawkins' account didn't check out, I would have reason to doubt his memory. But the question of Dawkins' reply was the very thing Brayton and Morton were investigating. Implying that Brayton forgot which conclusion he reached back then strains my willingness to suspend disbelief.
But let's say that, out of general courtesy, we let Dawkins get the benefit of the doubt. Are you willing to extend the same courtesy to Dembski? Maybe he hasn't acknowledged the rest of the errors because in his mind, they aren't errors.
Hi, Krauze. I confess I don't quite see what your point is. What, precisely, should Dawkins have admitted he was wrong about? In any event, my piece was not about Dawkins, but about Dembski and Wolfram.
As for Dembski, I'm certainly willing to give him the benefit of the doubt about some issues. But the point of my post was not about any individual mistake; it was about the lack of an errata page for No Free Lunch.
Now that he's finally admitted that the centerpiece calculation of the book, in which he estimates the probability of the [sic] flagellum, is off by 65 orders of magnitude, wouldn't that merit an entry on an errata page? Or about his claim "Consider that the Smithsonian Institution devotes a room to obviously designed artifacts for which no one has a clue what those artifacts do." I already showed that was wrong. Wouldn't that merit an entry?
I think even if one restricts oneself to purely factual issues, there are many errors worth correcting in No Free Lunch. Wouldn't readers of that book benefit by having the errors publicly corrected?
This is the first time I ever saw Dembski's claim about a room full of artifacts of unknown purpose. I agree with the main point that his statement is clearly wrong and he should have corrected it, but something else catches my interest.
First of all, even if there were only several artifacts that allegedly backed up some sort of creationist claim, these would still need to be addressed.
Second, the fact that no purpose is known doesn't help the case for intelligent design particularly. The curators identified these as man-made artifacts, and I assume they had some good, scientific reason for doing so. Is it possible that, though the purpose was unknown, the method of manufacture was readily identified?
You might look at a piece of woodwork, for instance, and conclude that a lathe was used for some parts, or that dovetail joints were used. This could still leave you befuddled over what it was for, but convinced beyond a shadow of a doubt that it was made by humans of a particular historical period. I suspect that Smithsonian researchers would use such evidence to decide that they have found an artifact rather than some bogus philosophical argument.
anonymous: what fruitful thinking? chaitin has been at this for a long time [eg. see limits of mathematics] and i cannot really find anything that is particularly indebted to wolfram in his quest for omega.
Hi Jeffrey,
"I confess I don't quite see what your point is. What, precisely, should Dawkins have admitted he was wrong about?"
It's right there, in Brayton's post:
"Dawkins responded and said the following:
A. That he had never given an interview to the people who produced the tape
B. That he had never been asked that question by anyone in any interview, and
C. That he would never pause that long before answering a question, it wasn't his style
[...]
[The creationist producer Gillian Brown] sent me a package that included the signed contract for the interview, with Dawkins' signature, and the unedited footage of the entire interview so we could see exactly what transpired."
Besides, if you didn't think that Dawkins had said anything that was incorrect, what was the point of your earlier comment, trying to chalk it up to Dawkins' faulty memory?
"In any event, my piece was not about Dawkins, but about Dembski and Wolfram."
I know, but if you're going to apply high standards, you should apply them universally. Otherwise, you'll give the appearance that you're just engaging in partisan rhetoric.
"But the point of my post was not about any individual mistake; it was about the lack of an errata page for No Free Lunch."
Okay, let's apply this principle universally: How many books about evolutionary biology have errata pages?
"I think even if one restricts oneself to purely factual issues, there are many errors worth correcting in No Free Lunch. Wouldn't readers of that book benefit by have the errors publicly corrected?"
Sure. Let's take another example: In his book, Unweaving the Rainbow, Dawkins describes the molecule retinal as a "protein". As a purely factual issue, this is wrong. Wouldn't the readers of Dawkins' book also benefit by having this error publicly corrected?
oz: Try his new book Metamath.
But you miss the point anyway. It's not that Wolfram contributes greatly to Chaitin's work.
The point is Shallit claims that "If he did use a more formal definition -- let's say Kolmogorov complexity -- then his claims become incoherent, trivial, or wrong."
Yet here we have the coinventor of this "more formal definition" saying that Wolfram's ideas are interesting... precisely because they are different from his.
I'll trust Chaitin's claims about Chaitin's theory more than Shallit's. You can decide for yourself.
Hi Sabakunotora/Anton:
The "18-cent" paper can be found here:
http://www.cs.uwaterloo.ca/~shallit/papers.html
Thanks for your interest.
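For anyone curious about the kind of computation behind that paper, here is a small illustrative sketch of my own (it is not taken from the paper itself). It assumes a baseline coin set of {1, 5, 10, 25} cents, that every amount from 0 to 99 cents is equally likely, and that change is always made optimally by dynamic programming rather than greedily; under those assumptions it reports which single added denomination lowers the average coin count the most.

```python
# Sketch only: which extra coin denomination most reduces the average number
# of coins needed to make change? Assumes amounts 0..99 cents are equally
# likely and change is made optimally (dynamic programming, not greedy).

def avg_coins(denoms, limit=100):
    """Average number of coins in an optimal representation of 0..limit-1 cents."""
    INF = float("inf")
    best = [0] + [INF] * (limit - 1)
    for amount in range(1, limit):
        best[amount] = min(
            (best[amount - d] + 1 for d in denoms if d <= amount), default=INF
        )
    return sum(best) / limit

if __name__ == "__main__":
    base = [1, 5, 10, 25]  # assumed baseline denominations, for illustration
    print("baseline average:", avg_coins(base))
    # Try each possible extra coin and report the five that help most.
    scores = sorted((avg_coins(base + [c]), c) for c in range(2, 100) if c not in base)
    for score, coin in scores[:5]:
        print(f"add a {coin}-cent coin -> average {score:.2f}")
```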
Hi anonymous:
The claim of Wolfram that becomes questionable when "complexity" is Kolmogorov complexity is the claim that simple programs generate complexity.
I agree that simple programs can generate visually complex results, and I agree that simple programs can probably create relatively unpredictable results in a computational complexity sense. But by the definition of Kolmogorov complexity, a simple program P, when applied to an input x, is not going to generate an output with Kolmogorov complexity higher than the size of P, plus the size of x, plus a constant that doesn't depend on P and x.
Now it is true that one can, in principle, get arbitrarily high complexity by iterating a program P on an input x (consider the program that simply concatenates x to itself). But this isn't what Wolfram seems to be saying.
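To make that bound concrete, here is a minimal sketch of my own (not from Wolfram or from this post) in Python, using Rule 30 -- one of the elementary cellular automata featured prominently in A New Kind of Science -- as the "simple program". The generator below is only a few lines long, so the Kolmogorov complexity of the pattern after n steps is bounded by the size of this little program, plus roughly log n for the step count, plus a constant, no matter how intricate the printed triangle looks.

```python
# Sketch only: Rule 30 produces patterns that *look* complex, yet the program
# generating them is tiny, so the Kolmogorov complexity of its output is
# bounded by (program size) + (size of the inputs) + a constant.

def rule30_rows(width=101, steps=50):
    """Yield successive rows of Rule 30, starting from a single centered 1."""
    row = [0] * width
    row[width // 2] = 1
    for _ in range(steps):
        yield row
        row = [
            # Rule 30: new cell = left XOR (center OR right)
            row[(i - 1) % width] ^ (row[i] | row[(i + 1) % width])
            for i in range(width)
        ]

if __name__ == "__main__":
    for row in rule30_rows(width=79, steps=30):
        print("".join("#" if c else " " for c in row))
```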
I don't think Wolfram's definition of complexity as "it looks complex to me" (not a quote, of course) is terribly useful to science -- not until someone finds a way to measure this complexity in a well-defined way that researchers agree captures a useful notion of complexity. I don't think Wolfram has provided such a definition.
Hi, Krauze:
Sure, if Dawkins recollected incorrectly, and was later shown to be wrong, he should admit it. But even if Dawkins personally failed in this case, the quote I provided is still a good one, don't you think?
I quote Mencken from time to time, even though he was an anti-semite. Let's try to focus on the point of my piece, shall we?
You ask how many books on evolutionary biology have errata pages. Good question. Why don't you try to answer it? You could start here, where there are apparently errata pages for Ken Miller's books, although I couldn't load them.
As for applying standards universally, I think you're reading me a bit too literally. I don't claim that every writer of every book should have a web page with errata for that book. In particular, Dawkins, who has no web presence of his own, can hardly be expected to suddenly set up a web page with errata for his book. It would be good if he did, but writers have used alternative methods for years, such as maintaining a paper sheet with errors that can be sent to readers who request one. I hope Dawkins does correct the error, perhaps in a future edition of the book, and I hope he is kind to anyone who points out this mistake. Otherwise he'd be failing in his duty, too.
The situation is a bit different with Dembski and Wolfram. Both seem quite web-savvy, both have an extensive web presence, and both have quite a lot of information about their books online. Yet neither has bothered to put up a page with errata for their books. I find that strange, and, as I said, a dereliction of duty. Dembski is downright hostile to people who point out errors. Wolfram didn't behave nearly as badly, but he still seems to take quite a while to acknowledge his errors.
Thanks for taking the time to comment.
Hi Jeffrey, Thanks for the comments.
I again have to disagree on your characterization of Wolfram's definition.
The definition Wolfram offers: the complexity of a process is equivalent to its computational sophistication.
Loosely speaking, this means that you can measure complexity according to the amount of irreducible computation a process performs. Not rigorous yet, but not "it's complex if it looks complex" either.
As to whether simple programs tend to satisfy this criterion, this is a topic that can be (and has been) investigated with a variety of objective quantitative experiments.
On another note, so what is actually interesting or useful to science? This is a good question that will be answered differently by many.
My personal view:
Interesting: Program "B" is the shortest known program that computes or models "A". (NKS)
Not interesting: "A" scores 45.7 on the complexity benchmark. (various research trends)
Useful: discover program "B" or its close approximates with systematic search of the computational universe. (NKS)
Not useful: "B" is provably impossible to find, and we don't really care about what concrete programs do anyway. (Kolmogorov)
Your mileage may vary according to what you want to achieve.
-kovas boguta
It's perhaps worth mentioning Donald Knuth, who must be the most compulsive publisher of corrigenda in computer science, if not all science.
- Tim
I absolutely agree on this issue.
Wolfram should have an Errata Page!
Another striking example of refusing to admit an error is John Searle and his "Chinese Room" thought experiment, which he still claims "proves" that a computer cannot have mental states by virtue of executing an algorithm, despite numerous refutations of his argument that demonstrate both trivial and subtle errors. Compare the standard of proof used by Searle and his adherents to that of Andrew Wiles, who nearly committed suicide after a demonstration that his attempt at a proof of Fermat's Last Theorem had failed. He then spent a year attempting to rescue his proof and eventually achieved a brilliant success.
You might be interested in the following review by MIT errata expert Evangelos Georgiadis:
http://www.math.usf.edu/~eclark/jca_georgiadis.pdf
which also appeared in the latest issue of "The Journal of Cellular Automata."