A couple of months ago, I finished a first reading of Stephen Meyer's new book, Signature in the Cell. It was very slow going because there is so much wrong with it, and I tried to take notes on everything that struck me.
Two things stood out as I read it: first, its essential dishonesty, and second, Meyer's significant misunderstandings of information theory. I'll devote a post to the book's many misrepresentations another day, and concentrate on information theory today. I'm not a biologist, so I'll leave a detailed discussion of what's wrong with his biology to others.
In Signature in the Cell, Meyer talks about three different kinds of information: Shannon information, Kolmogorov information, and a third kind that has been invented by ID creationists and has no coherent definition. I'll call the third kind "creationist information".
Shannon's theory is a probabilistic theory. Shannon equated information with a reduction in uncertainty. He measured this by computing the reduction in entropy, where entropy is given by -log2 p and p is a probability. For example, if I flip two coins behind my back, you don't know how either of them turned out, so your information about the results is 0. If I now show you one coin, then I have reduced your uncertainty about the results by -log2 1/2 = 1 bit. If I show you both, I have reduced your uncertainty by -log2 1/4 = 2 bits. Shannon's theory is completely dependent on probability; without a well-defined probability distribution on the objects being discussed, one cannot compute Shannon information. If one cannot realistically estimate the probabilities, any discussion of the relevant information is likely to be bogus.
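To make the coin example concrete, here is a minimal Python sketch (the function name is mine, purely for illustration):

```python
import math

def shannon_info(p):
    """Bits of information gained by observing an event of probability p."""
    return -math.log2(p)

# Two fair coins: four equally likely outcomes, each with probability 1/4.
print(shannon_info(1/2))  # revealing one coin: 1.0 bit
print(shannon_info(1/4))  # revealing both coins: 2.0 bits
```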
In contrast, Kolmogorov's theory of information makes no reference to probability distributions at all. It measures the information in a string relative to some universal computing model. Roughly speaking, the Kolmogorov information in (or complexity of) a string x of symbols is the length of the shortest program P and input I such that P outputs x on input I. For example, the Kolmogorov complexity of a bit string of length n that starts 01101010001..., where bit i is 1 if i is a prime and 0 otherwise, is bounded above by log2 n + C, where C is a constant that takes into account the size of the program needed to test primality.
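Here is a short Python program (my own illustration) that prints this string for any n. The program itself is a fixed-size recipe; only the value of n varies, which is exactly why the complexity is bounded by log2 n plus a constant:

```python
def prime_indicator(n):
    """Return the length-n bit string whose i-th bit is 1 iff i is prime."""
    def is_prime(i):
        return i >= 2 and all(i % d for d in range(2, int(i**0.5) + 1))
    return ''.join('1' if is_prime(i) else '0' for i in range(1, n + 1))

print(prime_indicator(11))  # prints 01101010001
```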
Neither Shannon's nor Kolmogorov's theory has anything to do with meaning. For example, a message can be very meaningful to humans yet have little Kolmogorov information (such as the answer "yes" to a marriage proposal), while another message can have little meaning to humans yet much Kolmogorov information (such as most strings obtained by 1000 flips of a fair coin).
Both Shannon's and Kolmogorov's theories are well-grounded mathematically, and there are thousands of papers explaining them and their consequences. Shannon and Kolmogorov information obey certain well-understood laws, and the proofs are not in doubt.
Creationist information, as discussed by Meyer, is an incoherent mess. One version of it has been introduced by William Dembski, and criticized in detail by Mark Perakh, Richard Wein, and many others (including me). Intelligent design creationists love to call it "specified information" or "specified complexity" and imply that it is widely accepted by the scientific community, but this is not the case. There is no paper in the scientific literature that gives a rigorous and coherent definition of creationist information; nor is it used in scientific or mathematical investigations.
Meyer doesn't define it rigorously either, but he rejects the well-established measures of Shannon and Kolmogorov, and wants to use a common-sense definition of information instead. On page 86 he approvingly quotes the following definition of information: "an arrangement or string of characters, specifically one that accomplishes a particular outcome or performs a communication function". For Meyer, a string of symbols contains creationist information only if it communicates or carries out some function. However, he doesn't say explicitly how much creationist information such a string has. Sometimes he seems to suggest the amount of creationist information is the length of the string, and sometimes he suggests it is the negative logarithm of the probability. But probability with respect to what? With respect to the string's causal history, or with respect to a uniform distribution on strings? Dembski's definition has the same flaws, but Meyer's vague definition introduces even more problems. Here are just a few.
Problem 1: there is no universal way to communicate, so Meyer's definition is completely subjective. If I receive a string of symbols that says "Uazekele?", I might be tempted to dismiss it as gibberish, but a Lingala speaker would recognize it immediately and reply "Mbote". Quantities in mathematics and science are not supposed to depend on who is measuring them.
Problem 2: If we measure creationist information solely by the length of the string, then we can wildly overestimate the information contained in a string by padding. For example, consider a computer program P that carries out some function, and the identical program P', except that n no-op instructions have been added. If he uses the length measure, then Meyer would have to claim that P' has something like n more bits of creationist information than P. (In the Kolmogorov theory, by contrast, P' would have at most on the order of log n more bits of information.)
Problem 3: If we measure creationist information with respect to the uniform distribution on strings, then Meyer's claim (see below) that only intelligence can create creationist information is incorrect. For example, any transformation that maps a string to the same string duplicated 1000 times creates a string that, with respect to the uniform distribution, is wildly improbable; yet it can easily be produced mechanically.
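A sketch of such a transformation in Python (the six-letter input string is arbitrary):

```python
def duplicate_1000(s):
    """A purely mechanical transformation: no intelligence involved."""
    return s * 1000

out = duplicate_1000("TTGACA")
print(len(out))  # 6000 symbols
# Under the uniform distribution on strings of length 6000 over {A,C,G,T},
# this particular output has probability 4**(-6000) -- wildly improbable,
# yet produced by two lines of code.
```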
Problem 4: If we measure creationist information with respect to the causal history of the object in question, then we are forced to estimate these probabilities. But since Meyer is interested in applying his method to phenomena that are currently poorly understood, such as the origin of life, all he's really doing (since his creationist information is sometimes the negative log of the probability) is estimating the probability of these events -- something we can't reasonably do, precisely because we don't know that causal history. In this case, all the talk about "information" is a red herring; he might as well say "Improbable - therefore designed!" and be done with it.
Problem 5: All Meyer seems interested in is whether the string communicates something or has a function. But some strings communicate more than others, despite being the same length, and some functions are more useful than others. Meyer's measure doesn't take this into account. The strings "It will rain tomorrow" and "Tomorrow: 2.5 cm rain" have the same length, but clearly one is more useful than the other. Meyer, it seems to me, would have to claim they have the same amount of creationist information.
Problem 6: For Meyer, information in a computational context could refer to, for example, a computer program that carries out a function. The longer the program, the more creationist information. Now consider a very long program that has a one-letter syntax error, so that the program will not compile. Such a program does not carry out any function, so for Meyer it has no information at all! Now a single "point mutation" will magically create lots more creationist information, something Meyer says is impossible.
Even if we accept Meyer's informal definition of information with all its flaws, his claims about information are simply wrong. For example, he repeats the following bogus claim over and over:
p. 16: "What humans recognize as information certainly originates from thought - from conscious or intelligent human activity... Our experience of the world shows that what we recognize as information invariably reflects the prior activity of conscious and intelligent persons."
p. 291: "Either way, information in a computational context does not magically arise without the assistance of the computer scientist."
p. 341: "It follows that mind -- conscious, rational intelligent agency -- what philosophers call "agent causation," now stands as the only cause known to be capable of generating large amounts of specified information starting from a nonliving state."
p. 343: "Experience shows that large amounts of specified complexity or information (especially codes and languages) invariably originate from an intelligent source -- from a mind or personal agent."
p. 343: "...both common experience and experimental evidence affirms intelligent design as a necessary condition (and cause)
of information..."
p. 376: "We are not ignorant of how information arises. We know from experience that conscious intelligent agents can create informational sequences and systems."
p. 376: "Experience teaches that whenever large amounts of specified complexity or information are present in an artifact or entity whose causal story is known, invariably creative intelligence -- intelligent design -- played a role in the origin of that entity."
p. 396: "As noted previously, as I present the evidence for intelligent design, critics do not typically try to dispute my specific empirical claims. They do not dispute that DNA contains specified information, or that this type of information always comes from a mind..."
I have a simple counterexample to all these claims: weather prediction. Meteorologists collect huge amounts of data from the natural world: temperature, pressure, wind speed, wind direction, etc., and process this data to produce accurate weather forecasts. So the information they collect is "specified" (in that it tells us whether to bring an umbrella in the morning), and clearly hundreds, if not thousands, of these bits of information are needed to make an accurate prediction. But these bits of information do not come from a mind - unless Meyer wants to claim that some intelligent being (let's say Zeus) is controlling the weather. Perhaps intelligent design creationism is just Greek polytheism in disguise!
Claims about information are central to Meyer's book, but, as we have seen, many of these claims are flawed. There are lots and lots of other problems with Meyer's book. Here are just a few; I could have listed dozens more.
p. 66 "If the capacity for building these structures and traits was something like a signal, then a molecule that simply repeated the same signal (e.g., ATCG) over and over again could not get the job done. At best, such a molecule could produce only one trait."
That's not clear at all. The number of repetitions also constitutes information, and indeed, we routinely find that different numbers of repetitions result in different functions. For example, Huntington's disease has been linked to different numbers of repetitions of CAG.
p. 91: "For this reason, information scientists often say that Shannon's theory measures the "information-carrying capacity," as opposed to the functionally specified information or "information content," of a sequence of characters or symbols.
Meyer seems quite confused here. The term "information-carrying capacity" in Shannon's theory refers to a channel, not a sequence of characters or symbols. Information scientists don't talk about "functionally specified information" at all, and they don't equate it with "information content".
p. 106: (he contrasts two different telephone numbers, one randomly chosen, and one that reaches someone) "Thus, Smith's number contains specified information or functional information, whereas Jones's does not; Smith's number has information content, whereas Jones' number has only information-carrying capacity (or Shannon information)."
This is pure gibberish. Information scientists do not speak about "specified information" or "functional information", and as I have pointed out, "information-carrying capacity" refers to a channel, not a string of digits.
p. 106: "The opposite of a complex sequence is a highly ordered sequence like ABCABCABCABC, in which the characters or constituents repeat over and over due to some underlying rule, algorithm, or general law."
This is a common misconception about complexity. While it is true that a string with low Kolmogorov complexity has an underlying rule behind it, it is not true that the "characters or constituents" must "repeat over and over". For example, the string of length n whose i-th bit is 1 if i is prime and 0 otherwise (for i from 1 to n) has low Kolmogorov complexity, but does not "repeat over and over".
p. 201 "Building a living cell not only requires specified information; it requires a vast amount of it -- and the probability of this amount of specified information arising by chance is "vanishingly small."
Pure assertion. "Specified information" is not rigorously defined. How much specified information is there in a tornado? A rock? The arrangement of the planets?
p. 258 "If a process is orderly enough to be described by a law, it does not, by definition, produce events complex enough to convey information."
False. We speak all the time about statistical laws, such as the "law of large numbers". Processes with a random component, such as mutation+selection, can indeed generate complex outcomes and information.
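To illustrate, here is a minimal Python sketch of a mutation+selection loop, in the spirit of Dawkins's well-known "weasel" program -- a toy model, not a simulation of real biology, with target, alphabet, and parameters chosen arbitrarily by me:

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def mutate(s, rate=0.05):
    """The random component: each character changes with probability rate."""
    return ''.join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

def fitness(s):
    """The lawlike component: selection favors matches to the target."""
    return sum(a == b for a, b in zip(s, TARGET))

best = ''.join(random.choice(ALPHABET) for _ in TARGET)  # random start
generation = 0
while fitness(best) < len(TARGET):
    candidates = [best] + [mutate(best) for _ in range(100)]
    best = max(candidates, key=fitness)  # selection step
    generation += 1
print(generation, best)  # reaches the full target, typically well
                         # within a few hundred generations
```

The random mutations supply novelty and the decidedly nonrandom selection step accumulates it; neither alone would do the job.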
p. 293: "Here's my version of the law of conservation of information: "In a nonbiological context, the amount of specified information initially present in a system Si, will generally equal or exceed the specified information content of the final system, Sf." This rule admits only two exceptions. First, the information content of the final state may exceed that of the initial state, Si, if intelligent agents have elected to actualize certain potential states while excluding others, thus increasing the specified information content of the system. Second, the information content of the final system may exceed that of the initial system if random processes, have, by chance, increased the specified information content of the system. In this latter case, the potential increase in the information content of the system is limited by the
"probabilistic resources" available to the system."
Utterly laughable. The weasel word "generally" means that he can dismiss exceptions when they are presented. And what does "in a nonbiological context" mean? How does biology magically manage to violate this "law"? If people are intelligent agents, they are also assemblages of matter and energy. How do they magically manage to increase information?
p. 337 "Neither computers by themselves nor the processes of selection and mutation that computer algorithms simulate can produce large amounts of novel information, at least not unless a large initial complement of information is provided."
Pure assertion. "Novel information" is not defined. Meyer completely ignores the large research area of artificial life, which routinely accomplishes what he claim is impossible. The names John Koza, Thomas Ray, Karl Sims, and the term "artificial life" appear nowhere in the book's index.
p. 357: "Dembski devised a test to distinguish between these two types of patterns. If observers can recognize, construct, identify, or describe apttern without observing the event that exemplifies it, then the pattern qualifies as independent from the event. If, however, the observer cannot recognize (or has no knowledge of) the pattern apart from observing the event, then the event does not qualify as independent."
And Dembski's claim to have given a meaningful definition of "independence" is false, as shown in detail in my paper with Elsberry -- not referenced by Meyer.
p. 396: "As noted previously, as I present the evidence for intelligent design, critics do not typically try to dispute my specific empirical claims. They do not dispute that DNA contains specified information, or that this type of information always comes from a mind..."
Critics know that "specified information" is a charade, a term chosen to sound important, with no rigorous coherent definition or agreed-upon way to measure it. Critics know that information routinely comes from other sources, such as random processes. Mutation and selection do just fine.
In summary, Meyer's claims about information are incoherent in places and wildly wrong in others. The people who have endorsed this book, from Thomas Nagel to Philip Skell to J. Scott Turner, uncritically accepting Meyer's claims about information and not even hinting that he might be wrong, should be ashamed.
144 comments:
Does the string "pain' contain the same amount of information to a Frenchman, an Englishman, and a German?
I'm gonna go beat someone named Makarios over the head with this. Thank you!
"Random processes, such as mutation and selection, can indeed produce information and complex outcomes."
Selection isn't random!!!
Why should they be ashamed? "Expelled" plainly shows that they are doing all of this to prevent another Hitler. If they really thought that "information" was being introduced into biological systems (beyond what can be observed in real time, e.g. cell replication) they would have long ago stopped endlessly repeating "we found info!" and moved on to the all-important step of determining exactly where, when and how that "info" gets inserted. But they can't do that because they know the result would be indistinguishable from evolution, and that would not please their clueless YEC and OEC fans.
Selection isn't random!!!
Yes, it's poorly phrased. I was thinking of "selection and mutation" together as a process, with the randomness coming from the mutation. I'll rephrase it.
I propose that "creationist information"
be dubbed Discovery Institute Specified
information: DISinformation.
When we look at an
object and infer a meaning, with no
consideration of the process of origin of
that object, then we are engaging in
a form of fortune telling, known as
augury. Identifying DISinformation is
a form of augury.
p. 258 "If a process is orderly enough to be described by a law, it does not, by definition, produce events complex enough to convey information."
Somebody call Johannes Kepler.
I see two aspects to this obsession with information. The first is that they take a scientific term that has a precise meaning and claim that they are applying it to biology. In reality they are using the word "information" in its common usage and applying their pseudoscience smoke screen to obscure that. It is very similar to the "It's only a theory..." line of reasoning. The second aspect of their claim is basically a restatement of their argument that evolution violates the second law of thermodynamics.
Does your (and Elsberry's) "Information Theory, Evolutionary Computation, and Dembski's 'Complex Specified Information'" at http://bit.ly/5OoGM have much to do with this issue? Because I'm willing to read it.
Filipe:
Indeed, that is the very paper I linked to in the text of this post, in the phrase that reads "my paper with Elsberry".
You might like to read Meyer's letter in the Times Literary Supplement, which I've just posted over at http://whyevolutionistrue.wordpress.com/2010/01/13/signature-in-the-cell-meyer-responds-in-the-tls/
If you fancied writing a letter to the TLS about Meyer's use of "information" in this letter, then their mail is letters@the-tls.co.uk
Dude, you gotta be more concise and precise. At my blog, I've summarized Dr. Meyer's argument for you so you can focus on the core issues.
http://yters.blogspot.com/2010/01/dr-shallit-needs-to-be-more-precise-in.html
If you do write a better critique, you might win yourself a new convert, since I've been searching quite a while to find a good counter to ID. Haven't found one yet though :P Just a bunch of yourmominum.
p. 16: "What humans recognize as information certainly originates from thought - from conscious or intelligent human activity... Our experience of the world shows that what we recognize as information invariably reflects the prior activity of conscious and intelligent persons."
This is ID in a nutshell. All Meyer is basically saying is "Things that appear designed always are." Not only is that statement false (he should Google the term pareidolia), but it's at heart an argument which relies on intuition. There is no possible way to base science on intuition. Meyer's definition of information is fuzzy because it needs to be. Far from being an actual rational system meant to be used to arrive at various conclusions, creationist information theory is an ad hoc excuse drummed up to arrive at certain predetermined conclusions, conclusions which are themselves essentially personal judgment calls. The creationist just intuitively knows that the Universe was designed because it appears designed (to him).
I appreciate the clear description of ID creationist abuse of the word information, but I don't think you should have included this paragraph:
"The weasel word "generally" means that he can dismiss exceptions when they are presented. And what does "in a nonbiological context" mean? How does biology magically manage to violate this "law"? If people are intelligent agents, they are also assemblages of matter and energy. How do they magically manage to increase information?"
1. He stated his exceptions.
2. The magic violation is almost certainly supposed to be the intelligent designer for which intelligent design is named.
3. Now you've given him an idea for another book: using fake information theory to prove body-mind dualism: people's brains need magic to work.
I'm trying to logically close the debate inwardly on what to think about evolution. I think I used to just accept it, but when I think really hard about all the classical indicators of evolution, they could just as easily point to something else... Monkeys and humans are close, sharing 98% DNA? Then what accounts for the 2%? Obviously that 2% is quite big given the variation in size and different observable parts of a person... I can't sex a monkey and make a hybrid monkey-man, yet lions and tigers have a LARGER (as far as I can find out) gap in their DNA, yet they can make hybrid species... I don't understand it! Does anyone know anything about (or a place to find info about) biology and tests done involving mutations and an observable change or cause of change? Or something that would cause the mutation to persist? I'm not much of a scientist but I don't know how to prove based on anything I can find that evolution MUST have happened.
Meyer gains a lot of traction with laymen by intentionally conflating two pairs of words: (1) "information" and "meaning," (2) "function" and "purpose."

Of course meaning requires an intelligence -- by definition. But information can exist as a property, without being sensed by an intelligence. Unfortunately, if you look up "information," every general dictionary in the world defines it with the characteristics of "meaning."

"Function" is what something does, while "purpose" is what an intelligence wishes something to do. One operates retrospectively, the other prospectively. Unfortunately again, even scientists often say one when they mean the other.
@Olorin
Yes, that is what I had a problem with when I first started thinking about ID. Information has a number of specific mathematical meanings, yet it wasn't clear to me that they were matched to my intuitive idea of information as meaning.
However, here's how I've come to think of it. Information does actually have two aspects. When we look at a book's page, we immediately notice there is information there, but we do not know yet what it is about. Only once we start reading the page do we actually get the meaning.
Hofstadter addresses this issue in his excellent GEB. He talks about the multiple layers a message has. The outer layer tells another entity that a message exists, so the entity knows to focus its resources to extract the message. Then, the inner layer gives the entity the actual content.
To use philosophical language, I call this distinction that of the existence and essence of information.
The function/purpose issue you point out is similar. Function gets at the existence of purpose, that an object is meant for something. And purpose tends to refer to the actual essence, what the something is.
Now, bringing this discussion back to science, computer science also addresses this sort of distinction with syntax and semantics. You need syntax to have semantics, but the syntax does not necessarily tell us anything about the semantics, merely that semantics exist. It is quite easy to algorithmically detect and evaluate syntax. Noam Chomsky has done excellent work here. However, it is provably impossible to generally evaluate semantics, since you run into things like the halting problem.
Thanks for the good question, Olorin; you've helped me clarify my thinking a bit.
@Gus: "Obviously that 2% is quite big given the variation in size and different observable parts of a person... I can't sex a monkey and make a hybrid monkey-man"
This statement is flawed. The 2% doesn't have to be a big difference, it just has to be important to the process of reproduction. It could be something quite simple that simply prevents the fertilization process. Also the amount of difference does not directly correlate to a size difference. The difference between a dwarf (3 feet tall) and a giant (9 feet tall) can be extremely minor.
Blood types seem like a pretty inconsequential difference between humans, but can prevent fetal formation.
Besides, who has tried to create human-monkey hybrids in that fashion? Do you have any experimental evidence for your statement?
"...weather prediction... Meteorologists collect huge amounts of data ... So the information they collect is "specified"
Jeffrey, are you trying to equate uncollected and uncollated data with information?!
The weather guys collect the data (not information yet) and turn it into information. Random bits of weather data are not information.

The weather guys collect the data (not information yet) and turn it into information. Random bits of weather data are not information.
That dodge won't work. Things like wind speed, wind direction, and temperature are not "random bits of weather data"; they are physical attributes of the natural world. Neither Meyer nor real information theorists make any distinction between "data" and "information".
I could just as well claim that a sequence of DNA bases is also "just data" until it is collected - then it also becomes information.
Jeffrey, okay, I see your point, but the major difference between your example of weather data and DNA is that DNA is a blueprint to build something - quite "specifically". Weather data translated to weather information is open to interpretation.
Can you think of another example which will help me understand your point better?
Dolly:
So now you've added another criterion: information must not be "open to interpretation", or it isn't really information?
In that case, much of the genome is not information because we don't know exactly what it does.
These lame dodges, all to avoid the evident conclusion that Meyer's claim is false, don't impress me.
As for another example - based on scanning your blog it seems clear to me that no amount of examples will convince you.
Now I recall that Richard Dawkins gave an interesting (and fun) talk about this issue: "The purpose of purpose". It may be enlightening to some. PZ Myers was there.
http://www.youtube.com/watch?v=mT4EWCRfdUg
DNA is empirical proof of design for life. It is a physical representation of information. Information is immaterial and cannot be created by molecules. The information that defines life forms existed first, and the DNA which represents that information was created afterward.
http://www.youtube.com/watch?v=rhOu-iPuIwM
DNA is information stored and retrieved which complies with linguistics law. These are immaterial properties which cannot be produced by that which is material. It is impossible for nature to be the creator of DNA. Science has proven all life is a product of supernatural power and intelligence - God.
http://www.youtube.com/watch?v=3zQZrWrDnMo
IronWill:
Instead of making assertions for which you present no evidence, how about dealing forthrightly with my critique?
Information is immaterial and cannot be created by molecules.
Give an example of information with no physical basis.
DNA is information stored and retrieved which complies with linguistics law.
What is "linguistics law"? How does DNA comply with it?
You do know, I hope, that DNA doesn't resemble natural language at all. For one thing, it appears to be only slightly compressible. By contrast, natural language utterances exhibit a lot of redundancy.
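One crude way to see this, using zlib as a stand-in for a real entropy estimator and a uniformly random ACGT string as a stand-in for a nearly incompressible sequence (real DNA does only slightly better; exact numbers will vary):

```python
import random, zlib

def bits_per_symbol(s):
    """Compressed bits per symbol; zlib as a crude entropy estimate."""
    return 8 * len(zlib.compress(s.encode(), 9)) / len(s)

# A random sequence over {A,C,G,T} carries log2(4) = 2 bits per symbol,
# and zlib can do essentially no better than that:
dna_like = ''.join(random.choice("ACGT") for _ in range(100_000))
print(bits_per_symbol(dna_like))  # about 2.0, or even slightly above

# By contrast, Shannon famously estimated the entropy of English at
# roughly 1 bit per character, far below the log2(26) = 4.7-bit ceiling;
# zlib on a long English text typically achieves 2-3 bits per character.
```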
When I first heard of Intelligent Design, I thought it would show some interesting arguments (even though they would/might be wrong) against the Theory of Evolution, and I would have a good time thinking (even if it was to refute such arguments).
However, I'm starting to see that this will never get fun! Their "arguments" are obviously statements made by people who have not understood the TofE yet. It's not a shame not to understand something: you just have to ask for help, and try to learn. But to think that, because of your stubbornness, you're creating a new branch in science, ah! And to think that it should be taught in schools!
Those arguments in ID books could be of great value in the area of "Science Education": a teacher who has made a compilation of interesting doubts raised by curious students during biology classes, and showed how to answer them properly. But to think that this is being done by guys with Ph.D.s is saddening.
As the Epstein and Shallit paper linked earlier shows, Dembski can't even agree with himself about what units info schminfo has.
To Dolly's comment: "...DNA is a blueprint to build something - quite "specifically". Weather data translated to weather information is open to interpretation."
Actually, weather "data" is also a blueprint to build something: tomorrow's weather. Tomorrow's weather is going to happen based on this information (data), whether we collect and interpret that data or not. There are two reasons it's "open to interpretation": first, we don't have enough "information" to predict tomorrow's weather accurately enough; second, there's a probabilistic factor involved, related to the environment in which tomorrow's weather unfolds.
This is precisely the same level of "information" as is present in DNA, with the same two caveats.
To see a good example involving conflation of 'function' and 'purpose', look at the first paragraph under "IronWill Said...." above.
Creationists all have a problem with teleology. They think nothing can happen without being planned beforehand.
Ah...
Creation apologist smackdown is like hot cocoa for my psyche.
You guys make me feel all warm and happy.
Carry on.
Speaking as someone with a background in rhetoric, "[A]n arrangement or string of characters, specifically one that accomplishes a particular outcome or performs a communication function" sounds like a passable definition of "speech act" to me.
@IronWill:
Information is immaterial and cannot be created by molecules.
Or, rather, information arises materially and then we represent it abstractly. Like numbers.
The very presence of molecules, particles, energies etc. can be seen as constituting information.
In tropism, a physical directional stimulus (incoming photons from the sun, for instance) affect a plant cell's chemicals in such a way that the growing plant ultimately bends towards or away from the source of light. We can think of it as if the stimulus was information that the biological system "interprets", reacting accordingly. No mind is involved in the generation or interpretation of the information.
Another example: the forces provided by the surrounding molecules are all the information that cold water molecules "interpret" in order to group together into snowflakes of intricate geometry. There's no "snowflake blueprint" anywhere. And again, no mind is involved.
Things, then, happen to constitute information themselves. We humans prefer to label them as such when we identify interactions that are part of longer series of perceivable interactions, which makes us think of them as meaningful.
It is impossible for nature to be the creator of DNA.
New DNA molecules are formed every day — naturally — through the interaction of the molecules within a cell. And nothing we know precludes that alternate building paths exist. Your claim that it is impossible is based solely in your haste to dismiss the "disturbance" of Science and return to cuddle your cherished mythology.
Frank, Meyer seems to treat specified complexity consistently as an analog quantity. Dozens of times in Signature he proclaims that natural causes cannot produce "significant amounts" of SC. And that a certain amount of SC "exceeds the probabilistic resources" of a system.
So one must ask not only what is the definition of SC, but where is the threshold.
Blake,
I'm not familiar with the Epstein and Shallit paper. Do you have a link?
Wesley R. Elsberry
If Dembski is the "Isaac Newton of Information Theory," is Meyer the "Mortimer Snerd?"
When Creationists talk about information, it always makes me think of seismic signals. We can look at seismograms and the information they contain helps us describe parts of the Earth that are hidden from view. Humans don't design the seismic signals--like weather data, they are determined by the physical properties of the media involved and some natural, physical event.
"I'm not familiar with the Epstein and Shallit paper. Do you have a link?"
This kind of folly amazes me. How can one not see that Shallit linked to this article in the text!? It was in the text! Functional illiterates should keep away from blogs such as this.
Filipe,
There is a link above to the paper that I co-authored with Jeff. There is no link to a paper with anyone named "Epstein" as a co-author with Jeff. I was trying to note that Blake had garbled my name, but apparently not being direct about it was my mistake.
Wesley R. Elsberry
Dr. Elsberry:
Surely. I see that I might have made a mistake as well, in not making clearer that my comment was sarcastic.
Read the 9th, the 10th, and the 11th comments made here, and you'll see why.
"an arrangement or string of characters, specifically one that accomplishes a particular outcome or performs a communication function".
Just the sort of nonsensical definition one would expect from a creationist. Information is not equivalent to an arrangement. Information is a property of an arrangement, e.g. the compressibility of the arrangement. Failing to mention a property means the definition is not just vague. It is utterly vacuous.
Selection isn't random!!!
It depends what you mean by "random". To a non-probabilist "random" tends to mean having a uniform distribution. To a probabilist "random" can refer to any event or process whose outcome is not fixed, regardless of its probability distribution. A uniform distribution is just one kind of random variable. To a probabilist natural selection is random (stochastic) since it involves unpredictable events. Even the best adapted individual may meet with an unlucky accident and fail to reproduce.
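A few lines of Python make the distinction vivid (my illustration, not anyone's model of selection):

```python
import random

def biased_coin(p_heads=0.9):
    """Not uniform, but still random: the outcome is not fixed in advance."""
    return "H" if random.random() < p_heads else "T"

print(''.join(biased_coin() for _ in range(40)))
# mostly H with an occasional T: heavily biased, like selection, yet stochastic
```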
There's a discussion between Meyer and Peter Atkins where he throws around all these terms as if they have meaning - http://www.premierradio.org.uk/listen/ondemand.aspx?mediaid={5EDC2A06-01E6-411A-B211-C998A6AFB902}
He just makes stuff up as he goes along.
Premise 1 - DNA has 'specified complexity'
Premise 2 - 'Specified complexity' can only be caused by intelligent agents
Conclusion - DNA was made by an intelligent agent
It's just embarrassing really.
Daniel, Meyer's claim isn't even that strong. Premise 2 is more like "wherever we see an intelligence, we see specified complexity."
This converts Meyer's claim into a logical fallacy of affirming the consequent. (For example: (1) All lawn flamingos are pink; (2) An object in my yard is pink. (3) Therefore, the object in my yard is a lawn flamingo.)
I'm confused by this statement:
"For example, consider a computer program P that carries out some function, and the identical program P', except n no-op instructions have been added. If he uses the length measure, then Meyer would have to claim that P' has something like n more bits of creationist information than P. (In the Kolmogorov theory, by contrast, P' would have only at most order log n more bits of information.)"
My impression is that within Kolmogorov theory, P' would contain no more information than P, since the no-ops do not change the program's output.
Am I missing something?
Keiths:
Yes, you seem to be confusing the Kolmogorov complexity of the program P (which, after all, is just a string of characters), with the program that occurs in the definition of Kolmogorov complexity.
Here you should think of P as just a string. Applying the definition of Kolmogorov complexity, we want the smallest program Q and input x such that Q outputs P on input x. If we now change P to P' by adding n no-op instructions, say at the end, then Q needs only change by adding at most log n bits to specify n.
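A sketch of that bound in Python, treating programs as plain strings (the toy no-op notation is mine):

```python
import math

def reconstruct_padded(P, n):
    """Rebuild P' from a description of P plus the integer n.

    Everything here except n is a fixed-size recipe, so a description of
    P' needs only the description of P plus about log2(n) bits for n.
    """
    return P + "\nNOP" * n

P = "LOAD A\nADD B\nSTORE C"      # stand-in for the original program
n = 1_000_000
P_prime = reconstruct_padded(P, n)
print(len(P_prime) - len(P))      # 4,000,000 characters of padding...
print(math.ceil(math.log2(n)))    # ...specified by only ~20 extra bits
```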
"I'm not familiar with the Epstein and Shallit paper."
Oops. My most sincere apologies — I've read that paper and recommended it to others, so I should be able to get these things right.
@Gus
The reason for tigers and lions being able to crossbreed but not humans and chimps is due, not to the DNA sequences themselves but, in fact, to the number of chromosomes that the DNA is stored as. Lions and tigers both have 38 chromosomes in total. This means that when they are segregated and then mixed (basic sexual reproduction) there are 19 chromosomes from each parent which pair up to make up the 38 in the offspring. As a point of fact, a lot of these sorts of interbreed offspring tend to be sterile; for example, horse + donkey = mule (sterile). Humans have 46 chromosomes but chimps have 44. This means that the human parent would contribute 23 and the monkey momma would only contribute 22, and this would mean that a viable egg couldn't be produced. I hope this is clear?! Message back if not :)
BTW: Just found this blog... i will be back
@ Dr. Shallit
Thank you Dr. Shallit for taking the time to respond.
I respond more indepth to your post on my blog.
http://yters.blogspot.com/2010/01/dr-shallit-needs-to-be-more-precise-in.html
Also, I've been applying Dr. Dembski's work in empirical experimentation, so I can't say it's not sufficiently rigorous.
http://www.box.net/shared/u13u3agxqg
Best,
Eric
@iantracy603
Chimpanzees have 24 pairs of chromosomes (total 48), not 22 pairs. In the human genome, 1 pair of human chromosomes used to be 2 pairs in our ape ancestors, but had fused together.
@ Monika
oops, yes, you are absolutely right. My mistake. The principle was right though but thanks for pulling me up on that one :)
Neither Shannon's nor Kolmogorov's theory has anything to do with meaning.
And that is the problem:
As Warren Weaver (collaborated with Shannon) said- "The word information in this theory is used in a special mathematical sense that must not be confused with its ordinary usage. In particular, information must not be confused with meaning"
Yet anyone familiar with information technology knows that information is all about meaning/ function.
Shannon info can only provide an information carrying capacity- not how much real information is there.
IOW it appears that Shannon and Kolmogorov are the culprits here, not Meyer.

The type of information Meyer is talking about is the difference between Stonehenge and a pile of stones.
It is the difference between a rock and an artifact.
It is the difference between intent and accident.
It is the difference between being able to communicate and not having any idea what anyone else is saying.
It is what allows you to use your computer.
It is what tells the stuff I am typing where to go.
The world cannot survive without it.
And Jeffrey sits back and rants about it like a lunatic as if Meyer just made it up.
Well done...
Joe:
You've already failed my challenge and were unable to detect specified complexity when it was provided, so you're in no position to talk.
anyone familiar with information technology knows that information is all about meaning/ function.
False. Produce a single quote from an information theorist backing up your claim.
@ Dr. Shallit
The explanatory filter only claims to not produce false positives, so if Joe can't detect design then this doesn't invalidate the claims of ID.
Furthermore, Joe's entire comment is about how information theorists are not using the term "information" how most people do, whereas Dr. Meyer is.
So, it is a complete non sequitur to challenge Joe's claim by asking for information theorists who talk about information in the same way as Dr. Meyer.
Joe's entire comment is about how information theorists are not using the term "information" how most people do, whereas Dr. Meyer is.
Take it up with Meyer, because he has claimed on more than one occasion that "information theorists" refer to "specified complexity".
And does Meyer really use "information" the way most people do? For example, wouldn't the average person recognize that data such as wind speed, wind direction, and so forth constitute "information"?
So, it is a complete non sequitur to challenge Joe's claim by asking for information theorists who talk about information in the same way as Dr. Meyer.
It's rather strange for someone who has no training in information theory to claim that everyone who does have such training somehow magically gets the fundamental idea of "information" wrong.
@ Dr. Shallit
True, Dr. Meyer does make that claim, but it seems quite plausible to me that information theorists talk about "specified complexity" as distinct from "information." I'm also, like Joe, not literate in information theory terminology, but if my assumption is correct, then Dr. Meyer is merely saying that the information theorist's "specified complexity" is the same as the average Joe's and my "information."
To attempt to now bridge the terminology issue between ID and common information parlance, an information theorist's use of the term "information" would correspond to Dembski and Meyer's use of "complexity." A very complex domain is the same as a channel with high carrying capacity.
Thus, in fact, information theorists are not talking about completely different concepts than an average person, but merely a highly refined representation of half of "specified complexity." Drs. Dembski and Meyer are concentrating on defining and refining the concept as a whole.
Sadly, information theorists do not talk about "specified complexity". You can verify this by, for example, going to the Mathematical Reviews website, which attempts to review every single noteworthy mathematical article and book, and searching for the term. There are exactly three usages found of the term, one from Dembski's book and two in contexts unrelated to information theory.
Meyer himself once admitted to me that he knew of no information theorist but Dembski who used the term; his implication that it is a widely used term was deceptive.
Have you read my long paper with Elsberry (easily found with a web search) explaining why the concept of "specified complexity" is bogus?
I've skimmed your paper, but to be honest I don't have high hopes that you'll succeed in either refuting Dembski and Meyer's work or showing it is nonsensical.
I've already read loads of anti-ID material, such as Wein's book, a number of posts on Good Math/Bad Math, and Wolpert's article on Dembski's use of the NFLT, and *none* of it actually addresses their arguments. It's all just a bunch of trivial strawman arguments and disorganized writing scattered with random invectives, and frankly just frustrating to read.
By all means, please show me why ID is wrong, I'd really like to know, but don't say you have and then throw me a bunch of gibberish. My patience is just about gone here.
But, I'll make some time in my schedule to decipher your paper and follow up here.
Regarding the usage of "specified complexity":
I did the search you recommended and only turned up an article talking about Dembski's use of the term. I turned up some material on google scholar by searching for: ("specified complexity" -"intelligent design" "information theory")
But, I would have to say it is not a term in widespread use.
Eric:
Wein has not written a book, but his critiques are spot-on and decisive. To say that he doesn't address Dembski's arguments is, quite frankly, absurd.
By all means, please show me why ID is wrong.
You can lead a creationist to knowledge, but you can't make him think.
My paper with Elsberry has been available for years, but so far no one has offered any refutation of its arguments.
Like I thought, you also misrepresent Dembski's work. Ah well, no surprise there.
http://yters.blogspot.com/2010/03/shallit-misrepresents-dembskis-work.html
It's been...a use of time...anyways, I'm done.
Eric:
I'm afraid you have confused your misunderstanding with my misrepresentation.
Dembski has been remarkably inconsistent in his discussion of CSI. As we say in our paper, sometimes numbers on credit cards constitute CSI for Dembski, whereas other times, in order to constitute CSI, you must have at least 500 bits. So which is true?
You're saying that computing an objective measure won't depend crucially on my background knowledge, so I can calculate the molecular mass of this computer without knowing any chemistry or physics?
You seem very confused about the meaning of Dembski's "background knowledge". When I compute the molecular mass of a compound, everyone all over the world can agree on my calculations. But when I compute the CSI of a string like "Jina lako nani", for one person it will not match any prespecified pattern, while for Swahili speakers it will. Can you suggest any other physical or mathematical quantities whose values depend on whether you speak English or Swahili?
You seem very confused about our discussion of causal history. Our point is that Dembski uses causal history inconsistently. Sometimes, even when the causal history is known, he ignores this to compute his probability. Other times he depends on it.
First they assume a uniform probability over bit string distributions, even though they explicitly say this is not the true event space.
You seem very confused again. We say explicitly that we do this because Dembski himself uses uniform probability when it is inappropriate. We are illustrating why Dembski's uses of uniform probability is wrong.
I can't dispute that you are reading anti-ID papers. But you seem to be reading them without comprehension.
Visit my blog for response:
https://www.blogger.com/comment.g?blogID=10939184&postID=4104687726113390799
In short, Dr. Shallit doesn't know whether his hair is black or straight, it can't be both!
In which I take a couple minutes to refute Dr. Shallit. Again.
http://yters.blogspot.com/2010/03/shallit-misrepresents-dembskis-work.html#3696612037341977262
Sorry to keep doing this, but if you've really refuted Dembski I've yet to see it. I show from basic AIT theory, which all competent professors should know, that Dembski's COI is a necessary truth.
Eric:
You would look less foolish if you spent more time trying to understand our critique and less time making triumphal pronouncements of your refutations.
Yes, you do make part of my argument in your appendix, which I didn't read. So bad on me.
First, COI still obtains as long as the bit string size is fixed and my argument still stands.
Second, you claim that with the ability to double the string length also comes the ability to algorithmically increase the CSI (or SAI) in the string.
Unfortunately, this is yet another misrepresentation of Dembski's work, though a much better one than the others, and got me thinking for a bit. To see why it doesn't match the explanatory filter we have to focus on the C in CSI.
C is measured according to the event space, sample space, what have you. It is comprehensively described by the concatenation of the potential input and output pairs. When you select a function, what you are actually doing is selecting a subset of these concatenations. And, as a concatenation of bit strings, AIT again applies. Most concatenations are random, only some are compressible.
Seen in this light, the COI again becomes obvious. The SAI is not created by the function, but by your selection of the particular function. And, as we both know, that particular selection of a subset of concatenations with high SAI cannot be done algorithmically, since this would presuppose a universal compressor.
So, I didn't even have to make up a new argument, I've again refuted you using my first argument which you could have responded to directly instead of making me spell it out for you, Dr.
Your problem #1 seems like a pretty lame attack, considering it doesn't depend on the person. It depends on whether the person speaks the right language. All languages, by definition, have a commonly agreed upon (and therefore) objective measure. Similarly, DNA can be studied to determine what it means, as _Nature_ and other scientific journals can attest.
Your problem #1 seems like a pretty lame attack
Actually, it is the main problem with the Meyer/Dembski definition of information. According to Dembski, measuring information depends on the "background knowledge" of the person measuring the information.
considering it doesn't depend on the person.
Oh, but it does. Go read Dembski.
All languages, by definition, have a commonly agreed upon (and therefore) objective measure.
You don't know very much about language, do you?
Similarly, DNA can be studied to determine what it means, as _Nature_ and other scientific journals can attest.
You don't know very much about DNA, either, it would seem. Hint: go read about non-coding DNA.
That is a RIDICULOUS response. If I, not knowing programming languages, look at your program, does it cease to be objective information? Furthermore, what is "background knowledge?" _Knowledge_ comes from what? Someone's feelings? Obviously, it comes from something objective, _by definition_. Furthermore, I know quite a bit about language, seeing as that was my focus in college. As far as "non-coding" DNA, are you talking about DNA with regulatory function? I do know something about that, and it does have information coded into it. This has been proven.
Dembski is correct. How can one measure information without background knowledge? You say that _knowledge_ (as in _background knowledge_) is subjective? Isn't that like saying that fire is cold?
That is a RIDICULOUS response.
Hint: putting your response in capitals does not make it more impressive.
If I, not knowing programming languages, look at your program, does it cease to be objective information? Furthermore, what is "background knowledge?"
It is always a pleasure to see someone who backs Meyer fly into an incoherent frothing rage when he suddenly realizes what a con job Meyer has pulled on him.
Meyer bases his account of information theory on Dembski's work. Read No Free Lunch by Dembski, where the role of background knowledge in deciding whether or not something constitutes creationist information is discussed in detail. After you've read that, maybe you will be prepared here. But for now you have no idea what you are talking about.
_Knowledge_ comes from what? Someone's feelings? Obviously, it comes from something objective, _by definition_.
You seem to be confusing "knowledge" with "information". Do you understand what those words mean?
Furthermore, I know quite a bit about language, seeing as that was my focus in college.
Then immediately write your college and demand your money back, because any college graduate who thinks that "All languages, by definition, have a commonly agreed upon (and therefore) objective measure" has clearly failed to learn much. If all languages have objective measures, why do dictionaries differ at all? Why do dictionaries have some words and not others? Why do courts disagree on the meaning of the words such as "bogus"?
As far as "non-coding" DNA, are you talking about DNA with regulatory function? I do know something about that, and it does have information coded into it. This has been proven.
You seem extremely confused. Most non-coding DNA has no function at all -- as far as is currently known.
Dembski is correct. How can one measure information without background knowledge?
Information theory has a long history. In the most popular theory of information that is probability-free -- namely, the Kolmogorov theory -- there is no use of "background knowledge" in evaluating the complexity of strings. I would suggest you read up on it.
You've totally misunderstood me, which may be partly my fault. It's clear that you're not stupid but a sophist. (It's easy to get the two confused). Let me try again, now that I know I'm talking to a sophist.
You make the claim "Quantities in mathematics and science are not supposed to depend on who is measuring them." But, as I'm trying to explain, your example isn't about the person (unless you're equivocating). I'm mocking your insistence that it is about the person. It's about the language, which is measurable (Otherwise, where do dictionaries come from? Mind readers?) This _supports_ Dembski's definition of how information must be measured, i.e. use of _knowledge_, which as you know is based on objectivity, which as we all know, is scientific.
You say, "You seem to be confusing "knowledge" with "information". Do you understand what those words mean?."
My claim that _knowledge_ indicates something objective, _to you_, means that I'm confusing it with information? ha ha... So you're saying that _knowledge_ is subjective? This is more sophistry on your part. We all know that knowledge is based on objectivity, meaning it is scientific. So your claim that Dembski's definition isn't scientific and depends on WHO is doing the interpreting is trickery.
"You seem extremely confused. Most non-coding DNA has no function at all -- as far as is currently known."
Really? Because _Nature_ says the following: "Researchers from an international collaborative project called the Encyclopedia of DNA Elements (ENCODE) showed that in a selected portion of the genome containing just a few per cent of protein-coding sequence, between 74% and 93% of DNA was transcribed into RNA2. Much non-coding DNA has a regulatory role."
Not only that, what little DNA is assumed to be junk can be measured via knowledge, i.e. objective measures, as Dembski points out. Microbiologists can and do look at gene sequences and determine what it expresses, e.g. a particular type of enzyme.
"In... the Kolmogorov theory -- there is no use of "background knowledge" in evaluating the complexity of strings."
Since Dembski wasn't talking about measuring complexity of strings but was attempting to measure whether or not there is specificity and what type of specificity, this is irrelevant.
Furthermore to answer your question above, "Why do dictionaries vary," dictionaries vary because there is _some_ variation in language. That doesn't mean that, taken as a whole, there isn't information found in language. Your own example illustrates this, in fact. You can't have it both ways, using an example of how speaking the right language results in the transfer of clearly objectively measurable information.
Dear Anonymous:
In the future, please use a pseudonym, because I can't tell if I'm arguing with one person or two.
I'm mocking your insistence that it is about the person.
But it is. Different people will have different background knowledge. Read Dembski. So far you seem to be arguing without any knowledge at all of what Dembski is claiming.
We all know that knowledge is based on objectivity, meaning it is scientific.
It doesn't advance your argument when you (a) consistently confuse "knowledge" (which is not discussed in either Dembski or Meyer) with "information" and (b) make claims about things "we all know" which are spurious.
So your claim that Dembski's definition isn't scientific and depends on WHO is doing the interpreting is trickery.
You are extremely confused. Dembski says, for example, that credit card numbers constitute CSI. But who knows the credit card number except me and the credit card company? Do you know my credit card number? Clearly this sort of knowledge does depend on the particular person interpreting a string of digits.
Really? Because _Nature_ says the following:
You seem to be reading without comprehension. More than 98% of the human genome does not encode protein. Just because bases are transcribed (80% are, by one estimate) doesn't mean they have function.
Most professional biologists I know are convinced that much of DNA is, indeed, junk.
Since Dembski wasn't talking about measuring complexity of strings but was attempting to measure whether or not there is specificity and what type of specificity, this is irrelevant.
You seem confused. You asked, "How can one measure information without background knowledge? ", and I answered your question. Traditional information theory -- the kind that is studied by mathematicians and computer scientists, not creationists -- doesn't talk at all about "specificity".
That doesn't mean that, taken as a whole, there isn't information found in language.
Straw man. Of course there is information in language. I was answering the specific claim that "All languages, by definition, have a commonly agreed upon (and therefore) objective measure," which is clearly false.
Dr. Shallit:
You say, 'Traditional information theory -- the kind that is studied by mathematicians and computer scientists, not creationists -- doesn't talk at all about "specificity".'
However, as I explained earlier, this is really a matter of comparing kernels to corn.
Information theory (IT) really just takes a kernel of the broader concept of information, so can't really be considered a comprehensive definition of the original concept. To be specific, it mainly deals with the C (complexity) in CSI, but not the specificity.
Also, contra my concession above, non IDers do talk about specificity regarding biological information. For instance:
"[L]iving organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple, well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures which are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity." (Leslie E. Orgel, The Origins of Life: Molecules and Natural Selection, pg. 189 (Chapman & Hall, 1973).)
And, note, Leslie is no fringe biologist or slouch: he was educated at and has worked at Oxford, Cambridge, CalTech, UoC.
So, perhaps the reason you are not familiar with specificity in IT literature is that it is a cutting edge (though from the 70s :P) notion. What do you think?
Oh, and interesting note from his wiki page:
http://en.wikipedia.org/wiki/Leslie_Orgel
"In his book The Origins of Life, Orgel coined the concept of specified complexity, to describe the criterion by which living organisms are distinguished from non-living matter. He has published over three hundred articles in his research areas."
Interesting, the term "specified complexity" was coined by an ardent evolutionist:
"His name is popularly known because of Orgel's rules, credited to him, particularly Orgel's Second Rule: "Evolution is cleverer than you are"."
Eric:
Yes, it's true that Orgel used the term "specified complexity", but he gave no formal definition of it, wrote no papers about it, and did not use it in the same way Dembski did. For example, Orgel's use is restricted to biology; he provides no way to measure it; he does not tie it to probability theory, etc. To pretend that it is really the same concept is, I think, not completely honest.
So, perhaps the reason you are not familiar with specificity in IT literature is that it is a cutting edge (though from the 70s :P) notion.
Nobody uses it but creationists.
Dr. Shallit, you say:
"Yes, it's true that Orgel used the term "specified complexity", but he gave no formal definition of it, wrote no papers about it, and did not use it in the same way Dembski did."
Whether it is exactly the same or not, it is clear that "specified complexity" is a term in use by non ID/creationist scholars to talk about unique characteristics of biological systems.
google scholar this: Orgel "specified complexity" -"intelligent design"
So, while I agree it is not in as widespread use as, say, Kolmogorov complexity, it is clear that your statement:
"Nobody uses it [specificity] but creationists."
is false. Notice that while some of the search results are clearly IDists/creationists there are also recent mainstream papers (2005, 2009) which use the term.
Whether it is exactly the same or not, it is clear that "specified complexity" is a term in use by non ID/creationist scholars to talk about unique characteristics of biological systems.
You're being silly - it's crucial to know whether it is the same concept or not. For example, both mathematicians and agrarian economists talk about "field", but what they mean by it is completely different.
it is clear that your statement:
"Nobody uses it [specificity] but creationists."
is false.
I maintain my statement is correct - I was not talking about "specificity" per se (which indeed has some meaning in biological systems) but rather "specified complexity". If you have some citations that refer to "specified complexity" the way Dembski uses it, not by creationists, please present them.
Furthermore, _you_ made the claim that Dembski says that measuring information requires background knowledge, and by extension, that Meyer does, too. But now you say it "is not discussed in either Dembski or Meyer"? Well, I believed you in good faith. I haven't read _No Free Lunch_, for example, so I don't know. Whatever. If they don't discuss knowledge, why are we talking about it? Because you claimed Dembski measures information via "background knowledge."
You say: "You seem to be reading without comprehension."
I don't think so. It's pretty hard to misinterpret this: "Much non-coding DNA has a regulatory role."
I thought you were going to leave the biology to the biologists? Why don't you ask about this recent _Nature_ article and the cited research? See if _Nature_ is wrong or not. I read it with perfect comprehension, relying on objective use of my background knowledge of the English language.
"Traditional information theory -- the kind that is studied by mathematicians and computer scientists, not creationists -- doesn't talk at all about "specificity"."
Weird, because I thought we were talking about Dembski's non-traditional approach and its merits and not whether it was traditional or not, which may or may not tell us about its merits.
You say: "Of course there is information in language. I was answering the specific claim that All languages, by definition, have a commonly agreed upon (and therefore) objective measure., which is clearly false."
My point was that it is objectively measurable with enough background information/knowledge. The more background knowledge, the more objectivity. Apparently it's so "clearly false" that the dictionary agrees with me. Perhaps you should write them and complain that their linguistic skills are lacking.
"But it is. Different people will have different background knowledge."
Like different scientists will use different scientific methods; therefore, all science is subjective? ha ha. Wow. Given your example, which is really my point of contention, we have you hypothetically applying a _methodology_, i.e. "I might be tempted to ignore it as gibberish."
Assuming, huh? That's a behavior, not a person, as with the scientific method. A linguist attempting to learn a new language would obviously _not_ assume it's gibberish, nor would any individual have to. Any person can apply the _same_ methodology to learning or researching a language. If you have a linguist handy or know how to use phonetic symbols and do some research on your own, you can, in fact, get an objective measure of that person's native language and what that particular word is. Again, _not the person_. It's about method. Or, we could just throw out all science, because we rely on people to follow methods and some scientists might choose not to and make assumptions instead.
"It doesn't advance your argument when you (a) consistently confuse "knowledge" (which is not discussed in either Dembski or Meyer) with "information" and (b) make claims about things "we all know" which are spurious."
If I'm confused about knowledge, why do dictionaries agree with my definition, e.g.
"a body of words and the systems for their use common to a people who are of the same community or nation, the same geographical area, or the same cultural tradition"
Wow, it sounds like _by definition_, language is commonly agreed upon. How did I do that, when as you claim, language is so subjective? How would I predict what was in a dictionary? Magic!
As far as making claims about things we all know, it'd be more accurate to say that native speakers of English almost all know what _knowledge_ means. We can go do a survey and see. Or we can depend on a dictionary that has done just that quite thoroughly:
"...the fact or condition of being aware of something..."
or
"acquaintance with facts, truths, or principles."
Wow, that sounds like knowing is very much based on "something" or "facts," which are objective.
If you have some citations that refer to "specified complexity" the way Dembski uses it, not by creationists, please present them.
One example from the query I mention:
"Three subsets of sequence complexity and their relevance to biopolymeric information"
http://www.tbiomed.com/content/2/1/29
An image in the paper:
http://www.tbiomed.com/content/2/1/29/figure/F4
'The Y1 axis plane plots the decreasing degree of algorithmic compressibility as complexity increases from order towards randomness. The Y2 (Z) axis plane shows where along the same complexity gradient (X-axis) that highly instructional sequences are generally found. The Functional Sequence Complexity (FSC) curve includes all algorithmic sequences that work at all (W). The peak of this curve (w*) represents "what works best." The FSC curve is usually quite narrow and is located closer to the random end than to the ordered end of the complexity scale. Compression of an instructive sequence slides the FSC curve towards the right (away from order, towards maximum complexity, maximum Shannon uncertainty, and seeming randomness) with no loss of function.'
Now Dembski:
http://www.counterbalance.org/id-wd/cansp-frame.html
"A repetitive sequence of bits is specified without being complex. A random sequence of bits is complex without being specified. A sequence of bits representing, say, a progression of prime numbers will be both complex and specified."
Dembski's description of CSI exactly matches that of the pubmed authors, who are not creationists as far as I can tell.
Mordie:
I see why you are confused on one point, and I apologize for that. Dembski does indeed discuss "background knowledge" in his book. I was taking your use of "knowledge" as a more general concept - and indeed, as a more general concept it is not (to my memory) discussed in No Free Lunch. The principal topic of No Free Lunch is information, not knowledge.
Now - on other points - you did not contest my credit card example, so you must admit that background knowledge is subjective.
As for the scientific method, your example is spurious. For Dembski, the quantities he advocates measuring are actually different from person to person, depending on each individual person's background knowledge. This is not the case for the other things science measures, such as atomic mass, mass of an electron, etc. Scientists may use "different scientific methods" - which you have yet to show - but they do not generally come up with different quantities depending on their background knowledge, do they?
My point was that it is objectively measurable with enough background information/knowledge.
It's not. Disputes about the meanings of words go on all the time, even among professional linguists and lexicographers.
Mordie:
When I talk about my knee hurting, is that "knowledge"? And for you, would it be subjective or objective?
One caveat on my recent post: there is one difference between the pubmed authors' and Dembski's usage of the term.
Dembski also includes the notion of improbability, whereas the pubmed authors only talk about compressibility/randomness and specificity.
Eric:
I've read that paper, and most of it is - I hate to say it - pure gibberish. For example, they talk about "random sequence complexity" and "ordered sequence complexity". Both are their own self-invented terms; the first seems to be the same as Kolmogorov complexity, but for the second they do not even give a well-defined way to calculate it.
They claim "Yet empirical evidence of randomness producing sophisticated functionality is virtually nonexistent" - yet there is an entire field of study, namely artificial life, that has example after example of such complexity - not even referenced in the paper.
And, as I said before, neither author is an information scientist or publishes in the information theory literature. These kinds of papers get published in fringe journals, often refereed by referees who don't know the relevant mathematics.
I'll have to take a look at the paper myself. I can infer a coherent meaning from those terms, but I'll have to see if it actually exists.
At any rate the graph I linked to does match Dembski's work, so at least I have shown your claim is not strictly true, though perhaps it is still generally true.
As for your claim that artificial life produces complex functionality through randomness, I'm pretty sure this is false. Rather, it is the random sampling of a pretty specific space of features along with a fitness function that brings about complex functionality.
I can speak somewhat knowledgeably about this since I experimented with a form of artificial life to generate self organization for my thesis.
Eric:
I agree with you that randomness alone is not producing complex functionality, but that's not how I read the Abel-Trevors claim. But their paper is so poorly written, it's hard to be sure exactly what their claim is.
If you can tell me how precisely to measure what they call "ordered sequence complexity", I'd appreciate it. For example, what is the ordered sequence complexity of
111111111111111111111111111111111111111
and
010101010101010101010101010101010101010
and
011010100010100010100010000010100000100
?
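For contrast, here is a minimal Python sketch (my own illustration, nothing from Abel and Trevors) of why Kolmogorov-style complexity is easy to bound for all three strings: each one is the output of a program only a couple of lines long. The third string, incidentally, is the prime-indicator sequence.

    def is_prime(k):
        # trial division; fine for small k
        return k >= 2 and all(k % d for d in range(2, int(k**0.5) + 1))

    s1 = "1" * 39                                          # all ones
    s2 = ("01" * 20)[:39]                                  # alternating
    s3 = "".join("01"[is_prime(i)] for i in range(1, 40))  # 1 exactly at the primes
    print(s1, s2, s3, sep="\n")

Short generating programs mean low Kolmogorov complexity for all three. "Ordered sequence complexity," by contrast, comes with no such recipe, which is why I'm asking.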
You say: "Now - on other points - you did not contest my credit card example, so you must admit that background knowledge is subjective."
Yes, I neglected that accidentally. I was going to respond. In the case of a credit card number, how do you know it? You know base 10 and the symbols that correspond to that, which are all intelligently designed. While the particular sequence is not intelligently designed, the digits have quite a bit of meaning involved underneath, all of which can only be known through the use of symbols, which are intelligently designed.
In order for me to communicate to a computer those symbols, the use of those intelligently designed symbols must be translated into binary code, all of which is an intelligently designed process, meaning that intelligent design was behind the entire process.
In that sense, there is a lot of truth in saying that one's credit card number is intelligently designed. That said, I am not a big expert on Dembski, so if I made an error here, I won't be terribly surprised. I'd rather hear Dembski himself give a response. Perhaps I'll have to go read more of his work to give an adequate response to this.
"For Dembski, the quantities he advocates measuring are actually different from person to person, depending on each individual person's background knowledge."
Judging by your example, which you cite as evidence that Dembski and Meyer's definition is subjective, that's not really the case. You'll have to better argue that their definition is subjective. I don't buy it.
"but they do not generally come up with different quantities depending on their background knowledge, do they?"
Those three people could come up with different results, if their methodology were flawed.
Three people would be unlikely to come up with different definitions of words of a foreign speaker if they rely on methods of linguists or proper research, provided they had a large enough sample size and relied on enough background information (the more, the more likely they'd come up with the same results). Ten linguists would, I predict, come up with nearly identical results.
"Disputes about the meanings of words go on all the time, even among professional linguists and lexicographers."
And yet we're still able to communicate and even predict responses from those to whom we speak. You've decided that the exception is the rule, it appears. Dictionaries, for example, line up very well, with only some exceptions. Those exceptions are few and far between compared to the correlations, which means I am way more right than you in asserting the objectivity involved in language.
"When I talk about my knee hurting, is that "knowledge"? And for you, would it be subjective or objective?"
It's fairly objective, because the hearers have experience with pain and know what that means. If someone tells me that their knee hurts (or communicates it by wincing and holding their knee), I can predict that laughing in response would likely result in a hostile reaction. How would I make such a prediction, unless I was getting something objective?
Mordie:
I think you keep wandering off and not addressing the main issue.
The issue with a credit card number is not whether it is "intelligently designed", but whether it can be fairly said to be dependent on the particular person who is interpreting the string of digits. It clearly is. You fail to address this.
When you say "You'll have to better argue that their definition is subjective. I don't buy it, it's not clear to me that I can do any more, because you are arguing without reading Dembski.
Had you read Dembski's book, you would see that he is claiming exactly what I said: the amount of "complex specified information" inherent in something depends crucially on the background knowledge of the particular person examining it, and hence is not a universal quantity. That was the point of my criticism #1 and as far as I can see, you have failed to refute that. Isn't it odd that you want to argue this without even reading Dembski?
As for language - nobody is disputing that we can communicate; this is another of your straw men. But we do so despite the fact that meanings of words are not precise, and do not convey the same thing to everyone. Misunderstandings are common -- something which you can see in our own exchange.
True. For some reason, your example left me befuddled, but after some thought, I realized my response was, as you say, totally trailing off, and I now have one that is appropriate after a little thought.
You say it's clearly subjective, i.e. "dependent on the particular person who is interpreting the string of digits. It clearly is."
I disagree completely. You said yourself that at least two know it. In this particular case, a methodology (and not just a mind or a person) can enable others to also know it. Just as with, for example, an anthropologist attempting to decipher "Uazekele?" via observations and study, a criminal can get his own observations via coercion or through trickery. Obviously, this requires opportunity, but when do observations _not_ require opportunity? That doesn't mean those observations are subjective and don't have results in the real world. Once the number is obtained by someone else, they can potentially use it to access my account. Clearly, that knowledge gave them more control in the real world and not just subjectively, meaning that the credit card number really was... well... my credit card number. That one other entity knowing it makes it real (that and the protocols involved).
You say, "because you are arguing without reading Dembski."
I have read some Dembski but not a great deal. From what I read, he was logical, and I couldn't find flaws in his logic.
You say that "the amount of "complex specified information" inherent in something depends crucially on the background knowledge of the particular person examining it, and hence is not a universal quantity."
You're contradicting your description of what Dembski said. Before, you said that Dembski said that the _measurability_ of how much information is inherent in something depends on background knowledge. Which is it? Give me an exact quote, instead, because it seems now that you're attempting to deceive me. _Measurability_ and reality are two very different things, obviously. I'm going to guess he said that its _measurability_ depends on background knowledge. This is actually easily defended; I believe I did so above. Again, it's not about WHO. It's simply about being able to observe or measure, which depends, as with all observations, on opportunity and method.
You say: "But we do [communicate] despite the fact that meanings of words are not precise, and do not convey the same thing to everyone."
Not precise by what standard? Is it 98% per word? What if the same thing is being expressed two different ways? Doesn't that exponentially decrease the chances that a word will be misunderstood? I would say that between two skilled communicators, losses in translation drop dramatically the more time and effort are put into expressing said idea.
Two native speakers of a language have an enormous sample size to depend on, making the meaning of the most commonly used words very precise.
Misunderstandings are common, largely due to small sample sizes, lack of attention spans, logical fallacies, cognitive dissonance, etc. People often hear what they want to hear, but that is a choice. Ultimately, two sincere people can get a profound understanding of each other's minds via language.
In this particular case, a methodology (and not just a mind or a person) can enable others to also know it.
I don't agree. But let's see how you would do it in practice. Say you encounter the following string of digits: 174520669358005603046598506479. What methodology would you follow to determine whether or not this string was "specified", in the sense of Dembski?
Which is it? Give me an exact quote, instead, because it seems now that you're attempting to deceive me.
Look, I have already devoted a lot of time to your criticisms (which I find trivial and baseless), but I'm not going to devote any further time if you repeatedly accuse me of dishonesty.
You need to read No Free Lunch before we can have a substantive discussion. Basically, you're asking me to teach you Dembski's theory, and I'm not interested in doing that because (a) it is worthless and (b) you are not a congenial student.
Ultimately, two sincere people can get a profound understanding of each other's minds via language.
All that has little to do with your original claim, which is the one I was pointing out was unfounded: that "All languages, by definition, have a commonly agreed upon (and therefore) objective measure."
You say: "I don't agree. But..."
That's quite an argument.
You say: "Say you encounter the following string of digits: 174520669358005603046598506479. What methodology would you follow to determine whether or not this string was "specified", in the sense of Dembski?"
I'm not an expert. This is clearly an argument from ignorance, and we both know that absence of evidence is not evidence of absence. This proves nothing and represents a logical fallacy, if you think it does.
You say: "Look, I have already devoted a lot of time to your criticisms (which I find trivial and baseless), but I'm not going to devote any further time if you repeatedly accuse me of dishonesty."
In this case, you brought it on yourself. You completely contradicted yourself. There is an enormous difference between you saying, "Dembski says that _measurements_ depend on background knowledge," and, "Dembski says that whether something contains information or not depends on background knowledge."
This looks like you've conveniently moved the goal post. The first time I accused you of sophistry, I admit I was mistaken. I was beginning to develop some respect, despite your manipulative, dishonest use of the word _creationist_ to describe ID proponents. But this reminded me that the very first thing I read from you is something about "creationist information," which is clearly just a means to trash your opponents and takes away from what should be the dignity of public debate.
You say: "Basically, you're asking me to teach you Dembski's theory..."
Not really. I simply asked for a clarification of the claim you made about Dembski. You've contradicted yourself and when I demand clarification, you tell me to go read Dembski's book. Dembski, as far as I can tell, is not the one who contradicted himself. You are.
You say: "All languages, by definition, have a commonly agreed upon (and therefore) objective measure."
The dictionary supported this, too. I didn't need any more support for that other than a dictionary, and I posted a quote from it. I don't need to argue that point at all, as it was proven.
I'm not an expert. This is clearly an argument from ignorance, and we both know that absence of evidence is not evidence of absence. This proves nothing and represents a logical fallacy, if you think it does.
OK, I'll note that you have no answer to offer here. So your claim that a methodology (and not just a mind or a person) can enable others to also know it is bogus, because you can offer no methodology to do so.
There is an enormous difference between you saying, "Dembski says that _measurements_ depend on background knowledge," and, "Dembski says that whether something contains information or not depends on background knowledge."
Actually, Dembski is rather vague on this point, so you should probably take it up with him. Either way, it supports my point, which was that quantities in science do not typically differ depending on who is measuring them. This is not true of Dembski's measure. My reading of Dembski is that the specified complexity in something is always computed relative to the background knowledge of the agent doing the inferring, and this is clearly supported in his book (which I don't have in front of me). Furthermore, he does not seem to think that there is an absolute measure of specified complexity. If you feel differently after reading his book, let me know.
despite your manipulative, dishonest use of the word
More accusations of dishonesty. You've had your warning. I don't tolerate repeated accusations of dishonesty here.
takes away from what should be the dignity of public debate
Get real - you've been beating your chest about how stupid I am, how sophistic my arguments are, and how dishonest I am, ever since you arrived.
The dictionary supported this, too. I didn't need any more support for that other than a dictionary, and I posted a quote from it. I don't need to argue that point at all, as it was proven.
Triumphal claims about having "proved" one's point, when it is trivially shown false by the fact that dictionaries differ on the meanings of words, are not impressive at all.
Look, if you want to have a good discussion, go read Dembski's book, No Free Lunch, go read my critique, published in Synthese of his claims, and come back when you're in the mood to talk and not accuse me of dishonesty. Until then, I've exhausted my patience.
By the way, if what you say is true that the _biggest_ problem you have with ID is that it is too subjective, then how can you consider my attacks on this claim to be trivial?
Your blog seems to indicate that the #1 problem with Dembski's definition of information is that it includes language. I'm a relative expert on language. I hardly think my arguments that language is not subjective (or at least, it is far more objective than subjective) are baseless considering I am a relative expert on the subject. You clearly are not.
If you want to talk about methodology, why not stick to your claim that the problem with Dembski's inclusion of language as part of his definition of information is its subjectivity? Language is my thing, not numbers. Why did you demand I suddenly be an expert on numbers, instead?
Furthermore, your only other argument as to why ID is subjective (which is its _biggest_ problem, according to you), is what Dembski supposedly said. But on this point, _you contradicted yourself_ and refuse to clarify!
It seems that your own blog is designed to attack ID, and to you the _best_ attack is to say that information shouldn't include language, because language is subjective, which you call its "biggest" problem. But now you say, "This is not productive... you're not a good student."
I'm only asking you to defend your claim that ID is too subjective. You're doing a poor job of it.
You say: "OK, I'll note that you have no answer to offer here. So your claim that a methodology (and not just a mind or a person) can enable others to also know it is bogus, because you can offer no methodology to do so."
Ask me about language and methodology... wait, I already explained that, and you had no argument other than, "I disagree." By the way, I realized that your string of numbers could be investigated by the FBI to possibly be a credit card number or bank account number. That would be a start. It's a lot better than your example of just assuming that something is nonsense.
You say: "Actually, Dembski is rather vague on this point, so you should probably take it up with him."
So you don't even know _what_ he said? I see. Perhaps you ought to leave the reading comprehension to me.
You say, "Either way, it supports my point."
It does? Well, you can't make that claim, because you don't know what it says, apparently, judging by how you contradict yourself.
You say: "Get real - you've been beating your chest about how stupid I am, how sophistic my arguments are, and how dishonest I am, ever since you arrived."
At first, I half-kiddingly suggested that I had thought you were stupid. I made a case that your example was "lame." That was largely based on my first impression of you, as you used the made-up (and, hypocritically, grossly imprecise) term "creationist information." As I said, I was starting to develop some respect, until you totally contradicted yourself with regard to what Dembski said. What's worse, I ask for a reference, and you won't comply. You make a claim publicly, back it up.
You say: "Triumphal claims about having "proved" one's point, when it is trivially shown false by the fact that dictionaries differ on the meanings of words."
Do you want me to use multiple dictionaries? Because I could. They agree, in fact. I checked.
You say, "You are arguing from a position of ignorance. You haven't read No Free Lunch; you haven't read my critique published in Synthese; you throw out guesses about what you think Dembski said despite the fact that you haven't read his book."
I have read Dembski's own words regarding the core elements of his theory. It was only about 70-100 pages, but it got the job done. I have read your critique here, which is the issue. I don't throw out guesses. I based my "guesses" both on your claim (which I initially believed on good faith) and my own extrapolation of what I have read from him. He's certainly not stupid enough to claim that the amount of information in a string is determined by background knowledge as opposed to being measurable with background knowledge.
Casey Luskin has replied to you in an e-book just released by the DI.
Signature of Controversy: Responses to Critics of Signature in the Cell:
http://www.discovery.org/scripts/viewDB/filesDB-download.php?id=6861
Start on Ch. 18
Interesting critique. The objections, though pointed, don't seem to me persuasive. Nevertheless, a few of them may be useful for further research and discussion. To me, the most interesting (and clever) objection you raised was "Problem 6". It occurs to me, however, that "specificity" comes in degrees (we understand this at an intuitive level, and I think I can give a precise analysis of this if you ask). If that's so, then a sequence of code that doesn't compile could be "specified" by virtue of being describable as one line of code from one that does.
Anyway, I didn't see anything that addresses the core of Meyer's arguments even if you are right about some of the finer technical difficulties with defining "specified complexity".
It occurs to me, however, that "specificity" comes in degrees (we understand this at an intuitive level, and I think I can give a precise analysis of this if you ask). If that's so, then a sequence of code that doesn't compile could be "specified" by virtue of being describable as one line of code from one that does.
Congratulations! You've rediscovered Kolmogorov complexity. Too bad, however - in the Kolmogorov theory, a random string has the highest complexity. Also too bad: mathematicians have studied this measure for 40 years, and it doesn't have the properties Meyer claims.
Thanks for the reply considerations. I actually don't recall Meyer discussing "Kolmogorov complexity," but perhaps your thought is that he's alluding to it via Dembski? My impression was that Dembski's notion of "specified complexity" is more general than Kolmogorov complexity. I might be wrong, though. Anyway, I appreciate your desire for technical precision. That's valuable.
No, my thought is not that Meyer is "alluding to it via Dembski", whatever that means. Rather, I'm pointing out that there is a well-recognized measure of complexity that handles all the problems I raised -- namely, Kolmogorov complexity. However, in this measure, a random string has the highest complexity, and hence there is no problem producing complexity via a random process. When we use this measure, Meyer's whole house of cards collapses.
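If a concrete illustration helps, compression gives a crude upper bound on Kolmogorov complexity (an upper bound only; the true quantity is uncomputable). A rough sketch in Python, with zlib standing in for an ideal compressor:

    import random
    import zlib

    random.seed(0)
    n = 10_000
    ordered = "01" * (n // 2)                               # highly patterned
    coins = "".join(random.choice("01") for _ in range(n))  # fair coin flips

    # Compressed length upper-bounds the Kolmogorov complexity.
    print(len(zlib.compress(ordered.encode())))  # a few dozen bytes
    print(len(zlib.compress(coins.encode())))    # roughly n/8 bytes, near the
                                                 # maximum for 10,000 binary symbols

The coin-flip string, produced by a purely random process, is the one with the most complexity.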
Jeff,
That makes sense. Your argument, then, if I understand it, seems to be something like this:
1. The solution I gave to Problem 6 depends upon thinking of information as "Kolmogorov information"
2. But Kolmogorov information isn't the same as the notion of "information" Meyer has in mind (as his has different properties).
3. Therefore, the solution I gave to Problem 6 is incompatible with Meyer's notion of information.
I'm skeptical of premise 1. The quote on page 86 seems to be an initial characterization from Webster's. Later, Meyer approves of thinking of information as "specified complexity" a la Dembski. But I'm (presently) skeptical that my solution to Problem 6 can't sensibly be given in terms of Dembski's notion. I'm very open to further data.
I'm sorry, sir, but you are very mistaken. Meyer knows exactly what he's talking about. Research is all pointing towards intelligent design. Reject it if you want, but to all true scientists, all science points to intelligent design. No random events could produce all the natural elements we see today. Reality check: God created the heavens and the earth.
Dear Epp:
Wow, you have really convinced me with your penetrating analysis.
Strangely enough, you are unable to offer a single rebuttal to anything I said. What a breathtaking display of incompetence!
Speaking of incompetence, there's no need to submit the same comment three times.
If I show you both, I have introduced your uncertainty by -log2 1/4 = 2 bits.
Hate to be so particular, 2 years later, but the term "introduced" should be "reduced".
To reword it slightly: when you reduce your uncertainty, you gain information. Reducing your uncertainty by 1 bit means gaining 1 bit of information. It's worded a little awkwardly, is all.
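For anyone checking the arithmetic at home, a trivial Python sketch:

    import math

    # Two fair coins: four equally likely outcomes.
    print(-math.log2(1/2))  # 1.0 -> revealing one coin gains 1 bit
    print(-math.log2(1/4))  # 2.0 -> revealing both coins gains 2 bits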
Thank you for the post (even though it's two years later now!). I've been reading Gitt and been incredibly frustrated by the pseudo-math, misrepresentation of Shannon's theorem, and blatant question-begging. Nice to see that someone else has seen it and responded so thoroughly.
Thanks for pointing out the typo. I fixed it.
Professor Shallit. I'm amazed by your patience with some of the responders to your blog. The Dunning–Kruger effect is much in evidence. Two common arguments against evolution from ID advocates are - a) violation of 2nd law of thermodynamics and b) creation of information from nothing (as they would put it). It is clear that a) is not taken seriously by physicists. Is anything they say about b) taken seriously by experts in information theory?
In a word, no.
@eric
I would really like to read your blog. I am just an ignoramus searching for answers. I do not intend to join in the argument, because I have more questions than answers. If not being a "pest" helps, would you please invite me? Thank you.
"So now you've added another criterion: information must not be "open to interpretation", or it isn't really information?
In that case, much of the genome is not information because we don't know exactly what it does.
These lame dodges..."
After viewing this interesting debate with Dolly, I think you're the one doing the dodging. Whether it's called data or information, it hardly matters, and Dolly didn't "add" this criterion. It's obvious from Meyer's book, which I guess you just skimmed.
In the DNA case, the information (or data -- whatever, it doesn't matter) codes for something that doesn't require intelligent interpreting to implement.
In the weather case, information codes for something that does require intelligent interpreting to implement.
You call her clarification a dodge, but it's your analogy that doesn't hold water.
It's obvious from Meyer's book, which I guess you just skimmed.
You guess wrong. I read every page, and my detailed reviews of his book are easily available with a google search.
In the DNA case, the information (or data -- whatever, it doesn't matter) codes for something that doesn't require intelligent interpreting to implement.
How is that relevant?
In the weather case, information codes for something that does require intelligent interpreting to implement.
Not true, but even if were true, how is it relevant?
I built a weather station when I was a kid. The weather station reported things like wind speed and direction using very simple circuits. How did that require "intelligent interpreting"?
ID advocates love to throw around vague terms like "intelligence", but unlike real scientists, they never tell us how to decide whether something is "intelligent", and they do not describe "intelligence" in measurable units.
I'm impressed how you made those simple circuits without any intelligence.
I'm impressed how you made those simple circuits without any intelligence.
Oh, no, not that perverse stupidity again.
Are you really going to claim that no experiment done by humans can demonstrate anything about the non-guided origins of something? If I put a pan of water in the refrigerator to see how it behaves, are you going to claim that this just shows that intelligence is needed for freezing?
The stupidity of ID advocates is beyond fathoming.
The question does not make sense.
Tell it to Dembski, then. He's the one claiming that "specified complexity" can be determined without knowing the origin of the string. After all, we don't usually know the precise origin of any particular substring of bases in DNA; yet Dembski and his friends are happy to compute its specified complexity.
There is even no ultimate metric known for complexity for source code in programming languages
Sure there is. It's called Kolmogorov complexity.
Could you tell where the claim is made?
Go and read any of his books where he talks about how to compute specified complexity via the procedure (a procedure which, by the way, neither he nor his supporters actually use in practice except in toy cases) using rejection regions, rejection functions, and so forth. There is nothing in that procedure about the origin of the object you are testing.
If I am taking whatever information as my password, its complexity measure changes as a password unless the hacker can read my mind
This is so incoherent, I have no idea what you are saying.
but not ultimate
What is your definition of "ultimate"?
Please correct me with a quote and reference if I am misinterpreting Dembski.
I can't, because I don't understand what you are saying.
I linked to this article from your comments on Meyer’s debate with Atkins, in which you said Atkins should have used weather as an example of naturally originating information.
Speaking as a mere layperson in Brazil, I found the comparison with the weather to be rather silly. Weather phenomena are one thing, and the systems created to describe them are another. The former are easily explained by natural laws, while the latter were necessarily created, and please don’t insult the bright scientists who developed them by saying they weren’t.
It’s like saying the languages humans developed to describe the world around them are equivalent to the things they describe. No, sir – a rock is not the same as the word “rock”, and heat is not the same as “40° C”.
Information theory be damned (I am entirely ignorant in this area), a code or language that can be understood, interpreted and acted upon with precision – like DNA – is something we would without hesitation attribute to an intelligent agent if we were not required to seek a natural cause, and that is Meyer’s point.
Your nit-picking with the definition of information seems to be an attempt to broaden it so that it includes phenomena that can easily be explained by natural causes, which Meyer is obviously not referring to.
I am entirely ignorant in this area
Yes, I can tell that. But it's easy to remedy your ignorance -- all you need to do is go to the library. Renyi's "A Diary on Information Theory" is a good start. Why aren't you reading it?
a code or language that can be understood, interpreted and acted upon with precision – like DNA – is something we would without hesitation attribute to an intelligent agent
Why?
Your nit-picking with the definition of information
I'm just using a standard definition of the term as understood by people who actually do information theory. (Hint: Meyer isn't one of them.)
Re-quoting:
p. 291: "Either way, information in a computational context does not magically arise without the assistance of the computer scientist."
p. 341: "It follows that mind -- conscious, rational intelligent agency -- what philosophers call "agent causation," now stands as the only cause known to be capable of generating large amounts of specified information starting from a nonliving state."
p. 343: "Experience shows that large amounts of specified complexity or information (especially codes and languages) invariably originate from an intelligent source -- from a mind or personal agent."
p. 343: "...both common experience and experimental evidence affirms intelligent design as a necessary condition (and cause)
of information..."
Every time I see such assertions I know that the people who make them have no idea of how either "design" or "intelligence" works. It is especially ironic to see the phrase "does not magically arise" used when they are basically claiming that humans use magic to think and design things.
In fact what goes on in design (speaking as a turbine design engineer for many years) is ... evolution: trial and error; survival of the fittest in the marketplace; Edison's hundreds of experiments with materials for light-bulb filaments and battery components. We have all seen cars and phones and computers and many other things evolve in our lifetimes.
Speaking of computer programming (as the first quote does), I wonder whether the author has ever heard of Gall's Law of System Design?
"A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system." – John Gall, Systemantics: How Systems Really Work and How They Fail (1975, p.71)
I wonder if any of these "design theoreticians" has ever designed anything complex from scratch that worked on the first try? I doubt it extremely.
Here is my counter-claim: human design work and human problem-solving in general is an evolutionary process. Nothing in the history of human technology has ever sprung into being in final and complete form without precursors and without trial-and-error. This in fact is the only way new information has ever been observed to be created. IDists have no example of the type of magical process which they claim is more likely to occur than evolution. The reason they don't know this is because they have never bothered to study design and intelligence and don't understand how they work.
Hello!
I am beginning to have the impression that "specified information" is not a quantity, but rather a boolean classification. Colloquially, as it relates to the biological "building blocks of life": when the probability of an event, derived from a mathematical set of outcomes whose elements are all deemed "functional" by contemporary biology, does not exceed 1 in 10^150, the event is dubbed "specified information".
Also see "Dembski's Defition" of "complex specified information".
"complex specified information (CSI) as being present in a specified event whose probability did not exceed 1 in 10^150"
In his book, Dr. Meyer calculates the number of possible interactions in the known universe to be roughly 10^140 = (10^80 elementary particles) * (10^43 interactions per second) * (10^17 seconds, the age of the universe at roughly 3 billion years). [Pg. 216-217]
In his book, Dr. Meyer calculates the probability of a single, functional, medium-sized protein (150 amino acids) arising from "prebiotic soup" to be about 1 in 10^164 = 1 in (probability of incorporating only peptide bonds) * (probability of incorporating only left-handed amino acids) * (probability of achieving correct amino-acid sequencing) = 1 in (10^45)*(10^45)*(10^74). [Pg. 212]
In his book, Dr. Meyer calculates the probability of a single-cell organism (250 proteins) arising from "prebiotic soup" to be about 1 in 10^41,000 = 1 in (10^164 events necessary to form one protein) ^ (250 proteins). [Pg. 213]
Because the probability of a single, functional, medium-sized protein, and still more so that of a single-celled organism, mutating into existence by chance is orders of magnitude (14 and 40,850 orders, respectively) less probable than 1 in 10^150, the event that life does exist is evidence of "complex specified information" and furthermore indicates design.
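Taking the book's figures at face value (whether those figures are the right ones is, of course, exactly what is in dispute), the arithmetic above is just addition and multiplication of exponents. A quick sketch:

    # Assumes the quoted figures; powers of ten multiply by adding exponents.
    universe_events = 80 + 43 + 17       # 10^140 possible interactions
    one_protein     = 45 + 45 + 74       # 1 in 10^164 for one 150-amino-acid protein
    minimal_cell    = one_protein * 250  # 1 in 10^41,000 for 250 proteins
    bound = 150                          # Dembski's universal probability bound
    print(universe_events, one_protein - bound, minimal_cell - bound)  # 140, 14, 40850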
Thank you for asking hard questions!!! You guys are awesome!
Have a fantastic day everyone!
Sincerely,
--
Jordan D. Ulmer
Why is the opposite of "mutating into existence by chance" (a claim nobody makes, by the way) "design"? This shows you haven't thought very seriously about the issue.
Because intelligence is the only cause we know of that creates these highly improbable configurations. Writing is one good example. The probability of creating by chance a short sentence with words in the dictionary, and with word lengths distributed according to normal usage, is extremely small. Making the sentence grammatically correct and meaningful makes the probability even smaller. So a single well-formed sentence is too improbable to be created during the lifetime of the universe through chance. Yet humans regularly write such sentences.
So, when we see such highly improbable configurations in biology the best explanation for their origin is intelligence. It can't be human intelligence. The best explanation is God's intelligent design.
You seem very confused, Eric. First, I was asking Jordan why the only alternative to pure randomness is design. Your answer didn't address this.
Second, it is simply NOT true that "intelligence is the only cause we know of that creates these highly improbable configurations". Have you ever seen NH's "Old Man of the Mountain"? Surely a rock that looks just like an old man's face is improbable; yet there is no evidence it was created by intelligence.
Your example of writing, a characteristic human activity, is laughable. Of course, humans write. But why do you say their writing is improbable? Sit Shakespeare down at a table, and I'd say it is extremely likely he would produce a sonnet. So the probability is high. It is only low if you compute it relative to a uniform distribution. But why is that the correct choice?
I'm sorry you've been hoodwinked by Dembski and Meyer. To avoid being snookered in the future, read my long paper with Elsberry where we show exactly how they carry out their bait and switch.
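To make the dependence on the distribution concrete, here is a toy sketch (my own example; the letter frequencies are rough published values, not anything from Dembski or Meyer). The same string gets a very different "improbability" depending on which model you pick:

    import math

    text = "tobeornottobe"

    # Model 1: 26 equally likely letters (the uniform distribution).
    uniform_bits = -len(text) * math.log2(1 / 26)

    # Model 2: rough English single-letter frequencies.
    freq = {"e": 0.127, "t": 0.091, "o": 0.075, "n": 0.067, "r": 0.060, "b": 0.015}
    english_bits = -sum(math.log2(freq[c]) for c in text)

    print(round(uniform_bits), round(english_bits))  # about 61 vs. 51 bits

Ten bits is a factor of about a thousand in probability, and that is with a model that knows nothing beyond single-letter frequencies. Neither number is "the" probability of the string; each is relative to a chosen distribution.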
Design is the only known alternative. You claim natural processes can create improbable configurations, such as an old man's face. However, there are many things that look like faces, so that configuration is not highly improbable.
The uniform distribution assumes maximum entropy. This is Bernoulli's principle of insufficient reason.
I read your long paper a while ago, and it is one reason I became interested in Intelligent Design when I was on the fence. I did not see you addressing Dembski's points fairly.
Again, you seem very confused.
Sometimes Dembski computes the probability relative to the actual history of the object in question. Sometimes he computes it relative to a uniform distribution. He changes from one to the other as it suits him.
Saying "The uniform distribution assumes maximum entropy" is not an answer. Why is this the correct one to use, especially if one knows the process being considered does not generate objects with a uniform distribution?
"there are many things that look like faces, so that configuration is not highly improbable": not good enough. Do the calculation and tell us exactly how many bits of bogus "specified complexity" were in the Old Man of the Mountain. Now compare with how many bits are in Mt. Rushmore.
Jeffrey: " It is only low if you compute it relative to a uniform distribution. But why is that the correct choice?"
Eric: "Design is the only known alternative."
Eric, do you know what "uniform [probability] distribution" means? It means that all possible outcomes are equally probable. But that makes your claim absurd. Naturally-occurring events are generally the result of complex processes, which render some outcomes more likely than others. The weather does not conform to a uniform distribution. Rain, sun and snow are not equally likely. (Of course the probability distribution also depends on what you choose to identify as different outcomes. Is heavy rain a different outcome from drizzle?)
Evolution is another complex natural process, and it would be equally absurd to claim that all possible outcomes of evolution must be equally probable (but for design). Even Dembski doesn't make such an absurd claim. But he sometimes omits to take non-uniform probability distributions into account when he should, and puts too much emphasis on uniform ones.
I suggest you refrain from concluding that the scientific community has made a trivial error and got evolutionary theory all wrong, at least until you've learnt the basics of the relevant subjects.
If the background process creates a highly non-uniform distribution, then it too has high CSI. The difficulty with measuring CSI is removing the background CSI.
The faces example clearly has no CSI. With faces, many different processes can create faces. I could drop a handful of sand and accidentally create a face. However, only people can write. So only intelligent agency can create configurations with such high specificity.
You avoided the questions. No surprise. You can't do the calculations on which the bogus CSI measure is built!
You need to talk to Robert Marks II, because he thinks Mt. Rushmore has a lot of specified complexity.
Your claim about non-uniform distributions having CSI already invalidates your argument, since as Richard pointed out, many natural processes result in non-uniform distributions but don't have designers.
You're equivocating between crags that look like face silhouettes and meticulously designed sculptures.
You beg the question by assuming natural processes don't have a designer.
Such equivocation and fallacious reasoning is why I didn't find your book very compelling.
You're equivocating between crags that look like face silhouettes and meticulously designed sculptures.
OK, show me how your measure distinguishes between them. Do the math!
You beg the question by assuming natural processes don't have a designer.
If all natural processes are designed, then what the heck are you distinguishing in your designed versus non-designed dichotomy?
Such equivocation and fallacious reasoning is why I didn't find your book very compelling.
I didn't write a book about intelligent design. Don't know what you're referring to. But please tell the truth! You don't find our reasoning compelling because you're a religious fundamentalist with an a priori commitment to deny evolution.
OK, show me how your measure distinguishes between them. Do the math!
I don't have time to write a computer simulation and show the math. However, it is obvious that I can drop a handful of sand and create a silhouette or basic face, whereas I could never do that with the Mount Rushmore faces.
Consequently, chance easily explains the former but cannot explain the latter. However, the latter also has high specificity, since the number of configurations matching the description "meticulously designed face of president" is very small.
But, once I have the time, it would be an interesting simulation to write.
If all natural processes are designed, then what the heck are you distinguishing in your designed versus non-designed dichotomy?
Even if all natural processes are designed, the processes themselves cannot create CSI. So, if we could account for the CSI already latent in the process, we would see that all artifacts they create have 0 CSI once the latent CSI is removed.
I didn't write a book about intelligent design. Don't know what you're referring to. But please tell the truth! You don't find our reasoning compelling because you're a religious fundamentalist with an a priori commitment to deny evolution.
You wrote a book critiquing intelligent design. It is full of the type of argumentation I find unconvincing.
I did not find the book unconvincing because of a prior commitment to deny evolution. I initially wanted to believe in evolution and be an atheist, but the lack of good arguments on that side was overwhelming, and I couldn't in good conscience do so.
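Since the simulation proposed above apparently never gets written, here is a minimal toy version (my own sketch; the grid size, the "face" criterion, and the target configuration are all invented for illustration). It demonstrates the one thing such a simulation can demonstrate: a loose specification ("anything face-like") is matched by chance far more often than one exact, fully specified configuration.

```python
import random

# Toy "drop a handful of sand" simulation (my own construction):
# scatter grains on a 5x5 grid and ask how often the result matches
# (a) a crude face-like pattern and (b) one exact fixed configuration.

random.seed(0)

def drop_sand():
    # each cell independently gets a grain with probability 1/2
    return [[random.random() < 0.5 for _ in range(5)] for _ in range(5)]

def crude_face(grid):
    # "face-like": both eye cells filled and at least one mouth cell filled
    return grid[1][1] and grid[1][3] and any(grid[3][1:4])

# one fixed 25-cell configuration, standing in for a fully specified target
target = drop_sand()

trials = 100_000
crude = exact = 0
for _ in range(trials):
    g = drop_sand()
    crude += crude_face(g)
    exact += (g == target)

print(crude / trials)  # about 0.22: crude "faces" are common by chance
print(exact / trials)  # expected 2^-25 per trial; almost surely 0 here
```

Of course, this merely restates the arithmetic of specification size: the crude pattern costs only a couple of bits of -log2 probability, while the exact target costs all 25. It says nothing, either way, about whether an intelligence was involved.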
"I don't have time to write a computer simulation and show the math."
It seems nobody who practices the pseudoscience of intelligent design has the time, not just you. Evader.
"the processes themselves cannot create CSI"
Sure they can. We gave an example in our paper.
"You wrote a book critiquing intelligent design."
What is the title of this nonexistent book I have written? I am genuinely curious, because I am unaware of it.
It seems to me that the mistake generally made is not distinguishing between information itself and the encoding of information. Information proper is a metaphysical concept. Information is *about* something. "What do you know about the new boss?" That's information. Information is not math. Information is an abstract concept, but information is coded so it can be stored or communicated. It is easy to see that information is not the coding, since the same information can frequently be coded by disparate means: I can talk about the new boss, or hand you a written article that contains the same info.
So all the tech talk is about the coding of information, not information itself. And note that all of Shannon et al.'s work was with pre-coded information, namely voice. So his information theory is really about how to efficiently communicate codes, not information proper.
So, returning from math to language, if a given medium contains information, then there must have been an informer. That's simply what information means. So if the DNA contains information, then there was a sentient creator. The evolutionist must say that DNA *looks like* coded information, but isn't really information at all. If the evolutionist grants that there is actually information coded into the cell, then he is granting a designer.
Mark: perhaps you would benefit by reading some basic textbooks about information theory.
As for "if a given medium contains information, then there must have been an informer", this is a non sequitur. Think about varves: they obviously contain information, but who is the informer?
"The evolutionist must say that DNA *looks like* coded information, but isn't really information at all." No, scientists just think you don't understand the definition of information.
"perhaps you would benefit by reading some basic textbooks about information theory":
All one needs is Meyer's book. If in a contract we find defined terms, then we must interpret the contract based on those definitions. The same goes for Meyer's book. He has narrowed his definition of information to what he argues requires design. That's his privilege as the author. By referring to other definitions of information out there that exclude the need for design, you're attacking a straw man.
I also think I could "Give an example of information with no physical basis": the written language. Objects shaped as letters only become an alphabet and a written language when a convention is established by an intelligent agent. That convention is not physical. Perhaps this applies to DNA as well. Is the convention that governs the DNA code physical, and how did it arise?
"All one needs is Meyer's book": why would you think that the definition imposed by Meyer, which has nothing in common with the definition used by all other scientists, and which has the problems I mentioned above, would be definitive?
You didn't reply to my example about weather prediction or my example about varves, both of which defeat Meyer's claims.
"All one needs is Meyer's book. If in a contract we find defined terms, then we must interpret the contract based on those definitions. The same goes for Meyer's book. He has narrowed his definition of information to what he argues requires design. That's his privilege as the author."
That would only be true if Meyer was careful to distinguish between Meyer-information and other senses of the word "information", and if he presented serious arguments to support the TWO claims (a) that organisms exhibit Meyer-information, and (b) that Meyer-information cannot arise by natural evolution.
However, he does no such thing. Instead he relies on appeals to intuitions about "information" in colloquial senses of that word.
a rose by any other name: I don't really care if you call it "information" or invent an entirely new term. In the end, there are very specific complex sequences of nucleotides that ultimately give rise to extraordinarily complex, sophisticated, and often elegant functions. There are many times more ways those same molecules could be arranged that would result in no function whatsoever. Some explanation is required for how those sequences arose (from nothing). Once all is "up and running", adaptation is not that impressive.
Hint: they didn't arise "from nothing".
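For what it's worth, the "many times more ways" arithmetic is easy to make explicit. Here is a back-of-envelope version (the sequence length and the uniformity assumption are mine, chosen purely for illustration), along with the catch this whole thread keeps circling.

```python
import math

# Back-of-envelope arithmetic (illustrative numbers, not measurements):
# the size of nucleotide sequence space for a short sequence, and the
# surprisal of any one sequence IF all sequences were equally likely.

n = 100                        # a 100-nucleotide sequence
print(4 ** n)                  # about 1.6e60 possible sequences

p_uniform = 4.0 ** -n          # probability of one sequence, uniform model
print(-math.log2(p_uniform))   # 200.0 bits

# The catch: those 200 bits are meaningful only if the uniform model
# describes how sequences actually arise; evolution, a process of
# replication plus selection, is nothing like a uniform draw.
```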