I don't agree, so I asked the illustrious Marks for a calculation or other rationale supporting this claim.
After three months, no reply. So I asked again.
After six months, no reply. So I asked again.
After one year, no reply. So I asked again.
After two years, no reply. So I asked again.
After three years, no reply. So I asked again.
Now it's been four years. Still no reply.
The illustrious Marks also recently supervised the Ph.D. thesis of Eric Michael Holloway. In it, the author apparently makes some dubious claims. He claims that "meaningful information...cannot be made by anything deterministic or stochastic". But if you want to actually read this Ph.D. thesis and learn how this startling claim is proven, you're out of luck. And why is that? It's because Eric Holloway has imposed a 5-year embargo on his thesis, meaning that no one can read it for five years unless he approves. And when I asked to see a copy, I was refused.
Now, if there were some shenanigans going on -- for example, if a Ph.D. thesis were of such low quality that you wouldn't want anyone else to know about it -- what better way to hide that fact than to impose a ridiculously lengthy embargo? Perhaps an embargo so long that the supervisor would be safely retired by then and not subject to any investigation or sanction?
Then again, perhaps Eric Holloway is just following the example of his illustrious supervisor, who is adept at ducking questions for years.
I think this is the same Eric M. Holloway who is (or at least was, in November 2017) in charge of "The Baylor Student Chapter of the American Scientific Affiliation".
I've never heard of putting an embargo on a dissertation: nothing says "I stand 100% behind my work" like that does. I do see that Baylor's online repository for theses and dissertations lists this for a short description of his work:
Meaningful information is a mystery, where does it come from? It cannot be made by anything deterministic or stochastic, as will be proven in this study. What can make it? The proofs indicate the nature of the source; certain characteristics the source must have.
Apparently it won't be until this work is published in journals that the nature of those "proofs" will be known.
I missed the link to "Beardocs" you had posted, which makes my reference to the same summary redundant. Sorry for missing something I shouldn't have missed.
EricMH has posted in this forum, e.g., in this thread, if you want to try to engage him there.
Thanks. I'm not optimistic, though.
So what's your take on information, Mt. Rushmore vs. Mt. Fuji?
Robert Marks is wrong, but likely not for the reasons you think:
I stopped reading after this bogus claim: "Information is always created by an intelligent agent".
The obvious answer about the photograph is that a photograph of Mt. Rushmore might contain more, less, or exactly the same amount of information as a photograph of Mt. Fuji. It depends entirely on resolution, size, compression, and other factors irrelevant to the subject matter.
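A toy sketch of that point, using zlib purely as an illustration (the byte string and parameter choices here are mine, not anything from the thread): one fixed chunk of data, standing in for a single scene, yields very different stored sizes depending on encoding choices such as the compression level.

```python
import zlib

# One fixed "scene"; only the encoding parameters vary below.
scene = b"mountain " * 5000

# zlib level 0 stores the data uncompressed; levels 1 and 9 compress it.
sizes = {level: len(zlib.compress(scene, level)) for level in (0, 1, 9)}

# Same subject matter, very different amounts of data.
print(sizes)
```

The point being only that the byte count measures properties of the encoding, not of the mountain.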
You always stop reading as soon as you see something you don't like? That won't get you too far... but I'll work with you: why don't you give an example of information that is not created by an intelligent agent? And please don't say DNA or something else that has a disputed origin (even though, of course, DNA is data, not information).
As far as Fuji/Rushmore, do you think there is no difference between information and data? Because all those items you cite affect the data, but not the information. Say the $64,000 Question is: "name four US presidents". What picture would that contestant reach for? How about when asked to "indicate a symbol of Japan"? Finally, what's the information value of either picture to your pet?
And before you reach for Shannon, was he really concerned with actual information (what grandma wanted to communicate over the phone) or merely with data transmission (how well grandson understood grandma over the phone)? See A Mathematical Theory of Communication By C. E. SHANNON: “Frequently the messages have meaning; that is they refer to or are correlated according to some system with certain physical or conceptual entities. These semantic aspects of communication are irrelevant to the engineering problem.”
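As an aside, the Shannon quote can be illustrated concretely. A minimal sketch (the function and example strings are mine, for illustration only): empirical zeroth-order Shannon entropy depends only on symbol frequencies, so a sentence and its scrambled anagram score identically even though the meaning is destroyed.

```python
from collections import Counter
from math import log2

def shannon_entropy(s: str) -> float:
    # Empirical (zeroth-order) Shannon entropy in bits per symbol.
    # It depends only on symbol frequencies, never on meaning.
    counts = Counter(s)
    n = len(s)
    # Summing over sorted counts makes the result deterministic.
    return -sum(c / n * log2(c / n) for c in sorted(counts.values()))

meaningful = "the cat sat on the mat"
scrambled = "".join(sorted(meaningful))  # same letters, meaning destroyed

# Identical entropy despite the loss of all semantic content.
print(shannon_entropy(meaningful), shannon_entropy(scrambled))
```

This is exactly the sense in which Shannon set the "semantic aspects" aside as irrelevant to the engineering problem.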
It's real easy to see why "resolution, size, compression, and other factors irrelevant to the subject matter" do not impact the information content of the picture. Just imagine a white noise picture and vary all those parameters at will. None of those tweaks will change the fact that there is no information whatsoever in that picture. Of course, you - the intelligent agent - can photoshop even a white noise picture into a piece of art, hence add information.
You're very confused. When you say "imagine a white noise picture and vary all those parameters at will. None of those tweaks will change the fact that there is no information whatsoever in that picture" you have it exactly backwards. True white noise contains the maximum possible amount of information, in the Kolmogorov sense.
I don't have the time or interest to teach you information theory, but if you are interested, you can read the book by my colleagues Ming Li and Paul Vitanyi on the subject.
Oh, and here's an example of information not created by an intelligent agent: varves.
(If pressed, I would argue that "intelligent agent" is not well defined, by the way. But this example seems like an indisputable counterexample under almost any reasonable interpretation.)
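The white-noise point can be sanity-checked with a real compressor. A hedged sketch (a compressor gives only a computable upper bound on Kolmogorov complexity, which is itself uncomputable, so this is an illustration, not a proof): random bytes barely compress, while regular data of the same length collapses to a tiny description.

```python
import os
import zlib

def compressed_size(data: bytes) -> int:
    # Upper bound on the data's Kolmogorov complexity, via a real
    # compressor at its highest setting.
    return len(zlib.compress(data, 9))

n = 100_000
noise = os.urandom(n)    # "white noise": uniformly random bytes
regular = b"\x00" * n    # maximally regular data, same length

# Random data stays near n bytes; regular data shrinks enormously.
print(compressed_size(noise), compressed_size(regular))
```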
So, when you need to know something, do you ask the White Noise Oracle? And I'm the one confused? Do you get better results if you adjust the resolution, size, compression, and other factors of the White Noise Oracle?
It seems you confuse optimally encrypted data that looks almost like noise with actual white noise. And of course you forget the deciphering key that is, well, the key to extracting the information from what looks like but is not white noise...
Robert Marks also brought up Kolmogorov information recently:
...and he also cites Paul Vitanyi. Why don't you two agree, anyway? He thinks a bust of Lincoln has more information than a bowling ball based on Kolmogorov complexity. Of course, not to your pet, and not to many people (and never to a dumb rock), which makes information dependent on the intelligent agent and also makes him wrong.
Varves, fossilized trees, molecules, etc. are of course not information, but we do create information based on sampling and analyzing them. Do you understand the difference between Information, Data, and Media? It seems not.
'Intelligent agent' is as well defined as anything else in this world, which is not much, for sure. You can say the same about 'matter', 'information', etc.
BTW, why do you even have to go to varves? Is there something special about varves? Why not just say "everything" and join the physicists in "OMG, we're losing information in the black hole"? At least Susskind admits he confuses ‘data’ with ‘information’ (1 hr 16 min mark): https://www.youtube.com/watch?v=HIJ44AtCCi8 ...
"So, when you need to know something, do you ask the White Noise Oracle?"
The question is not relevant to anything we are discussing. You are confusing "information" with "information about object x". They are not the same.
"Why don't you two agree anyway?"
Because Robert Marks doesn't really understand Kolmogorov complexity.
Have you read Li and Vitanyi yet?
There are three possibilities here. 1. "You are right". This fails when you conclude "noise is information", regardless of any object x, y, or z.
2. "The theory is wrong". This fails given that the theory was successfully tested and is being used where appropriate.
3. "You misinterpret the theory". Bingo! Like any theory, this one cannot be applied willy-nilly. This becomes apparent when you:
a) Reach nonsensical conclusions like "noise is information";
b) Cannot articulate the difference between Information, Data and Media;
c) Cannot explain why varves (but not everything else) are special and "information not created by an intelligent agent";
d) Cannot explain how "resolution, size, compression, and other factors" could change the information of a white noise picture;
e) See all other questions you left unanswered.
My understanding of the Theory of Information is very solid. You may or may not have an edge on this theory, but it doesn't matter, as this discussion is clearly not about the theory of information. Instead, this is philosophizing about 'information', and there you most certainly have no edge.
I can't force you to read Li and Vitanyi, but if you do, you might understand why you're wrong. Good luck!
P.S. You still fail to distinguish between "information" and "information about something".
"My understanding of the Theory of Information is very solid."
I'm not an expert in this area (Jeffrey is), but I do know enough to know that your self-appraisal is hilarious, unless by solid you mean "the consistency of liquid Jello".
So you can offer only unsubstantiated claims, but no answers. Sad, but expected. Should I press for answers as you press Robert Marks? :)
Let me know when you extract information out of white noise. Any information... about something, about nothing, anything.
Again, you confuse "information" with "information about something". Why do you persist in this error, even after it's been pointed out?
Also, you lie when you say I have offered "only unsubstantiated claims". Everything I've said can be verified by consulting Li and Vitanyi. Have you read it yet?
"Let me know when you extract information out of white noise."
Give as precise a definition as you can of what you mean by "information".
"Consulting Li and Vitanyi" is a fool's errand and I am not interested. Why don't you consult them and get the answers? Or get them involved and we'll have a conversation without intermediaries. But I have not seen them make the wild claims you do (I have not search either).
Your simple challenge is to "extract information out of white noise. Any information... about something, about nothing, anything." It doesn't matter whether it's "information" or "information about something".
I think you have misunderstood me.
"Li and Vitanyi" refers to their BOOK, "An Introduction to Kolmogorov Complexity and Its Applications". When I suggested that you "consult Li and Vitanyi", I meant you should go and read their book.
It's a basic result of Kolmogorov complexity (and I teach it every year in CS 462 here in Waterloo) that a string of n bits chosen at random will, with very high probability, have the maximum amount of information possible over all n-bit strings, up to a small additive constant. This is one of the first theorems on the subject.
I am mystified why you do not know or understand this basic result. If you have no access to Li and Vitanyi's book, you can read it in my own textbook, "A Second Course in Formal Languages and Automata Theory".
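The theorem mentioned above follows from a simple counting argument. A sketch of the bound only, not the full formal statement (the function name is mine): there are 2^n strings of length n, but fewer than 2^(n-c) binary descriptions shorter than n-c bits, so at most a 2^(-c) fraction of n-bit strings can be compressed by c or more bits; the rest are, up to that additive constant, maximally complex.

```python
def fraction_compressible_bound(c: int) -> float:
    # Programs (binary strings) of length < n - c number
    # 2^0 + 2^1 + ... + 2^(n-c-1) = 2^(n-c) - 1 < 2^(n-c),
    # while there are 2^n strings of length n. So the fraction of
    # n-bit strings compressible by >= c bits is below
    # 2^(n-c) / 2^n = 2^(-c), independent of n.
    return 2.0 ** (-c)

# E.g. fewer than one string in a million can be compressed by 20 bits:
print(fraction_compressible_bound(20))
```

So a random n-bit string is overwhelmingly likely to be incompressible, which is the "maximum information" claim in question.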
"(I have not search either)"
I'm not surprised a bit.
Since you refuse to state what you mean by information* there doesn't seem to be any chance of figuring out exactly what your objection is. I find myself amazed at Jeffrey's patience.
* It seems that your use of "information" is best stated as "this thing tells me something" -- "message", in other words. If that's the case you are woefully off course.