Friday, July 13, 2018

Discovery Institute Branches Out Into Comedy


That wretched hive of scum and villainy, the Discovery Institute, has announced that its nefarious tentacles have snagged a new venture: a situation comedy called the "Walter Bradley Center for Natural and Artificial Intelligence".

Walter Bradley, as you may recall, is the engineering professor and creationist who, despite having no advanced training in biology, wrote a laughably bad book on abiogenesis. Naming the "center" after him is very appropriate, as he's never worked in artificial intelligence and, according to DBLP, has no scientific publications on the topic.

And who was at the kick-off for the "center"? Why, the illustrious Robert J. Marks II (who, after nearly four years, still cannot answer a question about information theory), William Dembski (who once published a calculation error that resulted in a mistake of 65 orders of magnitude), George Montañez, and (wait for it) ... Michael Egnor.

Needless to say, none of these people has any serious connection to the mainstream of artificial intelligence. Egnor has published exactly 0 papers on the topic (or any computer science topic), according to DBLP. Dembski has a total of six entries in DBLP, some of which have a vague, tangential relationship to AI, but none have been cited by other published papers more than a handful of times (other than self-citations and citations from creationists). Marks has some serious academic credentials, but in a different area. In the past, he published mostly on topics like signal processing, amplifiers, antennas, information theory, and networks; lately, however, he's branched out into publishing embarrassingly naive papers on evolution. As far as I can tell, he's published only a small handful of papers that could, generously speaking, be considered mainstream artificial intelligence, none of which seem to have had much impact. Montañez is perhaps the exception: he's a young Ph.D. who works in machine learning, among other things. He has one laughably bad paper about the Turing test in an AI conference, another in AAAI 2015, and a handful in somewhat-related areas.

In contrast, take a look at the DBLP record for my colleague Peter van Beek, who is recognized as a serious AI researcher. See the difference?

Starting a center on artificial intelligence with nobody on board who would be recognized as a serious, established researcher in artificial intelligence? That's comedy gold. Congrats, Discovery Institute!

19 comments:

Unknown said...

I would have thought machine learning had something to do with AI. And neurosurgery something to do with natural intelligence. Thank you for correcting these misconceptions!

Jeffrey Shallit said...

Machine learning plays an important role in AI. But who among the new fellows is publishing actively in machine learning?

Neuroscience has a lot to do with natural intelligence. Neurosurgery, virtually nothing.

Unknown said...

Maybe the one with a PhD in machine learning, who works as a data scientist building machine learning systems in the Cloud AI group at Microsoft? Sure he is a newly minted academic, but claiming he has only one relevant publication in machine learning or AI is a bit of a lie.

William Spearshake said...

Will they shortly create a companion journal to the highly productive BIO-Complexity?

JimV said...

Sounds like a way to generate some artificial prestige in pursuit of natural cash, as in "today's speaker is so-and-so, of the Walter Bradley Center for Natural and Artificial Intelligence."

Jeffrey Shallit said...

Unknown: I don't know why you're being so coy. If you're talking about Montanez, I looked at his papers on DBLP and you can, too. https://dblp.uni-trier.de/search?q=Montanez%2C%20George

Which of those do you classify as AI papers?

Unknown said...

Machine learning, not AI specifically. For those I see a paper at IJCNN (international joint conference on neural networks), an ML paper at CIKM, one NLP paper at IRI, one ML paper at IEEE SMC, and one on Markov models at AAAI (a top AI conference, BTW). You may think the papers are junk, but others might disagree (there are, after all, four conference awards among those five papers). You cannot say that Montanez only has one relevant paper, or is unqualified to speak on ML. It would be professional (and ethical) to own up to your mistake.

Jeffrey Shallit said...

I looked at all the papers, but somehow missed the one in AAAI. I corrected my piece to reflect this, even before your comment.

The paper in IJCNN does not seem to be about artificial intelligence at all, in my opinion. The one in CIKM seems to be more about databases or information retrieval, and was not published in an AI conference. The one in IRI is clearly intelligent design apologetics. I don't see a paper at IEEE SMC listed in the source I gave.

Jeffrey Shallit said...

Oh, and a few other things.

First, I'm glad to see you have retracted your implication about "lie".

Second, I didn't say all his papers are junk. I said one specific paper is "laughably bad", so please don't put words in my mouth. If you want to defend that paper, go ahead. It should be interesting.

Third, I didn't say Montanez "is unqualified to speak about ML". Again, that's your interpretation.

Unknown said...

How is a paper at IJCNN on spatio-temporal latent state modeling not an ML paper? And how is a paper coauthored with MSR researchers using ML models to predict and analyze user behavior (CIKM paper) not an ML paper? And the paper using naive Bayes models and latent Dirichlet allocation ML methods somehow isn't an ML paper because it is also an "ID apologetics" paper? Is there some weird binary exclusion principle at work? I think your bias has become sufficiently clear at this point. Others can look at the papers themselves to see which of us is reporting things accurately. I'll let you have the final word to defend what is now clearly a mistaken initial implication (i.e., that Montanez is not an ML researcher or is somehow not qualified to talk on the subject, when he is a recent Carnegie Mellon ML PhD working at Microsoft building ML systems in an AI sub-org, on an ML team).

Jeffrey Shallit said...

Again, you're putting words in my mouth. Where did I say "he is somehow not qualified to talk on the subject [of machine learning]"? Please stop it.

Not all papers that use machine learning are research contributions to artificial intelligence. Do you really think that a paper about religious content in intelligent design social media is something that researchers in artificial intelligence would want to know about? Perhaps not; it hasn't been cited even once.

If I use tensorflow to program up a clothing classifier, am I doing research in artificial intelligence?

Heck, I sometimes build little parsers for things. Am I doing research in programming languages when I do that?

Jeffrey Shallit said...

But I do take your criticism seriously. Perhaps I was too ungenerous to Montanez. I've rewritten my paragraph to reflect your criticisms.

Unknown said...

+1

Unknown said...

For Marks, maybe you could similarly update your post to reflect his impact in computational intelligence and neural networks: http://cognet.mit.edu/book/neural-smithing

philosopher-animal said...

Even if Marks is the next Alan Turing, it is sort of weird to create an institute with all the other worse-than-nobodies.

Lee Witt said...

"Starting a center on artificial intelligence with nobody on board who would be recognized as a serious, established researcher in artificial intelligence? That's comedy gold."

You're probably being far too kind with that. What do you think their end-goal with this could be?

Jeffrey Shallit said...

Their goal is the same as their goal with evolution: have their own "experts" who can cast doubt on it.
Their motivation is religious. Artificial intelligence, which has implications for the specialness of man, causes the same doubts and worries and dissonance within them as evolution does, and this is the way they respond.

Lee Witt said...

" Artificial intelligence, which has implications for the specialness of man, causes the same doubts and worries and dissonance within them as evolution does, and this is the way they respond."

I guess that makes sense, and means we should look for their first efforts to be the invention of a very focused definition of AI, skewed to their purposes.

William Spearshake said...

ID is about comparing biological structures to human-designed structures and claiming that all examples of complexity with a known cause are the result of an intelligent agent. You would think that they would jump all over AI as further evidence of this. But, not surprisingly, they don't.