Friday, February 02, 2018

Doug Hofstadter, Flight, and AI


Douglas Hofstadter, author of the fascinating book, Gödel, Escher, Bach, is someone I've admired for a long time, both as an expositor and an original thinker.

But he goes badly wrong in a few places in this essay in the Atlantic Monthly. Actually, he's said very similar things about AI in the past, so I am not really that surprised by his views here.

Hofstadter's topic is the shallowness of Google Translate. Much of his criticism is on the mark: although Google Translate is extremely useful (and I use it all the time), it is true that it does not usually match the skills of the best human translators, or even good human translators. And he makes a strong case that translation is a difficult skill because it is not just about language, but about many facets of human experience.

(Let me add two personal anecdotes. I once saw the French version of Woody Allen's movie Annie Hall. In the original scene, Alvy Singer (Woody Allen) is complaining that a man was being anti-semitic because he said "Did you eat?" which Alvy mishears as "Jew eat?". This was translated as "Tu viens pour le rabe?" which Woody Allen conflates with "rabbin", the French word for "rabbi". The translator had to work at that one! And then there are the French versions of the Harry Potter books, where the "Sorting Hat" became the "Choixpeau", a truly brilliant invention on the part of the translator.)

But other things Hofstadter says are just ... wrong. Or wrong-headed. For example, he says, "The bailingual engine isn't reading anything--not in the normal human sense of the verb 'to read.' It's processing text." This is exactly the kind of complaint people made about the idea of flying machines: "A flying machine isn't flapping its wings, so it cannot be said to fly in the normal human understanding of how birds fly." [not an actual quote] Of course a computer doesn't read the way a human does. It doesn't have an iris or a cornea, it doesn't use its finger to turn the page or make the analogous motion on a screen, and it doesn't move its lips or write "How true!" in the margins. But what does that matter? No matter what, computer translation is going to be done differently from the exact way humans do it. The telling question is, Is the translation any good? Not, Did it translate using exactly the same methods and knowledge a human would? To be fair, most of his discussion does focus on that first question.

As for "It's processing text", I hardly see how that is a criticism. When people read and write and speak, they are also "processing text". True, they process text in different ways than computers do. People do so, in part, taking advantage of their particular knowledge base. But so does a computer! The real complaint seems to be that Google Translate doesn't currently have access to, or use extensively, the vast and rich vault of common-sense and experiential knowledge that human translators do.

Hofstadter says, "Whenever I translate, I first read the original text carefully and internalize the ideas as clearly as I can, letting them slosh back and forth in my mind. It's not that the words of the original are sloshing back and forth; it's the ideas that are triggering all sorts of related ideas, creating a rich halo of related scenarios in my mind. Needless to say, most of this halo is unconscious. Only when the halo has been evoked sufficiently in my mind do I start to try to express it--to 'press it out'--in the second language. I try to say in Language B what strikes me as a natural B-ish way to talk about the kinds of situations that constitute the halo of meaning in question.

"I am not, in short, moving straight from words and phrases in Language A to words and phrases in Language B. Instead, I am unconsciously conjuring up images, scenes, and ideas, dredging up experiences I myself have had (or have read about, or seen in movies, or heard from friends), and only when this nonverbal, imagistic, experiential, mental 'halo' has been realized—only when the elusive bubble of meaning is floating in my brain--do I start the process of formulating words and phrases in the target language, and then revising, revising, and revising."

That's a nice -- albeit maddeningly vague -- description of how Hofstadter thinks he does it. But where's the proof that this is the only way to do wonderful translations? It's a little like the world's best Go player talking about the specific kinds of mental work he uses to prepare before a match and during it ... shortly before he gets whipped by AlphaGo, an AI technology that uses completely different methods than the human.

Hofstadter goes on to say, "the technology I've been discussing makes no attempt to reproduce human intelligence. Quite the contrary: It attempts to make an end run around human intelligence, and the output passages exhibited above clearly reveal its giant lacunas." I strongly disagree with the "end run" implication. Again, it's like viewing flying as something that can only be achieved by flapping wings, and propellers and jet engines are just "end runs" around the true goal. This is a conceptual error. When Hofstadter says "There's no fundamental reason that machines might not someday succeed smashingly in translating jokes, puns, screenplays, novels, poems, and, of course, essays like this one. But all that will come about only when machines are as filled with ideas, emotions, and experiences as human beings are", that is just an assertion. I can translate passages about war even though I've never been in a war. I can translate a novel written by a woman even though I'm not a woman. So I don't need to have experienced everything I translate. If mediocre translations can be done now without the requirements Hofstadter imposes, there is just no good reason to expect that excellent translations can't eventually be achieved without them, at least not with the certainty that Hofstadter claims.

I can't resist mentioning a truly delightful argument against powered mechanical flight that was published in the New York Times.

The best part of this "analysis" is the date when it was published: October 9, 1903, exactly 69 days before the first successful powered flight of the Wright Brothers.

Hofstadter writes, "From my point of view, there is no fundamental reason that machines could not, in principle, someday think, be creative, funny, nostalgic, excited, frightened, ecstatic, resigned, hopeful...".

But they already do think, in any reasonable sense of the word. They are already creative in a similar sense. As for words like "frightened, ecstatic, resigned, hopeful", the main problem is that we cannot currently articulate, in a suitably precise way, what exactly we mean by them. We do not yet understand our own biology well enough to explain these concepts in the more fundamental terms of physics, chemistry, and neuroanatomy. When we do, we might be able to mimic them ... if we find it useful to do so.

Addendum: The single most clueless comment to Hofstadter's piece is this, from "Steve": "Simple common sense shows that [a computer] can have zero "real understanding" in principle. Computers are in the same ontological category as harmonicas. They are *things*. As in, not alive. Not conscious. Furthermore the whole "brain is a machine" thing is a *belief* based on pure faith. Nobody on earth has the slightest idea how consciousness actually arises in a pile of meat. Reductive materialism is fashionable today, but it is no less faith-based than Mormonism."

Needless to say, this is just the opposite of what I hold.

15 comments:

isohedral said...

Ha ha, I immediately thought of you as soon as I saw this article. I had been formulating many of the same objections in my head.

Peter (Oz) Jones said...

Prof Shallit
Nicely skewered sir!

It is also hard to go past this one, and I had not realised it was said down here in Oz:

In 1895, Lord Kelvin (William Thomson, 1824-1907) had confidently said, “heavier-than-air flying machines are impossible” (at the Australian Institute of Physics).

JimV said...

I enjoyed "Godel, Escher, Bach", but not so much "I am a Strange Loop". It seemed to be an attempt to explain consciousness by analogy to things like multiple reflections between facing mirrors, but the analogy never became clear to me.

I wonder if the people who state these inchoate objections to AI ever did much computer programming. It seems the obvious place to begin to study AI, but many of them state that a computer can't make the sort of decisions that in fact many programs make. (I'm thinking of an example from the philosopher Nagel.)

The philosopher Tim Maudlin showed up in one of Scott Aaronson's ("Shetl-Optimized" blog) threads not long ago saying things like, computer programs can't feel pain so they will never be able to simulate human thinking. I tried to point out that the biological purpose of pain, to discourage dangerous behavior, could be and is implemented in computer programs - to no avail.
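For instance, here is a toy sketch of the kind of thing I mean (my own illustration, with made-up action names and numbers, not anything from Maudlin's or Aaronson's discussion): a program that receives a negative "pain" signal whenever an action causes damage, and learns to avoid that action.

import random

actions = ["touch_flame", "touch_ice", "wait"]
damage = {"touch_flame": -10.0, "touch_ice": 0.0, "wait": 0.0}  # "pain" signal per action
value = {a: 0.0 for a in actions}  # the program's learned estimate of each action

for step in range(200):
    # usually pick the action that currently looks best, occasionally explore
    if random.random() < 0.1:
        action = random.choice(actions)
    else:
        action = max(actions, key=lambda a: value[a])
    pain = damage[action]
    # nudge the estimate toward the experienced outcome
    value[action] += 0.5 * (pain - value[action])

print(value)  # "touch_flame" ends up strongly negative, so the program avoids it

Crude, of course, but the functional role of pain - an aversive signal that steers behavior away from damage - is right there in the program.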

The problem with drawing conclusions from analogies, I think, is that when you don't know much about a subject, you can't make useful analogies for it.

Jeffrey Shallit said...

Good points.

"Pain" is, for me, one of the currently most inexplicable things about human experience. I find it very mysterious.

I agree with you entirely that "the biological purpose of pain, to discourage dangerous behavior, could be and is implemented in computer programs", but I am still left feeling that such an implementation might be missing something I can't put my finger on. And so far, the philosophical discussions of pain that I've read seem less than helpful.

Peter (Oz) Jones said...

Hello Jeffrey
Pain is indeed needed to keep us from injuring ourselves, as shown in people without that mechanism (e.g. the CIP gene disorder) or who have damage from a stroke, etc.

An interesting case, maybe an outlier:
https://www.omicsonline.org/open-access/trauma-and-pain-a-fragile-link-2167-1222-1000378.php?aid=89712

A case report by Fisher et al. highlighted the possibility of experiencing severe pain in the absence of tissue damage [5]. A builder aged 29 presented at accident and emergency having accidently jumped onto a 15 cm nail that had penetrated his work boot. The smallest movement of the nail caused so much distress that fentanyl and midazolam had to be administered so that the boot could be removed. Interestingly the nail had not penetrated the foot but had lodged between the toes resulting in no tissue damage. The builder’s perception of threat of injury had activated ascending and descending pain facilitation systems within the central nervous system which had exaggerated cognitive, appraisal, expectation, fear, and catastrophizing processes and modulated the experience and expression of pain.

Jeffrey Shallit said...

I wasn't disputing that at all.

I am puzzled by the experience of pain.

Peter (Oz) Jones said...

Jeffrey
mmm, I was just trying to be helpful as you mentioned “philosophical discussions”.
I was thinking they may not have included the findings on the neurological basis of the experience of pain without a physical etiology.
Another example is the phantom limb work by V.S. Ramachandran, i.e. it is computational and not a response to afferent signals.

JimV said...

(Fascinating example, although it is hard for me to imagine how a nail could get driven between my toes without causing any tissue damage, if only due to friction - it would be minor tissue damage, though.)

My explanation for the experience of pain probably won't be very helpful, but here it is:

It seems to me to be the same sort of problem as "why does a rose smell like a rose" (instead of like an orange, and vice-versa)? We understand how our senses detect and react to its specific chemistry in great detail, but why that specific experience?

Well, it had to smell like some specific, distinguishable thing to fulfill its evolved, biological function, and that is the way those chemicals smell to our sense organs in our universe. If they didn't, some other function would have evolved.

The same is true for the experience of pain. There is a way for pain to occur in our universe, and evolution found it (or one of the ways). As with electricity, all we have to do is find a way to duplicate what nature has done to make use of it (presuming we want to). E.g., a computer could be designed with the hardware to detect roses vs. oranges. It might not "feel" the same way it does to us, but why is that important?

(This argument was not understood or appreciated by Tim Maudlin, noted philosopher.)

The difference of course is that we don't yet understand pain neurologically at the same level of detail as scent-detection, but I don't think we can ever hope to understand why specific experiences (such as the scent of the rose) are as they are - just how they are caused chemically and neurologically.

Pseudonym said...

And let's not forget my all-time favourite: "The question of whether machines can think is about as relevant as the question of whether submarines can swim." - Edsger Dijkstra

This is something I've said in various forms elsewhere, but the real problem of consciousness is that we only have one example of consciousness to work with, and one example makes for a very poor definition.

It's like asking what constitutes "life". All of our examples use DNA and RNA, etc. Other kinds of life should be possible, but without other non-theoretical examples, it's impossible to define "life" in an unbiased way.

F. Andy Seidl said...

Great post. I'm a huge Hofstadter fan, but I agree very much with your critique. Still, I can't say I agree with your closing sentence: "Reductive materialism is fashionable today, but it is no less faith-based than Mormonism."

At some level, I get it: there is nothing but faith in the sense that we can only "know" what we perceive and there is no way to objectively know that what we perceive is in any sense "real". But reductive materialism has proven to be an effective way to model reality. Mormonism, no.

Reductive materialism gives us essentially every technology that exists--modern medicine, air travel, electric power, computers, skyscrapers, fertilizer, radio... everything. Mormonism, on the other hand, is essentially worthless as a model of reality. Essentially nothing has come from Mormonism other than Mormon traditions that, unsurprisingly, serve primarily to replicate the Mormon memes in the minds of Mormons.

Jeffrey Shallit said...

Andy, the sentence you refer to at the end is not mine, but the clueless commenter "Steve" I am quoting.

It expresses just the opposite of what I believe.

F. Andy Seidl said...

Jeffrey, I'm sorry I misinterpreted that, but thank you for the clarification! :-)

Unknown said...

I wonder if the accuracy and precision of the "translation" is a function of the relatedness of the languages involved.

I am a keen photographer and post my pictures to Google+. People commenting on my pictures will often post in their native language (in which I rarely have any facility). However, Google+ uses Google Translate to provide a way to convert the person's comment into English.

I've noticed that I can make sense of translations from many of the languages for which Google Translate provides text, but there are some that are almost always mangled beyond English comprehension. I assume that this is because some languages use a similar grammar, syntax, and referential structure to English, while others are vastly different.

Google Translate seems to do a particularly bad job with Arabic, which is unfortunate as Arabic speakers seem to be particularly interested in my style of photography. Arabic speakers comment often, but Google Translate doesn't help me to understand what is being said to me.

Anonymous said...

Hope it’s OK to comment on something old.

I think this post really misunderstands Hofstadter’s point. You wrote “The telling question is, Is the translation any good? Not, Did it translate using exactly the same methods and knowledge a human would?”

Both questions are wrong. The “telling question” for me, and I think Hofstadter too, is simply How intelligent is the translator? Of course that’s only answerable with a definition of intelligent. Fortunately Hofstadter’s been defending a pretty good one for decades: intelligent means capable of making analogies, sensitive to patterns, and possessing fluid concepts.

So how intelligent is Google Translate on that definition? That’s just what the Atlantic article is about, and the conclusion is: not very, and moreover it’s doubtful that merely building out its net and tossing more data at it will change that.

So to be clear: Hofstadter didn’t say an AI translator needs to think exactly like a human – and he definitely didn’t say that an AI translator needs to think exactly like Douglas Hofstadter. He just said an AI translator needs to live up to the name and think with intelligence. Your comparison to the claim that a flying machine needs to flap wings like a bird turns out to be, well, a bit of a canard.

My impression is that you don’t think AI is about creating systems that satisfy Hofstadter’s or anyone else’s definition of intelligence. You seem more interested in creating systems that give the right outputs to certain problems. The sad thing about that view is that it ignores the possibility that AI can illuminate what intelligence is and how it works.

Jeffrey Shallit said...

I think "intelligent means capable of making analogies, sensitive to patterns, and possessing fluid concepts" is much too vague to be worth doing anything with. Unless you can give me some testable criteria for "possessing fluid concepts".

I suspect it's also much too narrow. Hawks seem pretty darn intelligent to me, but I don't know that they make analogies or possess fluid concepts. Intelligence is multi-faceted and those three aspects don't capture it very well.