Wednesday, November 28, 2018

The Only Map that Matters


...is a moose map.

See?

(Hat tip: J. C.)

Tuesday, October 16, 2018

Happy Quaternions Day


"Here as he walked by [in Dublin] on the 16th of October 1843 Sir William Rowan Hamilton in a flash of genius discovered the fundamental formula for quaternion multiplication i2 = j2 = k2 = ijk = -1 & cut it on a stone of this bridge."

It's the 175th anniversary.

Friday, October 05, 2018

Moose Fight


For Moose Friday, from CBC New Brunswick, a video of two male moose battling it out.

Monday, September 17, 2018

Numberphile Mentions Our Paper


You have to look really quickly, but the YouTube channel Numberphile briefly mentions our recent paper on sums of palindromes:

Our paper appears onscreen at 3:30.

Sunday, September 16, 2018

Columnists Go Ga-Ga Over Reagan Letter that Demonstrates What a Tool He Was


Karen Tumulty discovered a previously unpublished 1982 letter written by Ronald Reagan to his father-in-law, Loyal Davis, shortly before Davis's death. She, like many other columnists, thinks it illustrates what a wonderful guy Reagan was. Michael Gerson gushed, "This letter is remarkable and revealing. I am so grateful that Karen found it." Peter Wehner called it a "rather remarkable/moving historical document". Ron Fournier sighed, "What a beautiful letter". Glenn Kessler said, "Such a remarkable find. Pause the Twitter feuds for a moment and glimpse the personal faith of a president."

I think it's an interesting find, but not for the reasons that Tumulty et al. do. I think it illustrates at least three significant deficiencies in Reagan's character that many in the public don't know about (but anyone who followed his career closely knows all too well).

First, Reagan was just not that bright, and showed signs of senility in his second term. As Jonathan Chait wrote,

Lou Cannon’s biography describes President Reagan frequently misidentifying members of his own Cabinet, describing movie scenes as though they were real, changing his schedule in order to follow the advice of an astrologer, and bringing up a science-fiction movie, in which aliens cause the Soviets and Americans to come together, with such frequency that Colin Powell would joke to his staffers, “Here come the little green men again.” As Cannon concluded, “The sad, shared secret of the Reagan White House was that no one in the presidential entourage had confidence in the judgment or the capacities of the president.”

The letter confirms it. Reagan didn't know the difference between "prophesy" (the verb) and "prophecy" (the noun), and thought the correct plural was "prophesys".

Second, Reagan never let actual facts get in the way of a good story. Truth was unimportant to him. Again, anyone who's actually followed his career already knows this, but the general public doesn't -- they saw him as a genial, reliable grandfather figure. But as Stephen Greenspan wrote in Annals of Gullibility:

Many of these stories [of Reagan] were embellished or, quite typically, completely made up. One example is a story Reagan told about a football game between his high school from Dixon, Illinois, and a rival team from Mendota. In this story, the Mendota players yelled for a penalty at a crucial point in the game. The official had missed the play and asked Reagan what had happened. Reagan's sense of sports ethics required him to tell the truth, Dixon was penalized, and went on to lose the game by one touchdown. Wonderful story, except that it never happened.

This aspect of Reagan's character is also illustrated in the letter. He refers to "one hundred and twenty three specific prophesys [sic] about his [Jesus'] life all of which came true."

The claim that aspects of Jesus' life were correctly and miraculously foretold is a common one among Christian evangelicals. Oddly enough, however, the specific number of fulfilled prophecies varies widely from author to author. A Google search gives "more than 300", "over 400", "hundreds", "191", "68", and many similar claims. However, most of these so-called prophecies can be dismissed right away because (a) they were not prophecies, or (b) they actually referred to something other than Jesus, or (c) they were extremely obscure or vague, or (d) their correctness is seriously disputed.

The few that remain might well be true because Jesus (assuming he existed) deliberately chose to take actions based on what the Old Testament said. In that case, the prophecy is correct, but not for any miraculous reason.

And of course, the value of true prophecies is negated by the prophecies that were falsified. One of the most important of Jesus' predictions -- (in Matthew 24) "Verily I say unto you, This generation shall not pass, till all these things be fulfilled." -- was falsified. None of the things Jesus claimed would happen occurred in the generation after his lifetime. The amount of ink Christians have expended trying to excuse this failed prophecy could probably fill a dozen swimming pools.

I doubt very much that Reagan investigated his 123 claims. He was not a scholar or expert in the Bible. Almost certainly he was just repeating some claim he had once heard -- this would be in line with other stories about Reagan, who had a large number of half-remembered quips and anecdotes he liked to relate, without concern for whether they were true.

Third --- and this is the most damning for me --- the letter illustrates Reagan's willingness to take advantage of someone's pain and suffering to ram his religious beliefs down the throat of a dying man. Civilized people do not expect others to share their religious beliefs, and do not evangelize to vulnerable people. It is rude and it is grotesque and it is contemptible.

If, dear reader, you are a Christian and you have trouble understanding my point of view, let us try a thought experiment. Suppose you were on your deathbed, and you were very worried because, in your religion, the sins you know that you committed would likely condemn you to an afterlife of eternal damnation. Suppose I, your atheist relative, tried to console you by saying, "Look, your beliefs about Hell are all nonsense. You are not going to experience eternal damnation because THERE IS NO HELL. No heaven, either, by the way." Would you be grateful? My guess is no, but rest assured -- I would not do such a thing.

There are other aspects of Reagan's character on exhibit in his letter -- a lack of judgment, a deficiency of skepticism, and an overwhelming gullibility. But I think I've said enough: the letter is an appalling document. The fact that people celebrate it as praiseworthy indicates a fundamental sickness at the heart of modern Christian America.

Monday, September 10, 2018

I Did Warn You


When I read the latest dreck from the "Walter Bradley Center for Natural and Artificial Intelligence", all I could think was: I did warn you.

Of course, it didn't really take that much cleverness. The "Center" is a project of the Discovery Institute, a think tank so committed to dissembling about evolution that it's often been called the "Dishonesty Institute". And, as I pointed out, the folks working at the "Center" aren't exactly luminaries in the area they purport to critique.

This latest column is by Michael Egnor, a surgeon whose arrogance (as we've seen many times before) is exceeded only by his ignorance. Despite knowing nothing about computer science, Egnor tries to explain what machine learning is. The results are laughable.

Egnor starts by making an analogy between a book and a computer. He says a book "is a tool we use to store and retrieve information, analogous in that respect to a computer". But this comparison misses the single most essential feature of a computer: it doesn't just store and retrieve information, it processes it. A book made of paper typically does not; the words are the same each time you look at it.

Egnor goes on to construct an analogy in which the book's binding cracks preferentially where people use it. But to be a computer, you need more processing capability than a cracked binding provides, and not just any processing will do. There's a reason why machines like the HP-35, despite their ability to compute trig functions and exponentials, were called "calculators" and not "computers". To be genuinely considered a "computer", a machine should be able to carry out basic operations such as comparisons and conditional branching. And some would say that a computer isn't a real computer until it can simulate a Turing machine. A book with a cracked binding isn't even close.
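To make "comparisons and conditional branching" concrete, here is a minimal sketch (my example, not Egnor's; any few lines of code would do). Even this trivial program does something no book, however cracked its binding, can do: it compares values and branches on the result.

    # Count the steps in the Collatz iteration: every pass through the
    # loop performs a comparison and a conditional branch.
    def collatz_steps(n):
        steps = 0
        while n != 1:            # comparison
            if n % 2 == 0:       # conditional branch
                n //= 2
            else:
                n = 3 * n + 1
            steps += 1
        return steps

    print(collatz_steps(27))     # 111 steps before reaching 1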

Egnor goes on to elaborate on his confusion. "The paper, the glue, and the ink are the book's hardware. The information in the book is the software." Egnor clearly doesn't understand computers! Software specifies actions to be taken by the computer, as a list of commands. But a book doesn't typically specify any actions, and if it does, those actions are not carried out by the "paper" or "glue" or "ink". If anything carries out those actions, it is the reader of the book. So the book's hardware is actually the person reading the book. Egnor's analogy is all wrong.

Egnor claims that computers "don't have minds, and only things with minds can learn". But he doesn't define what he means by "mind" or "learn", so we can't evaluate whether this is true. Most people who actually work in machine learning would dispute his claim. And Egnor contradicts himself when he claims that machine learning programs "are such that repeated use reinforces certain outcomes and suppresses other outcomes", but that nevertheless this isn't "learning". Human learning proceeds precisely by this kind of process, as we know from neurobiology.
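Note the irony: "repeated use reinforces certain outcomes and suppresses other outcomes" is a passable one-sentence description of the simplest learning algorithms we teach undergraduates. Here is a toy sketch, with made-up data (a perceptron-style update, used purely as an illustration):

    # Repeated exposure to examples nudges the weight up (reinforcing
    # one outcome) or down (suppressing the other) until the program's
    # answers match the targets.
    def train(examples, lr=0.1, epochs=20):
        w = 0.0
        for _ in range(epochs):
            for x, target in examples:
                prediction = 1 if w * x > 0 else 0
                w += lr * (target - prediction) * x
        return w

    # Learn "answer 1 for positive inputs, 0 for negative ones".
    print(train([(1, 1), (2, 1), (-1, 0), (-2, 0)]))

If that isn't "learning", Egnor owes us a definition that excludes it without also excluding the synaptic-weight changes by which human learning proceeds.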

Finally, Egnor claims that "it is man, and only man, who learns". This will be news to the thousands of researchers who study learning in animals, and have done so for decades.

When a center is started by people with a religious axe to grind, and staffed by people who know little about the area they purport to study, you're guaranteed to get results like this. Computer scientists have a term for this already: GIGO.

Sunday, September 09, 2018

Robert Marks: Four Years and Still No Answer -- and More Baylor Hijinks


Once upon a time, the illustrious Baylor professor Robert Marks II made the following claim: "we all agree that a picture of Mount Rushmore with the busts of four US Presidents contains more information than a picture of Mount Fuji".

I don't agree, so I asked the illustrious Marks for a calculation or other rationale supporting this claim.

After three months, no reply. So I asked again.

After six months, no reply. So I asked again.

After one year, no reply. So I asked again.

After two years, no reply. So I asked again.

After three years, no reply. So I asked again.

Now it's been four years. Still no reply.

The illustrious Marks also recently supervised a Ph. D. thesis of Eric Michael Holloway. In it, the author apparently makes some dubious claims. He claims that "meaningful information...cannot be made by anything deterministic or stochastic". But if you want to actually read this Ph. D. thesis and learn how this startling claim is proven, you're out of luck. And why is that? It's because Eric Holloway has imposed a 5-year embargo on his thesis, meaning that no one can read it for five years, unless Eric Holloway approves. And when I asked to see a copy, I was refused.

Now, if there were some shenanigans going on -- for example, if a Ph. D. thesis were of such low quality that you wouldn't want anyone else to know about it -- what better way to hide that fact than to impose a ridiculously lengthy embargo? Perhaps an embargo so long that the supervisor would be safely retired by then and not subject to any investigation or sanction?

Then again, perhaps Eric Holloway is just following the example of his illustrious supervisor, who is adept at ducking questions for years.

Sunday, August 26, 2018

Creationist Physicist Doesn't Understand Mathematics, Either


If there's one consistent aspect of creationism, it's that people lacking understanding and training are put forth as experts. Here we have yet another example, from the creationist blog Uncommon Descent, where physicist Rob Sheldon is quoted as saying:

THere [sic] can even be uncertainty in mathematics. For example, mathematicians in the 1700’s kept finding paradoxes in mathematics, which you would have thought was well-defined. For example, what is the answer to this infinite sum: 1+ (-1) + 1 + (-1) …? If we group them in pairs, then the first pair =>0, so the sum is: 0+0+0… = 0. But if we skip the first term and group it in pairs, we get 1 + 0+0+0… = 1. So which is it?

Mathematicians call these “ill-posed” problems and argue that ambiguity in posing the question causes the ambiguity in the result. If we replace the numbers with variables, do some algebra on the sum, we find the answer. It’s not 0 and it’s not 1, it’s 1/2. By the 1800’s a whole field of convergence criteria for infinite sums was well-developed, and the field of “number theory” extended these results for non-integers etc. The point is that a topic we thought we had mastered in first grade–the number line–turned out to be full of subtleties and complications.

Nearly every statement of Sheldon here is wrong. And not just wrong -- wildly wrong, as in "I have absolutely no idea of what I'm talking about" wrong.

1. Uncertainty in mathematics has nothing to do with the kinds of "infinite sums" Sheldon cites. "Uncertainty" can refer to, for example, the theory of fuzzy sets, or the theory of undecidability. Neither involves infinite sums like 1 + (-1) + 1 + (-1) ... .

2. Ill-posed problems have nothing to do with the kind of infinite series Sheldon cites. An ill-posed problem is one where a solution fails to exist, fails to be unique, or fails to depend continuously on the initial data. The problem with the infinite series is solely one of giving a rigorous interpretation of the symbol "...", which was achieved in the 19th century using the theory of limits.

3. The claim about replacing the numbers with "variables" and doing "algebra" is incorrect. For example, if you replace 1 by "x", then the expression x + (-x) + x + (-x) + ... suffers from exactly the same sort of imprecision as the original. To get the 1/2 that Sheldon cites, one needs to replace the original sum with 1/x - 1/x^2 + 1/x^3 - ..., then sum the series (using the definition of limit from analysis, not algebra) to get 1/(1+x) in a range of convergence that does not include x = 1, and then make the substitution x = 1 in the closed form. (I spell out the computation after this list.)

4. Number theory has virtually nothing to do with infinite sums of the kind Sheldon cites -- it is the study of properties of integers -- and has nothing to do with extending results on infinite series to "non-integers etc."
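For the record, here is the computation Sheldon was presumably groping toward, spelled out (the standard limit argument from analysis):

    \[
    \sum_{n \ge 1} \frac{(-1)^{n-1}}{x^n}
      = \frac{1/x}{1 + 1/x}
      = \frac{1}{1+x},
      \qquad |x| > 1,
    \]
    \[
    \lim_{x \to 1^+} \frac{1}{1+x} = \frac{1}{2}.
    \]

The series itself diverges at x = 1; the value 1/2 is attached to it only by evaluating the closed form there. That is exactly why this is called a summation method, and not the sum.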

It takes real talent to be this clueless.

Sunday, July 22, 2018

History Quiz


This American president's library contains many books on religion, but it also contains a copy of Darwin's Origin of Species.

Which American president is it?

Friday, July 13, 2018

Discovery Institute Branches Out Into Comedy


That wretched hive of scum and villainy, the Discovery Institute, has announced that its nefarious tentacles have snagged a new venture: a situation comedy called the "Walter Bradley Center for Natural and Artificial Intelligence".

Walter Bradley, as you may recall, is the engineering professor and creationist who, despite having no advanced training in biology, wrote a laughably bad book on abiogenesis. Naming the "center" after him is very appropriate, as he's never worked in artificial intelligence and, according to DBLP, has no scientific publications on the topic.

And who was at the kick-off for the "center"? Why, the illustrious Robert J. Marks II (who, after nearly four years, still cannot answer a question about information theory), William Dembski (who once published a calculation error that resulted in a mistake of 65 orders of magnitude), George Montañez, and (wait for it) ... Michael Egnor.

Needless to say, none of these people have any really serious connection to the mainstream of artificial intelligence. Egnor has published exactly 0 papers on the topic (or on any computer science topic), according to DBLP. Dembski has a total of six entries in DBLP, some of which have a vague, tangential relationship to AI, but none have been cited by other published papers more than a handful of times (other than self-citations and citations from creationists). Marks has some serious academic credentials, but in a different area: in the past, he published mostly on topics like signal processing, amplifiers, antennas, information theory, and networks; lately, however, he's branched out into publishing embarrassingly naive papers on evolution. As far as I can tell, he's published only a small handful of papers that could, generously speaking, be considered mainstream artificial intelligence, none of which seem to have had much impact. Montañez is perhaps the exception: he's a young Ph. D. who works in machine learning, among other things. He has one laughably bad AI paper, about the Turing test, in an AI conference, and another in AAAI 2015, plus a handful in somewhat-related areas.

In contrast, take a look at the DBLP record for my colleague Peter van Beek, who is recognized as a serious AI researcher. See the difference?

Starting a center on artificial intelligence with nobody on board who would be recognized as a serious, established researcher in artificial intelligence? That's comedy gold. Congrats, Discovery Institute!

Saturday, March 03, 2018

Who's the Comedian?


I wrote a joke that was once voted the funniest religious joke of all time. This is what I looked like in high school.
Who am I?

Friday, February 16, 2018

The World's Best Job: Moose Rescue


Snowmobilers worked together to free a stuck moose last month in Newfoundland.

"We knew the moose was stuck really good," one said.

And a trucker saved a moose in British Columbia.

After I retire, that's the job for me!

Hat tip: A. L.

Sunday, February 11, 2018

Yet Another Baseless Claim about Consciousness


If I live long enough, I'm planning to write a book entitled "The 100 Stupidest Things Anyone Ever Said About Minds, Brains, Consciousness, and Computers". Indeed, I've been collecting items for this book for some time. Here's my latest addition: Michael S. Gazzaniga, a famous cognitive neuroscientist who should know better, writes:

Perhaps the most surprising discovery for me is that I now think we humans will never build a machine that mimics our personal consciousness. Inanimate silicon-based machines work one way, and living carbon-based systems work another. One works with a deterministic set of instructions, and the other through symbols that inherently carry some degree of uncertainty.

If you accept that the brain functions computationally (and I think the evidence for it is very strong) then this is, of course, utter nonsense. It was the great insight of Alan Turing that computing does not depend in any significant way on the underlying substrate where the computing is being done. Whether the computer is silicon-based or carbon-based is totally irrelevant. This is the kind of thing that is taught in any third-year university course on the theory of computation.

The claim is wrong in other ways. It is not the case that "silicon-based machines" must work with a "deterministic set of instructions". Some computers today have access to (at least in our current physical understanding) a source of truly random numbers, in the form of radioactive decay. Furthermore, even the most well-engineered computing machines sometimes make mistakes. Soft errors can be caused, for example, by cosmic rays or radioactive decay.

Furthermore, Dr. Gazzaniga doesn't seem to recognize that if "some degree of uncertainty" is useful, this is something we can simulate with a program!
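The point is easy to demonstrate. Here is a minimal sketch (my illustration, not Gazzaniga's): a deterministic program whose "symbols" carry exactly as much uncertainty as we choose to give them.

    import random

    # A deterministic rule plus tunable noise: the output carries
    # whatever degree of uncertainty we decide to inject.
    def noisy_double(x, uncertainty=0.1):
        return 2 * x + random.gauss(0, uncertainty)

    print([round(noisy_double(21), 3) for _ in range(3)])
    # different on every run, e.g. [41.983, 42.110, 41.907]

And if pseudorandomness offends, Python's random.SystemRandom draws on operating-system entropy sources instead of a deterministic generator.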

Sunday, February 04, 2018

The Sortition Solution: Representation by Randomly-Chosen Representatives


The US political system is clearly broken. To name just a few problems:
  • the legislative agenda is largely driven, not by citizen need, but by lobbyists and special interests that can afford large political contributions;
  • corruption is rampant;
  • the budget never gets balanced because existing funded items have strong special interest support;
  • new budget items get added (but rarely removed) by special interests;
  • special interests consistently block action where there is widespread public support (e.g., gun control);
  • political parties induce a tribalist "us vs. them" mentality that leads to gridlock and an inability to deal with corruption within a party;
  • minority political viewpoints (Greens, for example) rarely get elected because they cannot achieve a majority in their district;
  • representatives are typically chosen from a small number of professions (e.g., law), while other sorts of expertise (e.g., science) are not adequately represented;
  • almost all representatives are Christians; atheists and other minority religious viewpoints are wildly under-represented;
  • incumbents have a huge advantage over challengers, even when they are clearly unfit;
  • women and minorities are wildly under-represented;
  • rural voters and interests are over-represented;
  • instead of being seen as employees doing the work of citizens, representatives become media celebrities in their own right;
  • legislators are extremely reluctant to address controversial issues, for fear of being voted out in the next election;
  • first-past-the-post voting means that candidates that most voters dislike are often elected.
Proportional representation is often proposed as a solution to some of these problems. In the most typical version of proportional representation --- party-list --- you vote for a party, not a candidate, and representatives are then chosen from a list the party provides. But this doesn't resolve the corruption and tribalism problems embodied in the first few items on my list.

My solution is exotic but simple: sortition, or random representation. Of course, it's not original with me: we use sortition today to form juries. But I would like to extend it to all legislative bodies.

Support for sortition comes from all parts of the political spectrum; William F. Buckley, Jr., for example, once said, "I am obliged to confess that I should sooner live in a society governed by the first two thousand names in the Boston telephone directory than in a society governed by the two thousand faculty members of Harvard University."

Here is a brief outline of how it would work. Legislators would be chosen uniformly and randomly from a universal, publicly-available list; perhaps a list of all registered voters.

In each election period (say 2-5 years), a random fraction of all representatives, perhaps 25-50%, would be replaced. This would allow some institutional memory and expertise to be retained, while ensuring that incumbents do not have enough time to build up fiefdoms that lead to corruption.
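To emphasize how little machinery this requires, here is a toy sketch; the chamber size, replacement fraction, and voter roll are all invented for illustration:

    import secrets

    rng = secrets.SystemRandom()   # OS entropy; no settable seed to rig

    def initial_draw(roll, size=200):
        # Choose a chamber uniformly at random from the public roll.
        return rng.sample(roll, size)

    def rotate(chamber, roll, fraction=0.4):
        # Each election period, replace a random fraction of the chamber.
        keep = rng.sample(chamber, round(len(chamber) * (1 - fraction)))
        staying = set(keep)
        pool = [p for p in roll if p not in staying]
        return keep + rng.sample(pool, len(chamber) - len(keep))

    roll = [f"voter{i}" for i in range(1_000_000)]
    chamber = initial_draw(roll)
    chamber = rotate(chamber, roll)   # one election period later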

Sortition could be phased in gradually. For the first 10 years, sortition could be combined with a traditional electoral system, in some proportion that starts small and eventually grows to replace the traditional electoral system entirely. This would increase public confidence in the change and avoid the problem of a "freshman class" that is completely without experience.

I suggest that we start with small state legislatures, such as New Hampshire, as an experiment. Once the experiment is validated (and I think it would be) it could move to replace the federal system.

Advantages

Most of the problems I mentioned above would be resolved, or greatly reduced in scope.

The new legislative body would be truly representative of the US population: For example, about 50% of legislators would be women. About 13% would be black, 17% Hispanic or Latino, and 5% Asian. About 15% would be atheists, agnostics, humanists, or otherwise religiously unaffiliated.

Issues would be decoupled from parties: Right now, if you vote for the Republicans, you get lower taxes and restrictions on abortion. What if you support one but not the other? There is no way to express that preference.

Difficult legislative choices will become easier: Experiments have shown over and over that balancing the federal budget -- traditionally one of the most difficult tasks in the existing system -- turns out to be a brief and relatively trivial exercise for non-partisan citizen groups. (Here's just one such example.) Sortition would resolve this thorny problem.

One significant motivation for corruption -- getting donations for re-election -- would essentially disappear. Of course, there would be other opportunities for corruption (there always are), but at least one would be gone.

A diverse elected body would be able to consider issues from a wide variety of different perspectives. Effective action could be taken where there is widespread public support (e.g., gun control).

Objections answered

People will not want to serve: We would pay them very well -- for example, $250,000 per year. We would enact a law requiring employers to release representatives from their employment, with a guarantee of re-employment after their term is over. If someone refuses to serve, we'd just move to the next person on the random list.

Sortition will produce stupid, incompetent, and dishonest representatives: Very true. Some will be stupid, some will be incompetent, and some will be dishonest. But this is also true for the existing system. (Have you ever seen Louie Gohmert being interviewed?) In my view, those with genuine expertise and leadership ability will naturally be seen as leaders by others and acquire some influence within the chamber. Stupid and incompetent people will quickly be recognized for what they are and will not have as much influence on the legislative agenda.

The public will not have trust in the selection process: Trust is a genuine issue; people will naturally distrust a new system. That's one reason to phase it in gradually. Mathematicians and theoretical computer scientists know a lot about how to sample randomly; whatever specific method is chosen would be open-source and subject to scrutiny. To make a truly random choice even more convincing, a combination of different methods could be used. For example, we could use algorithmic methods to choose a sample of (say) a thousand names. Then we could use physical means (for example, the ping-pong balls used for lotteries) to choose 200 names of the legislators from this group.
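For the algorithmic stage, here is a sketch of what "open-source and subject to scrutiny" might look like; the seed string and names are hypothetical. Because the draw is a deterministic function of a publicly fixed seed, anyone can re-run the published code on the published roll and verify that the same thousand names come out.

    import hashlib
    import random

    def shortlist(roll, public_seed, size=1000):
        # The seed is fixed in public (say, by a televised lottery-ball
        # drawing); sorting the roll makes the input order canonical.
        digest = hashlib.sha256(public_seed.encode()).digest()
        rng = random.Random(int.from_bytes(digest, "big"))
        return rng.sample(sorted(roll), size)

    thousand = shortlist({f"voter{i}" for i in range(1_000_000)},
                         "public drawing: 04 18 23 31 59")

The physical stage (the ping-pong balls) then picks the final 200 from this list, in full public view.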

The legislative agenda will not be clear: Political parties offer a legislative agenda with priorities, but where will the agenda come under sortition? My answer is that the major issues of the day will generally be clear. For example, today's issues include anthropogenic global warming, terrorism, immigration, wage stagnation, and health care, to name just five. These are clear issues of concern that can be seen without the need of a political party's ideology. The existing federal and state bureaucracies -- civil servants -- will still be there to offer expertise.

People will feel like they have no voice: Without elections, how do people feel their voice is heard? Another legitimate objection. This suggests considering some sort of mixed system, say, with 50% of representatives chosen by sortition and 50% chosen by election. Or perhaps two different legislative bodies, one based on sortition and one based on election. We have to be willing to experiment and innovate.

Sortition should be seriously considered.

Friday, February 02, 2018

Doug Hofstadter, Flight, and AI


Douglas Hofstadter, author of the fascinating book, Gödel, Escher, Bach, is someone I've admired for a long time, both as an expositor and an original thinker.

But he goes badly wrong in a few places in this essay in the Atlantic Monthly. Actually, he's said very similar things about AI in the past, so I am not really that surprised by his views here.

Hofstadter's topic is the shallowness of Google Translate. Much of his criticism is on the mark: although Google Translate is extremely useful (and I use it all the time), it is true that it does not usually match the skills of the best human translators, or even good human translators. And he makes a strong case that translation is a difficult skill because it is not just about language, but about many facets of human experience.

(Let me add two personal anecdotes. I once saw the French version of Woody Allen's movie Annie Hall. In the original scene, Alvy Singer (Woody Allen) is complaining that a man was being anti-semitic because he said "Did you eat?", which Alvy mishears as "Jew eat?". This was translated as "Tu viens pour le rabe?", which Woody Allen conflates with "rabbin", the French word for "rabbi". The translator had to work at that one! And then there are the French versions of the Harry Potter books, where the "Sorting Hat" became the "Choixpeau", a truly brilliant invention on the part of the translator.)

But other things Hofstadter says are just ... wrong. Or wrong-headed. For example, he says, "The bailingual engine isn't reading anything--not in the normal human sense of the verb 'to read.' It's processing text." This is exactly the kind of complaint people made about the idea of flying machines: "A flying machine isn't flapping its wings, so it cannot be said to fly in the normal human understanding of how birds fly." [not an actual quote] Of course a computer doesn't read the way a human does. It doesn't have an iris or a cornea, it doesn't use its finger to turn the page or make the analogous motion on a screen, and it doesn't move its lips or write "How true!" in the margins. But what does that matter? No matter what, computer translation is going to be done differently from the exact way humans do it. The telling question is, Is the translation any good? Not, Did it translate using exactly the same methods and knowledge a human would? To be fair, that's most of his discussion.

As for "It's processing text", I hardly see how that is a criticism. When people read and write and speak, they are also "processing text". True, they process text in different ways than computers do. People do so, in part, taking advantage of their particular knowledge base. But so does a computer! The real complaint seems to be that Google Translate doesn't currently have access to, or use extensively, the vast and rich vault of common-sense and experiential knowledge that human translators do.

Hofstadter says, "Whenever I translate, I first read the original text carefully and internalize the ideas as clearly as I can, letting them slosh back and forth in my mind. It's not that the words of the original are sloshing back and forth; it's the ideas that are triggering all sorts of related ideas, creating a rich halo of related scenarios in my mind. Needless to say, most of this halo is unconscious. Only when the halo has been evoked sufficiently in my mind do I start to try to express it--to 'press it out'--in the second language. I try to say in Language B what strikes me as a natural B-ish way to talk about the kinds of situations that constitute the halo of meaning in question.

"I am not, in short, moving straight from words and phrases in Language A to words and phrases in Language B. Instead, I am unconsciously conjuring up images, scenes, and ideas, dredging up experiences I myself have had (or have read about, or seen in movies, or heard from friends), and only when this nonverbal, imagistic, experiential, mental 'halo' has been realized—only when the elusive bubble of meaning is floating in my brain--do I start the process of formulating words and phrases in the target language, and then revising, revising, and revising."

That's a nice -- albeit maddeningly vague -- description of how Hofstadter thinks he does it. But where's the proof that this is the only way to do wonderful translations? It's a little like the world's best Go player talking about the specific kinds of mental work he uses to prepare before a match and during it ... shortly before he gets whipped by AlphaGo, an AI technology that uses completely different methods than the human.

Hofstadter goes on to say, "the technology I've been discussing makes no attempt to reproduce human intelligence. Quite the contrary: It attempts to make an end run around human intelligence, and the output passages exhibited above clearly reveal its giant lacunas." I strongly disagree with the "end run" implication. Again, it's like viewing flying as something that can only be achieved by flapping wings, and propellers and jet engines are just "end runs" around the true goal. This is a conceptual error. When Hofstadter says "There's no fundamental reason that machines might not someday succeed smashingly in translating jokes, puns, screenplays, novels, poems, and, of course, essays like this one. But all that will come about only when machines are as filled with ideas, emotions, and experiences as human beings are", that is just an assertion. I can translate passages about war even though I've never been in a war. I can translate a novel written by a woman even though I'm not a woman. So I don't need to have experienced everything I translate. If mediocre translations can be done now without the requirements Hofstadter imposes, there is just no good reason to expect that excellent translations can't eventually be achieved without them, at least to the same degree that Hofstadter claims.

I can't resist mentioning this truly delightful argument against powered mechanical flight, as published in the New York Times:

The best part of this "analysis" is the date when it was published: October 9, 1903, exactly 69 days before the first successful powered flight of the Wright Brothers.

Hofstadter writes, "From my point of view, there is no fundamental reason that machines could not, in principle, someday think, be creative, funny, nostalgic, excited, frightened, ecstatic, resigned, hopeful...".

But they already do think, in any reasonable sense of the word. They are already creative in a similar sense. As for words like "frightened, ecstatic, resigned, hopeful", the main problem is that we cannot currently articulate in a suitably precise sense what we exactly mean by them. We do not yet understand our own biology enough to explain these concepts in the more fundamental terms of physics, chemistry, and neuroanatomy. When we do, we might be able to mimic them ... if we find it useful to do so.

Addendum: The single most clueless comment on Hofstadter's piece is this, from "Steve": "Simple common sense shows that [a computer] can have zero "real understanding" in principle. Computers are in the same ontological category as harmonicas. They are *things*. As in, not alive. Not conscious. Furthermore the whole "brain is a machine" thing is a *belief* based on pure faith. Nobody on earth has the slightest idea how consciousness actually arises in a pile of meat. Reductive materialism is fashionable today, but it is no less faith-based than Mormonism."

Needless to say, this is just the opposite of what I hold.

Friday, January 26, 2018

Just Your Usual Friday Moose Standing on a Car


Here.

Apparently the photo is from Alaska and not Maine.

Hat tip: F. R.

Monday, January 15, 2018

Yet More Incoherent Thinking about AI


I've written before about how sloppy and incoherent a lot of popular writing about artificial intelligence is, for example here and here -- even by people who should know better.

Here's yet another example: a letter to the editor published in CACM (Communications of the ACM).

The author, a certain Arthur Gardner, claims "my iPhone seemed to understand what I was saying, but it was illusory". But nowhere does Mr. Gardner explain why it was "illusory", nor how he came to believe Siri did not really "understand", nor even what his criteria for "understanding" are.

He goes on to claim that "The code is clever, that is, cleverly designed, but just code." I am not really sure how a computer program can be something other than what it is, namely "code" (jargon for "a program"), or even why Mr. Gardner thinks this is a criticism of something.

Mr. Gardner states "Neither the chess program nor Siri has awareness or understanding". But, lacking rigorous definitions of "awareness" or "understanding", how can Mr. Gardner (or anyone else) make such claims with authority? I would say, for example, that Siri does exhibit rudimentary "awareness" because it responds to its environment. When I call its name, it responds. As for "understanding", again I say that Siri exhibits rudimentary "understanding" because it responds appropriately to many of my utterances. If I say, "Siri, set alarm for 12:30" it understands me and does what I ask. What other meanings of "awareness" and "understanding" does Mr. Gardner appeal to?

Mr. Gardner claims "what we are doing --- reading these words, asking maybe, "Hmmm, what is intelligence?" is something no machine can do." But why? It's easy to write a program that will do exactly that: read words and type out "Hmmm, what is intelligence?" So what, specifically, is the distinction Mr. Gardner is appealing to?

He then says, "That which actually knows, cares, and chooses is the spirit, something every human being has. It is what distinguishes us from animals and from computers." First, there's the usual "actually" dodge. It never matters to the AI skeptic how smart a computer is; it is still never "actually" thinking. Of course, what "actual" thinking is, no one can ever tell me. Then there's the appeal to the "spirit", a nebulous, incoherent thingy that no one has ever shown to exist. And finally, there's the absurd claim that whatever a "spirit" is, it's lacking in animals. How does Mr. Gardner know that for certain? Has he ever observed any primates other than humans? They exhibit, as we can read in books like Chimpanzee Politics, many of the same kinds of "aware" and "intelligent" behaviors that humans indulge in.

This is just more completely incoherent drivel about artificial intelligence, no doubt driven by religion and the need to feel special. Why anyone thought this was worth publishing is beyond me.