Wednesday, May 25, 2016

Actual Neuroscientists Cheerfully Use the Metaphors Epstein Says Are Completely Wrong


Here is yet more evidence that psychologist Robert Epstein is all wet when he claims that computation-based metaphors for understanding the brain are factually wrong and hindering research.

Actual research neuroscientists, summarizing what we know about memory, cheerfully use phrases like "storage of information", "stored memory information", "information retrieval", "information storage", "the systematic process of collecting and cataloging data", "retriev[ing]" of data, and so forth. Epstein claims the brain does not form "representations of visual events", but these researchers say "Memory involves the complex interplay between forming representations of novel objects or events...". The main theme of the essays seems to be that spines and synapses are the fundamental basis for memory storage.

So who do you think is likely to know more about what's going on in the brain? Actual neuroscientists who do research on the brain and summarize the state of the art about what is known in a peer-reviewed journal? Or a psychologist who publishes books like The Big Book of Stress-Relief Games?

Hat tip: John Wilkins.

P. S. Yes, I saw the following: "Further, LTP and LTD can cooperate to redistribute synaptic weight. This notion differs from the traditional analogy between synapses and digital information storage devices, in which bits are stored and retrieved independently. On the other hand, coordination amongst multiple synapses, made by different inputs, provides benefits with regard to issues of normalization and signal-to-noise." Again, nobody thinks the brain is structured exactly like a modern digital computer; the mechanisms of storage and retrieval are likely to be quite different. But the modern theory of computation makes no assumptions about how data and programs are stored; it works just as well whether data is stored on paper, on disk, on a flash drive, or in brains.
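
To make that last point concrete, here is a little Python sketch (the class and function names are mine, purely for illustration): one and the same computation runs unchanged whether its data lives in RAM or in a file on disk, because the computation assumes only that reading and rewriting are possible.

```python
# A sketch: the same computation over two different storage media.
# All names here are invented for illustration.

class MemoryStore:
    """Holds a value in RAM."""
    def __init__(self):
        self.value = 0
    def read(self):
        return self.value
    def write(self, v):
        self.value = v

class FileStore:
    """Holds the same value in a file on disk."""
    def __init__(self, path="counter.txt"):
        self.path = path
        self.write(0)
    def read(self):
        with open(self.path) as f:
            return int(f.read())
    def write(self, v):
        with open(self.path, "w") as f:
            f.write(str(v))

def count_to(store, n):
    """The computation itself never mentions the storage medium."""
    while store.read() < n:
        store.write(store.read() + 1)
    return store.read()

print(count_to(MemoryStore(), 5))   # 5
print(count_to(FileStore(), 5))     # 5: same computation, different medium
```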

Saturday, May 21, 2016

Epstein's Dollar Bill and What It Doesn't Prove About the Brain


I hate to pick on poor confused Robert Epstein again, but after thinking about it some more, I'd like to explain why an example in his foolish article doesn't justify his claims.

Here I quote his example without the accompanying illustrations:

In a classroom exercise I have conducted many times over the years, I begin by recruiting a student to draw a detailed picture of a dollar bill – ‘as detailed as possible’, I say – on the blackboard in front of the room. When the student has finished, I cover the drawing with a sheet of paper, remove a dollar bill from my wallet, tape it to the board, and ask the student to repeat the task. When he or she is done, I remove the cover from the first drawing, and the class comments on the differences.

Because you might never have seen a demonstration like this, or because you might have trouble imagining the outcome, I have asked Jinny Hyun, one of the student interns at the institute where I conduct my research, to make the two drawings. Here is her drawing ‘from memory’ (notice the metaphor):

And here is the drawing she subsequently made with a dollar bill present:

Jinny was as surprised by the outcome as you probably are, but it is typical. As you can see, the drawing made in the absence of the dollar bill is horrible compared with the drawing made from an exemplar, even though Jinny has seen a dollar bill thousands of times.

What is the problem? Don’t we have a ‘representation’ of the dollar bill ‘stored’ in a ‘memory register’ in our brains? Can’t we just ‘retrieve’ it and use it to make our drawing?

Obviously not, and a thousand years of neuroscience will never locate a representation of a dollar bill stored inside the human brain for the simple reason that it is not there to be found.

Now let me explain why Epstein's example doesn't even come close to proving what he thinks it does.

First, the average person is not very good at drawing. I am probably much, much worse than the average person in this respect. When I play "Pictionary", for example, people always laugh at my stick figures. Yet, given something to look at, I can do a reasonable job of copying what I see. Like many people, I have trouble converting what I see "in my mind's eye" to a piece of paper. So it is not at all surprising to me that the students Epstein asks to draw a dollar bill produce the results he displays. His silly experiment says nothing about the brain and what it "stores" at all!

Second, Epstein claims that the brain stores no representation of a dollar bill whatsoever. He is pretty unequivocal about this. So let me suggest another experiment that decisively refutes Epstein's claim: instead of asking students to draw a dollar bill (an exercise that evidently measures mostly the artistic ability of the students), give them five different "dollar bills", four of which have been altered in some fairly obvious respect. For example, one might have a portrait of Jefferson instead of Washington; another might have the "1" in only two corners instead of all four; another might have the Treasury seal in red instead of the green typical of a Federal Reserve note; and so on. One of the five is an ordinary bill. Now ask them to pick out which bills are real and which are not. To make it really precise, each student should get just one bill and not be able to see the bills of the others.

Here's what I will bet: students will, with very high probability, be able to distinguish the real dollar bill from the altered ones. I know with certainty that I can do this.

Now, how could one possibly distinguish the real dollar bills from the fake ones if one has no representation of the real one stored in the brain?

And this is not pure speculation: thousands of cashiers every day are tasked with distinguishing real bills from fake ones. Somehow, even though they have no representation of the dollar bill stored in their brain, they manage to do this. Why, it's magic!

Thursday, May 19, 2016

Yes, Your Brain Certainly Is a Computer


- Did you hear the news, Victoria? Over in the States those clever Yanks have invented a flying machine!

- A flying machine! Good heavens! What kind of feathers does it have?

- Feathers? It has no feathers.

- Well, then, it cannot fly. Everyone knows that things that fly have feathers. It is preposterous to claim that something can fly without them.

OK, I admit it, I made that dialogue up. But that's what springs to mind when I read yet another claim that the brain is not a computer, nor like a computer, and even that the language of computation is inappropriate when talking about the brain.

The most recent foolishness along these lines was penned by psychologist Robert Epstein. Knowing virtually nothing about Epstein, I am willing to wager that (a) he has never taken a course in the theory of computation, (b) he could not pass the simplest undergraduate exam in that subject, (c) he does not know what the Church-Turing thesis is, and (d) he could not explain why the thesis is relevant to the question of whether the brain is a computer.

Here are just a few of the silly claims by Epstein, with my commentary:

"But here is what we are not born with: information, data, rules, software, knowledge, lexicons, representations, algorithms, programs, models, memories, images, processors, subroutines, encoders, decoders, symbols, or buffers – design elements that allow digital computers to behave somewhat intelligently."

-- Well, Epstein is wrong. We, like all living things, are certainly born with "information". To name just one obvious example, there is an awful lot of DNA in our cells. Not only is this coded information, it is even coded in base 4, whereas modern digital computers use base 2; the analogy is clear. We are certainly born with "rules" and "algorithms" and "programs", as Francis Crick explains in detail for the human visual system in The Astonishing Hypothesis.
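
Just to illustrate the arithmetic (this is a sketch of information content only, not a claim about how cells actually read DNA): each of the four nucleotides carries exactly two bits, so any base-4 genetic sequence converts losslessly to and from base 2.

```python
# Illustrative only: each base-4 nucleotide carries log2(4) = 2 bits,
# so a DNA string converts losslessly to a bit string and back.
ENCODE = {"A": "00", "C": "01", "G": "10", "T": "11"}
DECODE = {bits: base for base, bits in ENCODE.items()}

def dna_to_bits(seq):
    return "".join(ENCODE[base] for base in seq)

def bits_to_dna(bits):
    return "".join(DECODE[bits[i:i+2]] for i in range(0, len(bits), 2))

seq = "GATTACA"
bits = dna_to_bits(seq)           # '10001111000100'
assert bits_to_dna(bits) == seq
print(seq, "<->", bits)
```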

"We don’t store words or the rules that tell us how to manipulate them."

-- We certainly do store words in some form. When we are born, we are unable to pronounce or remember the word "Epstein", but eventually, after being exposed to enough of his silly essays, we gain that capability. Where did this ability come from? Something must have changed in the structure of the brain (not the arm or the foot or the stomach) that allows us to retrieve "Epstein" and pronounce it whenever something sufficiently stupid is experienced. The thing that changed can reasonably be said to "store" the word.

As for rules, without some sort of encoding of rules somewhere, how can we produce so many syntactically correct sentences with such regularity and consistency? How can we produce sentences we've never produced before, and have them be grammatically correct?
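
Here is a toy illustration of how little it takes (the grammar is one I made up for this example): a handful of stored rewrite rules generates endlessly many sentences, including ones never produced before, and every one of them conforms to the rules.

```python
import random

# A toy context-free grammar, invented for illustration: a few stored
# rules suffice to generate endlessly many novel, well-formed sentences.
GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["the", "N"], ["the", "ADJ", "N"]],
    "VP":  [["V", "NP"]],
    "N":   [["psychologist"], ["metaphor"], ["brain"]],
    "ADJ": [["confused"], ["silly"]],
    "V":   [["rejects"], ["stores"]],
}

def generate(symbol="S"):
    if symbol not in GRAMMAR:              # a terminal word: emit it
        return [symbol]
    expansion = random.choice(GRAMMAR[symbol])
    return [word for part in expansion for word in generate(part)]

for _ in range(3):
    print(" ".join(generate()))
# e.g., "the confused psychologist rejects the metaphor"
```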

"We don’t create representations of visual stimuli"

-- We certainly do. Read Crick.

"Computers do all of these things, but organisms do not."

-- No, organisms certainly do. They just don't do it in exactly the same way that modern digital computers do. I think this is the root of Epstein's confusion.

Anyone who understands the work of Turing realizes that computation is not the province of silicon alone. Any system that can do basic operations like storage and rewriting can do computation, whether it is a sandpile, or a membrane, or a Turing machine, or a person. Today we know (but Epstein apparently doesn't) that every such system has essentially the same computing power (in the sense of what can be ultimately computed, with no bounds on space and time).
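
If you doubt how little machinery that takes, here is a complete Turing machine simulator in a dozen lines of Python (my own sketch). The sample machine computes n+1 in unary, using nothing but storage and local rewriting.

```python
# A minimal Turing machine simulator: storage (the tape) plus local
# rewriting is all it takes. The example machine computes n -> n+1
# in unary: it scans right past the 1s and appends one more.
def run(program, tape, state="start", blank="_"):
    tape, head = dict(enumerate(tape)), 0
    while state != "halt":
        symbol = tape.get(head, blank)
        write, move, state = program[(state, symbol)]
        tape[head] = write
        head += {"R": 1, "L": -1}[move]
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

successor = {
    ("start", "1"): ("1", "R", "start"),   # skip over the 1s
    ("start", "_"): ("1", "R", "halt"),    # append one more 1
}

print(run(successor, "111"))   # '1111', i.e., 3 + 1 = 4
```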

"The faulty logic of the IP metaphor is easy enough to state. It is based on a faulty syllogism – one with two reasonable premises and a faulty conclusion. Reasonable premise #1: all computers are capable of behaving intelligently. Reasonable premise #2: all computers are information processors. Faulty conclusion: all entities that are capable of behaving intelligently are information processors."

-- This is just utter nonsense. Nobody says "all computers are capable of behaving intelligently". Take a very simple model of a computer, such as a finite automaton with two states computing the Thue-Morse sequence. I believe intelligence is a continuum, and I think we can ascribe intelligence to even simple computational models, but even I would say that this little computer doesn't exhibit much intelligence at all. Furthermore, there are good theoretical reasons why finite automata don't have enough power to "behave intelligently"; we need a more powerful model, such as the Turing machine.
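
For the curious, here is that two-state machine as code (a sketch of the standard construction): reading the binary digits of n, the automaton merely tracks the parity of the 1-bits, and the state it ends in is the n-th Thue-Morse symbol.

```python
# A two-state finite automaton computing the Thue-Morse sequence:
# the state is the parity of the 1-bits of n read so far. It is
# unquestionably a computer, but hard to call "intelligent".
def thue_morse(n):
    state = 0                      # start state
    for bit in bin(n)[2:]:         # feed the DFA n's binary digits
        if bit == "1":
            state = 1 - state      # flip state on every 1
    return state

print([thue_morse(n) for n in range(16)])
# [0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0]
```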

The real syllogism goes something like this: humans can process information (we know this because humans can do basic tasks like addition and multiplication of integers). Humans can store information (we know this because I can remember my social security number and my birthdate). Things that both store information and process it are called (wait for it) computers.

"a thousand years of neuroscience will never locate a representation of a dollar bill stored inside the human brain for the simple reason that it is not there to be found."

-- Of course, this is utter nonsense. If there were no representation of any kind of a dollar bill in a brain, how could one produce a drawing of it, even imperfectly? I have never seen (just to pick one thing at random) a crystal of the mineral Fletcherite, nor even a picture of one. Ask me to draw it and I will be completely unable to do so, because I have no representation of it stored in my brain. But ask me to draw a US dollar bill (in Canada we no longer have them!) and I can do a reasonable, though not exact, job. How could I possibly do this if I have no information about a dollar bill stored in my memory anywhere? And how is it that I fail for Fletcherite?

"The idea, advanced by several scientists, that specific memories are somehow stored in individual neurons is preposterous"

-- Well, it may be preposterous to Epstein, but there is evidence for it, at least in some cases.

"A wealth of brain studies tells us, in fact, that multiple and sometimes large areas of the brain are often involved in even the most mundane memory tasks."

-- So what? What does this have to do with anything? There is no requirement, in saying that the brain is a computer, that memories and facts and beliefs be stored in individual neurons. Storage that is partitioned across various locations, "smeared" across the brain, is perfectly compatible with computation. It's as if Epstein has never heard of artificial neural networks, where one can similarly say that a face is not stored in any particular location in memory, but rather distributed across many of them. These networks even exhibit some characteristics of brains, in that damaging parts of them doesn't entirely destroy the stored data.
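
Here is a toy demonstration of that kind of "smeared" storage (a standard Hopfield-style associative memory; the sizes and damage fraction are numbers I picked for illustration): a pattern stored across an entire weight matrix survives the deletion of a large fraction of the weights.

```python
import numpy as np

# Toy Hopfield-style associative memory: the pattern is stored
# "smeared" across a whole weight matrix, in no single location.
rng = np.random.default_rng(0)
N = 100
pattern = rng.choice([-1, 1], size=N)

W = np.outer(pattern, pattern).astype(float)   # Hebbian storage
np.fill_diagonal(W, 0)

W[rng.random(W.shape) < 0.4] = 0     # damage: delete 40% of the weights

cue = pattern.copy()                 # recall from a corrupted cue:
flipped = rng.choice(N, size=20, replace=False)
cue[flipped] *= -1                   # 20 of the 100 bits are wrong

recalled = np.sign(W @ cue)          # one step of recall
print("bits recovered:", int((recalled == pattern).sum()), "/", N)
# Typically all (or nearly all) 100 bits, despite the damage.
```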

"My favourite example of the dramatic difference between the IP perspective and what some now call the ‘anti-representational’ view of human functioning involves two different ways of explaining how a baseball player manages to catch a fly ball – beautifully explicated by Michael McBeath, now at Arizona State University, and his colleagues in a 1995 paper in Science. The IP perspective requires the player to formulate an estimate of various initial conditions of the ball’s flight – the force of the impact, the angle of the trajectory, that kind of thing – then to create and analyse an internal model of the path along which the ball will likely move, then to use that model to guide and adjust motor movements continuously in time in order to intercept the ball.

"That is all well and good if we functioned as computers do, but McBeath and his colleagues gave a simpler account: to catch the ball, the player simply needs to keep moving in a way that keeps the ball in a constant visual relationship with respect to home plate and the surrounding scenery (technically, in a ‘linear optical trajectory’). This might sound complicated, but it is actually incredibly simple, and completely free of computations, representations and algorithms."

-- This is perhaps the single stupidest passage in Epstein's article. He doesn't seem to know that "keep moving in a way that keeps the ball in a constant visual relationship with respect to home plate and the surrounding scenery" is an algorithm. Tell that description to any computer scientist, and they'll say, "What an elegant algorithm!" In much the same way, raster graphics machines draw circles with a clever technique called "Bresenham's algorithm". It succeeds in drawing a circle using linear operations only, despite not having the quadratic equation of a circle, (x-a)^2 + (y-b)^2 = r^2, explicitly encoded in it.
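
For the record, here is the midpoint form of that circle-drawing technique (a standard textbook sketch, centred at the origin for simplicity): the loop uses nothing but additions and comparisons, and the circle's quadratic equation appears nowhere in it.

```python
# Midpoint (Bresenham-style) circle algorithm: rasterizes a circle of
# radius r about the origin using only additions and comparisons.
def circle_points(r):
    points = []
    x, y = r, 0
    d = 1 - r                      # decision variable, updated linearly
    while x >= y:
        # one computed point, reflected into all eight octants
        for px, py in {(x, y), (y, x), (-x, y), (-y, x),
                       (x, -y), (y, -x), (-x, -y), (-y, -x)}:
            points.append((px, py))
        y += 1
        if d < 0:
            d += 2 * y + 1         # midpoint inside the circle: keep x
        else:
            x -= 1
            d += 2 * (y - x) + 1   # midpoint outside: step x inward
    return points

print(sorted(circle_points(3)))    # the lattice points of a radius-3 circle
```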

But more importantly, it shows Epstein hasn't thought seriously at all about what it means to catch a fly ball. It is a very complicated affair, involving coordination of muscles and eyes. When you summarize it as "the player simply needs to keep moving in a way that keeps the ball in a constant visual relationship with respect to home plate and the surrounding scenery", you hide the enormous amount of computation, and the algorithms, going on behind the scenes to coordinate movement, keep the player from falling over, and so forth. I'd like to see Epstein design a walking robot, let alone a running robot, without any algorithms at all.

"there is no reason to believe that any two of us are changed the same way by the same experience."

-- Perhaps not. But there is reason to believe that many of us are changed in approximately the same way. For example, all of us learn our natural language from parents and friends, and we somehow learn approximately the same language.

"We are organisms, not computers. Get over it."

-- No, we are both organisms and computers. Get over it!

"The IP metaphor has had a half-century run, producing few, if any, insights along the way."

-- Say what? The computational model of the brain has had enormous success. Read Crick, for instance, to see how the computational model has had some success in modeling the human visual system. Here's an example from that book that I give in my algorithms course at Waterloo: why is it that humans can find a single red R in a field of green R's almost instantly, whether there are 10 or 1000 letters, and can find a single red R in a field of red L's almost as quickly, but have trouble finding the unique green R in a large sea of green L's, red R's, and red L's? If you understand algorithms, and the distinction between parallel and sequential algorithms, you can explain this. If you're Robert Epstein, I imagine you just sit there dumbfounded.
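
Here is the explanation in code (my own toy model, not Crick's): a single feature like "red" can be tested over the whole visual field in one parallel step, independent of the number of items, while the conjunction "green AND R" forces a serial scan whose cost grows with the size of the display.

```python
import numpy as np

# Toy model of visual search, for illustration only. Each item is a
# (colour, shape) pair. A single feature ("red") can be tested over
# the whole field in one bulk, parallel operation; the conjunction
# "green AND R" has to be checked item by item.
rng = np.random.default_rng(1)

def make_display(n):
    colours = rng.choice(["red", "green"], size=n)
    shapes = np.where(colours == "green", "L",
                      rng.choice(["R", "L"], size=n))
    target = rng.integers(n)                 # hide exactly one green R
    colours[target], shapes[target] = "green", "R"
    return colours, shapes

colours, shapes = make_display(1000)

# Pop-out search: one parallel test over the field, regardless of n.
print("any red item?", (colours == "red").any())

# Conjunction search: a serial scan whose cost grows with the display.
for i in range(len(colours)):
    if colours[i] == "green" and shapes[i] == "R":
        print("green R at position", i)
        break
```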

Other examples of successes include artificial neural nets, which have huge applications in things like handwriting recognition, face recognition, classification, robotics, and many other areas. They draw their inspiration from the structure of the brain, and somehow manage to function enormously well; they are used in industry all the time. If that is not great validation of the model, I don't know what is.

I don't know why people like Epstein feel the need to deny things for which the evidence is so overwhelming. He behaves like a creationist in denying evolution. And like creationists, he apparently has no training in a very relevant field (here, computer science) but still wants to pontificate on it. When intelligent people behave so stupidly, it makes me sad.

P. S. I forgot to include one of the best pieces of evidence that the brain, as a computer, is doing things roughly analogous to what digital computers do, and certainly nothing more powerful than our ordinary RAM model or multitape Turing machine. Here it is: mental calculators who can do large arithmetic calculations are known, and their feats have been catalogued: they can multiply large numbers or extract square roots in their heads, without pencil and paper. But in every known example, their extraordinary computational feats are restricted to problems for which we know polynomial-time algorithms exist. None of these computational savants has ever, in the histories I've read, been able to factor arbitrary large numbers in their heads (say, 100-digit numbers that are the product of two large primes). They can multiply 50-digit numbers in their heads, but they can't factor. And, not surprisingly, no polynomial-time algorithm for factoring is currently known, and perhaps there isn't one.
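
The asymmetry is easy to see in a few lines of Python (a sketch; the primes are generated randomly with a Miller-Rabin test): multiplying two 50-digit primes is instantaneous, while recovering them from the 100-digit product by trial division would take astronomically long.

```python
import random, time

# Multiplying two 50-digit primes is easy; factoring their product by
# the obvious algorithm is hopeless. Random primes, for illustration.

def is_probable_prime(n, rounds=20):
    """Miller-Rabin probabilistic primality test."""
    if n in (2, 3):
        return True
    if n < 2 or n % 2 == 0:
        return False
    d, r = n - 1, 0
    while d % 2 == 0:
        d, r = d // 2, r + 1               # n - 1 = d * 2^r, d odd
    for _ in range(rounds):
        x = pow(random.randrange(2, n - 1), d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                   # definitely composite
    return True

def random_prime(digits):
    while True:
        candidate = random.randrange(10**(digits - 1), 10**digits) | 1
        if is_probable_prime(candidate):
            return candidate

p, q = random_prime(50), random_prime(50)
start = time.perf_counter()
n = p * q                                  # a ~100-digit semiprime
print(f"multiplied in {time.perf_counter() - start:.6f} seconds")

# Trial division would need on the order of 10^49 steps to find p or q;
# at a billion divisions per second, that's about 10^32 years.
```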

Sunday, May 08, 2016

Give Carol Wainio an Honorary Degree in Journalism


Carol Wainio catalogues once again a list of Margaret Wente's journalistic transgressions.

This is a story that the Canadian media are not addressing with much intellectual honesty. Take this grotesque column by Emma Teitel, for example.

What journalism school in Canada will be brave enough and honest enough to recognize Wainio's work -- for example, by awarding her an honorary degree?

Wednesday, May 04, 2016

Christian god to Ted Cruz: "Drop Dead"


Poor Ted Cruz.

First, his god tells him to run for President of the US.

Then, his god humiliates him in primary after primary.

Ted wanted his Republican opponents to pray and drop out of the race.

But it didn't quite work out that way.

What a capricious puppet-master god Ted worships!

Sunday, May 01, 2016

Columnist Margaret Wente Caught Plagiarizing Again


Although columnist Margaret Wente has been caught plagiarizing yet one more time by visual artist Carol Wainio, the Globe and Mail refuses to take real responsibility for it, saying only that "This work fell short of our standards, something that we apologize for. It shouldn’t have happened and the Opinion team will be working with Peggy to ensure this cannot happen again."

But it has happened again. Again and again and again and again.

Any reputable newspaper would have, in my opinion, fired Wente long ago. Inexplicably, the National Post's Terence Corcoran actually defends Wente. He can only do so by keeping his eyes firmly shut.