Wednesday, May 25, 2016

Actual Neuroscientists Cheerfully Use the Metaphors Epstein Says Are Completely Wrong

Here is yet more evidence that psychologist Robert Epstein is all wet when he claims that computation-based metaphors for understanding the brain are factually wrong and hindering research.

Actual research neuroscientists, summarizing what we know about memory, cheerfully use phrases like "storage of information", "stored memory information", "information retrieval", "information storage", "the systematic process of collecting and cataloging data", "retriev[ing]" of data, and so forth. Epstein claims the brain does not form "representations of visual events", but these researchers say "Memory involves the complex interplay between forming representations of novel objects or events...". The main theme of the essays seems to be that spines and synapses are the fundamental basis for memory storage.

So who do you think is likely to know more about what's going on in the brain? Actual neuroscientists who do research on the brain and summarize the state of the art about what is known in a peer-reviewed journal? Or a psychologist who publishes books like The Big Book of Stress-Relief Games?

Hat tip: John Wilkins.

P. S. Yes, I saw the following: "Further, LTP and LTD can cooperate to redistribute synaptic weight. This notion differs from the traditional analogy between synapses and digital information storage devices, in which bits are stored and retrieved independently. On the other hand, coordination amongst multiple synapses, made by different inputs, provides benefits with regard to issues of normalization and signal-to-noise." Again, nobody thinks that the brain is structured exactly like a modern digital computer. Mechanisms of storage and retrieval are likely to be quite different. But the modern theory of computation makes no assumptions about how data and programs are stored; it works just as well whether data is stored on paper, on disk, on a flash drive, or in brains.
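The storage-independence point can be made concrete with a small sketch (my illustration, not from the article being discussed): the same computation runs unchanged whether its input lives in a list in memory, a string, or a file-like object. Nothing in the algorithm depends on the medium.

```python
import io

def parity(bits):
    """Compute the parity of a stream of 0/1 symbols, one at a time.
    The algorithm never assumes how or where the symbols are stored."""
    p = 0
    for b in bits:
        p ^= int(b)
    return p

# The same bit sequence, held in three different "media":
in_memory = [1, 0, 1, 1]                                  # a Python list
as_text = iter("1011")                                    # characters in a string
as_file = (line.strip() for line in io.StringIO("1\n0\n1\n1\n"))  # lines in a file-like object

# One computation, three storage substrates, identical answer:
assert parity(in_memory) == parity(as_text) == parity(as_file) == 1
```

The point of the toy is only that the theory of computation quantifies over what is computed, not over the physical representation of the data.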


Patrick T. said...

In this overall context, if you aren't familiar with it yet, you might want to have a look at the work of Randy Gallistel, e.g., his 2009 book with King entitled Memory and the Computational Brain. They argue that while neuroscientists might be cheerfully using these metaphors, nobody actually has any idea how information is carried forward in time in the brain, that is, how the brain's basic memory mechanism works. The general idea that synaptic weights and connectivity provide the brain's memory mechanism is widely held in contemporary neuroscience, but—and I think that's much more striking—it has become a virtual dogma in large parts of the cognitive sciences (presumably because this would render the brain a finite-state machine akin to a connectionist network). However, as Gallistel and collaborators argue, there are good reasons not to believe that this is the case, plus there actually is some work in neuroscience (Johansson et al., 2014; Chen et al., 2014—also referenced in the paper you link to) that seems to provide tentative evidence for their ideas (as I've discussed elsewhere).

Jeffrey Shallit said...

Thanks, I'll check into it.

I do want to emphasize, however, that it really doesn't matter to my thesis exactly how information is represented in the brain. The whole virtue of theoretical computer science is that these details are largely irrelevant. We still have limits on what the brain, as a computing device, can do, and we have a useful language to express computational claims about the brain.
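One classical example of the kind of limit meant here is the halting problem, which can be sketched in a few lines of code (an illustrative toy of my own, not anything from this discussion): any candidate "halting oracle" can be fed a program constructed to contradict the oracle's own prediction, so no such oracle can be correct on all inputs, no matter what hardware or wetware runs it.

```python
def diagonal_argument(claimed_halts):
    """Given any candidate 'halting oracle' claimed_halts(f, x), which
    claims to decide whether f(x) halts, build a program on which the
    oracle must be wrong. Purely illustrative of the classical limit."""
    def trouble(f):
        # Do the opposite of whatever the oracle predicts about f(f).
        if claimed_halts(f, f):
            while True:        # oracle said "halts", so loop forever
                pass
        return "halted"        # oracle said "loops forever", so halt
    return trouble

# A toy oracle that always answers "does not halt":
always_no = lambda f, x: False
trouble = diagonal_argument(always_no)

# The oracle predicts trouble(trouble) never halts...
assert always_no(trouble, trouble) is False
# ...but by construction it halts immediately, refuting the prediction.
assert trouble(trouble) == "halted"
```

The same construction defeats any other candidate oracle, which is why the limit applies to every computing device, brains included, under the Church-Turing thesis.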

Patrick T. said...

You're of course right to point to the fact that the theory of computation is independent of the actual implementation in hardware or wetware. Yet, at least for the cognitive scientist (and, as Gallistel argues, also for the neuroscientist), it matters a lot what kind of computing device we consider the brain to be: Is it akin to a finite-state machine, or more like a Turing machine with a distinct memory component? The metaphors suggest to me that the latter should serve as a conceptual basis, whereas most work on memory against the background of "plasticity" and connectionism actually relies on the former, so that the brain must always change "itself" (i.e. its current state) by altering synaptic weights and connectivity patterns when memorising information. So, in a very Marrian fashion, despite the principled independence of the theory of computation, its conceptual tools clearly have implications for how we try to "reverse-engineer" the wetware.
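The finite-state versus separate-memory contrast can be made concrete with a standard textbook toy (my sketch, not Gallistel's argument): a machine with an unbounded counter, standing in for a Turing machine's distinct memory, recognizes strings of the form a^n b^n for every n, while a machine whose "counting" is bounded by a fixed number of states k must fail once n exceeds k.

```python
def balanced_with_memory(s):
    """Recognize a^n b^n using an unbounded counter, a stand-in for a
    Turing machine's separate, addressable memory."""
    count = 0
    seen_b = False
    for c in s:
        if c == "a":
            if seen_b:
                return False       # an 'a' after a 'b' is malformed
            count += 1
        elif c == "b":
            seen_b = True
            count -= 1
            if count < 0:
                return False       # more b's than a's
        else:
            return False
    return count == 0

def balanced_with_k_states(s, k):
    """The same task attempted by a machine with only k counting states:
    once more than k a's arrive, distinct prefixes collapse into one
    state, so the machine must misjudge some inputs."""
    count = 0
    seen_b = False
    for c in s:
        if c == "a":
            if seen_b:
                return False
            count = min(count + 1, k)   # finite states: counts above k are indistinguishable
        elif c == "b":
            seen_b = True
            count -= 1
            if count < 0:
                return False
        else:
            return False
    return count == 0

s = "a" * 5 + "b" * 5
assert balanced_with_memory(s)           # unbounded memory: correct for any n
assert balanced_with_k_states("aabb", 3) # small inputs still work...
assert not balanced_with_k_states(s, 3)  # ...but n > k breaks the finite machine
```

This is the pumping-lemma phenomenon in miniature: whether the brain's memory is more like the saturating counter or the unbounded one is exactly the empirical question at stake.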