Thursday, May 19, 2016

Yes, Your Brain Certainly Is a Computer


- Did you hear the news, Victoria? Over in the States those clever Yanks have invented a flying machine!

- A flying machine! Good heavens! What kind of feathers does it have?

- Feathers? It has no feathers.

- Well, then, it cannot fly. Everyone knows that things that fly have feathers. It is preposterous to claim that something can fly without them.

OK, I admit it, I made that dialogue up. But that's what springs to mind when I read yet another claim that the brain is not a computer, nor like a computer, and even that the language of computation is inappropriate when talking about the brain.

The most recent foolishness along these lines was penned by psychologist Robert Epstein. Knowing virtually nothing about Epstein, I am willing to wager that he (a) has never taken a course in the theory of computation, (b) could not pass the simplest undergraduate exam in that subject, (c) does not know what the Church-Turing thesis is, and (d) could not explain why the thesis is relevant to the question of whether the brain is a computer or not.

Here are just a few of the silly claims by Epstein, with my commentary:

"But here is what we are not born with: information, data, rules, software, knowledge, lexicons, representations, algorithms, programs, models, memories, images, processors, subroutines, encoders, decoders, symbols, or buffers – design elements that allow digital computers to behave somewhat intelligently."

-- Well, Epstein is wrong. We, like all living things, are certainly born with "information". To name just one obvious example, there is an awful lot of DNA in our cells. Not only is this coded information, it is even coded in base 4, whereas modern digital computers use base 2 -- the analogy is clear. We are certainly born with "rules" and "algorithms" and "programs", as Francis Crick explains in detail about the human visual system in The Astonishing Hypothesis.
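To make the base-4 point concrete, here is a toy re-encoding (the two-bit assignment is arbitrary, chosen purely for illustration; nothing biological hangs on it):

```python
# A toy illustration that DNA is a base-4 code: each base carries
# exactly two bits.  The particular assignment below is arbitrary.
BITS = {'A': '00', 'C': '01', 'G': '10', 'T': '11'}

def dna_to_bits(seq):
    """Re-encode a DNA string as a binary string, two bits per base."""
    return ''.join(BITS[base] for base in seq.upper())

print(dna_to_bits("GATTACA"))  # -> 10001111000100
```

Four symbols per position is exactly two bits per position, which is the whole point.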

"We don’t store words or the rules that tell us how to manipulate them."

-- We certainly do store words in some form. When we are born, we are unable to pronounce or remember the word "Epstein", but eventually, after being exposed to enough of his silly essays, suddenly we gain that capability. From where did this ability come? Something must have changed in the structure of the brain (not the arm or the foot or the stomach) that allows us to retrieve "Epstein" and pronounce it whenever something sufficiently stupid is experienced. The thing that is changed can reasonably be said to "store" the word.

As for rules, without some sort of encoding of rules somewhere, how can we produce so many syntactically correct sentences with such regularity and consistency? How can we produce sentences we've never produced before, and have them be grammatically correct?

"We don’t create representations of visual stimuli"

-- We certainly do. Read Crick.

"Computers do all of these things, but organisms do not."

-- No, organisms certainly do. They just don't do it in exactly the same way that modern digital computers do. I think this is the root of Epstein's confusion.

Anyone who understands the work of Turing realizes that computation is not the province of silicon alone. Any system that can do basic operations like storage and rewriting can do computation, whether it is a sandpile, or a membrane, or a Turing machine, or a person. Today we know (but Epstein apparently doesn't) that every such system has essentially the same computing power (in the sense of what can be ultimately computed, with no bounds on space and time).

"The faulty logic of the IP metaphor is easy enough to state. It is based on a faulty syllogism – one with two reasonable premises and a faulty conclusion. Reasonable premise #1: all computers are capable of behaving intelligently. Reasonable premise #2: all computers are information processors. Faulty conclusion: all entities that are capable of behaving intelligently are information processors."

-- This is just utter nonsense. Nobody says "all computers are capable of behaving intelligently". Take a very simple model of a computer, such as a finite automaton with two states computing the Thue-Morse sequence. I believe intelligence is a continuum, and I think we can ascribe intelligence to even simple computational models, but even I would say that this little computer doesn't exhibit much intelligence at all. Furthermore, there are good theoretical reasons why finite automata don't have enough power to "behave intelligently"; we need a more powerful model, such as the Turing machine.
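For the curious, such a computer really is tiny. Here is a sketch (mine, not anything from Epstein's article) of the standard model, in which the automaton reads the base-2 expansion of n; the n-th Thue-Morse term is just the parity of the number of 1 bits:

```python
def thue_morse(n):
    """n-th term of the Thue-Morse sequence, computed by a two-state
    finite automaton reading the base-2 expansion of n.  The single
    bit of state is the parity of the 1 bits seen so far."""
    state = 0
    for bit in bin(n)[2:]:
        if bit == '1':
            state ^= 1  # the only "transition rule": flip on a 1
    return state

print([thue_morse(n) for n in range(8)])  # -> [0, 1, 1, 0, 1, 0, 0, 1]
```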

The real syllogism goes something like this: humans can process information (we know this because humans can do basic tasks like addition and multiplication of integers). Humans can store information (we know this because I can remember my social security number and my birthdate). Things that both store information and process it are called (wait for it) computers.

"a thousand years of neuroscience will never locate a representation of a dollar bill stored inside the human brain for the simple reason that it is not there to be found."

-- Of course, this is utter nonsense. If there were no representation of any kind of a dollar bill in a brain, how could one produce a drawing of it, even imperfectly? I have never seen (just to pick one thing at random) a crystal of the mineral Fletcherite, nor even a picture of it. Ask me to draw it and I will be completely unable to do so, because I have no representation of it stored in my brain. But ask me to draw a US dollar bill (in Canada we no longer have them!) and I can do a reasonable, though not exact, job. How could I possibly do this if I have no information about a dollar bill stored in my memory anywhere? And how is it that I fail for Fletcherite?

"The idea, advanced by several scientists, that specific memories are somehow stored in individual neurons is preposterous"

-- Well, it may be preposterous to Epstein, but there is at least evidence for it, at least in some cases.

"A wealth of brain studies tells us, in fact, that multiple and sometimes large areas of the brain are often involved in even the most mundane memory tasks."

-- So what? What does this have to do with anything? There is no requirement, in saying that the brain is a computer, that memories and facts and beliefs be stored in individual neurons. Storage that is partitioned among various locations, "smeared" across the brain, is perfectly compatible with computation. It's as if Epstein has never heard of artificial neural networks, where one can similarly say that a face is not stored in any particular location in memory, but rather distributed across many of them. These networks even exhibit some characteristics of brains, in that damaging parts of them doesn't entirely destroy the stored data.

"My favourite example of the dramatic difference between the IP perspective and what some now call the ‘anti-representational’ view of human functioning involves two different ways of explaining how a baseball player manages to catch a fly ball – beautifully explicated by Michael McBeath, now at Arizona State University, and his colleagues in a 1995 paper in Science. The IP perspective requires the player to formulate an estimate of various initial conditions of the ball’s flight – the force of the impact, the angle of the trajectory, that kind of thing – then to create and analyse an internal model of the path along which the ball will likely move, then to use that model to guide and adjust motor movements continuously in time in order to intercept the ball.

"That is all well and good if we functioned as computers do, but McBeath and his colleagues gave a simpler account: to catch the ball, the player simply needs to keep moving in a way that keeps the ball in a constant visual relationship with respect to home plate and the surrounding scenery (technically, in a ‘linear optical trajectory’). This might sound complicated, but it is actually incredibly simple, and completely free of computations, representations and algorithms."

-- This is perhaps the single stupidest passage in Epstein's article. He doesn't seem to know that "keep moving in a way that keeps the ball in a constant visual relationship with respect to home plate and the surrounding scenery" is an algorithm. Tell that description to any computer scientist, and they'll say, "What an elegant algorithm!". In exactly the same way, the way raster graphics machines draw a circle is a clever technique called "Bresenham's algorithm". It succeeds in drawing a circle using linear operations only, despite not having the quadratic equation of a circle, (x-a)^2 + (y-b)^2 = r^2, explicitly encoded in it.
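For readers who want specifics, here is a sketch of one standard integer formulation (the "midpoint" variant usually credited to Bresenham); the quadratic equation never appears in the loop, which uses only additions, subtractions, and comparisons:

```python
def circle_points(a, b, r):
    """Rasterize the circle (x-a)^2 + (y-b)^2 = r^2, midpoint-style.
    The quadratic equation is never evaluated inside the loop: the
    decision variable d is maintained incrementally by additions."""
    pts = set()
    x, y = r, 0
    d = 1 - r
    while y <= x:
        # eight-way symmetry gives the other octants for free
        for dx, dy in ((x, y), (y, x), (-x, y), (-y, x),
                       (x, -y), (y, -x), (-x, -y), (-y, -x)):
            pts.add((a + dx, b + dy))
        y += 1
        if d <= 0:
            d += 2 * y + 1
        else:
            x -= 1
            d += 2 * (y - x) + 1
    return pts
```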

But more importantly, it shows Epstein hasn't thought seriously at all about what it means to catch a fly ball. It is a very complicated affair, involving coordination of muscles and eyes. When you summarize it as "the player simply needs to keep moving in a way that keeps the ball in a constant visual relationship with respect to home plate and the surrounding scenery", you hide the amazing amount of computation and the algorithms that are going on behind the scenes to coordinate movement, keep the player from falling over, and so forth. I'd like to see Epstein design a walking robot, let alone a running robot, without any algorithms at all.

"there is no reason to believe that any two of us are changed the same way by the same experience."

-- Perhaps not. But there is reason to believe that many of us are changed in approximately the same way. For example, all of us learn our natural language from parents and friends, and we somehow learn approximately the same language.

"We are organisms, not computers. Get over it."

-- No, we are both organisms and computers. Get over it!

"The IP metaphor has had a half-century run, producing few, if any, insights along the way."

-- Say what? The computational model of the brain has had enormous success. Read Crick, for example, to see how the computational model has helped in understanding the human visual system. Here's an example from that book that I give in my algorithms course at Waterloo: why is it that humans can find a single red R in a field of green R's almost instantly, whether there are 10 or 1000 letters, or a single red R in a field of red L's almost as quickly, but have trouble finding the unique green R in a large sea of green L's and red R's and red L's? If you understand algorithms and the distinction between parallel and sequential algorithms, you can explain this. If you're Robert Epstein, I imagine you just sit there dumbfounded.
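A toy simulation illustrates the serial half of the story (this is purely illustrative, not a model of any actual neural mechanism): a self-terminating serial scan examines about half the display on average, so its time grows linearly with display size, while a parallel "pop-out" search across the whole visual field would be essentially flat:

```python
import random

def conjunction_steps(n):
    """Serial, self-terminating search for the unique green 'R' in a
    display of n items whose distractors are green L's, red R's and
    red L's.  Returns the number of items examined."""
    display = [random.choice([('green', 'L'), ('red', 'R'), ('red', 'L')])
               for _ in range(n - 1)] + [('green', 'R')]
    random.shuffle(display)
    for steps, item in enumerate(display, 1):
        if item == ('green', 'R'):
            return steps

# Average items examined grows roughly like n/2; a feature search done
# in parallel across the visual field would be flat in n instead.
for n in (10, 100, 1000):
    avg = sum(conjunction_steps(n) for _ in range(500)) / 500
    print(n, round(avg))
```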

Other examples of successes include artificial neural nets, which have huge applications in things like handwriting recognition, face recognition, classification, robotics, and many other areas. They draw their inspiration from the structure of the brain, and somehow manage to function enormously well; they are used in industry all the time. If that is not great validation of the model, I don't know what is.

I don't know why people like Epstein feel the need to deny things for which the evidence is so overwhelming. He behaves like a creationist in denying evolution. And like creationists, he apparently has no training in a very relevant field (here, computer science) but still wants to pontificate on it. When intelligent people behave so stupidly, it makes me sad.

P. S. I forgot to include one of the best pieces of evidence that the brain, as a computer, is doing things roughly analogous to digital computers, and certainly no more powerful than our ordinary RAM model or multitape Turing machine. Here it is: mental calculators who can do large arithmetic calculations are known, and their feats have been catalogued: they can do things like multiply large numbers or extract square roots in their heads without pencil and paper. But in every example known, their extraordinary computational feats are restricted to things for which we know there exist polynomial-time algorithms. None of these computational savants have ever, in the histories I've read, been able to factor arbitrary large numbers in their heads (say numbers of 100 digits that are the product of two primes). They can multiply 50-digit numbers in their heads, but they can't factor. And, not surprisingly, no polynomial-time algorithm for factoring is currently known, and perhaps there isn't one.
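The asymmetry takes only a few lines to demonstrate: multiplying two random 50-digit numbers is instantaneous, while the naive trial-division sketch below (given a step budget so it gives up gracefully) doesn't even scratch the surface of factoring a 100-digit product:

```python
import random

# Multiplying two random 50-digit numbers: a polynomial-time task,
# instantaneous on any modern machine.
a = random.randrange(10**49, 10**50)
b = random.randrange(10**49, 10**50)
product = a * b  # 99 or 100 digits

def smallest_factor(n, budget=10**6):
    """Trial division with a step budget.  Returns the smallest
    nontrivial factor if one turns up within the budget; None if n is
    prime or the budget runs out.  For a 100-digit product of two
    large primes, the budget required is astronomical."""
    d, steps = 2, 0
    while d * d <= n and steps < budget:
        if n % d == 0:
            return d
        d += 1
        steps += 1
    return None

print(smallest_factor(91))  # -> 7
```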

147 comments:

JimV said...

This is an excellent post.

At some point (if we last long enough and continue to make progress) computers will not only spell-check and grammar-check articles before publication, but also fact-check and logic-check them. Even with today's technology, there is no excuse for Mr. Epstein (and his editor) not doing simple Google searches to check his assertions. He would have easily found facts about neural networks and brain memory and so on. Here's another algorithm for him: "Thou shalt not bear false witness."

Petrushka said...

Brains are certainly a kind of computer, but I'd argue they have more in common with Thompson's Tone Discriminator than with your iPhone.

Timing is a critical factor in neural behavior, and neural timing is not controlled by a central clock, at least not in the parts of the brain associated with complex behavior.

Thompson's circuit exists on a digital chip substrate, but the evolved circuit co-opts analog components of the chip's behavior.

Jeffrey Shallit said...

I think it's important to draw a distinction between "being a computer" and "doing a computation" on the one hand, and "how is the computation implemented?" on the other. There are many different ways to do computation, and perhaps a modern digital computer is not the best model for exactly how the brain achieves what it does. But those are implementation details that have almost nothing to do with whether a brain is a computer or does computation.

Petrushka said...

It makes a difference if you are doing AI research, or building autonomous automobiles, or fantasizing about the future. It would make a difference if you were contemplating recording memories or transferring consciousness to another medium.

Science fiction, but such thoughts occupy an enormous amount of our entertainment time. It's difficult to get through a day without seeing a reference to artificial intelligence.

philosopher-animal said...

I've previously encountered otherwise intelligent and scientifically literate people who deny computational properties to brains; most notably my philosophy of science professor, Mario Bunge. (Who at least has the excuse that he was 7 years Turing's junior ;))

I think he would take a dynamical systems approach to cognition, etc. where there is no "representation" at all. (That this doesn't get one out of the woods is another story, of course.)

Jeffrey Shallit said...

Yes, Bunge is a weird case, in my opinion. About twenty years ago, Bunge wrote a startlingly uninformed column for Free Inquiry (Spring 1997 issue) commenting on computers, algorithms, and the Internet. I wrote a detailed rebuttal which Free Inquiry refused to publish. When I asked why, they promised to tell me why it was rejected and then never did. Here are some of the silly things Bunge said:

"there can be no algorithm for designing algorithms"

"Only a living brain, and well-appointed one at that, can invent radically new ideas, in particular analogies and high-level principles."

"the Internet will never displace refereed academic journals and books"

Unknown said...

I agree with almost everything you say here; I have only one quibble: can you really compute the Thue-Morse sequence with a finite automaton?

Jeffrey Shallit said...

Yes, under the following model: for each n, express n in base-2, run the resulting string through the automaton and record the result. See my book, "Automatic Sequences".

Michael Anes said...

Indeed. The essence of Marr, Vision, Chapter 1: The Philosophy and the Approach

SSG said...

DNA is not coded in base 4. It's true that 4 different molecules are used, but they can only pair with their counterpart (C with G and A with T). So the possible states of a single gene are "1" or "0". There is no state "2" or "3". It's essentially base 2.

Jeffrey Shallit said...

Incorrect, SSG. If what you said were true, then DNA sequence databases would be expressed in terms of an alphabet of size 2, but they're not. See, for example, here.

Cillian McHugh said...

An interesting perspective. Yes there are similarities but in answer to:

"I don't know why people like Epstein feel the need to deny things for which the evidence is so overwhelming"

The reason for ditching an obsolete metaphor is because keeping it is beginning to prove to be an obstacle to progress in understanding how the brain actually works.


Also:
"I'd like to see Epstein design a walking robot, let alone a running robot, without any algorithms at all."

here is a walking robot with no algorithms:

http://www.popsci.com/technology/article/2011-10/passive-walking-robot-can-stroll-downhill-forever-no-power-source

Jeffrey Shallit said...

The reason for ditching an obsolete metaphor is because keeping it is beginning to prove to be an obstacle to progress in understanding how the brain actually works.

1. It's not a metaphor, it's a fact. Brains are computers.
2. It's not obsolete, it is the principal way to understand the brain.
3. It is not an obstacle, this is just an unsupported claim of a few people. Epstein is not a neuroscientist.

here is a walking robot with no algorithms:

By "a walking robot" I mean one that can walk on the kinds of terrain people do, not just downhill on a treadmill.

matador said...

To me, you make it sound like the brain was designed by someone; can you clarify your thoughts about that? I mean, to say that the brain is a computer, do you imply that it is built up, artificial?

Aaron Hosford said...

Even the downhill-walking robot implements an algorithm. It is computed mechanically, rather than electronically, but being computed by a less obviously computational medium does not make it any less of an algorithm.

Unknown said...

I don't think Jeffrey gave you the best response. I have often had the same thought you are expressing, that since DNA pairs come in only two types (AT, and CG) we might be tempted to say that a given DNA locus only encodes one bit of information instead of two, but that actually isn't quite right. We have to take account of the "polarity" at any locus (is it AT or TA? Is it CG or GC?). The polarity *matters* from an informational perspective because the triplet codons that encode amino acids are built from the full four-character alphabet. So, we can say a DNA base encodes two bits of information in one of two ways. The classical way is to get our two bits from the fact that there are four characters. As you point out, this way of looking at it can be problematic. The other way is to view the "type pairs" (AT and CG) as encoding only one bit, but then to add a second bit to account for their polarity, as I explained above. Either way, we still get two bits of information at each DNA locus.

Cheers!

Cillian McHugh said...

I suspect that we have different understandings of what a computer is and does.
Using the traditional information processor view 2 ways in which applying knowledge of how computers work to psychology have hampered psychology are
1: input -> process -> output
2: vision as "photos"

1. On a computer there is a clear separation between input, process, and output. This is not the case with perceiving and acting (noted as far back as 1896 by Dewey). Action affects perception in a way that makes the separation implied by the computer model untenable.

2. The eye is not a camera/video camera that receives images for the brain to process. Gibson's work on optical flow is one theory of vision that really calls the brain-as-computer view into question.

The above is based on the traditional view of computers. Perhaps this is dated, and computers are more holistic than they used to be?

Jeffrey Shallit said...

On a computer there is a clear separation between input, process, and output.

Not true. That was one of the big insights of Turing: data can be considered as program, and program as data. This has been known since 1936, before there were electronic computers. Again, don't confuse computation in the abstract with the specific details of how it is implemented in most digital computers today.

The eye is not a camera/video camera that receives images for the brain to process

Irrelevant. Whether something is a computer or not has nothing to do with how it gets its inputs. They could come through a video camera or punched cards or a microphone. I don't know much about Gibson's work, but from a brief scan of what I could find online I see no implications at all for the "brain as computer" point of view.

Cillian McHugh said...

"Again, don't confuse computation in the abstract with the specific details of how it is implemented in most digital computers today."

I think this is probably the key point. Psychologists (in the main, obviously there are exceptions) don't have the knowledge of computers necessary to avoid this confusion. The "input -> process -> output" model is still what gets used and this is a problem (I wasn't aware that this separation isn't true for computers)

The point of Gibson's work is that the visual system doesn't record for processing by the brain; we actively engage with the environment, and vision is active (where a camera isn't). I can see how in another conception of what a computer is this may be irrelevant, but to the lay psychologist (with little or no computing knowledge) these kinds of assumptions are associated with the brain-as-a-computer view, and this is a problem for making progress in psychology.

I think however that we probably agree on the point that the brain isn't a computer in the traditional "desktop", input-output understanding of a computer.

Jeffrey Shallit said...

I see no fundamental difference between the supposedly "active" vision of a human and a robot equipped with a camera that can base its decisions about future actions on what it is currently seeing. What do you think the difference is?

Keith Wiley said...

That previous comment was by me, Keith Wiley. Somehow the blog system credited me as "Unknown".

Jim Pyke said...

http://www.springer.com/gb/book/9781402067082

"Gives perhaps the most comprehensive analysis of The Turing Test - the ultimate benchmark of true artificial intelligence ever published"

...co-edited by Robert Epstein

Just sayin'

Jeffrey Shallit said...

Just saying what, Jim?

Cillian McHugh said...

I think that the camera passively records and delivers information to the robot in a separable way that is not reflective of how vision works. The camera records everything and sends it on to the robot for processing, but that's not how vision works. Seeing occurs as part of goal-directed behaviour, and what we see depends on the current goal. I don't see how a robot could be fooled by optical illusions. Or would a robot fail to see the gorilla in that "count the passes" basketball video?

https://www.youtube.com/watch?v=vJG698U2Mvo

Jeffrey Shallit said...

Robots fooled by optical illusions:

https://www.newscientist.com/article/dn12701-artificial-brain-falls-for-optical-illusions/

Come up with a formal definition of "goal-directed behavior", one that we could apply to humans, animals, and computers alike, and then we'll talk. Until then, "goal-directed behavior" is one of those vague phrases that psychologists love, but that cannot be tested in any rigorous or numerical way.

Paul said...

Great post, thanks for articulating this so well.

One additional doozy from the Aeon piece: following the dollar bill example, Epstein claimed that "From this simple exercise, we can begin to build the framework of a metaphor-free theory of intelligent human behaviour." To me, this is terribly damning to Epstein's credibility as any kind of serious theorist. How can you possibly have a theory without metaphor?

Cillian McHugh said...

Great article. Thanks for sharing.

As for a formal definition of goal directed behaviour, I will have a look and see what I can find. It's not necessarily supposed to be the thing that is tested numerically though, the reason for its popularity is that every process that we try to study occurs in the context of the current goal (whatever that may be. Obvious lab setting examples: selecting the correct answer; finishing the task as quickly as possible). This makes it all very messy but human behaviour is varied and messy but we can't understand behaviour if we ignore the goal directed nature of it.

Anyway, I'll look for a formal definition in the morning.

Ultimativity said...

I appreciate your response to Epstein's article. I enjoyed your perspective and your arguments. However, I disagree with the very topic.

You mentioned that you have a belief about "consciousness". I suppose, like computation, there is some latitude in its definition. The problem I see is that consciousness is not a "thing." Likewise, our brain does not "do" anything, at least not independently of the rest of our body. "Consciousness", like "computation", is a means that organisms evolved of interacting with their environment.

Humans have learned to mimic biological functions and to build machines that mimic biological functions. But building a pump, then calling a heart a pump, or asserting that it conducts pumping is the sort of metaphor that leads to lengthy discussions like this. It's like calling flowing water a river, then stating the river erodes land or creates whirlpools.

Anyway, thanks for posting and thanks for allowing comment.

Jeffrey Shallit said...

Consciousness is just the name we give to the fact that our brains contain a model of the world sophisticated enough to contain a model of ourselves in it. I don't know why people need to mystify it.

But building a pump, then calling a heart a pump, or asserting that it conducts pumping is the sort of metaphor that leads to lengthy discussions like this.

So airplanes don't fly, then, because only birds fly. That's the viewpoint I parodied in my opening. I don't think it's a useful distinction. People already talk about computers as if they have thoughts and desires: "my iPhone wants me to update its operating system", "my computer thinks it has two monitors attached to it", and so forth. I don't have any objection to this kind of language, nor do I think it obscures anything.

David Thomas said...

Isn't considering a brain a computer somewhat reductive? Brains compute, but is there anything else they do, or would you say computation can cover it all? I'm obviously not a computer scientist, but I'm interested in this topic specifically.

MycoBean said...

I think it would be seriously cool to discuss this statement: 'Brains can do everything that computers can do but computers cannot do everything that brains can do. Why exactly is that?' Excellent discussion here.

MycoBean said...

That previous question posed through mycobean email was posed by Jessica Rose. :)

David Thomas said...

Maybe a better question is whether or not a computer can be a brain?

chrisbudmelman said...

fun fun fun. i love when someone makes me change my mind about something. understanding that computation is a process that occurs in mediums such as digital computers and brains rather then getting hung up on the confusing terminology (we call digital computers "computers") has changed how i think of this ai debate. so thanks.

Jeffrey Shallit said...

Brains can do everything that computers can do but computers cannot do everything that brains can do.

What are some examples of things, in your opinion, that brains can do that computers cannot? Do you think these things are in principle impossible for computers, or merely not yet achieved? What sort of methods could we use to show that some task is even in principle impossible for a computer?

The history of claims that this or that task is "impossible" for a computer is a long and sad record of being proved wrong. Machine translation, playing ping-pong, grandmaster-quality chess, Go, and so forth: all were once declared beyond computers, and all those predictions later proved incorrect.

Jeffrey Shallit said...

Maybe a better question is whether or not a computer can be a brain?

What's your definition of a brain? First we need to understand what you think the salient characteristics of a brain are, before we can answer that question.

Jeffrey Shallit said...

Isn't considering a brain a computer somewhat reductive? Brains compute but is there anything else they do or would you say computation will be able to cover it?

Reductionism is not a dirty word for me. Indeed, I think reductionism is the soul of all science. We have to take things apart and study them in an isolated environment before we can begin to understand how they fit together.

The human brain, for example, does more than just compute, in that it does things like coordinate motion and regulate physiological processes. In other words, it has physical effects in the physical world. But lots of robots also interact with the physical world, with that process controlled by computers. How is this fundamentally different from what a brain does?

Jeffrey Shallit said...

Cillian: did this robot exhibit goal-directed behavior, or not?

http://www.frc.ri.cmu.edu/~hpm/talks/revo.slides/1960.html

Jeffrey Shallit said...

chrisbudmelman: very glad to hear it. Being willing to change your mind is a great characteristic for a scientist or a citizen.

livinginthemovie said...

Here's what I posted to my Colin E. Davis Facebook account about this article: "I was very interested in hearing about an alternative model, but none was proposed. All we have is models, really. He told the history of our modeling of the mind as analogous to different developing technologies, with the computer analogy being the latest. He said none of them were true. But what is truth? Truth is the model that gets you results. That's all truth is, and all reality is, is modeling. It's all myth. The computer model is one that gets us from point a to point b, and will be discarded or built upon in time as our perception of ourselves within the cosmos grows. The computer analogy works, so let's use it, but let's realize it's a model and a myth and that it's true "enough" for now. The alternative is to propose a new model that works better, at least for those who can apply it and get new and beneficial results."

David Thomas said...

I think the differences between a computer and a brain in terms of how they compute are drastic because with brains there are so many connections and there are so many processes going on in parallel. There's extreme complexity and difference in strategies used in functioning between a brain and an artificial computer. There is also an absence of agency in a machine as of now.

I do agree that a brain computes though.

Jeffrey Shallit said...

There are already massively parallel machines. In any event, the theory of computation implies that anything that can be computed with parallel processing can also be computed with a sequential machine.

As for "agency", if you can give a precise definition of what it means, maybe we can decide whether machines have agency. Personally, I think "agency" is one of those words philosophers love but are unable to define coherently.

David Thomas said...

Agency would entail being an object that experiences its objecthood, in the sense that it is aware, in varying degrees, of itself, others, and its environment. And for some forms of agency there may be abilities of abstraction, such that the agent utilizes some semblance of strategic foresight in order to formulate and accomplish goals. I suppose having agency may entail assumed autonomy.

Jeffrey Shallit said...

OK, that's fine with me, although I'd call that "consciousness". A computer with a video camera that has a model of its environment and itself would then have agency. That's fine with me. What I don't like is claims about agency that could not apply to such a computer.

Cillian McHugh said...

No luck on finding an acceptable formal definition.

Brilliant! I'd read about a similar robot that had solar cells and would seek the light when batteries were low. Don't think it was as sophisticated as the Beast you shared. Also the openworm project is very exciting.

I'm still not convinced that the brain is a computer though. But maybe that's because of what I think of when I hear "computer"; and what I think of when I hear brain-computer comparisons: playing chess; decision making/problem solving; objective accuracy vs relative and sufficient for current purposes. I think in order to convince people the brain is a computer you need to first educate them about what you mean by "computer".
Perhaps "the brain is a computer (but not in the everyday sense of the word computer)" would be easier to get on board with. I'm not saying I'd necessarily agree then either (because a successful computer simulation of a particular brain-related phenomenon doesn't necessarily imply that the brain is a computer - software can simulate weather patterns but nobody is claiming that the weather is a computer), but there would certainly be more common ground for constructive discussion (the Beast, and the optical illusion example).

Pooya said...

The last paragraph reminds me of the story told by Oliver Sacks about the twins John and Michael, who could apparently verify the primality of numbers up to 20 digits. As we now know, primality can be checked in polynomial time.

http://www.pepijnvanerp.nl/articles/oliver-sackss-twins-and-prime-numbers/

philosopher-animal said...

About Bunge's "no algorithm for discovering algorithms". The 1997 article mentioned was *just* before I had my first class with him, where I responded to this sort of thing. Bad timing. :) Anyway, what he *meant* was "algorithm" as used *in mathematics*. As we figured out the next year, too, Roger Penrose has/had the *same* problem.

Computing people, in particular those in AI, seem to use "algorithm" in a slightly more permissive way than those in mathematics. (Dennett points this out too, as I discovered/reminded myself of later.) In particular, a result from an algorithm in the computing sense is *not* guaranteed to be correct; "math meaning" algorithms are. For example, the Miller-Rabin test for primality is only probabilistically going to get the right answer in the general case. Note of course this depends on one regarding the output from M-R as just the answer to "is prime?", rather than, say, the pair, which is itself 100% correct.

This divergence in usage might stem from Turing's remark that a machine cannot be expected to be infallible and intelligent at once, but I don't know.

Jeffrey Shallit said...

I am aware of the story told by Sacks, but I find it dubious. I would want to see a more convincing test done with a mathematician present. In any event, a strong pseudoprime test or two would suffice for nearly all small numbers; we don't need the more recent result about PRIMES in P to test primality efficiently in practice. (Nobody actually uses the deterministic polynomial-time algorithm of Agrawal et al. in practice).
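For the curious, here is a minimal sketch of the strong pseudoprime (Miller-Rabin) test mentioned above; it is my own toy implementation, not reference code. With the fixed bases 2, 3, 5, and 7 it answers correctly for every number below 3,215,031,751; beyond that, rare composites can slip through, which is exactly the probabilistic caveat discussed in these comments:

```python
def is_probable_prime(n, bases=(2, 3, 5, 7)):
    """Strong pseudoprime (Miller-Rabin) test to the given bases."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7):
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in bases:
        x = pow(a, d, n)       # modular exponentiation
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False        # a is a witness: n is composite
    return True

assert is_probable_prime(2147483647)   # the Mersenne prime 2^31 - 1
assert not is_probable_prime(561)      # a Carmichael number
```

For the 20-digit numbers in the Sacks story, a couple of random bases already give overwhelming confidence; nothing as heavy as the AKS algorithm is needed.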

Pooya said...

I understand. I was just trying to think of an extreme task with numbers, other than simple multiplication and square rooting, that a skeptic would come up with, and to point out that even that is provably in P.

I enjoyed reading your response to Epstein's article, it well deserves the tone in this post.

Eric James said...

Epstein is an acolyte of B. F. Skinner and is merely rehashing behaviorist twaddle. Chomsky, the first to deal a deathblow to the behaviorist/empiricist model, has spent his life showing that there are innate rules for language, contra this essay.

Craig W said...

"humans can process information (we know this because humans can do basic tasks like addition and multiplication of integers)."
It seems like it could be useful to define "processing" so that it includes humans and computers. But at this level of explanation, where we simply look at results and make inferences, it just looks like a conveniently shaped black box. We are a long way from "knowing" anything about humans other than how broadly we've defined information processing.

To me this is like saying that horses and automobiles are both "human travel machines". In that sense, automobiles solve the same problems as horses. However, I'm not going to hire an automobile mechanic when my horse has health problems, because the "human travel machine" classification is only useful under a very limited set of circumstances.

Jeffrey Shallit said...

Here's the point: if you want to know what a horse and a car are capable of pulling, you can rely on well-understood physical theories of energy, power, and so forth.

Similarly, if you want to know what a brain is capable of doing, you can use insights from the theory of computation to help figure it out. Denying that the brain is a computer only handicaps you, it does not provide insight.

Nobody is saying you should call your local Apple Genius bar if you get a brain tumor.

Stefaan Himpe said...

I couldn't finish reading Epstein's exposition because it piles nonsense upon misinformation.

On the other hand, others have summarized it better than I ever could: https://xkcd.com/386/

Craig W said...

One doesn't have to categorize the brain as a computer in order to see if there is any insight from computational theory. The early Greeks were able to more or less predict the motions of the planets with a geocentric model of the solar system. Yet, there was another, better model waiting to be discovered.

It's justifiable to demand a better explanation (than IP) before moving on, yet I don't see why there shouldn't be a vigorous curiosity about something better. I mean, isn't it easy enough to see why classifying horses as machines could lead to misunderstandings about horses? I can't think of any scientific field that would take this idea seriously.

Can understandings about AI contribute to understandings about human intelligence? This seems to be the case, but that doesn't mean it is the best possible model. The things that computers do quite easily are things that the human mind often finds hard and vice versa. Certainly inasmuch as they purposely *imitate* what we know about biological brains (with neural networks and the like) they can resemble biological processes. But that doesn't mean they are fundamentally similar. Much like building a robotic horse doesn't make the horse any more of a machine. There may be some crossover knowledge, like the movement and support of limbs, but it still ignores what it fundamentally means to be a horse and would be *dependent on the actual study of horses* to be applicable.

It seems to me that someone with an interest in developing robotics and AI is much more invested in the IP model simply because it is the best way to move forward in their field. But for a psychologist, the underlying causes are much more important than the functional equivalences.

Sean Larabee said...

Thanks for this post. I am not educated enough to understand all the mathematics involved but the general conclusions are not lost on me.

nickpsecurity said...

Nice counters. I have a more controversial view that I posted in a Hacker News discussion, with links to back it up:

https://news.ycombinator.com/item?id=11731697

The view is that the brain is an *analog* computer. There used to be a whole field of analog computers that implemented specific mathematical functions, running in real time without regard to clocks or a dedicated memory. Several models for general-purpose analog computation were made, including for a neural net; Siegelmann's work is especially important there. The brain's own physical properties and signaling are similar to an analog system. My links show all these. So, it's probably either an analog or a mixed-signal system.

That most people have forgotten analog computing is why they're having trouble understanding it. Meanwhile, some wise engineers are having great successes with brain chips by doing them analog. The wafer-scale project is particularly the kind of thing you're not going to see with digital design. Analog, especially as a distributed system, handles it with ease and high efficiency.

Jack Crow said...

> Isn't considering a brain a computer somewhat reductive?

https://en.wikipedia.org/wiki/Human_computer

Computation used to be a professionally performed, manual process. Computation is a function of the mind (with a distinction between autonomic and somatic).

Nathaniel Demian said...

The post was excellent. Thank you.

And nickpsecurity, thanks for your comment. To be honest, I know nothing about analog computers (really) other than vague dynamical-systems metaphors, so hopefully this will be eye-opening.

kylheku.com said...

Very well argued article, full of excellent points!

All the name calling attacks on Epstein detract from its value, however.

Can't we just let the Epsteins of the world be "hopeless wetware romantics", rather than calling them stupid?

Don Berg said...

Have you read the critique of the computational brain in George Lakoff and Mark Johnson's book Philosophy in the Flesh? Since Lakoff is a cognitive linguist it seems fair to say that he has the proper scientific expertise to make informed judgements about the relationship between what we say about the brain, what we mean by what we say about the brain, and the relationship between what we meant and how brains actually work.

Another Lakoff book you would probably find interesting is Where Mathematics Comes From, with Rafael Núñez.

Jeffrey Shallit said...

I am distantly familiar with Lakoff's "embodied cognition" ideas. To me, they do not cast doubt at all on the idea of the brain as a computer; they only say that one cannot understand the brain in isolation, and that cognition is more than just the brain. To me this is obvious, since what we do is process information, and that information is gained from various sense organs like eyes, ears, and so forth. Neither does the claim that most information processing is done subconsciously affect my thesis.

I've read his book about mathematics and find much of it convincing.

Hugo GRA said...
This comment has been removed by the author.
Hugo GRA said...

One of the best articles on the theme. Congrats!!!
Like memory: in the brain, everything is computable and has a representation, even emotional states, which are information in specific neuronal circuits, etc.
I'm even developing an AGI that has emotional states and common sense as its cognitive core.
It's always good to find a good article like this.

jim green said...

Epstein isn't known for his solid research skills or any significant advancement in psychological theory or practice - his claim to fame was that he was a long-time editor of Psychology Today, where he was (in)famous for promoting wacky views in a bid to stir up circulation. He loved to be contrarian and scoff at "political correctness" - in many ways he was very conservative, especially when it came to family. His two big controversies were his sympathy, bordering on advocacy, for NARTH and the ex-gay movement - specifically, he allowed NARTH to advertise in Psychology Today, and he editorialized several times that ex-gay therapy was appropriate for men (it's almost always men) who were "unhappy with the gay lifestyle of promiscuity and mental illness". He scoffed at the position statements of the APA, AMA, etc. opposing ex-gay therapy as "political correctness", and he even went so far as to concoct a bizarre sexuality scale so biased towards fluidity that the "experiment" he claimed validated his gay-to-straight hobby horse was in reality little more than an anonymous internet survey - a series of clicks with no serious controls or any concern for statistical methods and measures. His other long-term media point of reference is his assertion that arranged marriages in other cultures are far superior to our decadent and selfish recent custom of marrying for love and companionship in a marriage of legal equality. Epstein, like many conservatives, thought marriage entailed moral, cultural and religious constraints to force people into marriage and make it difficult to divorce. He thought love was so fickle and unpredictable that it shouldn't be the basis for marriage - even complete strangers will eventually develop a friendship out of custom and close proximity; passion or sexual chemistry would be a bonus but is secondary.
Finally, Epstein is not very fond of kids these days and their fancy Google and Facebook, which he sees as a conspiracy to control our lives and stamp out dissent. When Google placed a malware warning on a part of his personal website, he threatened to sue and insisted that Google was using it as a pretext to attack him. A few months later he finally had a security professional verify Google's claims, and his site was restored to the Google index once the virus was removed. Epstein has always been more of a hack and attention junkie than a serious scientist...

Da Blog said...

My concern about the issue at hand is the mistake that several disciplines make when they assume that if the brain does process information, then that is what constitutes what it means to be human. Thus we can get trapped in either/or thinking rather than treating the distinction as somewhat fuzzy. With the IP approach we have those who are in the business of "reprogramming the unconscious", or those who make the leap that what we think eventually becomes an objective reality. And as is usually the case, both "sides" cling to their conviction that they are right. I tend to side with Wittgenstein's view that the model of the world is the world, and that the brain can never, and need not, have a complete description of what it is interacting with. This ability to interact with the world is what AI researchers have been working on for a while, and there is no certainty that they will ever ultimately succeed.

Tomaz C said...

THANK YOU. I thought almost exactly the same things when I was reading the article; you saved me hours of writing.

DeWayne Stafford said...

When the insanity of loading the consciousness into a silicon-brain, titanium-body robot was finally accomplished, the discovery was made that the only need then was a can of WD-40!!!! The human body and brain is a multidimensional, quantum, bioelectrochemical transceiver stargate portal for your consciousness. The brain actually acts as a sort of filter-resistor-capacitor and gives the illusion that we as consciousness are the brain or body. People function mostly as meat puppets driven by the unfiltered programs of others, who are running on the programs of others, on and on. Rational thought is a divine program that can be run as a firewall for the mind. When we edit the toxic programs of others, we can, by using the master program of rational thought, actually start consciously running our own sanity program as opposed to the toxic insanity programs of the dominant antisocial metaprogram of secular antisocial dar-win-lose-ism. When we delete this deathist toxic program we become sane and conscious of our true identity as consciousness. Functioning as a sleepwalking meat puppet is not an optimal condition of functioning. Pathological conformity is politically correct slavery. TIME TO WAKE UP

Freddie said...

I think your scorched-earth method here is not helping you to make the case you want to make. And in particular, I wish you would consider the fact that the default stance in the popular understanding of cognitive science, neuroscience, and artificial intelligence is for people to vastly overestimate our current level of understanding and our current abilities. And it's really important that we push back against that. Maybe read with more empathy next time.

Bastien said...

What are some examples of things, in your opinion, that brains can do that computers cannot? Do you think these things are in principle impossible for computers, or merely not yet achieved? What sort of methods could we use to show that some task is even in principle impossible for a computer?

The history of claims that this or that task is "impossible" for a computer to do is a long and sad record of being proved wrong. Machine translation, playing ping-pong, grandmaster-quality chess, Go, and so forth, are predictions of the past that later proved to be incorrect.


What about using THE notion that exactly captures the fact that a task is impossible for computers, namely undecidability? It would suffice to prove that the human brain can solve some undecidable problem, such as the halting problem, and we would have a proof that our brain is more powerful than computers. Of course, a formal proof that our brain is powerful enough to solve the halting problem would require a formal mathematical model of our brain, which we do not have and may never have. But my gut feeling is that our brain can do more than computers.

Jeffrey Shallit said...

What would it mean for the brain to be able to solve an undecidable problem, such as the halting problem? You do realize, I hope, that undecidable problems have infinitely many instances (by definition) and a brain could only solve, in finite time, finitely many instances. So it would be very hard to show that the brain could solve undecidable problems with any finite number of examples.
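The finiteness point can be made concrete: any fixed finite set of instances of the halting problem is decided by some lookup table, even if we don't know which table is the right one. A sketch (the example programs and their answers are made up for illustration):

```python
# For any FINITE set of instances, a decider exists: a lookup table.
# Undecidability only bites with infinitely many instances, where no
# single finite program can cover them all.
ANSWERS = {
    "while True: pass": False,        # loops forever
    "print('hi')": True,              # halts
    "for i in range(10): i * i": True,
}

def finite_halts(program_text):
    """A 'halting decider' valid only on the finite table above."""
    return ANSWERS[program_text]

assert finite_halts("print('hi')") is True
assert finite_halts("while True: pass") is False
```

So a brain answering finitely many halting questions correctly would never distinguish it from a (large) lookup table, which is why no finite experiment can establish super-Turing power.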

As for "gut feelings", I think the history of science shows that they aren't worth very much. My "gut feeling" is that brains can't do more than computers (or, more precisely, computers with sensory inputs). So now we have two gut feelings that are opposed to each other. How shall we resolve this, if not by experiment and science? And then the gut feelings lose all value entirely.

Bastien said...

Of course undecidable problems have infinitely many instances. And of course a brain can only solve finitely many problems in a finite time, but so can a computer. This has nothing to do with computational complexity: being able to solve a decision problem means being able to solve correctly ANY instance in a finite time, not ALL instances in a finite time. So the same notion we use for computers could be used for the brain, and we could compare their abilities this way, assuming we had an accurate mathematical model of the brain (or at least of its "computational" aspect, should there be more).

Concerning gut feelings I agree that scientifically speaking they're not worth much. The only value I see in gut feelings in science is that they can provide the motivation and the direction to make advances. But once again, I don't think that this matter can be settled scientifically for now, as we probably won't have a good enough understanding of the brain and an accurate computational model of the brain before several decades at the very least, if we ever have. So all we can do for now is to make conjectures. I don't claim that I am right saying that the brain can do more than computers, but just as you make clear in this article and your comments what your gut feeling is, I don't see why I couldn't say what mine is.

Jeffrey Shallit said...

" So the same notion we use for computers could be used for the brain, and we could compare their abilities this way"

How, specifically? Give me a detailed example.

Niall MacCeide said...

If the brain is a computer then your thesis is also a computation and thus without insight. Your argument defeats itself.

Jeffrey Shallit said...

Dear Niall:

Your "argument" is a non sequitur. Why must a computation be without "insight"? And what is your definition of "insight"? Let's have a good rigorous definition, so that I can tell what kinds of brain processes result in "insight" and which do not.

Ian Wardell said...

"Knowing virtually nothing about Epstein, I am willing to wager that (a) Epstein has never taken a course in the theory of computation (b) could not pass the simplest undergraduate exam in that subject (c) does not know what the Church-Turing thesis is and (d) could not explain why the thesis is relevant to the question of whether the brain is a computer or not".

Well, the mind-body problem is a philosophical problem, not a scientific or computational one. Basically, one would need to subscribe to a variety of functionalism in the mind-body problem, but all types of functionalism are simply untenable (as are all types of materialism). I know absolutely nothing about the Church-Turing thesis etc., but one does not need to in order to understand that we could have no reason to ever suppose a computer is conscious. If you think otherwise, Jeffrey Shallit, then wheel out your conscious robot. It appears to respond and behave appropriately? We can always pull the sucker apart to see precisely why it says what it does. And it will have nothing to do with any alleged conscious states, but will have everything to do with the execution of algorithms.

Ian Wardell said...

"We certainly do store words in some form. When we are born, we are unable to pronounce or remember the word "Epstein", but eventually, after being exposed to enough of his silly essays, suddenly we gain that capability. From where did this ability come? Something must have changed in the structure of the brain (not the arm or the foot or the stomach) that allows us to retrieve "Epstein" and pronounce it whenever something sufficiently stupid is experienced. The thing that is changed can reasonably be said to "store" the word".

Nope, that is unintelligible. Physical things/processes can't store memories. Memories as in information can be stored, yes, but that's not memory in the everyday sense, where we recollect prior events in our lives; e.g. 1001010101 is not literally the same thing as a memory of something I did yesterday, just as the word "green" is not literally the very same thing as my understanding of what it is like to experience greenness. It couldn't be anyway, since different words in different languages are used to convey greenness. And the knot I tie in my handkerchief, to remind me of something, is not literally the memory itself, etc.

Suppose I try to remember something. I don’t have to hold it in my mind, I can write it down in a notebook. So my memory is stored there.

Later on I look in my notebook in order to retrieve my memory. But wait! How do I understand what I’ve written down? I have to be able to remember the meanings of the words I’ve written down.

So I would need to make a 2nd lot of notes in order to remember the meaning of the words composing the first lot of notes.

But this process just goes on without end. That is to say we get an infinite regress. So saying our memories are stored is explanatorily vacuous.

I propose therefore that memories are a direct perception of one's past history. (Direct need not entail that such memories are completely accurate, just as direct visual perception need not entail that what you see can't be blurred due to poor eyesight.)

Jeffrey Shallit said...

all types of functionalism are simply untenable (as are all types of materialism)

Ha, ha! Good one. Let me know when you're back on planet earth.

I know absolutely nothing about the Church-Turing thesis

Of course not! Why should any philosopher bother to learn the most fundamental things about computation in order to understand the brain? The whole idea is silly, silly, I say.

we could have no reason to ever suppose a computer is conscious

Define your terms rigorously, then we'll talk. It always amuses me when philosophers try to argue about what is "conscious" without even having the most basic understanding of the difficulty of giving a scientific definition of the word.

then wheel out your conscious robot

People. Apes. Dolphins. All are conscious robots.

And it will have nothing to do with any alleged conscious states, but will have everything to do with the execution of algorithms.

Exactly the point Leibniz made. It is correct, but doesn't have the implication you think. Exactly the same thing applies to people.

Nope, that is unintelligible

Yes, of course it's unintelligible to the computationally illiterate. Take a basic course in the theory of computation and the theory of information, then get back to me.

Physical things/processes can't store memories.

Of course they can. Not only that, physical things/processes are the only candidate anyone has to store memories.

So I would need to make a 2nd lot of notes in order to remember the meaning of the words composing the first lot of notes.

But this process just goes on without end. That is to say we get an infinite regress. So saying our memories are stored is explanatorily vacuous.


That is the dumbest objection I've heard. It is not original with you; I've heard it before. I wonder who first proposed it? It would be useful to know for my book project about the philosophy of mind. There is no reason to think "I would need to make a 2nd lot of notes in order to remember the meaning of the words", any more than a Turing machine would need such a thing to process the 0's and 1's it originally wrote. Ferchrissake, take a course in theory of computation before you babble so mindlessly.
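The reason no regress arises is that a machine's fixed rule table interprets its stored symbols directly; there is no second store of "notes about the notes". A toy illustration of my own (not from the post): a one-state machine that flips every bit on its tape, driven entirely by its rule table:

```python
# A fixed rule table interprets the tape symbols directly; the machine
# needs no further "notes" explaining what 0 and 1 mean.
RULES = {
    ("scan", "0"): ("1", "scan", +1),  # read 0: write 1, move right
    ("scan", "1"): ("0", "scan", +1),  # read 1: write 0, move right
}

def run(tape):
    tape = list(tape)
    state, head = "scan", 0
    while head < len(tape):
        write, state, move = RULES[(state, tape[head])]
        tape[head] = write
        head += move
    return "".join(tape)

assert run("0110") == "1001"
```

The "meaning" of the symbols is exhausted by how the rules act on them; that is what blocks the regress for machines, and by analogy for brains.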

I propose therefore that memories are a direct perception of ones past history

What does it mean? How precisely could we directly perceive our past history if it is not stored somewhere in the brain?

Mindalia en Directo 1 said...

First of all, I apologize for my poor English. Well, if a mountain of sand is also a computer, then obviously the brain is a computer. It's nothing new that brains can compute and therefore we are computers; this is a logical inference. The issue is whether there is a co-implication between what we call a computer and a brain. As we don't totally understand how a brain works (there are several theories about memory storage, for example, or about basic conscious experience as an emergent property, which I don't know if you, as a core reductionist, would accept), I don't think we can say much about the brain/computer parallelism: there is an apodictic problem in the explanans or the explanandum. Let's wait first to understand the brain and consciousness.

Your opinion on consciousness is right but pretty incomplete; a brain which knows itself doesn't contain some kinds of consciousness: grade, intentionality, qualia, subjective experience, value, meaning, semantics, creativity, believing, moral consciousness, perceptive consciousness, free will... Consciousness is probably something ontologically primary and elemental in reality, and it remains unexplained to the present, as neuroscience itself mainly recognizes. I would suggest you read Penrose's work again in order to improve your skills on the issue. I would also suggest trying to understand Ian Wardell's entry; it may help you broaden your view of the brain as a computer. I don't find very helpful ideas like "every physical thing is a computer" or "because a brain computes, it is a computer"... a brain can emit heat but it is not quite correct to say it's a stove. We need to view it more widely: brains are related to activities like feeling fear, hesitating, feeling cold, anger, depression, believing in gods... maybe computers will too... but if not, it is not very informative to say a brain is a computer. Thank you for your time.

Ian Wardell said...

Greetings Jeffrey. No, we can't scientifically define consciousness. Only the material can be defined by science. Consciousness is necessarily non-material (assuming the modern definition of the material, coined at the birth of modern science in the 17th century). See an essay by me:

http://ian-wardell.blogspot.co.uk/2016/04/neither-modern-materialism-nor-science.html


Jeff said:

"There is no reason to think "I would need to make a 2nd lot of notes in order to remember the meaning of the words", any more than a Turing machine would need such a thing to process the 0's and 1's it originally wrote. Ferchrissake, take a course in theory of computation before you babble so mindlessly".

You are confused because the same word "memory" is being used in 2 different senses. Computer memory is not the same thing as normal memory. Computers cannot store memories in the standard meaning of the term.

Jeff said:
"How precisely could we directly perceive our past history if it is not stored somewhere in the brain?"

In the same way that we can visually see even if the objects seen are not literally stored in the brain. Think of memories as kinda like a type of retropsychokinesis.

Jeff said:
"Exactly the point Leibniz made. It is correct, but doesn't have the implication you think. Exactly the same thing applies to people".


Well... yes, the same argument applies to people. Which demonstrates that people cannot possibly be purely material beings (and this is a different argument from the ones I give in my essay linked above).


Let's, for the sake of argument, grant that the brain is a computer. This does not alter the fact that computation does not equate to consciousness. Computation is only defined by rules, rules that we impose so that the computational process gives correct answers. So computation only has meaning by virtue of consciousness (just as information is only information by virtue of consciousness). So it would simply be nonsensical to suppose computation, in and of itself, is consciousness.

To further elucidate: when I think something through, I come to the correct conclusions by virtue of an unfolding understanding on my part as my chain of thought develops. But materialism holds that physical processes are wholly determined by physical laws (which we are supposing can also be expressed as a computational process) and not by virtue of any meaning or semantic content. A calculator gives a correct answer, e.g. 2+3 = 5, only because of the meaning that we impose upon the symbols 2, 3 and 5. Computational processes do not intrinsically have meaning. The meaning is merely relative to the observer, just like the word "happiness" only has meaning by virtue of the fact that we assign this sequence of symbols a certain meaning. If all conscious beings in the Universe were to suddenly cease to exist, then all books would contain no meaning whatsoever. Likewise, all computational processes and the outputs of all computers would be wholly devoid of any meaning.

Since thought processes -- at least if they are more likely to lead to truths rather than falsehoods -- have to involve an unfolding understanding, then if our thought processes were a purely computational process (or any physical process), we would be no more likely to reach correct conclusions than false conclusions about anything. Which is absurd. So computationalism, or indeed any type of (reductive) materialism, cannot possibly be correct.

So if the brain is a computer, then this just underscores the fact that there must be something else, namely a non-physical self (such a self need not necessarily survive the deaths of our bodies though).

Ian Wardell said...

Oops, that should have been retrocognition! Not retropsychokinesis.

Purple Neon Lights said...

Whenever I encounter a scientist with the arrogant certainty that Jeffrey Shallit displays, red flags go up all over the place. A good scientist is very, VERY conscious of the fact that there are no certainties, that there are only working assumptions and probabilities. The history of science is a litany of modifications and expansions of the existing paradigm. (Example: incorporating relativity into Newtonian mechanics.) Jeffrey Shallit behaves as if this is not the case. He thinks he's got it all figured out.

Shallit is making me think of what the great fighter pilot Chuck Yeager said, "The one that gets you is the one you'll never see coming." Shallit needs to bear this in mind. Show me an arrogant scientist, and I'll show you someone who is at very high risk of winding up with a lot of egg on their face, followed up with a dessert of humble pie.

Shallit: Account for the extremely well documented cases of mediums talking to disincarnates, and providing highly verifiable veridical evidence. If you don't know about the evidence in these cases, then you need to do some research. If you think you know the subject, and you wave it away, I guarantee you haven't researched it thoroughly enough. Get off your high horse, dude. Your zipper is down.

No, I'm not going to provide you any links -- that's your job. If you ask me really, really nicely, maybe I'll give you a link. But, you're so arrogantly certain yourself that I don't want to gamble taking the time to hold your hand. You've got to conduct yourself in a more responsible scientific manner for me to give you any more of my time. Your argument seems to consist of, "Francis Crick said it, so it must be true."

Please.

If you take even an hour to read a history of science, you will see that, time and again, highly esteemed authorities making learned proclamations to all can be very wrong. You have taken no time to account for this possibility. This marks you as a third-rate scientist and observer. Heads up.

You may wonder why I'm being so in your face. I'm being in your face because I'm reflecting your attitude back to you. You called the tune.

Jeffrey Shallit said...

Account for the extremely well documented cases of mediums talking to disincarnates, and providing highly verifiable veridical evidence.

There aren't any. Conventional explanations (cheating, "cold reading", and so forth) are much more plausible. You don't provide a single case we can discuss.

If you take even an hour to read a history of science, you will see that, time and again, highly esteemed authorities making learned proclamations to all can be very wrong.

Famous logical fallacy: "these famous scientists were all wrong, and so you are."

You may wonder why I'm being so in your face.

You're not, sorry. Try harder.

Jeffrey Shallit said...

See an essay by me:

Umm, no. So far you've provided no evidence you understand the issues.

Consciousness is necessarily non-material

No, it's not. Consciousness, for me, means a system having sense organs that allow it to form a model of the world that includes itself.

Computers cannot store memories in the standard meaning of the term.

Sure they can. Putting your claim in italics is not providing evidence for your claim.

Think of memories as kinda like a type of retropsychokinesis.

There is no evidence for psychokinesis, much less the kind you advocate. As for "retrocognition", I don't know what you think it might mean.

Which demonstrates that people cannot possibly be purely material beings

No, it demonstrates that brain processes are algorithmic.

that computation does not equate to consciousness.

Nobody said it did. A lot of computation doesn't involve consciousness. But you can't have consciousness without a computational system. Please try to understand the argument.

So computation only has meaning by virtue of consciousness

This is clearly false. Counterexample: a Turing machine that computes the function n -> n^2 does so whether or not it was designed by a conscious mind.
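[The counterexample above can be made concrete. What follows is a minimal illustrative sketch of my own, not anything from the thread: a tiny Turing-machine simulator running a doubling machine on unary input. Every step is a blind table lookup on (state, symbol); the correct output appears whether or not any observer assigns the symbols a meaning. For brevity the machine computes n -> 2n rather than n -> n^2; the principle is identical.]

```python
def run_tm(tape, rules, state="q0", halt="halt"):
    """Run a Turing machine. rules maps (state, symbol) -> (write, move, new_state)."""
    tape = dict(enumerate(tape))  # sparse tape; absent cells are blank "_"
    pos = 0
    while state != halt:
        sym = tape.get(pos, "_")
        write, move, state = rules[(state, sym)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Doubling machine: input 1^n, output 1^(2n) -- pure symbol shuffling.
DOUBLE = {
    ("q0", "X"): ("X", "R", "q0"),   # skip already-copied input
    ("q0", "1"): ("X", "R", "q1"),   # mark a 1, go write its copy
    ("q0", "Y"): ("1", "R", "q3"),   # no 1s left: start cleanup
    ("q0", "_"): ("_", "R", "q3"),   # (empty input)
    ("q1", "1"): ("1", "R", "q1"),
    ("q1", "Y"): ("Y", "R", "q1"),
    ("q1", "_"): ("Y", "L", "q2"),   # write the copy at the right end
    ("q2", "Y"): ("Y", "L", "q2"),
    ("q2", "1"): ("1", "L", "q2"),
    ("q2", "X"): ("X", "R", "q0"),   # back to the frontier
    ("q3", "Y"): ("1", "R", "q3"),   # rewrite Y -> 1
    ("q3", "_"): ("_", "L", "q4"),
    ("q4", "1"): ("1", "L", "q4"),
    ("q4", "X"): ("1", "L", "q4"),   # ...and X -> 1
    ("q4", "_"): ("_", "R", "halt"),
}

print(run_tm("111", DOUBLE))  # -> "111111": six 1s, produced without any semantics
```

Nothing in the lookup table "knows" what a 1 denotes; the machine computes the same function regardless.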

Computational processes do not intrinsically have meaning.

Define "meaning". Define "intrinsic". Explain why anything "intrinsically" has "meaning". If you define the words the way I think you do, lots of things "intrinsically" have "meaning" that is not imposed: for example, varves.

if our thought processes were purely computational (or any other physical process), we would be no more likely to reach correct conclusions than false conclusions about anything.

Non-sequitur. Instead of continuing to make a fool of yourself, why don't you get a book on theory of computation and read it?


Bastien said...

" So the same notion we use for computers could be used for the brain, and we could compare their abilities this way"

How, specifically? Give me a detailed example.


I don't know what you mean by a "detailed example". Without a mathematical model of how our brain computes, the best I can do is argue by analogy with what we do with Turing machines. Assuming we had a mathematical description of how our brain works, more precisely of how it performs computations, it seems reasonable to imagine that we could prove that it can or cannot decide a problem. If we can describe a behaviour of this model of our brain (an "algorithm" if you wish) that, given any instance of a certain problem (let's say the halting problem), reaches the right answer after a finite time, then it would mean that our brain can solve this problem (assuming the model we work with is correct). If it can solve (in that sense) an undecidable problem, then it is more powerful than computers. I believe that all the problems that computers can solve can also be solved by our brain, because it seems clear that, with enough time, our brain can correctly carry out any algorithm step by step. But I don't see any reason to a priori assume that it cannot do more.

Jeffrey Shallit said...

So you admit you have no way currently to test your claim. Fine with me. By contrast, I do have a suggestion that is based not on uncomputability, but rather on intractability. If brains could solve, say, the integer factorization problem on larger and larger examples in time that scales significantly better than the best-known algorithm, that would be at least some evidence that brains are doing something beyond our current understanding of computation. But they don't. In fact, we see just the opposite: the best mental calculators can handle fairly large instances of problems with known polynomial-time solutions, such as multiplication or extracting roots, but fail miserably on more complicated problems such as integer factorization.
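[The asymmetry described above is easy to demonstrate with a hedged sketch; the code and the particular primes are my own illustrative choices, not from the post. Multiplying two numbers takes time polynomial in their digit count, while naive trial division on the product does work that grows exponentially with the digit count.]

```python
import time

def trial_factor(n):
    """Factor n by trial division: fine for small n, but the
    work grows exponentially with the number of digits of n."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

p, q = 10007, 10009              # two small primes (illustrative values)

t0 = time.perf_counter()
n = p * q                        # multiplication: essentially instantaneous
mul_time = time.perf_counter() - t0

t0 = time.perf_counter()
found = trial_factor(n)          # factoring: ~10,000 divisions even at this toy size
fac_time = time.perf_counter() - t0

print(found)                     # [10007, 10009]
print(f"multiply: {mul_time:.1e}s   factor: {fac_time:.1e}s")
```

Even at this toy scale, factoring takes measurably longer than multiplying, and the gap widens rapidly as the numbers grow, which is exactly why factoring makes a usable test.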

But I don't see any reason to a priori assume that it cannot do more.

Because brains are physical objects that must obey physical laws, and because of the Church-Turing thesis. That's a good reason to assume it cannot do more, at least provisionally.

Ian Wardell said...

So, Mr Shallit refuses to read my explanation as to why all forms of materialism are untenable, and even snips the link to it. Elsewhere (https://www.blogger.com/comment.g?blogID=20067416&postID=113525168149028937) he explains he hasn't read my arguments because, although he likes "to read opposing ideas, [he only likes to read them] if the people proposing them seem to have connected brain cells".

You know, this is fairly reminiscent of the childish insults that people fling at each other at school. I've communicated with many skeptics/materialists over the net over the years. None have ever insulted me in quite such a childish manner. Indeed, whilst disagreeing with me, many of them think I advance much more sophisticated views than others who oppose materialism.

Yet he insists that consciousness is material. {sighs}

And he fails to understand what the word consciousness means, what meaning means even.

One of the problems with the world is that there are far too many knuckleheads like Mr Shallit who refuse to listen. The arguments are there, but he simply isn't interested. The history of the human race is adequate testimony to the propensity of many people to subscribe to the most transparent stupidities imaginable. Materialism is possibly one of the most stupid, and certainly the notion that some mechanical contraption is conscious is.

Bastien said...

I never said that I had a way to prove anything. Also, I didn't make any claim; I just said what my gut feeling is, and I said that computability could be a way to prove it right or wrong, assuming we manage some day to have a mathematical model of our brain.

Concerning tractability, I'm not convinced that it would tell us anything interesting on the subject. Doing complex things and doing things fast seem to me to be very different matters. To put it differently, the fact that computers solve some problems much faster than us doesn't mean there aren't problems that we can solve while they can't.

Because brains are physical objects that must obey physical laws
Of course they follow physical laws; that does not mean that they work the same way as computers, nor that they can do the same things. Besides, what defines what a computer can do is mathematical laws more than physical laws. I really do not see what this has to do with the problem at hand.

Concerning the Church-Turing thesis: first, it is not proved, and second, it only talks about computing; it does not say anything about behaviours that cannot be seen as computations.

Jeffrey Shallit said...

I didn't touch your link, Ian, so please don't lie about it.

And he fails to understand what the word consciousness means

I proposed my definition of it. You did not argue for or against it.

Materialism is possibly one of the most stupid, and certainly the notion that some mechanical contraption is conscious is.

Repeating your assertions ad nauseam is not evidence. You have provided no evidence.

Jeffrey Shallit said...

Bastien:

The Church-Turing thesis is not susceptible to proof, as it is a statement about the physical world and not the mathematical one. So claiming "it is not proved" seems to miss the point.

Nevertheless the thesis seems reasonably well-supported by evidence. If you propose something that violates special relativity, for example, you have a strong evidentiary burden to meet. Similarly, if you propose something that violates the Church-Turing thesis. "Gut feeling" doesn't meet that burden.

My point about factoring is that this is a test one could actually carry out. By contrast, "solve an uncomputable problem" does not lend itself to a test of this sort. At least, you haven't proposed one.

Similarly, without pointing to some brain process that could conceivably violate the Church-Turing thesis, you have very little to go on.

Ian Wardell said...

Oh yes, my link is there, appropriate apologies.

The existence of consciousness refutes materialism since consciousness is not material (i.e. the notion of the material which was adopted at the birth of modern science in the 17th Century).

Jeffrey Shallit said...

"Indeed, whilst disagreeing with me, many of them think I advance much more sophisticated views than others who oppose materialism."

Just a variation on "The lurkers support me in e-mail!"

Jeffrey Shallit said...

"The existence of consciousness refutes materialism since consciousness is not material "

Repeating a claim ad nauseam doesn't make it any more true. I already proposed my definition of consciousness, which is most certainly material.

Purple Neon Lights said...

You like to win, not discover the truth. I suggest mixed martial arts or being a trial lawyer.

Bastien said...

Jeffrey :

I know that the Church-Turing thesis cannot be proved for now. However, I don't think the reason is the one you give, i.e. that it deals with the physical world. Indeed, it compares the computational power of the human brain with that of Turing machines, and while the former is clearly a physical object, the latter are mathematical objects. So the way I see it, it is susceptible to proof; we just lack a mathematical model of the brain.

What evidence do you think supports the Church-Turing thesis?

Also, even if we admit that this thesis is true, you haven't replied to my remark that the Church-Turing thesis only talks about computation, so that if it is true it only means that the computational power of our brain is the same as that of computers. But nothing proves that everything our brain does can be seen as computation. What about dreams, intuitions, love? I don't claim that these things cannot be seen as computations, but I think that at the very least it's not obvious that they can.

On the matter I find this paragraph on Wikipedia's article on the Church-Turing thesis rather relevant : https://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis#Philosophical_implications

My point about factoring is that this is a test one could actually carry out.
Yes, but it is irrelevant.

By contrast, "solve an uncomputable problem" does not lend itself to a test of this sort. At least, you haven't proposed one.
I have already said three times at least that this question cannot be solved before we have a mathematical model of how our brain computes.




Purple Neon Lights said...

Shallit seems like the new Gerald Woerlee. (Some here will know Gerald Woerlee -- Shallit all-but-certainly won't.) Shallit is a guy who has his reality box that he defends combatively. Outside of his box, he doesn't care if something is right -- he just wants to fight it. I think we can all be better entertained by watching a boxing match. He has his own little circumscribed world that he rules by copiously writing papers and teaching students. He does this very well. However, he flunks abysmally when it comes to thinking outside of his world. Instead of exploring, he becomes arrogant and derisive.

Purple Neon Lights said...

I've seen Shallit's type before. They're testosterone-laced male philosophers who relish clashing antlers over making true discoveries. I've seen these guys lock horns on endless discussion threads. They never give an inch. They don't seek to make progress; they just want to win. They split hairs ad nauseam and make learned, derisive retorts -- and, ultimately, get nowhere. This is not intended to be an insult. This is simply an objective observation. Shallit is very skilled at juggling quantities and numerical processes. He has basically zero ability to think outside of that arena.

Ian Wardell said...

I believe it's the first time I've said that.

Anyway, modern materialism is construed as being what science can investigate. But science limits itself to the quantifiable or that which can be measured. Hence, it necessarily follows that it cannot in principle explain consciousness since the latter is essentially characterised by qualia (construed in its broadest sense) and intentionality (in its philosophical sense).

Jeffrey Shallit said...

Qualia, I think, have been readily handled by Dennett.

As for "intentionality", please quantify what units you would use to measure "aboutness", or how you could otherwise determine whether one thing is "about" another. For example, are varves "about" chronology or not, yes or no?

I've posed this question about "intentionality" to a handful of philosophy professors. None had any good answer. Suffice it to say, I am not particularly impressed with most philosophers.

Jeffrey Shallit said...

So the way I see it, it is susceptible to proof; we just lack a mathematical model of the brain.

No, I think this is wrong. A mathematical model of the brain will necessarily be incomplete, as is a mathematical model of every natural process studied so far. We don't get proofs in physics or chemistry, but we do get confirming or disconfirming evidence.

What about dreams, intuitions, love?

If they are the result of physical processes (which I think they are) then they can be modelled computationally.

What evidence do you think supports the Church-Turing thesis?

Two things: first, no plausible model has even been proposed which violates it. And second, the large number of models of computation that are equivalent to TM's.

Jeffrey Shallit said...

"male philosophers"

Mr. Neon, please don't insult me by calling me a philosopher.

Bastien said...

No, I think this is wrong. A mathematical model of the brain will necessarily be incomplete, as is a mathematical model of every natural process studied so far. We don't get proofs in physics or chemistry, but we do get confirming or disconfirming evidence.

If it is impossible to get a complete mathematical model of our brain, or at least of its computing abilities, then I think it would be evidence that it cannot be equivalent to Turing machines.

If they are the result of physical processes (which I think they are) then they can be modelled computationally.

What tells you that all physical processes can be modelled computationally? This is a very strong assumption, and I would tend to think that it is not true.

no plausible model has even been proposed which violates it
What would it mean for a model to violate it? The way I understand the Church-Turing thesis, to violate it would mean exactly what I proposed, which is to understand well enough how our brain computes and then formally compare its power with that of Turing machines. And as we have said several times, we are far from being able to do that, and it may never be possible. So the fact that it has not been done yet cannot be seen as evidence that the thesis is true.

the large number of models of computation that are equivalent to TM's.
The fact that we can invent many computational models equivalent to TMs does not say anything about our brain. There are also many computational models that are weaker or stronger than TMs.

Jeffrey Shallit said...

If it is impossible to get a complete mathematical model of our brain, or at least of its computing abilities, then I think it would be evidence that it cannot be equivalent to Turing machines.

Why? Do we have even a single physical model of anything that is complete? I think you're confusing ignorance about the model with equivalence of computing power. The burden of proof is, I think, currently on others to show that the brain has access to some different physical aspect that allows it to compute beyond the C-T thesis.

What tells you that all physical processes can be modelled computationally?

The success so far, and the lack of a significant candidate process that hasn't been modelled.

There are also many computational models that are weaker or stronger than TMs.

What is a plausible (physically realizable) model that is stronger than a TM? I don't know of any.

Jeffrey Shallit said...

Neon:

The history of psychic claims is a history of frauds and rubes. My mind is pretty much made up about this, but I do have a standard for reconsideration. Have just one psychic pass Randi's challenge, then we'll talk. I don't say passing such a challenge guarantees psychic powers, only that they would be worth reconsidering if it happens.

Ian Wardell said...

"Qualia, I think, have been readily handled by Dennett".

Dennett is insane. And there's nothing to be "handled" here. We have phenomenological experiences. They are not material; hence materialism is false. There are no two ways about this.

"As for "intentionality", please quantify what units you would use to measure "aboutness""

As I keep saying, you cannot measure consciousness (whether qualia or intentionality), otherwise it would be material.

"Suffice it to say, I am not particularly impressed with most philosophers".

Neither am I. Not modern philosophers of the 20th and 21st Centuries anyway.

Having said that, scientists tend to be utterly clueless when it comes to any philosophical issues.

Incidentally it would be nice if you could actually explain how a computer could conceivably be conscious. How could it experience pain? Greenness? Hope? Fear? Could even an abacus experience such things?

Ian Wardell said...

"Have just one psychic pass Randi's challenge"

Jeffrey, you really are a complete clown...

Bastien said...

Why? Do we have even a single physical model of anything that is complete? I think you're confusing ignorance about the model with equivalence of computing power. The burden of proof is, I think, currently on others to show that the brain has access to some different physical aspect that allows it to compute beyond the C-T thesis.

It seems that you're confusing ignorance about the model with absence of a model:

In a previous comment you said "A mathematical model of the brain will necessarily be incomplete, as is a mathematical model of every natural process studied so far.", which seems to mean that there exists no complete mathematical model of the brain, or, if there exists one, that it is impossible for us to discover or understand it. In that case there must be a reason for the impossibility of defining such a complete model, and a plausible reason would be that there are phenomena in our brain that are more complex than what a Turing machine does. I don't see how it would be impossible to define a complete model of the brain if everything it does can be simulated by a Turing machine. It is funny, because if this is what you meant, then you are yourself providing evidence that the brain has access to a "physical aspect that allows it [to] compute beyond the C-T thesis", as you put it.

So that was absence of a model, or at least absence of a model that humans can understand. But if you meant instead ignorance of the model -- meaning that we could find a complete mathematical model of the brain but do not have it yet -- then my assertion that my proposal to evaluate the computational power of the brain is susceptible to proof stands.

The success so far, and the lack of a significant candidate process that hasn't been modelled.
Wikipedia says that in physics, the CT thesis can have different meanings. One of them is the following:
"The universe is not equivalent to a Turing machine (i.e., the laws of physics are not Turing-computable), but incomputable physical events are not "harnessable" for the construction of a hypercomputer. For example, a universe in which physics involves random real numbers, as opposed to computable reals, would fall into this category."
Another one is the following:
"The universe is a hypercomputer, and it is possible to build physical devices to harness this property and calculate non-recursive functions. For example, it is an open question whether all quantum mechanical events are Turing-computable, although it is known that rigorous models such as quantum Turing machines are equivalent to deterministic Turing machines. [...] John Lucas and Roger Penrose[55] have suggested that the human mind might be the result of some kind of quantum-mechanically enhanced, "non-algorithmic" computation, although there is no scientific evidence for this proposal."

What is a plausible (physically realizable) model that is stronger than a TM? I don't know of any.
Our brain? :-) Just kidding, I can't use this as an argument, this is what we're trying to prove. Of course we don't know if it is possible to build such a super TM. You are convinced that it cannot be done. I am not.


Jeffrey Shallit said...

Dennett is insane.

You've worn out your welcome. Get lost.

Ian Wardell said...

Dennett denies that consciousness exists. If that doesn't deserve the description of being insane, God knows what does. Although, admittedly, those who think a complex abacus is conscious come close...

Jeffrey Shallit said...

What part of "Get lost" did you not understand?

Ian Wardell said...

Are you sure you're not still at school? You sure as hell act like it. And you refuse to advance any arguments to justify your position. Admittedly that's impossible unless you deny the existence of consciousness too. It's little wonder that you're a fan of Dennett.

Jeffrey Shallit said...

Three chances, you're gone. Bye bye!

Purple Neon Lights said...

This is Part 1 of a 3-part post:

Following, I will first discuss some limitations and strengths of James Randi's investigations. I will then provide four examples of eminent figures in the areas of the hard sciences and philosophy who came to the working assumption that consciousness is separate from the body. I also will mention an example of high-quality western scientific investigation of a paranormal phenomenon.

******************
About James Randi
******************

James Randi is a prominent magician, or illusionist, who avidly debunks allegedly paranormal events.

The Randi Challenge (discontinued in 2015) was a long-standing offer of a million-dollar prize to be awarded to whomever could demonstrate a paranormal event under controlled circumstances. These circumstances were to be agreed to by both James Randi and the demonstrator. 

No one ever won the Randi Challenge.

It is often presumed that Randi is an unquestionable authority; that, since Randi has never seen evidence of paranormal phenomena, then such evidence does not exist. This is not true. There are many strong, scientific indications of paranormality that have not come under the limited purview, or "searchlight," of Randi's investigations.

***********
First, the
bad news...
***********

There are documented cases of quality paranormality-demonstration proposals being turned down by the Randi organization (aka JREF, the James Randi Educational Foundation). Also, in at least one case, a protocol was agreed upon but was later amended by JREF to require a level of performance far higher than the demonstrator had initially said he could accomplish. In other words, JREF moved the goalposts in the middle of the game, as it were.

As well, Randi has been called out numerous times for making impulsive, arrogant, shoot-from-the-hip, inaccurate -- even libelous -- proclamations. There are instances of his having apologized in writing for having done this.

I wouldn't say that Randi cunningly plans to lie. However, he can display a knee-jerk, careless, arrogant dismissiveness that makes him say things that are poorly contemplated and demonstrably false. (There are credible observers who would be willing to take the next step and suppose that Randi has deliberately and premeditatedly lied. I will give Randi the benefit of the doubt, because I see him as being, on balance, a very good and honorable man.)

********
Now, the good news:
Randi is good man
providing a
much-needed service
********

I understand well, and feel keenly, much of where Randi's coming from. He's dealt with so many quacks and charlatans that he has very little patience for them. He's tired of suffering fools. I believe he thinks that any psychic claim is BS, and he feels he doesn't have time to deal with every Tom, Dick or Harry who has his head up his exhaust pipe, as it were.

In interviews, Randi can display a touching, urgent concern about people being bilked by quacks and liars presenting as psychics. He nobly wants to stop these bad actors from victimizing others. It's like he is on a mission. He reminds me of paramilitary outfits in Africa passionately dedicated to fighting elephant poachers. These volunteers put their lives on the line because of a concern for the larger good.

For these reasons, Randi is a most valuable global asset.

*******
Ending Part 1 of 3.
*******

Purple Neon Lights said...

This is Part 2 of a 3-part post...

*********
Randi overlooks
good info
**********

The cost of Randi's sometimes aggressively dismissive attitude, however, is that he misses some good stuff when it's right under his nose.

There are numerous examples of scientifically responsible observations of potentially paranormal phenomena that have not been examined by JREF.

One such example is the extensively tested and documented  anticipatory effect.

The anticipatory effect is a name for how the human nervous system often reacts a short moment BEFORE being shown a highly emotionally charged picture. There is no mechanism by which this could happen that fits within mainstream physics and physiology. Yet it happens, beyond a reasonable doubt.

These experiments on the anticipatory effect have been torn down to the stud walls, as it were, by qualified skeptical evaluators. They have come up blank in their search for flaws in the experimental design. The indications are exceptionally strong that, in some way, some people can pick up the emotional content of an image prior to seeing it.

In experiments on the anticipatory effect, images are selected by a random number generator in a computer. Nobody knows which image is going to appear before it is shown -- not the experimenter, not the subject, not the computer technician, nobody. Some of the images are benign, and some are highly emotionally charged (depicting things such as shotgun suicides and explicit sexual acts, to name two examples). Subjects are connected to highly sensitive physiological measuring devices similar to lie detectors. Their physiological responses are much stronger when shown the emotionally charged pictures versus the benign pictures. In many cases, these strong responses occur *before* the image is shown.

These experiments have been designed to address potential problems or "leakages" in the experimental design. Many of these potential leakages were identified by skeptics, who were invited to check out the set up.

The result? The anticipatory effect still happens. The *way* it happens is open to discussion, but the well-replicated fact that it does, indeed, happen has been established far beyond the confidence levels required in modern western science. A meta-analysis of these experiments has been done, showing statistical confidence levels greatly exceeding what would occur by chance.

(Link to meta-analysis of studies of the anticipatory effect:

http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3478568/    )

This is but one example of anomalous, possibly psychic, phenomena that Randi has never investigated. It's outside the scope of where he looks.

*******
End of Part 2 of a 3-part post
*******

Purple Neon Lights said...

This is Part 3 of a 4-part post...
************

Instead of checking out long, sometimes complicated studies, Randi appears to prefer a one-off demonstration of psychic phenomena -- or, at best, a demonstration involving a small, short statistical sample. Randi doesn't bother with replication. All of this flies in the face of how many scientific investigations are conducted. Science very frequently involves large samples and replication.

As an illustration of a common deficiency in Randi's investigatory style: researchers wanting to demonstrate the well-established efficacy of taking baby aspirin to reduce the likelihood of a repeat heart attack in prior heart-attack victims (to name one hypothetical example) would be very hard pressed to demonstrate it within the frameworks that Randi has required in the past.

A number of qualified scientists have rejected applying for the Randi Challenge because of his limited, blinders-on attitude as to what constitutes a demonstration. Also, they choose not to bother with Randi because of the instances where he has been very difficult to work with, and has overridden reasonable, measured decisions made by his designated surrogates.

It can be very expensive to conduct a full-scale, large-sample research project. To gamble all of that on the unlikely chance that Randi will pay out the million dollars makes no sense. This is one of the reasons why a number of reputable scientists haven't bothered with the Randi Challenge.

Randi's more of a magician and entertainer than he is a scientist. He has had no formal training in experimental science.

In that vein, regarding the testing of possibly psychic phenomena, Randi has said (paraphrased), "What is needed here is not a scientist, but a policeman."

In good measure, Randi is correct.  Some good scientists have been fooled by stealthy individuals.

However, even *better* scientists have *recruited* skeptical illusionists to close potential holes in experimental design -- and, having done that, have come up with the same anomalous results.

So, Randi's statement that what is needed is a policeman, not a scientist, can be modified to say, "What is needed here is a good scientist *teamed up* with a policeman." And some good scientists have done this. But Randi has not teamed up with the good scientists.

Again, I want to emphasize that Randi has done a great deal of good work by busting the relatively easy targets of mainstream quacks and charlatans. He plucks the low-hanging fruit. But, as for examining much of the quality scientific evidence, he's not in the arena.

It's like he teaches "Psychic Investigation 101" -- good as far as it goes, but there's a whole lot more to be learned.

OK, that's part of what I can say about James Randi.

**********

This is the end of Part 3 of a 5-part post

Purple Neon Lights said...

This is Part 4 of a 5-part post
********
Following is a sampling of some of the eminent figures of philosophy and the hard sciences who have rigorously investigated the question of consciousness, and have come up with the extensively reasoned, working assumption that consciousness is separate from the body. Here are four of them:

1) Erwin Schrödinger.

Schrödinger received the Nobel Prize in Physics in 1933. He is best known for his quantum physics thought experiment called "Schrödinger's Cat."  He believed consciousness was not produced by the brain and could not be explained in physical terms.

Quotes by Schrödinger:

"Consciousness cannot be accounted for in physical terms. For consciousness is absolutely fundamental. It cannot be accounted for in terms of anything else."

"The observing mind is not a physical system, it cannot interact with any physical system. And it might be better to reserve the term 'subject' for the observing mind. ... For the subject, if anything, is the thing that senses and thinks. Sensations and thoughts do not belong to the 'world of energy.'"

2) Wilder Penfield, MD.

A pioneer of mapping brain function, Penfield is widely considered to be the father of neurosurgery.

After many years of physically stimulating brains, and recording the reactions, Penfield found himself having to presume that consciousness lies outside of the brain.

Penfield thought for many years that there was no consciousness independent of the brain. After fifty years of research, however, he changed his mind.

In his last book, "The Mystery of the Mind," Penfield stated: “I came to take seriously, even to believe, that the consciousness of man, the mind, is NOT something to be reduced to brain mechanism. . . Where did the mind — call it the spirit if you like — come from? Who can say? It exists."

"We humans are lofty creations, eternal souls, and timeless spiritual beings. This view is difficult for some to fathom."

3) Sir John Eccles, Nobel Laureate.
Eccles was an Australian neurophysiologist and philosopher who won the 1963 Nobel Prize in Physiology or Medicine for his work on the synapse.

Eccles, after persistent detailed reasoning and contemplation, came to strongly suppose that consciousness is separate from the physical body.

He used the phrase "promissory materialism" to describe the belief that, eventually, everything -- including consciousness -- will be explainable by physical mechanisms, if we wait long enough. He considered this to be a promise that could never be fulfilled, like a promissory note that could never be paid off.

Quote by Eccles:

"I maintain that the human mystery is incredibly demeaned by scientific reductionism, with its claim in promissory materialism to account eventually for all of the spiritual world in terms of patterns of neuronal activity. This belief must be classed as a superstition ... we have to recognize that we are spiritual beings with souls existing in a spiritual world as well as material beings with bodies and brains existing in a material world."

4) Sir Karl Popper. Popper was the renowned philosopher of science who is possibly best known for asserting that falsifiability is necessary for a theory to be scientific.

Popper did not believe in materialism. He believed in dualism, which holds that the mind is nonmaterial.

From Wikipedia:

Interactionist dualism, or simply interactionism, is the particular form of dualism first espoused by Descartes... In the 20th century, [a] major defender [has been] Karl Popper. It is the view that mental states, such as beliefs and desires, causally interact with physical states.

*******
End part 4 of 5

Purple Neon Lights said...

Part 5 of a 5-part post:

Upshot of all of the above: the more one digs into the subject of quality paranormal research, the more one learns how to sidestep the vast amount of bad information out there in this area, and the more one resists the oft-internalized conventional wisdom of many mainstream scientists and academicians that all creation is ultimately reducible to physical entities and mechanisms, the more one will come to realize that, far and away, the best hypothesis about consciousness vis-à-vis the brain is that consciousness is separate from the brain and the body.

***********

Above, I have provided a small sampling of the quality information that suggests that consciousness may be nonphysical. There are several other areas where excellent scientific exploration of parapsychology has been done, such as remote viewing, micro-psychokinesis (aka "micro-PK"), veridically verified communications with disincarnates, and other so-called paranormal phenomena.

Upshot: It is scientifically plausible and defensible to hypothesize -- if only cautiously and tentatively -- that consciousness might be separate from the body.

********

End Part 5 of 5

Jeffrey Shallit said...

"the quality information": Neon, you and I have very different ideas of what constitutes evidence.

The fact that you can name four scientists and philosophers who held eccentric ideas about the brain shows nothing. It's an argument from authority, and not a particularly good one at that. Popper? Ferchrissakes, the man was a philosopher, not a neuroscientist.

The vast majority of scientists who study the brain do not think the mind is "nonmaterial", and indeed, neuroscientists are investigating the mind and brain now the way they would any other material object.

As for parapsychology, I used to believe there was something to it (when I was a teenager), but after reading the literature I changed my mind. Read, for example, Susan Blackmore, The Adventures of a Parapsychologist.

Purple Neon Lights said...

My main goal is to establish that the question of whether consciousness can conceivably exist without a physical brain and nervous system should be, at least, on the table.

I see no reason to dismiss the question. Dismissing the question, to me, is imprudent. It takes extraordinary indications to validly dismiss *any* question. For example, I cannot think of circumstances where I can guarantee that the sun will rise tomorrow. The question of whether the sun will rise tomorrow is always open.

As far as Susan Blackmore goes, I am familiar with her body of work. She went into the area of parapsychology with high hopes and expectations of finding evidence of psi-type anomalous phenomena, but came up empty-handed. She eventually said that she was tired of looking. That certainly doesn't mean that there's not evidence.

There is a lot of quite-substantial evidence that can't be fit into any commonly accepted framework.

In my previous five-part post, I selected just one set of experiments that I thought was the easiest to assimilate. That was, of course, the experiments about the anticipatory effect. The fact that people are reacting before seeing a picture does not necessarily mean there is a non-prosaic, non-mechanistic explanation for it; however, it is true that no ordinary explanation has been produced. Therefore, the door must be left open to a non-ordinary explanation. This is not to say that the non-ordinary explanation will eventually gain widespread acceptance and corroboration. It is to say that the non-ordinary explanation clearly deserves to still be on the table.

In the next few days, I will provide some links to other scientific studies that strongly suggest that non-ordinary explanations should still be under strong consideration.

Jeffrey Shallit said...

I think I already explained why I reject your view. I don't think it should be on the table because there is not much evidence for it, and because there is not even a reasonable mechanism proposed that could account for it.

In the next few days, I will provide some links to other scientific studies that strongly suggest that non-ordinary explanations should still be under strong consideration.

Not really interested, thanks. My blog is not the place for you to post extremely lengthy accounts of your beliefs. I'll gladly let you post a link in the comments, though.

BarryR said...

The brain is not a computer. It is a paper weight.
Or: It is not an anything. But we can view it as an information processing device and achieve limited success with AI.
I view the brain as a receiver of consciousness. Bohm viewed it as a holographic device, not a digital one.

The philosophy of science shows that people get excited about the brain is...

Jeffrey Shallit said...

a receiver of consciousness

Babble.

Purple Neon Lights said...

Dogma increases in proportion to the inability to defend one's position. Also, dogma increases in proportion to the unwillingness to incorporate new facts into one's own worldview.

Jeffrey Shallit said...

Neon, I have also rejected Bigfoot, unicorns, Elvis still being alive, homeopathy, crop circles, and dowsing. If there were ever any good evidence for these, I would reconsider.

I already stated what I consider good evidence for psychic powers: passing Randi's test. You may find my line unreasonable, but there it is.

It makes sense to draw a hard line about reconsideration for many things, for otherwise one's life would be entirely spent reconsidering and not getting anything done.

BarryR said...

"I see no fundamental difference between the supposedly "active" vision of a human, and a robot equipped with a camera that can base its decisions on future actions on what it is currently seeing. What do you think the difference is?"
Marvin Minsky thought he could produce a "seeing" computer in 6 months. That was 50 or 60 years ago. It's still not possible because what we are really dealing with is mind, not brain, and computers = brain is a metaphor that will have a life and then be replaced by another metaphor.

Jeffrey Shallit said...

"It's still not possible because what we are really dealing with is mind not brain and computers"

This is not an argument; it's just a restatement of a claim. Why, specifically, is it not possible? What fundamental chemical or physical principle prevents it?

"computers = brain is a metaphor that will have a life and then be replaced by another metaphor"

I am willing to bet this is not the case. Computers and computing are very fundamental concepts. It's like saying "stuff is made of atoms" is a metaphor that will be replaced. Not likely.

BarryR said...

"It's like saying "stuff is made of atoms" is a metaphor that will be replaced. Not likely."
Stuff is not made of atoms. Atoms are just our scientific view of what is.
An instrumentalist notion of science is that science is just a shorthand for telling us how to do things. You seem to have an extreme logical positivist notion of science.

https://en.wikipedia.org/wiki/Philosophy_of_science#The_purpose_of_science

Jeffrey Shallit said...

Fascinating. I've never met anyone who explicitly denies the atomic theory of matter.

If matter is not made of atoms, what do you think matter is made of? How did you come to your belief, and how did you test it?

BarryR said...

"If matter is not made of atoms, what do you think matter is made of? How did you come to your belief, and how did you test it?"
I can't actually answer that. 40 years ago, I was a scientist concerned with image processing. I increasingly thought that data was how we reduced things but not how we put things together (perceived/understood). I became a 70s dropout practicing lots of meditation. It was a personal paradigm shift. So I don't think matter is made of anything. We just analyse it according to what we want to do.

P.S: Spelling is Australian.

Jeffrey Shallit said...

OK, no problem. You don't think matter is made of anything and you have no evidence in support of this hypothesis (I hesitate to call it that, as it seems incoherent).

Meanwhile, you ignore or reject the huge evidence in favor of atoms, even the fact that we have actually imaged individual atoms.

Bizarre.

Ian Wardell said...

Jeff said:
"You don't think matter is made of anything and you have no evidence in support of this hypothesis (I hesitate to call it that, as it seems incoherent)".

In a computer game environment what are the objects in that environment made of? Pixels? But depending how close your character is to an object, and what perspective he views it from, it will be different pixels.

Suppose what we consider to be physical reality is composed purely of our sensory experiences? Physical reality is nothing but our sensory qualia, in other words everything we see, touch, hear, taste and smell. There are no mind-independent objects somehow causing our qualia. Would it then make sense to say objects are made out of atoms? Well . .not in a literal sense at least.

Readers might be interested in a short blog entry by myself on subjective idealism.

http://ian-wardell.blogspot.co.uk/2014/03/a-very-brief-introduction-to-subjective.html

Jeffrey Shallit said...

In a computer game environment what are the objects in that environment made of? Pixels?

I can tell you've never programmed a computer game. Does it bother you that you comment on things you know nothing at all about?

Jeffrey Shallit said...

There are no mind-independent objects somehow causing our qualia.

I bet if you were struck by a meteorite, as Ann Hodges was, you might believe in mind-independent objects.

But seriously, I can't imagine the kind of solipsism needed to think that there are no objects outside human minds.

Bastien said...

Your comparison is quite unfair Jeffrey. "Matter = atoms" is supported by soooo much more evidence than "brain = computer"... The first being that we can observe what matter is made of. For "brain = computer" all we have for now is claims supported by very little evidence, but strong beliefs.

Jeffrey Shallit said...

If I can't convince an intelligent person like BarryR that matter is made of atoms, what possible success could I have convincing him or someone like him of "brain = computer"?

By the way, as I think I explained, a brain is a computer because we use the word "computer" to mean "a thing that stores and processes information". In that sense, "brain is a computer" is almost trivial. So I think your claim of "very little evidence" is wildly wrong.

Bastien said...

Well it seems that you sometimes use the term computer with the broad, imprecise meaning of "a thing that stores and processes information", and sometimes with the precise meaning of "a computing device that is equivalent to Turing machines". So if we talk about the first definition, since it is vague it is indeed much easier to agree on the fact that brains are computers, in the sense that they seem to be able to store information and process it. However, even using this broad definition of computer, I still lack evidence to support the claim that all brain activity can be accounted for by information storing and processing.

Jeffrey Shallit said...

No, I think you misunderstand.

A computer model need not be equivalent to a Turing machine. For example, finite automata form a general model of computing that is strictly weaker than Turing machines, and so are many individual Turing machines. But nobody knows a realistic model of computing that is more powerful than a Turing machine. Saying "the brain is a computer" is not saying "the brain is equivalent to a Turing machine" because, for one thing, Turing machines have unbounded tapes and the brain is finite.

What's a good candidate, in your opinion, for brain activity that could not be characterized as information storing and processing? I guess you could add "control", because the brain controls parts of the body. But still nothing that could not be modeled physically.
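[The broad sense of "computer" in play here can be made concrete with a toy sketch (a hypothetical illustration, not anything from the exchange above): a deterministic finite automaton stores information in a single state variable and processes its input one symbol at a time, yet it is strictly weaker than a Turing machine.]

```python
# A deterministic finite automaton (DFA): it stores information (its
# current state) and processes input symbols one at a time, so it is a
# "computer" in the broad sense -- yet it is strictly weaker than a
# Turing machine (no DFA can recognize, say, balanced parentheses).
def accepts_even_ones(bits: str) -> bool:
    """Accept exactly the binary strings containing an even number of 1s."""
    state = "even"  # the machine's entire memory: one of two states
    for b in bits:
        if b == "1":
            state = "odd" if state == "even" else "even"
    return state == "even"
```

[Here the machine's whole "memory" is one of two states; contrast the unbounded tape of a Turing machine.]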

BarryR said...

"If I can't convince an intelligent person like BarryR that matter is made of atoms,"
My position is more complex than that. If I make a (simple) circuit, I accept the explanation of transistors in terms of flow of electrons and positive holes. But I regard that as just one view of what is real. I'm not alone in my thinking. A study of the philosophy of science will show many arguments similar to mine.

Jeffrey Shallit said...

I don't understand your meaning.

When we image atoms, what are we seeing if not atoms?

Suffice it to say I am really unimpressed by most philosophers of science.

BarryR said...

Gödel, Penrose and others have discussed the incompleteness theorems. Below is a passage about Lucas. So some pretty sharp minds have questioned computers = brain/mind.

Minds, Machines and Gödel is J. R. Lucas's 1959 philosophical paper in which he argues that a human mathematician cannot be accurately represented by an algorithmic automaton. Appealing to Gödel's incompleteness theorem, he argues that for any such automaton, there would be some mathematical formula which it could not prove, but which the human mathematician could both see, and show, to be true.

The paper is a Gödelian argument over mechanism.

Lucas presented the paper in 1959 to the Oxford Philosophical Society. It was first printed in Philosophy, XXXVI, 1961, then reprinted in The Modeling of Mind, Kenneth M. Sayre and Frederick J. Crosson, eds., Notre Dame Press, 1963, and in Minds and Machines, ed. Alan Ross Anderson, Prentice-Hall, 1964, ISBN 0-13-583393-0.

Jeffrey Shallit said...

Lucas' argument is wrong, and if you actually understand Gödel, the flaw is easy to see. See, for example, Torkel Franzen's book, page 55. Or use a google search.

I know of only a handful of mathematicians who think Lucas was right. The fact that many philosophers take it seriously when it is so obviously flawed is yet another reason why I don't have very high respect for many philosophers.

BarryR said...

I've read some reviews of Franzen and I see they negate some interpretations of Gödel. But my point was that I'm not alone in questioning computer = brain. (I think you mean mind, and this is probably our bone of contention.)

There are philosophers who question if atoms exist. I say they are merely convenient abstractions. But this comes down to the basic difference between a reductionist and a... well nowadays I write paranormal stories

Jeffrey Shallit said...

I am completely unimpressed with arguments like "This smart person agrees with me".

BarryR said...

As said earlier, it requires a flip in consciousness to move away from the current thinking.

Jeffrey Shallit said...

I'll take evidence over "flip in consciousness" any day.

BarryR said...

"I'll take evidence over "flip in consciousness" any day."

Actually, you don't.
The evidence to date is that computers are hopeless as brains. (By which you mean mind).

Your prejudice is that brains are computers.

BarryR said...

I thought this was interesting:
For the first time, scientists have demonstrated that an organism devoid of a nervous system is capable of learning.

See: https://www.sciencedaily.com/releases/2016/04/160427081533.htm