Back when I was a graduate student at Berkeley, I worked as a computer consultant for UC Berkeley's Computing Services department. One day a woman came in and wanted a tour of our APL graphics lab. So I showed her the machines we had, which included Tektronix 4013 and 4015 terminals, and one 4027, and drew a few things for her. But then the incomprehension set in:
"Who's doing the drawing on the screen?" she asked.
I explained that the program was doing the drawing.
"No, I mean what person is doing the drawing that we see?" she clarified.
I explained that the program was written by me and other people.
"No, I don't mean the program. I mean, who is doing the actual drawing, right now?
I explained that an electron gun inside the machine activated a zinc sulfide phosphor, and that it was directed by the program. I then showed her what a program looked like.
All to no avail. She could not comprehend that all this was taking place with no direct human control. Of course, humans wrote the program and built the machines, but that didn't console her. She was simply unable to wrap her mind around the fact that a machine could draw pictures. For her, pictures were the province of humans, and it was impossible that this province could ever be invaded by machines. I soon realized that nothing I could say could rescue this poor woman from the prison of her preconceptions. Finally, after suggesting some books about computers and science she should read, I told her I could not devote any more time to our discussion, and I sadly went back to my office. It was one of the first experiences I ever had of being unable to explain something so simple to someone.
That's the same kind of feeling I have when I read something like this post over at Telic Thoughts. Bradford, one of the more dense commentators there, quotes a famous passage from Leibniz:
Suppose that there be a machine, the structure of which produces thinking, feeling, and perceiving; imagine this machine enlarged but preserving the same proportions, so that you could enter it as if it were a mill. This being supposed you might visit its inside; but what would you observe there? Nothing but parts which push and move each other, and never anything which could explain perception.
But Leibniz's argument is not much of an argument. He seems to take it for granted that understanding how the parts of a machine work can't give us understanding of how the machine functions as a whole. Even in Leibniz's day this must have seemed silly.
Bradford follows it up with the following from someone named RLC:
The machine, of course, is analogous to the brain. If we were able to walk into the brain as if it were a factory, what would we find there other than electrochemical reactions taking place along the neurons? How do these chemical and electrical phenomena map, or translate, to sensations like red or sweet? Where, exactly, are these sensations? How do chemical reactions generate things like beliefs, doubts, regrets, certainty, or purposes? How do they create understanding of a problem or appreciation of something like beauty? How does a flow of ions or the coupling of molecules impose a meaning on a page of text? How can a chemical process or an electrical potential have content or be about something?
Like my acquaintance in the graphics lab 30 years ago, poor RLC is trapped by his/her own preconceptions, and I don't know what to say. How can anyone, writing a post on a blog that is entirely mediated by things like electrons in wires or magnetic disk storage, nevertheless ask "How can a chemical process or an electrical potential have content or be about something?" The irony is really mind-boggling. Does RLC ever use a phone or watch TV? For that matter, if he/she has trouble with the idea of "electrical potential" being "about something", how come he/she has no trouble with the idea of carbon atoms on a page being "about something"?
We are already beginning to understand how the brain works. We know, for example, how the eye focuses light on the retina, how the retina contains photoreceptors, how these photoreceptors react to different wavelengths of light, and how signals are sent through the optic nerve to the brain. We know that red light is handled differently from green light because different opsins absorb different wavelengths. And the more we understand, the more the brain looks like Leibniz's analogy. There is no ghost in the machine, there are simply systems relying on chemistry and physics. That's it.
To be confused like RLC, one has to believe that all the chemical and physical apparatus of the brain, which clearly collects data from the outside world and processes it, is just a coincidence. Sure, the apparatus is there, but somehow it's not really necessary, because there is some "mind" or "spirit" not ultimately reducible to the apparatus.
Here's an analogy. Suppose someone gives us a sophisticated robot that can navigate terrain, avoid obstacles, and report information about what it has seen. We can then take this robot apart, piece by piece. We see and study the CCD camera, the chips that process the information, and the LCD screens. Eventually we have a complete picture of how the robot works. What did we fail to understand by our reductionism?
Our understanding of how the brain works, when it is complete, will come from a full picture of how all its systems function and interact. There's no magic to it - our sensations, feelings, understanding, appreciation of beauty - they are all outcomes of these systems. And there will still be people like RLC who will sit there, uncomprehending, and complain that we haven't explained anything, saying,
"But how can chemistry and physics be about something?"
Saturday, May 29, 2010
53 comments:
People commenting on an immaterialist consciousness aren't asking how biochemical processes in the brain can be *about* something. I don't think they have any problem with that, because they always say they can imagine zombie humans doing all the things regular humans do. What they are asking is how biochemical processes can *feel* like something to the owner of the processes. That's a lot trickier to imagine, even though I have no doubt we will understand it eventually. I sometimes say that qualia will end up being "internal behaviors," although then I get told that behaviorism is dead.
"But how can chemistry and physics be about something?"
I suspect that quite a few of the people asking this really do understand, but the idea makes them uncomfortable. Like evolution, neurology is knocking humanity off its pedestal. The more we learn, the more we realize that we really are just clever apes. The question is really special pleading for humanity, a hope that the soul really does exist.
touching conclusion .. yogis quietly smiling .. with compassion ..
Perhaps the problem is one that has plagued people since the birth of science....
We don't want to believe that we are basically machines.
When we are sick, we are okay with doctors treating our body like a machine that does not function properly.
With psychology and psychiatry, we are okay with doctors treating our mind (brain) if it is not functioning properly.
But the thought that we are nothing more than a complex machine is perhaps too frightening for some to bear.
It seems like the frontier of science right now is explaining consciousness. Will science ever be able to explain consciousness? I do not know. But the admission of "I do not know" is what drives science onwards.... whereas for those who cling to superstition, "I do not know" is where the line of questioning ends.
Here's an analogy. Suppose someone gives us a sophisticated robot that can navigate terrain, avoid obstacles, and report information about what it has seen. We can then take this robot apart, piece by piece. We see and study the CCD camera, the chips that process the information, and the LCD screens. Eventually we have a complete picture of how the robot works. What did we fail to understand by our reductionism?
But can you then tell us what the robot experiences when it hears "high C" on a piano, or what it experiences when it "sees green"?
Or take an alien. Forgetting all ethics, we study it while alive, then kill it and study it in excruciating detail. We find that it has a neural system a lot like ours. Can we thereby tell what qualities it experiences when it sees green or hears harmonies? Can we even say that it has qualitative experiences such as we do? If not, what have we truly explained about its perceptions and feelings?
I don't doubt that everything can be explained in essentially a "reductive" manner, although emergent properties would be a part of that. But we have to consider what explaining our experiences even can mean, and realize that we may never be able to explain a "thing in itself" at the deepest level.
One important difference between the robot and the human is that once we've explained everything via our abstractions, we're pretty much done. We don't have to answer "What's it like to be a robot?" because in all likelihood it's not much different than being a television or a rock.
We may someday know nearly all that is knowable about humans (not every last detail, rather the principles governing those details), yet we might never be able to say why the experience of green really is "like what it is" because that is a "thing in itself," and not explainable via our abstractions.
However, that doesn't mean that we won't understand the perception of green adequately in a scientific sense (within its limitations). The reason consciousness can be explained, almost certainly, is that we can compare information reported by the subject with information gathered from the subject's brain. Wherever the information can interact as it is experienced "subjectively" is a potential locus for consciousness of that information to be occurring.
Leibniz's problem was that he was thinking in mechanical abstractions. We think in terms of fields, though, and although electric fields, say, are not knowable as entities that "experience green," nobody can rule out that they might "in themselves" experience energy changes in that way. So if electric fields evidently contain the information contained in a subject's reports of consciousness, the correlation ought to be considered likely to be meaningful.
I think electric fields are (mostly, at least) responsible for consciousness, but that's not my point here. Other fields (quantum, maybe, though I'm highly suspicious of made-up fields that also fail to explain unconscious areas of the brain) might possibly do it.
The point is that consciousness needs actual explanation via something like fields, and cannot be simply assumed to be the sum of the electrochemical interactions that explain so much of neural activity so well (again, why is some conscious and some apparently not?).
Glen Davidson
I'm a B.Sc. Computer Science student, I've just finished my degree and I am now applying for several Ph.D.s and M.Sc.s in artificial intelligence.
Although my degree included no in-depth coverage of neuroscience, I have already read many books that dispel this confusion. Perhaps they should read some literature detailing the workings of the brain that postdates 1714; that might help.
He should definitely read Society of Mind by Marvin Minsky; it also explains the difference between knowing how to use something and knowing how it works, which would debunk the later paragraphs about the scientist and the colour red.
I came here from Pharyngula, so I don't have a backstory on this, but it's blatantly apparent you're misconstruing Bradford's argument; whether it's because this is the first time you've come across it, I don't know. Clearly he was talking about perception, the phenomenological things, you know, things like qualia. Asking about these things is totally orthogonal to asking about mechanisms and functions. Neither the wavelength of red nor a mathematical model explains red.
I'd recommend Facing Up to the Problem of Consciousness by Chalmers, or at least the first half of it, to see what's really being discussed.
Glen Davidson = Missing the Point x 10^zillion
Reductionism doesn't work on informational systems. You can take a computer apart, see where the data goes in, the processor, the active memory, and the data storage... but that won't tell you the software that it's running.
Of course the mind is immaterial, it's software. The brain is hardware, the mind is software. The combination is greater than the sum of its parts, just like a computer. Informational systems are annoying like that.
"What they are asking is how biochemical processes can [i]feel[/i] like something to the owner of the processes."
Because the neuron is triggered that creates the "I just felt x" process.
Much like when your left hand is touched, the 'touch' nerve sends a signal to your sensorimotor cortex. *That* neuron is just 'touch', not a location. But the firing of that neuron triggers a second neuron that says "source location is left hand".
This is the theorized source of phantom limbs. I think that an analogous process for other feelings is plausible.
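To make the relay concrete, here's a toy sketch in Python (purely illustrative; the staging, function names, and dictionary representation are invented, not actual neuroscience):

```python
# Toy model of the two-stage relay described above (names are illustrative).
# Stage 1 registers only that something was felt; stage 2 is hard-wired to
# attribute a source location whenever it fires.

def touch_neuron(stimulus):
    """Stage 1: signals contact, with no location information."""
    return {"felt": True, "kind": stimulus}

def location_neuron(signal, wired_source):
    """Stage 2: triggered by stage 1; tags the hard-wired source location."""
    signal["source"] = wired_source  # an attribution, not a measurement
    return signal

percept = location_neuron(touch_neuron("pressure"), wired_source="left hand")
print(percept)  # {'felt': True, 'kind': 'pressure', 'source': 'left hand'}
# A phantom limb would be stage 2 firing without stage 1's usual cause:
# the "left hand" label is produced even though no hand is there.
```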
As to the business of qualia, zombies and whatnot, whenever I find a philosopher describing a zombie I become convinced that perhaps I am, in fact, just such a zombie.
I am unaware of any processes in my mind that could not be experienced by such a zombie. This does not bode well for a concept that is supposed to communicate a problem in philosophy.
And what is meant by the process of experiencing the color red? As far as I'm aware the signals produce something that causes me to think I'm seeing red. I don't need to think in terms of qualia at all, the experience itself is quite arbitrary (indeed synesthesia demonstrates that the actual experience for different people may be quite different) but the responses, both built-in and learned, are not. The qualia are thus, in that they exist at all, simply place-holding symbols, internal artifacts of the process.
To say that we are conscious because we are actually elaborate biological computers explains very little, given that no known computer needs to be conscious to operate. If synaptic wiring and chemistry gets the job done, why does consciousness bother to exist at all, especially if it is not to be regarded as having any causal powers going beyond the neurological "computer" and the "program" it is running? If all that exists is material causation, then why, precisely, are we not unconscious zombies?
I don't at all see that Bradford is making a foolish point. And he also makes another valid point: even if (and it's a mighty big if) the atheist materialists have gotten it right concerning Darwinian evolution, they've still got nothin', due to the problem of consciousness.
Anonymous:
Chalmers is quite confused, too. To see why, read Dennett.
@Matteo: Consciousness is sort of, in a way, like the random access memory in a computer. All the variables that are 'happening' with and are immediately necessary to the function of a program are stored there for immediate, instant retrieval. Of course it is a crude example, and computers don't exactly have consciousness, but as long as we are using analogies...
Something you seem to disregard is the fact that the essence of consciousness is nothing more than 'awareness,' and evolutionary biology explains the necessity of awareness quite easily. Certainly, if we could design a computer with awareness similar to our own, being able to react to the range of stimuli we can, it would qualify as being conscious.
Your question of why we aren't just unconscious zombies is quite misleading. Think about, again, a computer which is able to perform complex calculations at enormous speeds but is in fact not aware. The computer has no survival power. If a modern-day computer-type mind were put into a life form, it would be quite dysfunctional due to its utter lack of awareness.
An unconscious zombie of a human being without the awareness of the full range of stimuli that we are aware of would have a much more difficult time of survival (and probably wouldn't survive at all), while any 'unconscious zombie' that had the awareness to react to certain situations in manners necessary for its survival would indeed be conscious.
xkcd.com/659
I saw a creationist debating someone a while back. He made the annoying argument/challenge to the atheist about how the brain/emotions worked: "Is it the carbon atom? the benzene molecule?..."
Unfortunately the atheist ignored the point, but I would have responded with a similar question: "Your question displays your ignorance. Let me paraphrase it for the audience to ensure they understand as well. How does a car work? Is it the iron atom? The gasoline molecule? How does a computer work? Is it the silicon atom? If you can't understand how those work, how could I explain the brain, other than by pointing to similar biochemical and physical processes?"
RE Anon... "Reductionism doesn't work on informational systems. You can take a computer apart, see where the data goes in, the processor, the active memory, and the data storage... but that won't tell you the software that it's running."
Being a lowly BS CS, I defer to the resident PhD for any corrections of my following statements and I pulled out Sipser to hopefully ensure that I don't totally donkify my reply...
Computational theory deals with languages. For any regular language there exists a finite state machine (FSM) that recognizes that language. The modern computer is basically an enhanced FSM called a Turing machine (TM), which differs from an FSM because the TM has an unbounded read/write tape (memory) and explicit accept/reject states. All computer systems are fundamentally reducible to a TM, which is why it is the basis for proofs and why computer architecture hasn't radically changed beyond adding varying interfaces or storage since Von Neumann; therefore, hardware can undergo reduction.
Furthermore, all "software" simply defines the states of a TM, including which states transition to accept states. Theoretical programs/languages can be proven by reduction through an inductive step.
It follows that both hardware and software are reducible.
If computational theory holds, then the hardware of the brain should be reducible to a TM, at least a nondeterministic one, and all that is stored and processed should be reducible to at least a Turing-recognizable language, if indeed the language of the brain is enumerable.
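To make the FSM half of this concrete, here is a minimal sketch in Python (mine, purely illustrative): the "software" is nothing but a transition table plus a set of accept states. A TM would add the unbounded read/write tape, but the reduction is the same in spirit.

```python
# A DFA recognizing the regular language of binary strings with an even
# number of 1s. The "software" is just the transition table + accept states.

def run_dfa(transitions, start, accept, tape):
    """Drive a deterministic finite state machine over the input string."""
    state = start
    for symbol in tape:
        state = transitions[(state, symbol)]
    return state in accept

even_ones = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd", "0"): "odd",   ("odd", "1"): "even",
}

print(run_dfa(even_ones, "even", {"even"}, "1001"))   # True  (two 1s)
print(run_dfa(even_ones, "even", {"even"}, "10110"))  # False (three 1s)
```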
Will these reductions satisfy us? Probably not, as determining why is a much more challenging question to answer than determining how. The why question is often where our society fractures along the lines between informed understanding and naive belief.
Here's another argument I think is fundamentally flawed, the "Chinese Room". The idea behind it seems to be addressing the big issue with the "p-zombie": actually trying to imagine in detail how a p-zombie could at least potentially function.
I'll break it down in one of the ways I heard it. Imagine a giant robot that acts like a human in all ways on the outside; you can't tell the difference, except it's a giant robot. Inside, there's a "box" containing a single person, a book, and however many lights, levers, and buttons are needed to make this analogy work. This person receives numerous signals, and has to interpret which buttons and levers to use depending on which signal s/he sees. This person is entirely responsible for how the robot reacts to the outside. This person has no idea that they are even IN a robot, nor what any of the signals "mean" (they're input from outside) or what the levers and buttons "mean" (output to the robot). It's all written in the book. The person sees the input, looks up that specific input in the book (which contains responses to all possible inputs), and then the book tells the person which buttons and levers to use.
The idea here is apparently to demonstrate that conceptually a p-zombie CAN exist. I'd like to tear apart this idea.
Namely, everything I know about the modern state of AI, which is admittedly limited, suggests that this method will NEVER produce a robot that will react just like a person. I've seen a number of "chat bots" that all use basically the same method, building a huge database of responses mapped to a huge database of possible inputs to simulate human speech. Not a single one is very convincing for long. That's because this method lacks one very important thing: memory of the context of an input. Someone saying exactly the same thing, in exactly the same tone, in exactly the same visual background, can mean something completely different depending on what happened before that person said it, and no matter how large your database, you'll never be able to generate appropriate context-based responses because of that.
To put it simply, not a single chat bot I've ever talked to has ever been able to answer the question "What did I just say?", and I don't think any Chinese Room will ever be able to answer it either.
Now let's assume someone tries to "fix" the Chinese Room by giving it a sort of internal memory. The person has instructions to write down inputs and start trying out different outputs until they get a certain sort of response, in which case that sort of response is kept for future use. The person even keeps a short log of previous inputs so they can use the context of those inputs to generate a certain kind of response. The person still lacks any knowledge of what the "outside" is doing or what the outputs are sending, just the general instructions of working towards specific inputs and keeping logs of what tends towards those inputs best.
Even in that modified scenario, which I can actually say might even produce a robot that acts just like a human, I'd say you have created a system that's fully self-aware. All the functions of neurons have simply been replaced with carbon and paper acting through a person stuck in there, basically doing a job that a machine could probably do just as well. Any hangups you have about consciousness "emerging from" a large process of reading books and pulling levers are silly; it's no different in principle than it emerging from electrical pathways in robots or neural interactions in humans. In other words, even when a Chinese Room is modified so that it could actually WORK, the result will be a fully aware robot, not a p-zombie robot.
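For what it's worth, the contrast drawn here can be sketched in a few lines of Python (the responses and the memory scheme are invented for illustration): the stateless "book" cannot answer anything that depends on conversational history, while even a crude log of prior inputs can.

```python
# Stateless "book": inputs map straight to outputs, with no history.
BOOK = {
    "hello": "Hi there!",
    "how are you?": "Fine, thanks.",
}

def stateless_bot(utterance):
    return BOOK.get(utterance.lower(), "I don't understand.")

# The "fix": keep a log of prior inputs, i.e. minimal context.
class StatefulBot:
    def __init__(self):
        self.log = []  # everything heard so far

    def reply(self, utterance):
        u = utterance.lower()
        if u == "what did i just say?":
            answer = f'You said: "{self.log[-1]}"' if self.log else "Nothing yet."
        else:
            answer = BOOK.get(u, "I don't understand.")
        self.log.append(utterance)
        return answer

print(stateless_bot("What did I just say?"))  # I don't understand.
bot = StatefulBot()
print(bot.reply("hello"))                     # Hi there!
print(bot.reply("What did I just say?"))      # You said: "hello"
```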
Those are honestly the very BEST arguments I've heard for dualism, and really, they aren't all that good.
My big issue with "p-zombies" is that they assume their own conclusion.
When someone says "imagine a person who on the outside behaves and reacts just like anyone else, but on the inside completely lacks awareness, qualia, an internal monologue, and so on", they are really saying "imagine that my assumption that reductionism can't explain consciousness, qualia, etc., is true". Well, when you make that sort of assumption IN your thought experiment, that's the only conclusion you CAN reach. So it's fundamentally flawed.
The basic problem with it is that it's easy to imagine something when you ignore the little details, but since this is an argument ABOUT endless small details, you need them. How does this p-zombie work? What processes give rise to it being able to act in the world just like anyone else without us being able to tell the difference, and how do these processes not create an internal experience of awareness?
The reductionist naturalistic explanation is saying, more or less, that once you build something that CAN act like a human does, it can't HELP but be qualitatively aware of itself internally; that awareness is a NECESSARY part of those processes. So far, the science I've read seems to point exactly to that. In other words, p-zombies simply aren't possible, not just in reality but even conceptually. Those who say "but I can imagine it" just haven't imagined it in enough detail to realize they haven't really conceived of anything solid, just like someone who says they can imagine "a god that's both omniscient and omnibenevolent still allowing people to suffer" hasn't given it enough thought to realize how little thought they've actually given it.
(To be continued)
Why did the robot cross the road?
Your perception of things like color is not fixed, as demonstrated by this: http://www.youtube.com/watch?v=Yr-QtNE9k84
I think that consciousness is a matter of recursion; my mental model of the world includes a model of myself as an entity with internal states and a mental model of the world which includes a model of myself as....
As such it's the same sort of mental capacity that's needed to model others as entities with internal states- a useful trait for a social creature- just turned back upon ourselves.
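A toy sketch of that recursion in Python (the structure is entirely invented for illustration): the agent's world model contains an entry for the agent itself, pointing back at the same model, which yields the "model of myself modelling myself..." regress without infinite storage.

```python
class Agent:
    def __init__(self, name):
        self.name = name
        self.world_model = {}          # entities -> models of their states
        self.world_model[name] = self  # the recursive step: I model myself

    def unfold(self, depth=3):
        """Unroll the self-reference a few levels deep."""
        if depth == 0:
            return "..."
        return {"self": self.unfold(depth - 1)}

me = Agent("me")
print(me.unfold())  # {'self': {'self': {'self': '...'}}}
```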
People asking how this can possibly lead to feelings need to explain why it shouldn't. Darwin asked: why should thought, being a secretion of brain, seem more marvellous than gravity being a property of matter? Worth pondering on.
Even studying a computer is difficult enough. Let's say you have a chip with 80 wires protruding - can we determine the internal operating instructions by looking at each line and the signals it produces? Absolutely not; we'd have to know much more, and be able to look at the substrate and all the metallized components, to get some idea of what's going on - and this reverse engineering is a far more difficult task than designing your own chip. But even in the case where we only see the wires and know nothing about what is inside that polymer casing, we may be unable to determine how the device works, and yet we know it does, and since it's a human artifact we even know that the device has no intelligence (though in this case it did have a designer).
I enjoyed reading Helen Fisher's book, Why We Love, about the biochemistry of emotion. Maybe I liked it because I'm a chemist but it also made sense to me in terms of my life, age and experiences.
When two people in love are said to have "good chemistry" it's the literal truth!
The progression from infatuation to ardor to attachment to appreciation and all the shades in between are understandable in terms of brain chemistry. It doesn't diminish the emotion for me knowing that "emotion" is a balance of chemicals.
I'm quite pleased to even be here to appreciate it!
Aside from the problem of why it feels like anything to be us, there's also the problem that nobody actually understands how the brain solves hard problems, such as understanding text. Until we really understand how that works, in terms of 'just machinery', we can't really be sure that that's all we are.
Jeffrey Shallit: "How can anyone, writing a post on a blog which is entirely mediated by things like electrons in wires or magnetic disk storage, nevertheless ask "How can a chemical process or an electrical potential have content or be about something?" The irony is really mind-boggling."
There is no irony here; you don't seem to get the question. Things like blog postings don't have any meaning on their own; this meaning is only created by human consciousness interacting with them. The same pattern can have very different meanings to different people; it's all a matter of training. The question cited is concerned with how these conscious interpretations arise from the physics and chemistry of the human brain. So far we have no answer.
Consciousness isn't all it's cracked up to be. All the really good things our brains do are the province of the unconscious parts. Ever try to learn to do anything hard, like driving a car? When consciousness does it, it's terrible. Only when it moves to being unconscious do we get proficient.
The one thing consciousness is really good at is telling stories. The brain makes up a fictional character called "I" that can do all these things, like drive a car, which is very very useful for communicating with other brains. The problem is thinking that consciousness is the most important part of the story. It's just the narrator.
Had the woman you mentioned at the beginning of your post ever seen a Spirograph toy, or a tool or clothing manufacturing plant? Maybe analogies would have helped her.
Reading this, I remember how a long time ago, I was astounded that the Big Bang happened before there were any living things, any brains, any consciousness. I remember that I would sit and contemplate this, and it would blow my mind. I suppose I was having the same difficulty that this woman was having, and I cannot recapture the odd feeling I got when I realized that there was no design (it was not called "design" back then) to the cosmos. Now, of course, it does not seem strange at all.
Asking how chemical reactions can make us appreciate beauty is trickier, for these people are engaging in Neo-Platonic thinking, assuming that "beauty" exists Out There Somewhere, and that humans are "smart" enough to perceive it. What is a gorilla's perception of beauty, if any; what do all our art and music amount to when we pollute the earth and talk only to ourselves? I have stopped referring to art and music as "higher" expressions for this reason.
as they say, 'If the brain were so simple that we could understand it, then we would be so simple that we couldn't'
we will never explain consciousness through reductionism, even if we succeed in mimicking it... if reductionism were true, then the 'belief' in reductionism would itself be a pure result of that reductionism, and thus merely inevitable and unprovable.
Here's an analogy for those who insist that consciousness is unexplainable by brains.
I can imagine myself being approached by someone who asks where "Windows" is on my computer. I might point to the hard drive (where the program is stored), or the RAM (where active instructions are stored and can be quickly modified), or the processor (which actually does the processing of all the instructions), all of them being parts of the overall computer.
Imagine this person declares that I'm just saying Windows is "in the computer" but that I can't point to any single part of the computer and say it's a "particle of windows-ness", and I can't. I'd have to explain that what the computer uses as "Windows" is an emergent property from a series of very specific processes the computer runs.
This person may still refuse to understand that, still insisting that there is no single portion I can point to and say "THAT is a particle of Windows" and thus the idea that Windows is just a part of the computer is silly. There has to be some other strange higher "Windowsness" that the computer taps into.
If you reject this as an analogy, why? I specifically avoided talking about consciousness at all because that's the best way to make this clear. There are PLENTY of emergent properties besides consciousness that are purely the result of processes. I'm not saying it's just a matter of making it complicated enough; it's gotta be specifically set up that way.
If you acknowledge that it's silly to consider Windows-ness inexplicable, being purely a process emergent from the running of a computer (and it has to be the running of it; no one considers the unused memory of a computer that's off to be an active program), then why do you consider it inexplicable for consciousness to be simply the way our brains run? It's the same thing, except that computers, whatever level of "awareness" they may have, aren't nearly up to our level yet.
Zombie lovers really gotta read Dennett. He has this bit about Zimbos that shows exactly what is so absurd about the idea.
Anyway, for those really concerned about how amazing it is to experience things, ask yourself how much more substantial your experiences are than those of a chimp. Seriously, the chimp sees, hears, smells, tastes, remembers, loves, hates. OK, how much more substantial are the chimp's experiences than those of most monkeys...you can take this argument all the way down. You'll lose a little bit of capacity for phenomenological experience every step down you take.
The really special thing about humans as far as I can tell is language, which allows them to ask stupid questions like "how come I feel things about stuff" (seriously, the zombie thing is pretty inane, for reasons already mentioned in this thread). You see the color green so that you can differentiate it from the color red so that you can tell the fruit from the tree. You feel things about stuff because otherwise you would just lie down and die instead of trying to go out and have kids (which is what your genes are trying to do even if you're not cooperating). No quantum fields required to explain any of this.
-Dan L.
Extending my previous comment: we can't now know that there's no 'ghost in the machine', because we don't now know how the 'human machine' manages to do what it does (such as use language in ways that are 'free, but appropriate to circumstances', as Chomsky has put it).
But the idea that there is such a 'ghost', a.k.a. a causally efficacious nonphysical 'soul', is not scientifically useful, because it's just a way of saying that we will never understand how the human machine works; as Chomsky has also pointed out, if we had a theory that provided such an understanding, it would simply be seen as part of physics (perhaps a very new part of physics, but still just more physics).
We are obviously a very long way from either understanding how we can do the things we do, or having any good reason to think that we cannot attain such understanding. The latter situation might hold, for example, if we'd accumulated a thorough and clearly well-founded knowledge of how some part of the brain works in terms of the physical science of the day, but that brain part still managed to come up with better solutions to certain kinds of problems more often than it ought to be able to on the basis of what we know about it, and we'd been staring at this problem for a few centuries and gotten nowhere with it, as if we were trying to understand the actions of a player in a computer game on the basis of our knowledge of the game engine (including bot code).
The idea that there's a ghost in the human machine is an example of a semi-theory that is empirically falsifiable in the Popperian sense (you can falsify it by producing a theory that actually explains all the stuff that people do), but scientifically useless, because its only substantive claim is that you can't make any systematic predictions about the phenomena that it's a theory about. Which is the same problem that ID has.
There is no irony here; you don't seem to get the question. Things like blog postings don't have any meaning on their own; this meaning is only created by human consciousness interacting with them. The same pattern can have very different meanings to different people; it's all a matter of training. The question cited is concerned with how these conscious interpretations arise from the physics and chemistry of the human brain. So far we have no answer.
Mantis, part of the point is the implicit and unjustified reification going on in these questions. Asking where in the brain or what brain structures are responsible for red or sweet is very much like asking where in a computer drawing or image processing takes place. The questions themselves are structured in such a way that they contain implicit assumptions that are dubious (e.g. that red or image processing have a location) and will only accept a constrained answer, where such a constrained answer involves casting doubt on neuroscience's ability to explain conscious experiences of red or sweet, rather than casting doubt on our conceptualization of the experiences of red or sweet and whether they really are what we think they are (for instance, the existence of color illusions seriously suggests the possibility that we may need to re-evaluate and re-conceptualize what we think are our qualia of color). The latter is a perfectly acceptable way to approach this issue, but these sorts of "material, physical science about neurons is disconnected from abstract categories like red and sweet" questions simply rule out this approach a priori.
we will never explain consciousness through reductionism, even if we succeed in mimicking it... if reductionism were true, then the 'belief' in reductionism would itself be a pure result of that reductionism, and thus merely inevitable and unprovable.
This makes no sense. Are you confusing reductionism with determinism? Even if you did, it still wouldn't follow that the belief in it is inevitable, as someone could still be fated to disbelieve. And unprovability is a total non-sequitur either way.
"There is no ghost in the machine, there are simply systems relying on chemistry and physics. That's it."
Despite all your wisdom, your language betrays that you too are plagued by the dualist instinct. We all are.
These people are so frustrating not because they don't get it, but because we're helping them not get it by subtly agreeing with them. With every word we support their assumption that ghosts are better than machines.
Why would we call the mind "simply" a chemical process? As far as subjective descriptions go, nothing could be more horribly contrary to reality. The complexity of chemistry in a person is staggering. See? There, I did it. Why would I say, "The complexity of chemistry in a person," as if there's something other than chemistry? Why not, "The complexity of a person"?
Step 1 in our argument should be to prohibit the words "simple," "merely," "just," "only," "that's it," etc. They are literally, diametrically, wrong.
The human soul, and the laws that govern it, are both amazing. One isn't an insult to the other. Quite the opposite. But if we can establish that, there's still one little problem. Even with all that complexity, chemistry is still ultimately limited. At first thought, that shouldn't be a problem for people. Life is one long (or short) story of limits. We should be used to it. But maybe we're not. Maybe we've been clinging to the hope all along that we are infinite in at least one way. For the simplest example, ghosts are immortal. They are infinite in the time dimension. A lot of people are too preoccupied with being perfect to care for any theory about how they're good.
I commend to your attention this article from Seed. A man suffered strokes that wiped out his visual cortex, rendering him blind. His eyes and optic nerves remained functional. It was purportedly discovered, however, that his brain was still able to process "visual" stimuli, to the point where he was able to successfully navigate an ad hoc obstacle course without being able to "see" what he was doing.
The suggestion is, of course, that his brain is processing optic stimuli in a way that (a) is useful and (b) doesn't involve the type of "qualia" that we think of as de rigueur in the processing of optic phenomena.
The man is "seeing" without being conscious of it.
How do chemical reactions generate things like beliefs, doubts, regrets, certainty, or purposes?
Shouldn't memories also be included in this list? I find it curious that memories are always excluded. Memories fade, which is exactly what one would expect based on chemical reactions in the brain. Memories are destroyed as Alzheimer's destroys the brain.
You'll note some very important things about the guy whose subconscious systems can process sight though his consciousness isn't aware of it. That is, while he can unconsciously avoid obstacles, he can't do anything NOVEL with his sight input. He can't judge a piece of art, he can't decide which of two roads to take; he can only blindly react to things like walls, and do so in pre-designated ways.
That's your big proof of concept for p-zombies?
@Dark Jaguar,
I didn't notice that Jim's post addressed the p-zombie question directly, either for or against. He just said it was pretty neat.
FWIW, I think it's a great argument _against_ the zombie. Basically, the article is saying that the subject's conscious experience of seeing has disappeared because the brain regions that handled it are gone or damaged.
There are some pretty cool implications of this, not least of which is the notion that consciousness can be modular, i.e., "seeing consciousness" alongside "hearing consciousness" and "tasting consciousness". Another is the idea that these modules can, at least in part, be mapped to specific regions of the brain.
Fascinating article, though I don't think that it necessarily demonstrates what I think you are trying to demonstrate.
Part of the problem may lie in perception--we are subjective creatures, after all. We certainly don't feel like an emergent property of interlinked biochemical systems in our gray matter; we feel like a singular conscious being, and so that remains our intuitive explanation for who we are.
At this point I can only echo the calls of Shallit and others for those interested in the concepts discussed here to read Dennett. He devotes much effort to explaining and analysing the cartesian model, zombies, qualia and much else (including blindsight and many other truly astounding real life examples). Dennett attempts to discuss these concepts in a rather more rigorous way than would be possible here, but he also keeps the presentation simple enough for the non-expert.
I did read Dennett, albeit some time ago, and he seemed to me to be doing exactly what Chalmers says he does, namely, simply ignoring the existence of qualia. Which, for scientific purposes, might well be the best thing to do, since the same qualia can't be observed independently by different people--only reports about them produced by narrators. But that doesn't dissolve the puzzle surrounding them at all, it seems to me.
The whole is always more than the sum of its parts. That applies to everything, from atoms to elephants.
Problem solved.
Avery, from my point of view the problem isn't so much that no independent observation can be made of qualia, so much as that even I myself do not know what it is I am supposed to be experiencing as qualia.
Obviously when I see red I get a signal, and I am even able to synthesize an equivalent signal in my mind when I imagine the color red. So far so good, it's nothing more complex in principle than I could express economically in three or four lines of Lisp.
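The commenter says Lisp; here's an equivalent few-line sketch in Python (entirely illustrative, my construction): "seeing red" delivers a signal, and "imagining red" synthesizes the same internal signal, with nothing extra left over.

```python
def see(wavelength_nm):
    """Perception: an external stimulus produces an internal signal."""
    return "RED" if 620 <= wavelength_nm <= 750 else "NOT-RED"

def imagine(color_name):
    """Imagination: synthesize the equivalent internal signal directly."""
    return color_name.upper()

print(see(650) == imagine("red"))  # True: the same signal either way
```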
So where are these qualia which are supposed to have mysterious properties inexplicable by fairly simple computational models of consciousness?
If I'm experiencing anything other than the signals I would expect to experience as a zombie capable of emulating any other philosopher's complex notion of how the mind works and consistent with the reports of other beings who claim not to be zombies, I don't think it's apparent.
So how can I tell I'm not a zombie? How can I tell whether I am or am not experiencing qualia?
They're made out of meat
My point in bringing up the case of the blind man was to illustrate the idea that we are continually discovering surprising things about how the brain works. No big news there, but dualists are invoking god-of-the-gaps arguments: I can't explain x, therefore x will never be explained without recourse to a supernatural cause. The fact that we have no current explanations is not convincing evidence of jebus.
Avery Amdres: I did read Dennett, albeit some time ago, and he seemed to me to be doing exactly what Chalmers says he does, namely, simply ignoring the existence of qualia. ... But that doesn't dissolve the puzzle surrounding them at all, it seems to me.
Sure, and chemistry doesn't dissolve the puzzle of phlogiston, and modern physics doesn't dissolve the puzzle of the ether. I think we should give neuroscience a good run, and in half a century or so we might look back and figure out that the medieval concept of qualia was simply not the right question to ask.
Brian Lynchehaun: "Because the neuron is triggered that creates the "I just felt x" process. Much like when your left hand is touched, the 'touch' nerve sends a signal to your sensorimotor cortex. *That* neuron is just 'touch', not a location. But the firing of that neuron triggers a second neuron that says "source location is left hand"."
I think there is still something left out of this story. I'm not exactly sure what it is, and it might go the way of elan vital, but I'm still left with wondering how that could feel like something, as opposed to simply causing me to retract my hand. There is a mechanism here that neurophysiology will eventually address or render irrelevant.
~~ Paul
@gregory
Hey, your last verse is missing a syllable :P
I did read Dennett, albeit some time ago, and he seemed to me to be doing exactly what Chalmers says he does, namely, simply ignoring the existence of qualia. ... But that doesn't dissolve the puzzle surrounding them at all, it seems to me.
Read "Quining Qualia" specifically. Qualia seem important, but as Dennett points out, the whole concept is so poorly defined as to prevent coherent analysis.
A big part of this problem, I think, is that we can't put the qualia themselves into words; we're always describing qualia, and the descriptions are therefore reducible in a way that the qualia aren't. For instance, Glen mentions "the color green," as a quale -- but it's not. It embodies two concepts, "green" and "color" that many human beings have no access to -- the Pirahã have no word for "color" (they use analogies to indicate hue), and there are several hunter-gatherer tribes where "green" is lumped in with what Westerners would recognize as several different colors.
This is ignoring the fact that color representations are different person to person -- red green colorblindness presenting one obvious complication -- and that color perception in an individual isn't fixed -- you can trick people into seeing or missing various colors, or misinterpreting shades or hues depending on the context.
Finally, I've been wanting to point out that without some degree of color vision, the visual field would be incredibly difficult to interpret; objects of similar brightness would look identical, and textures would be very hard to determine. Black and white vision is not really an option; as far as I know, there is no mammal with fewer than two color receptors, meaning that all mammals have some degree of color differentiation. Many birds have four color receptors, meaning they can differentiate more colors than human beings. Think about that for a second, that the visual field of, say, a blue egret is much more detailed and vibrant than your own and then consider what this means for the concept of "qualia."
To keep it short, I think a big part of the problem is that people have trouble separating the sensory stimulus and their conscious, linguistic response to that stimulus. The conflation of the stimulus with the response is a "quale" -- a fiction composed of a category error and an identity error.
--Dan L.
Glen mentions "the color green," as a quale -- but it's not.
Totally wrong. I didn't even use the terms "quale" or "qualia" at all. I could have, since I'm not hung up on the category nonsense that you and Dennett are. I know full well that words are slippery (indeed, the same day I was mentioning at Pharyngula the difficulty in splitting off "qualia" from other mental categories), but then the slipperiness is probably partly why I avoided it, in fact.
Doesn't keep you from getting it totally wrong, of course. It doesn't surprise me that someone who argues from Dennett's supposed "authority," along with bizarre notions about what a "quale" means, would project onto someone else his own lack of understanding, thereby highlighting his lack of attention and concern to get another's statements correct.
It embodies two concepts, "green" and "color"
No, it does not, or at least it need not. Most people who refer to "qualia" indeed are speaking of what is qualitatively experienced when they use those terms, and are not referring to concepts such as "green" and "color." If Dennett fed you that nonsense, he quite badly misled you, but then that wouldn't be a first for him.
that many human beings have no access to -- the Pirahã have no word for "color" (they use analogies to indicate hue), and there are several hunter-gatherer tribes where "green" is lumped in with what Westerners would recognize as several different colors.
Oh please, color relativism yet again? What has that to do with the qualitative experience of seeing, really? Conceptually, of course it does, but colors are used for distinguishing objects whether they have the same "name" or not. Only someone who mistakes concepts and basic perception could suppose that linguistic color relativism is a serious problem (I think that "qualia" are problematic at a deeper level, in fact).
To keep it short, I think a big part of the problem is that people have trouble separating the sensory stimulus and their conscious, linguistic response to that stimulus.
Well you do, plus you have trouble with reading comprehension.
Glen Davidson
When I first saw something about qualia and zombies, my reaction was, you can't be serious! A psychological theory based on the imaginary characteristics of a fictional creature? People actually get paid for such stuff?
I guess I'll have to read Dennett so I'm not just the reverse of the lady who didn't understand computer programs, although the previous commenters have fleshed out my intuitive reaction fairly well.
What makes it hard for me to accept dualism is 38 years of mechanical engineering, designing turbines. Design, as it is actually practiced by humans, is an evolutionary process, driven by random permutations of ideas until things seem to fit together. The fact that it is the result of millions of neurons firing unconsciously (since we have no nerves which monitor what our neurons are doing) gives it a semblance of something magical, just like when I move my hand it seems to be an act of pure will rather than nerve impulses and coordinated muscle twitches.
I recently spent almost a year finding a proof for Fermat's Prime Theorem. I did it by looking at thousands of cases (in spreadsheets), finding patterns, testing them, describing them in equations and juggling the equations algebraically. It all seemed like a mechanical process (with some random elements) to me.
"Read "Quining Qualia" specifically. Qualia seem important, but as Dennett points out, the whole concept is so poorly defined as to prevent coherent analysis. "
Unconvinced so far, largely because Dennett seems to think that qualia need to fit into a sensible theory, rather than being just an observation (in a sense, the most basic observation), and a puzzling one at that.
I think they're important because of their connection to ethics: the laws that most civilized countries have about what it's acceptable to do to what kinds of animals are largely based on judgements about what kinds of qualia (especially painful ones) those animals probably feel.
And while we can't know much about what other organisms' qualia feel like to them, we can draw credible conclusions to the effect that creatures with similar internal structure to ours, showing similar reactions to similar things, are probably feeling similar things as well.
The justifying assumption being that qualia are probably a side-effect of certain kinds of physical processes.