Well, the first episode of 'Watson' on Jeopardy was shown last night. I didn't see it live, but luckily it's available on Youtube, at least for a little while.
It's a great achievement. Question-answering systems are a hot topic now - my colleague Ming Li, for example, has created such a system, based on word associations it finds on the Internet. But Watson is much better than anything I've seen before. A system like Watson will be extremely useful for researchers and libraries. Instead of having to staff general inquiry telephone lines with a person, libraries can use a system like Watson to answer questions of patrons. And, of course, there will be applications like medical diagnoses and computer tech support, too.
I predict, however, that the reaction to Watson will be largely hostile, especially from Mysterian philosophers (like Chalmers), strong AI skeptics (like the Dreyfus brothers), and hardcore conservative theists firmly committed to the special status of humans (like David Gelernter). We'll also hear naysaying from jealous engineers (like this letter from Llewellyn C. Wall, who earns my nomination for Jerk of the Week).
Despite its impressive performance, we're going to hear lots of claims that Watson "doesn't really think". Critics will point gleefully to Watson stumbling on an answer, replying "finis" when the correct response was "terminus" or "terminal" -- as if humans never make a mistake on Jeopardy. We're going to hear columnists stating "But Watson can't smell a rose or compose a poem" - as if that is a cogent criticism of a system designed to answer questions.
I predict none of these naysayers will deal with the real issue: in what essential way does Watson really differ from the way people think? People make associations, too, and answer questions based on sometimes tenuous connections. Vague assertions like "Watson doesn't really think" or "Watson has no mental model of the world" or "Watson is just playing word games" aren't going to cut it, unless critics can come up with a really rigorous formulation of their argument.
Watson is just another nail in the coffin of Strong AI deniers like Dreyfus - even if they don't realize it yet.
Addendum: Ah, I see the moronic critiques are already dribbling in. From Gordon Haff we get typical boilerplate: "Watson is in no real sense thinking and the use of the term 'understanding' in the context of Watson should be taken as anthropomorphism rather than a literal description." But without a formal definition of what it means to "think" in a "real sense", Haff's claim is just so much chin music.
45 comments:
I don't see how Mr Wall's letter is anything to get upset about, and there does not seem to be any information about his occupation or education. The guy is simply not particularly impressed by the accomplishment.
I think he has a point. I do hope that the programming developed for Watson will have other applications, but I'm not sure it represents a breakthrough of any kind. Watson's performance, although impressive by relative standards, involves a lot of... special pleading. Watson is a very expensive, extremely specialized one-trick pony, a concept car which hopefully will, but may never, lead to any kind of actual prototype.
AI is a very tough nut to crack, and I suppose we should be celebrating minor victories, but, on the other hand, I don't think that Mr Wall's expression of impatience deserves such vituperation.
I share his temperament. Truly useful AI robotics have the potential to completely revolutionize society. Like many exciting proto-technologies, there is plenty of sizzle and no steak.
Where is my affordable solar cell, my paint-on solar collector? Where is my carbon nanotube capacitor-battery, my hydrogen fuel cell, my liquid glass?
I too have become cynical - show me some real beef.
I think the interesting implication is that human Jeopardy champions must have *at least* the same amount of storage that Watson does (I don't have that number at hand at the moment) in order to make the same connections, and to make those connections quickly.
Not to take anything away from Watson, but it does show how amazing our brains are.
Well, Ginger, I think it's not a one-trick pony at all. Question answering is really, really tough, and a good system will have all the applications I gave - and more.
"Well, Ginger, I think it's not a one-trick pony at all. Question answering is really, really tough, and a good system will have all the applications I gave - and more."
Have you seen the documentary about Watson? IBM basically admits it is a one trick pony. The machine has had many millions of lines of facts from Jeopardy and syllogisms about humans programmed into it. The programming was refined many times specifically for Jeopardy by repeated mock shows.
I am no computer expert, but I think the question is whether IBM has developed programming that allows Watson to 'think' like a human, or merely to answer Jeopardy questions.
As I understand it, Watson could not be useful to a library for answering questions from patrons. It would require huge amounts of library-specific information and human-library contextual associations to be entered into its programming.
On the other hand, Watson does have some capacity to 'learn' from its mistakes, which is promising. But its higher cognitive architecture - how it resolves the meaning of a question through context - is not necessarily applicable outside of Jeopardy.
But take all I say with a large grain of salt - I know next to nothing about AI and computers.
I caught the last 10 minutes and was completely amazed at Watson's performance.
The negative reactions to what really is a great achievement remind me of the comedian Louis CK's routine "Everything is amazing and nobody's happy".
I tend to think the type of AI I'm looking for is one that can generate new insights.
Watson synthesizes answers out of existing data but provides no insight much the same as Deep Blue crunched positions and analyzed them based on heuristics generated by humans. Deep Blue provided no insight into chess and how it should be played. Mathematical proofs by brute force analysis/construction similarly provide no insight.
I do think systems like Watson are very useful and powerful as tools to pull relevant bits out of ever increasing amounts of data, but should not be the end goal of AI.
Are there some recent, accessible papers/articles by Ming Li you would recommend?
Have you seen the documentary about Watson? IBM basically admits it is a one trick pony.
Yep, I saw it, and I came away with the opposite conclusion. And if you view all the Youtube videos by IBM, they are certainly claiming this technology will have general applications in the fields I mentioned and others I hadn't thought of.
Here is Ming Li's paper:
http://portal.acm.org/citation.cfm?doid=1281192.1281285
If you can't access it, send me e-mail and I'll send you a copy.
"I tend to think the type of AI I'm looking for is one that can generate new insights."
Wasn't there an AI system that could do geometry proofs? Does that count?
I want to see Watson programmed with medical diagnostic knowledge and go up against House. 8^)
Does Watson do speech recognition, or does it receive the questions typed? This is not explained in most news stories, although I found one story saying Watson receives the typed question at the same time as the human competitors hear it. If so, timing would be tricky, I would think. Does Watson get the whole question when Trebek starts speaking, or when he finishes, or some time in the middle?
How well do you think a computer could do with free use of google? Or for that matter, a human? I suppose it would be hard for a human to use google fast enough.
It receives the questions typed as text.
Jeffrey Shallit said...
It receives the questions typed as text.
So that's why the hosts (on the mock games) openly insult it. I don't have verbatim quotes, but some of the hosts' comments are along the lines of "that answer isn't even close" or "that was terrible." (Again, I'm talking about the training runs here, not the actual Jeopardy! episodes.)
Are there skeptics of strong AI that are also strict physicalists?
I personally have a hard time imagining a good reason to doubt our ability to build human-equivalent intelligence, but it seems like it'd be a lot easier to doubt if I were a dualist, or had any credulity for spiritualist/supernatural/mystical/etc. beliefs, which I understand most people (sadly) do.
" A system like Watson will be extremely useful for researchers and libraries. Instead of having to staff general inquiry telephone lines with a person, libraries can use a system like Watson to answer questions of patrons. And, of course, there will be applications like medical diagnoses and computer tech support, too."
I can hardly imagine that, in the light of Google, libraries receive general inquiries any more.
In regard to the last point, it's my experience that tech support and many other services offered by companies use layer upon layer of recorded messages to discourage any attempt to use the services offered (and increasingly it seems chat-based on-line help is a computer pretending to be a person, and they generally don't do anything to help you either). It's hard to conceive how this could be a good development. More likely it will be another barrier between you and real help.
"I can hardly imagine that, in the light of Google, libraries any longer receive general inquiries any more."
Try going to a library and asking them. You'll be surprised, I guarantee it.
Arguments based on a lack of imagination aren't particularly effective.
Watson is a wonderful marketing stunt, and the Jeopardy shows are basically IBM infomercials that tell you only what IBM wants you to hear, without the annoyance of having to deal with competing systems that have the unpleasant habit of beating IBM's tech in scientific contests. Watson is a nice demo of IBM's technology, but it is certainly no better than other NLP work on the same topic. IBM does okay in QA competitions, but they typically aren't the best. Indeed, cynical researchers --- or good marketers --- see this as a great way for IBM to make its particular flavour of QA tech front and center in the public's mind. It's a lot better than IBM advertising "top 10 finisher in QA contests"!
It's also disingenuous of IBM to talk about Watson in the same breath as "grand challenge" problems. That's pure marketing. People who are amazed by Watson have perhaps not seen many other QA systems in action.
Compare Watson to Deep Blue: at the time, most experts really did consider Deep Blue the best chess-playing program. It had beaten all its computer rivals, and then beat the world champion. In contrast, Watson has no computer rivals because Jeopardy is a made-up marketing challenge. If you just consider the QA tech, then IBM's work is fine, but certainly not head-and-shoulders above its rivals in the way Deep Blue was.
That said, the tech underlying Watson is certainly much more useful than that underlying Deep Blue, and so far more likely to spin-off useful products.
``In what essential way does Watson really differ from the way people think?''
Watson, like most current state-of-the-art QA systems, treats words as symbols, and then does things like calculating the frequency with which pairs or triples of words occur together within a certain range. Turing himself pioneered some of these ideas.
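The word co-occurrence statistics mentioned above can be sketched in a few lines. This is only a toy illustration of the general technique (counting how often word pairs fall within a fixed window of each other), not anything resembling Watson's actual code:

```python
from collections import Counter

def cooccurrence_counts(tokens, window=4):
    """Count how often each unordered pair of words appears
    within `window` tokens of each other."""
    counts = Counter()
    for i, w in enumerate(tokens):
        for j in range(i + 1, min(i + window, len(tokens))):
            counts[tuple(sorted((w, tokens[j])))] += 1
    return counts

text = "isaac newton saw an apple fall and newton thought about gravity"
counts = cooccurrence_counts(text.split())
print(counts[("apple", "newton")])  # -> 2 (once before "apple", once after)
```

Real systems refine raw counts into association scores (e.g. pointwise mutual information) over corpora of billions of words, but the underlying bookkeeping is this simple.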
Like all such systems, Watson cannot generalize very well: its learning is limited to re-jigging certain symbol statistics, and it certainly needs a lot of brittle, domain-specific rules hand-crafted by domain experts. It cannot elaborate on its answers: it doesn't know how to generate text (beyond simple cleansing of fragments it has retrieved from its corpora). It knows distressingly little grammar: it can be fooled by complicated or unusual phrasings that someone (machine or person) with a deeper understanding would find trivial (e.g., its final answer in the 2nd show wasn't even an American city!). It doesn't know the meaning of words, in the sense that it could not answer a question like "What does 'boat' mean?". It cannot make a reasonable --- or probably any --- guess about the meaning of a word it has not seen before (a trick that humans are often quite good at).
The limitations of the techniques on which Watson is based are pretty well known and, for instance, were strongly critiqued in the 1980s when they were coded in the form of neural networks. Current researchers have, wisely, not restricted themselves to the neural network formalism, but they still suffer from the same fundamental problems of treating language as a sequence of symbols that every researcher since Turing who has tried this approach has run into.
The main thing that's different about current interest in statistical language processing is the availability of many large corpora, i.e., the web and news feeds. It's now easy for anyone to process gigabytes and gigabytes of text and get results better than anything in the pre-corpora days.
But there are still performance ceilings, and certainly no one in the area (and, I am sure, no one on the Watson team) would claim that this approach to language processing, as it is currently understood, is going to lead to a truly intelligent language processing system. Certainly the performance of the best QA systems (which I tend to find more impressive than Watson's Jeopardy answers) is not quite good enough yet to be used in practice: they give too many wrong answers that show their shortcomings, and most users quickly get frustrated by them.
The fact that IBM uses their technology to make an infomercial instead of a useful system is telling: it is quickly apparent to even the most casual users of these sorts of systems that they are not yet good enough to, say, make searching the web easier.
JP, also note that there's been work with computers generating new insights for some time. See for example Simon Colton's work where he made a computer program capable of constructing new definitions and conjectures in number theory. (Disclaimer/horn-tooting: I wrote my first paper ever on followup conjectures made by Colton. Both Colton's original paper and my paper were published in the Journal of Integer Sequences that Jeffrey was the editor of at the time.) There's also been work in the last few years in other areas including robots doing biochemistry work but I'm less familiar with that.
Dear Anonymous @ 12:51:
I'd be more impressed if you could come up with a rigorous definition of "truly intelligent".
AI critics and skeptics don't seem to acknowledge that intelligence is a continuum and multifactorial. My thermostat is not as intelligent as I am. But Watson is both more intelligent and less intelligent than I am, depending on what particular aspect you are measuring.
I guess some critics and skeptics will never be persuaded until we get a system that exceeds all human intelligence in all coordinates - which will be a while.
There are Torontos in the US, including one in Illinois, though it is closer to Springfield than Chicago.
Did it specifically say Toronto in Canada?
I think too we need to separate "thinking" from "consciousness."
Though, before Jeffrey asks, I refuse to define either of them. 8^)
It doesn't take a Watson to notice that Anonymous @12:22 and Anonymous @12:51 both refer to this exercise as an infomercial.
1) Isn't it IBM's prerogative to conduct their publicity the way they see fit?
2) I can think of better ways to spend twenty-nine minutes than reiterating disapproval of IBM's efforts.
Watson would leave a better impression if it were attached to a well-developed human-sounding speaking program.
I know that isn't the problem Watson was meant to solve. But that's what people watching the show may notice, particularly if the game itself isn't exciting. And it is evident that IBM or the Show spent some time working out how Watson would look and sound: so appearances were not an afterthought.
Who watching would want to interact with a computer that sounded like Watson?
I believe that Watson is truly a great accomplishment and an impressive feat that I thought was about ten years away. But I do not believe that Watson is understanding the questions, and I say this as an engineer in the field of robotics.
Watson's ability to form associations is truly remarkable, and I believe it is the foundation for Strong AI. But I will believe the computer has actual Strong AI when it can solve basic logic word problems coded in natural language.
A true strong AI would have seen "U.S. Cities" and restricted its answers to U.S. cities. I'm sure the engineers at IBM thought of this, tried to implement it, and watched as Watson began to fail on more nuanced questions.
So, I believe that true Strong AI lies in the ability to use associations, like Watson's, to form logic predicates and then reason out solutions. The associations that Watson is good at are needed to determine when and when not to form or break a predicate. The actual execution of reasoning using the predicates should be easy for any computer.
Also, we need to be careful not to fall on the definition treadmill. Ships with autopilots were considered to be robots up until people realized the mechanism was actually pretty simple.
I do not believe that Watson is understanding the questions
Define your terms. What is a rigorous definition of "understanding"?
Define your terms. What is a rigorous definition of "understanding"?
I believe "understanding the question" requires the ability to extract and declare logical statements about the answer.
The question "Which fruit hit Isaac Newton on the head?" is a question I'm very certain Watson could get. But is there any part of the program that has declared "The answer must be a fruit"? I expect the first choice would be "apple", but then the second choice would be "gravity."
Now, it appears that Watson does use question-type classification, so it may be able to perform some reasoning in this respect (when asked for the Grand Prix abbreviation, it only generated abbreviations as candidate answers), but it had difficulty in parsing the phrase "his victims" in the Harry Potter question to generate the predicate "The answer is an antagonist."
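The answer-type filtering idea being discussed can be sketched as follows. Everything here (the cue phrases, the type lexicon, the candidate lists) is hypothetical, made up for illustration; it only shows the general shape of "declare what type the answer must be, then discard candidates of the wrong type":

```python
# Map from question cue phrases to the expected answer type (toy data).
ANSWER_TYPE_CUES = {
    "which fruit": "fruit",
    "which city": "city",
}

# Toy lexicon assigning a type to each known candidate answer.
KNOWN_TYPES = {
    "apple": "fruit",
    "gravity": "concept",
    "toronto": "city",
}

def filter_by_type(question, candidates):
    """Keep only candidates matching the answer type cued by the question.
    If no cue is detected, keep all candidates."""
    expected = next((t for cue, t in ANSWER_TYPE_CUES.items()
                     if cue in question.lower()), None)
    if expected is None:
        return candidates
    return [c for c in candidates if KNOWN_TYPES.get(c) == expected]

print(filter_by_type("Which fruit hit Isaac Newton on the head?",
                     ["apple", "gravity"]))  # -> ['apple']
```

A filter like this immediately demotes "gravity" for a fruit question; the hard part, as the comment notes, is reliably detecting the expected type from arbitrary phrasings like "his victims".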
I have to admit that I am uncertain if Watson is doing this or not, especially after reviewing some of the footage from the matches, but some of the answers seem to indicate that either it is not, or it requires more computational power to do so more efficiently.
Finally, I really want to stress that I am very excited about this work, I consider it to be a tremendous accomplishment and I think it is the herald of Strong AI.
according to here:
http://thenumerati.net/?postID=726&final-jeopardy-how-can-watson-conclude-that-toronto-is-a-u-s-city
Watson had 14% confidence in Toronto and 11% confidence in Chicago. If it had not been Final Jeopardy, it wouldn't have buzzed in.
C'mon guys, haven't you seen people give bone-headed answers on Final Jeopardy?
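The buzz-in behavior described above (answer only when confidence clears a threshold, except in Final Jeopardy, where an answer is mandatory) amounts to a simple decision rule. The threshold value here is a made-up illustration, not IBM's actual number:

```python
def decide_answer(confidences, threshold=0.5, final_jeopardy=False):
    """Return the best candidate answer if we should respond, else None.
    In Final Jeopardy an answer is mandatory, so the best candidate is
    returned regardless of its confidence."""
    best, conf = max(confidences.items(), key=lambda kv: kv[1])
    if final_jeopardy or conf >= threshold:
        return best
    return None

# The reported confidences from the Final Jeopardy clue:
confs = {"Toronto": 0.14, "Chicago": 0.11}
print(decide_answer(confs))                       # -> None (wouldn't buzz)
print(decide_answer(confs, final_jeopardy=True))  # -> Toronto
```

This is why the low-confidence "Toronto" answer appeared at all: in a regular round the same estimates would have kept Watson silent.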
Watson seemed to have an advantage on the buzzer. Many of the questions were easy, but the two other contestants seemed unable to buzz before Watson unless Watson didn't know the answer, which wasn't very often. Ken Jennings in particular seemed unable to get the jump on Watson.
The Wall letter makes me laugh because his description of Watson (10 refrigerators, etc.) reminds me so much of another computer: ENIAC. When a student at U. of PA, I had the chance to see pieces of the first general purpose electronic computer. Now my cellphone has more powerful processing logic than that monstrous computer.
Humans and computers are converging thanks to Watson-like work on the computers, and neuroscience steadily removing the ghost from the human machine.
I'm a materials scientist, so forgive any basic AI ignorance that may be expressed in the following paragraphs.
I caught a few minutes of Jeopardy during the Watson games and was most interested in how the % probability was calculated for Watson's answer. I'll dig into that at some later time.
What I found surprising about all the coverage is the sense of "Wow! Look at what computers do!", and I find myself wondering if people are that unaware of the computing power in their cars, their hospitals (medical imaging, anyone?), in aerospace.
The power of computing is obvious to those of us whose work is facilitated by their existence (i.e. me.) I am happy that the greater public can see more applications.
To answer Gingerbaker - where are *affordable* materials like paint on solar collectors and CNT batteries? When the true cost accounting of hydrocarbons (e.g. environmental impact) or hydrocarbon scarcity becomes manifest, you'll find them on the shelf then. It's not a question of "sizzle and no steak" - it's simple market economics.
Well.. if we're impressed by somebody's program that plays Jeopardy!, then we have to ask, is this because it's taking a lot of data and doing something really stupid like the chess programs do, having no knowledge of chess itself but only knowing how to do, say, 20 of a certain kind of search and that's all there is to it? If that's the answer, then yes, ignorant people will be impressed, but people who understand how it works won't be impressed...
Benson:
Why do you think being able to do a search 20 moves deep in a game tree somehow doesn't correspond to "knowledge about chess"?
In my opinion, this is yet another example of fuzzy thinking about terms like "knowledge".
Watson is an interesting AI app, just as voice recognition and OCR are interesting AI apps. At issue, though, is whether it has any true depth. If you pick some task and invent a computer to do that task better than I can (answer questions, ride a unicycle, dance), you likely could do that. But even when a system is designed to do the thousands of tasks I can do, I can still just invent another one and learn that. The program can't.
While certainly there's plenty of skepticism with regard to AI, and the terms are typically very poorly defined, and even the rigorous definitions are full of vagueness (because they are typically constructed by people who don't really have an answer to the central question "what is intelligence?"), it doesn't mean that the rather intuitive notion of most people decrying "that's not real intelligence!" is wrong.
Insisting that they "define their terms" is just a gotcha, an attempt to shift the burden of evidence onto one's opponent. Because most people, even within AI, can't substantively answer the question of what the essential difference is between how a calculator works and how a human thinks, or in what essential ways they differ.
Much of the anti-skeptic rhetoric seen here is basically, 'until you can definitively answer the question "what is intelligence?" -- I win.'
I don't really think that applying their internal Potter Stewart "I can't define it, but I know it when I see it" sense to realize that there are few essential ways in which Watson or Deep Blue differs from a calculator is an error.
I don't think it's a "gotcha" at all.
Until we have a really good definition of "intelligence", who can say whether claims like "Watson is not really thinking" are true or false?
And if there's no way in principle to decide the truth of such claims, why do people make them with such confidence?
Somehow it got left off my posting, sorry, but the quote
"if we're impressed by somebody's program that plays Jeopardy!, then we have to ask, is this because it's taking a lot of data and doing something really stupid like the chess programs do, having no knowledge of chess itself but only knowing how to do, say, 20 of a certain kind of search and that's all there is to it? If that's the answer, then yes, ignorant people will be impressed, but people who understand how it works won't be impressed..."
is from Marvin Minsky, one of the acknowledged founders of the field of Artificial Intelligence, not from me.
Hey, Minsky's a very smart guy, and my academic grandfather, but I still think that quote's silly.
Yes, I suspect that the more a person knows about Artificial Intelligence, the "sillier" he or she is (provision having first duly been made for the truth in the maxim "a little knowledge is a dangerous thing").
Anonymous:
1. I didn't say Minsky was silly, I said his comment was silly.
2. You could have at least linked to the entire interview, which is here.
3. Minsky's comment is not really about whether Watson is "thinking", but rather whether we should be "impressed" with the system. Apparently he is not impressed by systems like Deep Blue, either.
But if we find that aspects of human intelligence are answerable by simple systems with lots of data, why should we find that less of interest?
I don't think there's anything irresolvably mysterious about thinking, consciousness, or intelligence. I think we would do better if we spent less time making claims like "Watson doesn't really think" and more time building systems.
4. Did you ever hear of Clarke's first law?
1) My main point was clearly not that Minsky was silly. It was that people who knew AI would agree with his comment. I just said that somewhat fancifully, incorporating Shallit's insult, in speculating that "the more a person knows about Artificial Intelligence, the 'sillier' he or she is".
This in fact is an example of something that an intelligent AI would have to be able to do. To understand my sentence in its context, the AI would have to figure the above out. Then an AI interested in serious inquiry would reply to that point, but a different sort of AI might pretend to misunderstand in an attempt to win debating points...
2) Normally I am scrupulous about references, but strictly speaking I don't see why I should have to give a reference to the entire article, given that Google can and probably did trivially find it for you.
3) Minsky says lots of things in that article. He wasn't specifically addressing the (vague) claim (exactly expressed as a string of ASCII characters) that "Watson is really thinking", but it seems pretty clear that Minsky would reject such a claim given the article as a whole (subject to learning what the innards of Watson are really like).
This is another thing an AI would be able to do. Read that article and see that this point 3 of Shallit's does not really help his case.
Without having the desirable but unnecessary "formal definition" of intelligence, that Shallit has been going on about, I suspect lots of AI researchers have tried to list desiderata such as these and even tried to generalize them and round them out into provisional "definitions".
And they *are* doing this, and they *are* trying to build systems they see as intelligent, even as they may not believe any system ever devised can be correctly said to be "really thinking". So Shallit's complaint about what "we" should be doing seems off the mark.
4) I thought when I saw the Minsky quote that Shallit would excuse Minsky as one of the "nay-saying jealous engineers". But he has apparently invented a new category for Minsky, effectively, "old has-been geezers who haven't kept their finger on the pulse of the field".
By the way, of course Minsky doesn't fit Clarke's so-called "law", because Minsky never said AI is impossible. In fact, he thinks it is very possible. To connect the possibility (or even imminent necessity) of AI with one particular program such as Watson is simply a non sequitur.
My main point was clearly not that Minsky was silly.
Never said it was. You seem extremely confused.
it was that people who knew AI would agree with his comment.
You mean, like the artificial intelligence experts at IBM who built it, or the AI professors at Waterloo - both of whom think Watson's an achievement?
but a different sort of AI might pretend to misunderstand in an attempt to win debating points
I think the only person misunderstanding here is you.
But he has apparently invented a new category for Minsky, effectively, "old has-been geezers who haven't kept their finger on the pulse of the field".
It's not a new category. It's a simple fact that many old, reputable scientists say silly things from time to time (e.g., Pauling on Vitamin C). And it's also a simple fact that Minsky hasn't been actively publishing research in mainstream AI conferences for quite some time.
because Minsky never said AI is impossible.
I guessed when I quoted that that you might misconstrue it. The quote is evidently not a perfect fit, but it does apply to the general point I was hinting at: that elderly scientists don't always make the best judgments, even about their own field.
"Never said it was. You seem extremely confused."
I never said Shallit said it was. I just pointed out it was not my main point (actually not my point at all) and that my point was simply ignored. (Note: my point was also just a speculation, and with uncharitable stuck-in-the-grip-of-an-ideology sophists like Shallit sneering about my confusion, etc., I'm no longer going to be sympathetic to any demands to statistically back it up with a barrage of examples)
"You mean, like the artificial intelligence experts at IBM who built it, or the AI professors at Waterloo - both of whom think Watson's an achievement?"
No one denies it is an "achievement".
"It's not a new category."
It is a new category for Shallit's list of categories that can be used to excuse anyone who doesn't agree with him. An AI would, again, if sincere, not so obtusely misread what I said. Obviously he has not himself invented the "Clarke's First Law" that he referred to.
Anonymous:
Your comments have degenerated into name-calling and sneering, so further discussion is pointless.
The differences between Watson and an average human:
Humans can answer many types of relatively simple questions before the entire question is read/spoken.
Humans can look at situational photos (ones far more complex than the structured arrangement of pieces on a chess board) and explain what is happening or about to happen and perhaps even go into detail about such things as the state of mind of the subjects.
I could go on.
Regardless how we choose to define "information", "knowledge", "intelligence" or any other term...at the end of the day Watson is an impressive purpose-built achievement, however humans have had a couple hundred thousand year head start.
What I would like to see is some form of AI "raised" by a family for several years to give it time to "learn and understand" just as a child would.
>>"Until we have a really good definition of "intelligence", who can say whether claims like "Watson is not really thinking" are true or false?"
Because generally we do tend to have a sense of what is and isn't intelligent. While we can't really put a finger on it, we really do have a feeling that we'll know it when we see it. I don't think the inability to robustly prove the accuracy of our intuition necessarily negates the fact that it's the best idea we currently have.
>> "And if there's no way in principle to decide the truth of such claims, why do people make them with such confidence?"
For the same reason they claim with certainty that hindsight is 20/20 (when science has established that it is certainly far more myopic), or that one's confabulations after the fact were the important parts of their decision-making process. Just because they could be wrong about it or falling for an illusion doesn't negate the feeling. And humans are very good at finding correct answers even if how they arrived at those answers is unknown to them (though often they're more than happy to confabulate how they did it).
Demanding people define things that nobody has properly defined really is a sort of gotcha if you're asking it to be a prerequisite to asserting a position contrary to yours. This fairly universal sort of internal Turing-test sense is the only game in town. And while I must agree that the evidence doesn't justify the typically expressed confidence levels, I must also agree that this sense is the best evidence commonly available for the claim; it's also pretty much the only evidence, so long as the very nature of the terms remains ineffable.
While certainly people are wrong to say it's 100% proven that Watson is just a calculator, their feeling that there's nothing really intelligent about Watson is, given the current evidence, more likely to be the correct opinion.
This is a pretty impressive accomplishment. I would suggest that the main difference between Watson and a human in the process of answering a question is that a human can derive new information from the question itself, whereas Watson compares the contents of the question against an existing database. As an example, a question where the rules are established anew, and where new information absolutely necessary to find the answer is given, would be easily deduced by any child but missed by Watson.
Example: "I like to call people who love dogs 'Furdinand'. I'm a 'Furdinand' what do I like?"
Since the answer can be extracted only by deciphering the question itself, and it's not available in any external database, I think Watson would fail. It would probably see "Furdinand" as a misspelling or error, not a pun or play on words. That's probably the main difference between the way Watson and humans think. However, given Watson's performance, this way of understanding doesn't seem that distant anymore.