We will be told that this program or that one isn't "really" creative (but no testable definition of "creative" will be given). We will be told that some new system doesn't "really" understand what you are saying (but no testable definition of "understand" will be forthcoming). We will be told that computers cannot have "minds" (but no test for having or not having a "mind" will be provided). We will be told that computers aren't "really" reading and understanding what they translate, even though no definition of what "really reading and understanding" means will be given.
So it doesn't surprise me at all that there has been an enormous amount of ignorant nonsense written about ChatGPT and related models. This dreck from Gary N. Smith, a professor of economics at Pomona College, is pretty typical for the genre.
Smith claims that "while GPT-3 can string words together in convincing ways, it has no idea what the words mean". Yet he does not provide any testable definition of "knowing what words mean". Does it really have no idea? I conducted a little experiment of my own, by asking ChatGPT, "What is a cow?" And here is the response:
A cow is a large, hoofed animal that is often kept on a farm for its milk, meat, and other products. Cows are part of the bovine family and are known for their docile nature and distinctive mooing sound. They are typically reddish-brown or black in color and have long horns on their head. Cows are used for a variety of purposes, including the production of milk, cheese, butter, and other dairy products. They are also raised for meat, leather, and other products. Cows are important sources of food and other resources for people around the world.

Does that really look to you like ChatGPT doesn't know what the word "cow" means?
So ChatGPT does indeed seem able to define words. But can it reason correctly about them? I asked, "Can a cow fly over the moon?" and ChatGPT responded as follows:
No, cows are not able to fly over the moon or any other celestial body. Cows are terrestrial animals and do not have the ability to fly through the air. They are able to walk and run on land, but they do not have wings or other body parts that allow them to fly. In fact, most animals are not able to fly, and those that are able to fly typically have specific adaptations, such as wings, that allow them to do so.

I was not satisfied with this, so I followed up by asking, "Then how did the cow fly over the moon in 'Hey Diddle Diddle' if it is impossible?" And here is how ChatGPT responded:
"Hey Diddle Diddle" is a popular nursery rhyme that has been passed down through the generations. It is a simple, nonsensical poem that is meant to be fun and entertaining, rather than literally true. The rhyme tells the story of a cow jumping over the moon, but it is not intended to be taken literally. Cows do not have the ability to fly, and the rhyme is not meant to be taken as a factual account of events. It is simply a lighthearted, playful rhyme that has been enjoyed by children and adults for many years.Seems pretty accurate to me.
Smith justifies his claim with an example conversation about how many pennies are in each group if you start with 47 pennies, throw away 44, and divide the remainder into equal-size groups. In his example, ChatGPT gave the wrong answer and incorrect reasoning to get there. But that conversation isn't any evidence at all that ChatGPT didn't know the meaning of the words "pennies" or "group" or "47". If anything, it shows that ChatGPT was unable to do logical problem-solving in that particular case, but that's a completely separate issue from knowing the meaning of words.
There's nothing easier than taking a chatbot and getting it to produce something nonsensical. After that, you can just claim it doesn't "really" understand English. But human speakers of English produce a flood of nonsense every day, as anyone reading Salon can verify. Does that mean that the writers don't "really" understand English?
I tried Smith's example, and here is what I got:
If you start with 47 pennies and throw away 44 of them, you are left with 47-44=3 pennies. If you divide these 3 pennies into three groups, each group will have 3/3=1 penny. So each group will have 1 penny.

Looks pretty good to me! So can we conclude that ChatGPT really does understand the meaning of words now?
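For what it's worth, the arithmetic in that answer is trivial to check. Here is a minimal Python sketch of the check, assuming (as ChatGPT's answer does) that the three remaining pennies are split into three equal groups:

    # Sanity check of the pennies arithmetic from Smith's example.
    # Assumes three equal groups, matching ChatGPT's answer above.
    start = 47
    thrown_away = 44
    groups = 3

    remaining = start - thrown_away   # 47 - 44 = 3
    per_group = remaining // groups   # 3 // 3 = 1

    print(remaining, per_group)       # prints: 3 1

Of course, checking the arithmetic says nothing about whether ChatGPT "understands" it; the point is only that the answer it gave me this time was correct.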
I wish the people commenting on AI would demonstrate a little more seriousness about what it means to "really" understand utterances. Or any realization that "understanding" is not a binary; one can have "understanding" to a greater or lesser degree. Does a five-year-old understand the nuances of a word like "duty" the way a retired member of the military does?
I think the right way to view ChatGPT is that it does exhibit some understanding of what words mean, as evidenced by its ability to define them and use them correctly in context in many conversations. That doesn't mean that everything ChatGPT says is correct, nor that it holds no incorrect beliefs.
I am sure we will see many more crappy pieces like Smith's in the near future.
1 comment:
I totally agree with you here. In fact, I agree with Gary Smith that AI is dangerous, but not because it is so fallible and lacking (as he suggests); rather, because of the opposite. If it really performed poorly, there would be much less to worry about.