Here's an interview with Sherry Turkle, originally released back in April, but replayed yesterday.
For me, here was the most interesting exchange:
Nora Young: "So if we imagine a future where we have robotic companions, the way we now have Roomba vacuum cleaners and Furbies, what's the problem with transferring our idea of companionship to things that aren't actually alive, what's at risk of us losing?"
Sherry Turkle: "Well, these are companions that don't understand the meaning of our experiences, so it forces us to confront what is the meaning of a companion. It's like saying, 'I'm having a conversation with a robot.' Well, you have to say to yourself, 'You've forgotten the meaning of a human conversation, if you think a conversation is something you can have with a robot.'"
Now, I understand that an interview like this is necessarily shallow, and I haven't read Turkle's latest book on the subject. But still, this interview seems to suggest a real misunderstanding on Turkle's part.
Yes, when we interact with technology that mimics living creatures, we run the risk of forming an overly optimistic mental model of how much the technology "understands" us. That's the lesson of ELIZA. But when it comes to "companionship", many of our companions fail to understand us in exactly the same way.
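That lesson is easy to make concrete. Here's a minimal sketch, in Python, of the kind of shallow keyword-matching that made ELIZA seem sympathetic. The patterns and replies are invented for illustration; they are not Weizenbaum's actual script:

```python
import re

# Illustrative rules in the spirit of ELIZA's DOCTOR script.
# These patterns and replies are made up for this sketch.
RULES = [
    (r"\bi am (.*)", "Why do you say you are {0}?"),
    (r"\bi feel (.*)", "Tell me more about feeling {0}."),
    (r"\bmy (\w+)", "Your {0} seems important to you."),
]
DEFAULT_REPLY = "Please go on."

def respond(statement):
    """Return a canned 'sympathetic' reply by shallow pattern matching.
    Nothing here models the speaker's situation; it only echoes words back.
    (Real ELIZA also swapped pronouns, e.g. 'my' -> 'your'; this sketch omits that.)"""
    for pattern, template in RULES:
        match = re.search(pattern, statement, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return DEFAULT_REPLY

print(respond("I am sad because my book got a bad review"))
# -> Why do you say you are sad because my book got a bad review?
```

A few dozen rules like these can feel uncannily attentive, which is exactly the over-optimistic mental model described above.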
When you tell your troubles to your dog, how much do you think your dog understands? A little bit, obviously -- a dog can pick up on your mood and react appropriately. But it seems unlikely that a dog will "understand" the details: that your best friend just died of AIDS, that your latest book got a bad review, or that your spouse just walked out on you. Nevertheless, a dog can be a great companion. Why is a living dog a legitimate companion, and a robot dog not?
Even when we interact with other people, they will often listen and express sympathy (and we will happily receive their sympathy and feel comforted by it) without really understanding. As children, we had our crises that were beyond our parents' understanding. And now, as a parent, my children have emotional lives that are largely hidden from me. Yet we can comfort each other, and be good companions, without the deep understanding that Turkle seems to think is required.
Turkle seems to have a mental model of "understanding" that is too black-and-white. Just as, in the famous words of McCarthy and Dennett, a thermostat can be said to have "beliefs", so too can animals and robots have "understanding" of our experiences and needs. Here, by "understanding", I mean that animals, young children, and robots have limited models of us that suffice to provide the appropriate responses to comfort us. A dog can come and lick your face or curl up with you. A child can come sit in your lap. A robot can commiserate by asking what's wrong, saying it's sorry to hear about our troubles, or even making the right facial expression.
I think it's foolish to obsess about what such a robot "understands". After all, we can ask the same question about dogs and young children: how much do they "really" understand of our troubles? Less than an adult human, probably, but the experience is not necessarily worthless for all that.
When Chuck paints a face on a volleyball and makes it his companion in the movie Cast Away, nobody stands up and says, "You idiot! That's just a ball with a dumb face on it." We don't say that, because we understand what loneliness is like and the value of companionship. When Wilson falls overboard later in the movie, we understand why Chuck is so devastated.
I know the value of human conversation, but I still think you can have a conversation with a robot. As I said, I admit there's a danger in overestimating how much a robot understands about us. But children who have grown up with technology have a better understanding of the limitations than those adults who were fooled by ELIZA decades ago. They're not going to be fooled in the same way. Already, as Turkle points out, they've constructed a new category for things like Furby, which is "alive enough". And furthermore, the technology will improve, so that future robots will have better and better models of what humans are like. As they do so, they will become better companions, and questions about whether they "really" understand will simply seem ... quaint.
7 comments:
Sometimes it's better to have something less like a human than more. A cute Disney-type robot with exaggerated expressions and big cartoon eyes would make a better companion than a more humanoid robot, which would be susceptible to the uncanny valley. Disney and others found this out a while ago.
Jeff, although I follow your logic and don't really disagree with any of it, I confess to feeling discomfort with the conclusion. Of course, I have never had any urge to talk to a dog or even a person who isn't really listening to what I say.
This reminds me of the Outer Limits episode 'Valerie 23'.
And I am reminded of the Twilight Zone episode "The Lateness of the Hour."
"You've forgotten the meaning of a human conversation, if you think a conversation is something you can have with a dog."
Everybody is sidestepping the real issues. First of all, Turkle is assuming, from some sort of neo-vitalist position, that it will be impossible to have an intelligent, emotional machine.
Jeff is operating from the notion that a man's horse, gun, and dog can be his best friends.
The real questions come up when we consider machines that are sufficiently like us to be disturbing.
If we could build androids as intelligent as we are, or more so, but which were essentially faking their emotions, then we would be talking about sociopaths. Possibly very dangerous sociopaths. Isaac Asimov created his Three Laws of Robotics to keep such sociopaths in line.
A sufficiently lifelike machine might also act like a life form in other ways: asserting self-preservation, competing with us as biological entities, and/or demanding equal rights.
"Robot companion" is really just another name for "sidekick"---an inferior, yet faithful companion---the Lone Ranger's Tonto.
Another term might be "servant" or "slave". The slave is not just your "conversation buddy" but becomes the outlet for your darker desires and passions, whatever your power will let you get away with.
Slavery brings out the worst in the master, because power corrupts. The master fears the slaves, lest they take over, become the new masters, and exact revenge for those corrupt actions.