The Turing Test is intended to test whether an artificial intelligence can “think”—the test assumes that we would know that the artificial intelligence is capable of thinking if it can convince us in a blind conversation that it is human.
Never mind the problem of assuming that intelligence must be human-like (or that it must be expressed in ways similar to the ways humans express it). The real problem is that the Turing Test doesn’t test intelligence at all. It tests the appearance of intelligence. It tests the ability of human programmers to program that simulacrum. The classic rebuttal is that appearing sufficiently intelligent is being intelligent. I’ve gone back and forth over this one for some time, and Christian’s article—along with my recent reading in Antonio Damasio’s Self Comes to Mind—has convinced me that appearing intelligent—no matter how convincing the appearance—cannot be equated with being intelligent.
Eventually, in asking these sorts of questions, one comes down to the nature of intelligence, of knowledge, of mind, of awareness, of self. What does it mean to say one is intelligent? What does it mean to say one possesses knowledge? That one has a mind? What’s the difference between mind and self? And how can one test for any of those things? What does it mean that a computer is able to “spin half-discernible essays on postmodern theory” [Christian] yet is unable to compose a joke more complex than the most basic pun?
There are, of course, degrees of intelligence. Watson is more intelligent than my microwave oven, and you, dear reader, are more intelligent than Watson. But what does that statement mean? Watson clearly can store more information than my microwave; your brain might store more information than Watson, but surely a version of Watson could be built that would store far more information than your brain. But as the recent Jeopardy! match showed, more information storage (that is, more memory) is not the answer to achieving intelligence. In the questions that Watson missed, the errors clearly came from a lack of context. Watson took wrong turns in his decision trees (associations) when he did not have adequate depth of knowledge. Okay, what does that mean—depth of knowledge? What is knowledge?
Knowledge is not mere information, not merely filled memory banks. Knowledge is stored information that the holder is able to use in a creative way—to write a sentence, to paint a representational picture, to plant a crop, to build a house, to make up a joke. In order to use this information, the holder must know that it knows (that is, it must have a sense of truth, which is not to say that the holder can only use true information). One of Watson’s greatest accomplishments is that his creators gave him a sense of what he knows he knows (a percentage likelihood that he had sufficient information to get the right answer)—which gave him the ability to choose whether or not to ring in. So Watson has some knowledge—he holds information and is able to determine how reliable that information is and when it is reliable enough to venture an answer. It’s even conceivable that Watson could acquire the knowledge necessary to instruct someone (or a crew of robots) on how to build a house. Even that level of knowledge, though, would clearly be less complex than the knowledge a human contractor has, because the contractor will have worked on actual houses, will have experienced what it is to build a house, which is a level of complexity that Watson cannot have.
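To make that ring-in mechanism concrete, here is a minimal sketch, in Python, of a confidence-gated decision to answer. Everything in it is assumed for illustration: the function name, the threshold value, and the sample answers and confidences are mine, not IBM’s. It only captures the idea of venturing an answer when the estimated likelihood of being right clears a cutoff, and staying silent otherwise.

# Hypothetical sketch of a confidence-gated "ring in" decision.
# None of these names or numbers come from IBM's Watson; they only
# illustrate answering when the best candidate's estimated confidence
# clears a threshold, and keeping quiet otherwise.

BUZZ_THRESHOLD = 0.5  # assumed cutoff for this sketch


def choose_action(candidate_answers):
    """candidate_answers: list of (answer_text, confidence in [0, 1]).
    Returns the answer to give, or None to stay silent."""
    if not candidate_answers:
        return None
    best_answer, best_confidence = max(candidate_answers, key=lambda pair: pair[1])
    if best_confidence >= BUZZ_THRESHOLD:
        return best_answer   # confident enough to risk ringing in
    return None              # knows that it does not know; keep quiet


# Example: one strong candidate among weak ones, so ring in with it.
print(choose_action([("Chicago", 0.83), ("Toronto", 0.14), ("Boston", 0.05)]))  # Chicago
# Example: no candidate clears the threshold, so stay silent.
print(choose_action([("Toronto", 0.31), ("Boston", 0.22)]))                     # None

The interesting move, for the purposes of this post, is the second return: even this toy program carries a rudimentary representation of the limits of its own information.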
Why can Watson not have that experience? Because he does not have a body. Damasio and others have by now demonstrated that human intelligence is embodied, that it rises from a triple function of the brain: its regulation and mapping of the rest of the body, its mapping of the body’s relationships to objects outside it, and an additional layer of mapping of its own activity as it maps the body and those outside objects. That is, our intelligence rises from a combination of self-reference and context. Watson cannot have that context because not only can he not perceive his environment, he also cannot perceive his “body” and has no regulatory function that monitors his physical existence and attempts to repair it when necessary (infection, hunger, thirst, fatigue, nutrition, hygiene); in the absence of either of those, it goes without saying that Watson cannot map the relationship between the two. Even if Watson were equipped with a more complex version of today’s facial recognition software, he would not have a function that maps what is perceived relative to his “self”—it would only be a function that adds more information (recognition of objects) with no deep context.
But what does this necessary organicism, this relationship between self and non-self, tell us about our intelligence? Christian says:
Perhaps the fetishization of analytical thinking and the concomitant denigration of the creatural—that is, animal—and bodily aspects of life are two things we’d do well to leave behind. Perhaps, at least, in the beginnings of an age of AI, we are starting to center ourselves again, after generations of living slightly to one side—the logical, left-hemisphere side. Add to this that humans’ contempt for “soulless” animals, our unwillingness to think of ourselves as descended from our fellow “beasts”, is now challenged on all fronts: growing secularism and empiricism, growing appreciation for the cognitive and behavioral abilities of organisms other than ourselves, and not coincidentally, the entrance onto the scene of an entity with considerably less soul than we sense in a common chimpanzee or bonobo…
…It’s my belief that only experiencing and understanding truly disembodied cognition—only seeing the coldness and deadness and disconnectedness of something that really does deal in pure abstraction, divorced from sensory reality—can snap us out of it. Only this can bring us, quite literally, back to our senses.
That is, seeing what intelligence is not should help us to better see what our intelligence is, help us to see and appreciate ourselves and each other better. Should.
Could Watson or another computer be equipped with a complex sensory apparatus and self-referential monitoring processes that could achieve the complexity of human intelligence? Yes, it’s possible. Mind, after all, is simply a function of a physical system—our bodies. But our technology has a long way to go before it catches up with millions of years of evolution, which compacted immense computing power into an amazingly small space—and did so organically, with intricate backup systems, redundancies, and shortcuts. We have a long, long, long way to go before Lt. Data or the androids of Blade Runner, A.I., or I, Robot.
All of which is to say that we are far inferior engineers to nature.