At first, the whole idea seems rather silly. We can imagine even the simplest of unintelligent programs consulting a table of preset responses, perhaps keyed to a list of keywords typed by the judge. Such a simple, unintelligent program could appear human in a small-talk conversation, commenting on the weather or telling jokes. But the Turing Test is not as simple as this, since a judge can probe much deeper than small talk. The judge could ask the computer to compose a unique sonnet about love or to react to hypothetical situations; the judge could teach the subject something and look for the ability to learn and make reasonable jumps without prodding; the judge could even ask the computer to perform large math computations, where a response too fast and too exact would reveal that it is not a human. So the test proves to be much more complex than it first seems. However, it is not without problems.
False Positives: Searle's Chinese Room
One big problem with the Turing Test is that it appears an unintelligent computer could theoretically pass the test. The most common illustration of this criticism is an analogy called the Chinese Room Thought Experiment, put forth by John Searle:
Imagine a man who does not speak Chinese locked up in an enclosed room with no windows and only a small slit to the outside. Beyond the room there is a Chinese-speaker, who will perform the role of Turing Test judge. The Chinese-speaker will pass into the room (through the slit) cards containing questions in Chinese, and await a response to judge if it seems intelligent. The man inside the room has at his disposal a huge pile of cards with Chinese characters on them (which all look like random meaningless squiggles to him), and a large book with two columns of Chinese characters running all the way through it. The man is instructed that the first column is input that he will receive through the slit. The second column is output; he will find the cards with characters that match this column and pass those cards back out the slit.
Now let us say the book-given responses are intelligent, meaningful responses (if, for example, the judge asks what color the sky is, the output reads 'blue'). The Chinese-speaker would determine, by the Turing Test, that who- or whatever is inside the room is intelligent. However, it is clear from the initial setup that the man inside the room does not understand Chinese. He is just manipulating squiggles - which are meaningless to him - by the rules of the book. Thus the man gains no understanding of the symbols he is manipulating.
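The book's mechanism can be sketched as a plain lookup table. This is a hypothetical illustration, not Searle's own formulation; the Chinese strings below are stand-ins for the cards, and the point is that the matching step requires no understanding at all:

```python
# A hypothetical sketch of the Chinese Room's rule book as a lookup table.
# To the man in the room (and to this program), keys and values are just
# opaque symbol strings; producing a reply is pure pattern matching.
RULE_BOOK = {
    "天空是什么颜色?": "蓝色",        # "What color is the sky?" -> "Blue"
    "你好吗?": "我很好，谢谢。",      # "How are you?" -> "I am fine, thanks."
}

def room_respond(card: str) -> str:
    """Match the input card against the book's first column and return
    the second-column output, exactly as the man in the room would."""
    return RULE_BOOK[card]

print(room_respond("天空是什么颜色?"))  # -> 蓝色
```

Nothing in `room_respond` knows what any symbol means, yet its outputs would look sensible to the Chinese-speaking judge - which is precisely the intuition the thought experiment trades on.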
From here, the obvious analogy is to a computer - a computer could have its own book of columns (a program) and input/output these meaningful answers in Chinese without understanding what it is doing at all. Thus the computer would pass the Turing Test with this Chinese judge, yet obviously since it lacks understanding of the Chinese words it is manipulating, it would not be intelligent.
Holes in the Chinese Room
The fault I find with Searle's argument is in his use of the book (or program) that lists what responses to give to questions. Such a book/program seems highly suspect; and while it is true this is just a thought experiment, for such an experiment to have any applicable results, the assumptions must at least be possible. I am not convinced they are. I can only imagine three possible ways such a book/program could be created:
1. It is limited in the questions it can cover in a meaningful or human-like way. It may have a ready-made response for all sorts of usual and even eccentric questions. However, since the Turing Test allows for any sort of question, there is a good chance the book/program will not be ready for a great many possible questions. It might have a command at the end of the input column dictating a certain output for any input which is not listed elsewhere in the book - but any single such response would certainly fail to meaningfully answer at least some of the possible questions. Thus, such a book/program setup would eventually fail, and the duration of its success would depend on the sheer volume of inputs it is prepared for.
2. The book or program has an infinite number of responses ready so that it can accommodate any question that could possibly be given. This, of course, is absurd, since an infinite number of entries would take up an infinite amount of space.
3. The book/program is set up such that it can find a way to meaningfully answer questions which did not fall into its input column, in which case it is hard to conclude anything other than that the book/program itself understands. Thus, artificial intelligence.
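The first option can be made concrete with a small sketch (a hypothetical illustration using English stand-ins for the book's symbols): a finite table with a single catch-all output answers its prepared questions well, but the same canned line cannot meaningfully answer everything else.

```python
# Hypothetical sketch of option 1: a finite rule book plus a single
# catch-all output for any input not listed elsewhere in the book.
RULE_BOOK = {
    "What color is the sky?": "Blue.",
    "Do you like poetry?": "Yes, especially sonnets.",
}
DEFAULT = "That is an interesting question."  # catch-all for unlisted input

def respond(question: str) -> str:
    """Return the prepared answer, or the catch-all if none is listed."""
    return RULE_BOOK.get(question, DEFAULT)

# Prepared questions get meaningful answers...
print(respond("What color is the sky?"))  # -> Blue.
# ...but any unanticipated question exposes the table: the catch-all
# line is no meaningful answer to "What is 7 times 8?"
print(respond("What is 7 times 8?"))      # -> That is an interesting question.
```

However large the table, a persistent judge need only wander outside its prepared inputs, so the duration of the trick depends on the sheer volume of entries - as the first option says.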
Therefore, in my opinion, Searle's entire argument is question-begging. He seems to require an intelligent book/program in order to prove that the man can pass the Turing Test without understanding Chinese. Ironically, Searle's situation ends up with an artificial intelligence using a man as a tool for symbol manipulation!
I see the book [program] as analogous to the human mind, and the man + Chinese characters + slot [computer, or hardware casing] as analogous to the human body. Searle's problem is in trying to show that the man in the room is not intelligent; he misses the point. No one claims that the hardware casing is where the intelligence is - it is found in the software, in the book/program structure. So I think Searle's thought experiment fails to prove that a Turing Test could be passed by a being without understanding, since the book/program is required to understand.
False Negatives: Uncultured Aliens
One other apparent flaw with the Turing Test is that a judge could determine that an intelligent computer is unintelligent just because it does not seem human; but it is easy to imagine an intelligent being that simply is not familiar with human experience and customs. An intelligent, conscious alien from another planet might have completely different sense perception and think or experience in vastly different ways from ours. For that matter, many humans might fail such tests because they come from a sufficiently dissimilar cultural background. If asked to write about love, someone from a culture that expresses romantic love differently might give answers that appear wrong to the judge.
The Turing Test certainly does not seem a perfect empirical test of intelligence, but we are still stuck with the problem of how to determine intelligence. In my opinion, the Turing Test may still be valuable and useful despite its imperfection as an empirical test. Consider the problem from a different view. Imagine a computer was created that could pass the Turing Test, and then this computer was put into a carefully crafted human-like body. It might have skin that felt real, mechanisms that simulated digestion for the intake and disposal of food, artificially-produced blood that ran through artificially-crafted veins, simulated eyes and ears, a mouth that produced human-like vocal sounds, and basically - short of an autopsy that would kill any human - this disguised computer looked and felt and moved just like a human, passing the Turing Test time and again when it interacted with humans.
How do we know that the human beings we interact with every day are not such skin-covered computers? Simply put, we do not know. However, despite this possibility we still consider the people we meet as intelligent human beings - and I think that is a very reasonable thing to do until given some strong evidence to the contrary. In the same manner and for similar reasons, I think it is just as reasonable to assume that a skinless computer which can pass the Turing Test is intelligent, until given reason to think otherwise. In this way, the Turing Test becomes less a question of complete empirical proof, and instead becomes a useful functional tool.