Tom Jonard's Chinese room Page

The Chinese room is a thought experiment developed by John Searle in the 1980s as a counterexample to the proposal that it is possible to build a thinking machine and to the idea that the conscious mind is simply an organic version of such a machine.  It is also taken to be an argument against Behaviorism.  As such it crosses several disciplines, joining their insights in a way not otherwise done, to raise questions about the nature of the mind and the brain.

With the invention and rapid development of the computer in the second half of the last century there arose questions about the ultimate capacities and capabilities of machines.  First of all let’s be clear:  computers are simply calculating machines.  The human mind can calculate too, just not as fast or as accurately in most cases [1].  However it has become evident that fast, complicated computers are capable of performing many other functions performed by the human mind.  Decision making, game playing, puzzle solving and visual processing are but some examples.  In a parallel development philosophers of mind saw in computers a possible analog of brain function that might lead to insights about the mind.

Surely if these developments continue the capacity of machines to mimic the human mind appears unlimited.  This is the idea of Artificial Intelligence (AI).  AI is not just a future possibility, as many systems have already been created to act as decision-making assistants or to mine information sources and correlate the results.  In these forms AI exists today.  The only outstanding question is how far we can take this technology.  The optimistic technological answer is that there is no limit.  Machines can be made as intelligent as we want, able to perform any function we can perform ourselves.  This is the outlook of Strong Artificial Intelligence.

The statement that “machines can be made as intelligent as we want” raises a fundamental question.  Intelligence is not normally an attribute of machines but of minds.  The reason it’s called Artificial Intelligence is that it is normally thought to be illegitimate to ascribe real intelligence to machines.  This points to the fundamental question raised by computer technology and by proponents of AI: can machines be made to think and not just solve problems we put to them?  Can they be made to possess a mind and perhaps decide for themselves what problem to solve?  Can they become conscious and aware of the world around them so that they make decisions based on that consciousness?

We shall overlook the question of why one would want machines to think.  There are sufficient tales in the science fiction literature on both the pros and cons of this astounding possibility.  We shall be more concerned with the possibility itself, the reality of that possibility and its implications.  The implications are not restricted to computer science alone.  That machines could possess a mind or consciousness raises the possibility that they would simply not be the first machines to do so.  That distinction might be held by the human mind.  If so then what we learn about AI might tell us as much about the nature of our own minds as about the limits of technology.

What do we mean by intelligence?  We measure the intelligence of others by testing them: asking them questions and observing and recording their answers.  What is measured in such cases is not intelligence itself but a performance capability assumed to be based on intelligence.  This idea is old and based on the common-sense observation that intelligent people are people who act or speak intelligently.  The actual processes of the mind of another are not directly available for examination.  So we depend on others reporting these processes, or on tests, to measure them.

But how shall we assess whether a machine possesses intelligence?  In the 1950s Alan Turing, a mathematician and computational theorist, proposed that the same test we use for humans should suffice.  If we can freely communicate with a machine and not be able to detect that it is a machine then we should say that it is as intelligent as any other being with which we might communicate.  If in the course of our conversations with the machine it claimed to be conscious then there would be no reason to dispute that claim.  This is known as the Turing test.  Notice that this is a behaviorist’s answer to the question of how to test for intelligence and consciousness.  It is perhaps no coincidence that Behaviorism was at the height of its prominence in psychological studies at the time the Turing test was first proposed.
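
It may help to see how little the test’s protocol actually constrains.  Below is a minimal sketch in Python (the names and the canned respondent are hypothetical illustrations, not anyone’s actual implementation) of the shape of a Turing test: the judge exchanges text with an unseen respondent and rules only on the content of the replies.

    # A minimal sketch of the Turing test protocol.  The respondent is hidden;
    # the judge sees only text replies and must decide from them alone whether
    # the respondent is a human or a machine.

    def turing_test(questions, ask_respondent):
        """Run one interrogation and return the transcript the judge rules on."""
        transcript = []
        for question in questions:
            reply = ask_respondent(question)   # the only evidence the test allows
            transcript.append((question, reply))
        return transcript

    # A trivial canned respondent stands in for the machine under test.
    canned = {"Are you conscious?": "Of course I am."}
    transcript = turing_test(["Are you conscious?", "What is 2 + 2?"],
                             lambda q: canned.get(q, "I would rather not say."))
    for question, reply in transcript:
        print(question, "->", reply)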

The implementation of the Turing test should have no impact on its outcome.  We can normally speak to others, but if we are unable to do so with a machine that should not count against it.  If we are communicating with the machine through a computer keyboard and screen and the response is slow, that should not count against it either.  The only thing we should pay attention to in the Turing test is the content of the responses.  In days when computers were slower and less capable these caveats were necessary to level the playing field.  They are still necessary in the case of the Chinese room, as we shall see.

In the case of the Chinese room we suppose that the interface the Turing tester communicates through is a slot in a door.  The tester, who is Chinese, writes his questions in Chinese on paper and passes them through the slot.  Responses come back through the same slot, also written in Chinese.  Now suppose that instead of a computer there is in the room a person with a stack of instruction books.  This person does not understand Chinese.  But the books perfectly describe the characters of the Chinese language and provide instructions for responding to any combination of them, so that it appears to the tester that a person fluent in Chinese is responding.
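
To make the setup concrete, here is a toy sketch in Python of the purely syntactic rule-following the room’s occupant performs.  The rule book here is a hypothetical lookup table with only two entries; Searle’s thought experiment of course imagines instructions covering any possible input.  Nothing in the sketch understands Chinese; the program only matches character shapes and copies out the prescribed reply.

    # A toy "rule book": maps input symbol strings to output symbol strings.
    # These two entries are hypothetical stand-ins for a vastly larger set.
    RULE_BOOK = {
        "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
        "你会说中文吗？": "会，当然会。",  # "Do you speak Chinese?" -> "Yes, of course."
    }

    def respond(note_through_slot):
        """Follow the instructions: match the note's shape, copy the reply."""
        # The occupant compares character shapes, never meanings.
        return RULE_BOOK.get(note_through_slot, "请再写一遍。")  # "Please write it again."

    print(respond("你好吗？"))  # a fluent reply, with no understanding anywhere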

Since the person in the room does not understand Chinese his responses are obviously slow, but this does not count against him, and the tester is satisfied that he is communicating with an intelligence.  But since the person in the room remains ignorant of Chinese, the fact that the tester is satisfied only disproves the validity of the test.  The Chinese room is more interesting if a group of people in a large room is substituted for the one person in a small room.  Each person in such a so-called Chinese gymnasium [2] would be responsible for only part of the process of responding to the tester and would operate solely according to instructions, since again none would understand Chinese.  This setup is more akin to the mind, where specific areas of the brain each process part of the input and generate part of the response.
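
The division of labor can be sketched the same way.  In the toy Python below (again a hypothetical illustration, not the Churchlands’ formulation) each worker blindly performs one step of the written instructions: one tears the note into characters, one looks each character up in its fragment of the rule book, one glues the results into a reply.  No worker sees the whole exchange or understands Chinese.

    def worker_split(note):
        """Worker A tears the incoming note into individual characters."""
        return list(note)

    def worker_lookup(chars, shape_table):
        """Worker B matches each character's shape against its page of rules."""
        return [shape_table.get(c, c) for c in chars]

    def worker_assemble(chars):
        """Worker C glues the looked-up characters into the outgoing note."""
        return "".join(chars)

    shape_table = {"你": "您"}   # a hypothetical one-rule fragment of the book
    reply = worker_assemble(worker_lookup(worker_split("你好"), shape_table))
    print(reply)   # "您好" -- a polite greeting no individual worker produced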

The Chinese room and its variants raise the question, “Where is the intelligence or consciousness supposedly reflected in the responses that come through the slot?”  The obvious answer is that there is more to the mind and consciousness than behavior.  The formulation of the Turing test ignores the fact that we use something like it only to confirm that others have minds like our own.  Nevertheless each of us experiences our own mind entirely differently: introspectively.  What we seek to confirm when we test or just converse with others is that they have a similar interior mental life.  The Turing test says nothing about this and nothing about what might be the source of the responses we receive during the test, which after all is the point of asking "What is intelligence?" and "What is the mind?"

On the other hand the argument that because no one in the Chinese room understands Chinese or understands the exchange with the tester there is neither intelligence nor consciousness at work also cannot be correct!  What do we suppose goes on in our brains?  Our neurons, which process bits of our perceptions and thoughts, do not individually understand the process they are a part of either.  As in the Chinese gymnasium they operate according to chemical rules written not in books but in the structure of which they are a part and the genetics from which they are formed.  If neither intelligence nor consciousness can arise from an aggregate of agents operating blindly according to predetermined rules, then neither can our minds so arise, which contradicts our direct experience.

This thought experiment shows that behavior alone cannot resolve the question of whether a machine is intelligent, or the nature of the mind and consciousness.  If anything it suggests a radical dualism.  The argument assumes artificial and natural intelligence are closely linked without asking whether that is a valid assumption.  It might be that only organic machines can be intelligent, or that only the kind of processing done in our brains can result in intelligence.  But in this context it is hard to see why we should be special in this way or why special requirements should be needed to produce intelligence.  Neurological studies are beginning to suggest that our neurophysiology is special and that the whole body, more than just the brain, may have something to do with it.

Settling these questions will require further research.

The Turing test fails to be an adequate test of any kind of intelligence because it is too behavioristic.  What is lacking is any requirement for an introspective mechanism, to be implemented in the Chinese room as a set of instructions that would tell the occupants how to act upon what they are doing so as to produce a consciousness of that action.  Of course we don’t even know yet how a brain accomplishes this, but this is a thought experiment.  The Turing test assumes that all that is intelligence is embodied in the process of interaction.  A liberal interpretation might assume that intelligence arises somehow out of the process.  But such a purely epiphenomenological approach may be wrong.  A more explicit embodiment of intelligence may be what occurs in natural minds and may be the approach necessary to make artificial ones as well.


[1] There are certain mental deficits that are accompanied by prodigious calculating capabilities, known collectively as Idiot Savant syndrome.  So technically computers are not unique in their abilities.
[2] I first encountered the Chinese gymnasium variation of this thought experiment in a 1990 Scientific American article by Paul and Patricia Churchland.

Created June 2, 2003
© 2003, Thomas A. Jonard