Chinese room

The Chinese room is a thought experiment designed by John Searle in his 1980 article "Minds, Brains, and Programs", largely as a response to Alan Turing's Turing test and functionalist approaches to the mind. It aims to prove that computers cannot be thinking machines comparable to the human brain, by showing that a program performing symbol manipulations can appear to be intelligent while lacking the comprehension intuitively believed to be part of intelligence. The experiment has become well known and influential in various scientific fields, especially cognitive science.[1]

The experiment

Searle describes the thought experiment as follows:

Suppose that I'm locked in a room and given a large batch of Chinese writing...[but] to me, Chinese writing is just so many meaningless squiggles. Now suppose further that after this first batch of Chinese writing I am given a second batch of Chinese script together with a set of rules for correlating the second batch with the first batch. The rules are in English, and I understand these rules as well as any other native speaker of English. They enable me to correlate one set of formal symbols with another set of formal symbols, and all that 'formal' means here is that I can identify the symbols entirely by their shapes. Now suppose also that I am given a third batch of Chinese symbols together with some instructions, again in English, that enable me to correlate elements of this third batch with the first two batches, and these rules instruct me how to give back certain Chinese symbols...from the point of view of somebody outside the room in which I am locked -- my answers to the questions are absolutely indistinguishable from those of native Chinese speakers. Nobody just looking at my answers can tell that I don't speak a word of Chinese.

The conclusion is that just as the man's production of coherent answers by symbol manipulation fails to show that he understands Chinese, a computer's production of coherent outputs via comparable symbolic manipulations fails to demonstrate that the computer has understanding.
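
To make the purely syntactic nature of the rulebook concrete, here is a minimal illustrative sketch in Python. The rule table and replies are invented for the example; the rulebook Searle imagines would have to be vastly larger, but the principle is the same: lookup by shape, with no grasp of meaning anywhere in the process.

    # A toy "Chinese room": the rulebook is just a lookup from incoming squiggles
    # to outgoing squiggles. Nothing in this code knows what any symbol means.
    RULES = {
        "你好吗？": "我很好，谢谢。",       # "How are you?" -> "I'm fine, thanks."
        "你叫什么名字？": "我没有名字。",   # "What is your name?" -> "I have no name."
    }

    def chinese_room(symbols: str) -> str:
        """Return whatever reply the rulebook pairs with the incoming symbols."""
        return RULES.get(symbols, "对不起，我不明白。")  # fallback: "Sorry, I don't understand."

    print(chinese_room("你好吗？"))  # a fluent-looking answer, produced with zero comprehension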

The argument sets the Turing Test on its head: a human, equipped with the time and the algorithms, imitates a computer that imitates a human being. There's nothing remarkable or special about using Chinese in this thought experiment; it is merely something sufficiently different from English to provide a good visual image to work with.[note 1] It could be many things: French, German, a made-up language, or something else entirely, such as processing scientific data loaded with jargon (i.e., we can ask whether an individual, given the right sort of tools, can imitate a knowledgeable expert without absorbing the information themselves; some people suggest that this is, in fact, what school sets out to achieve[2]).

The experiment was briefly demonstrated on an episode of the BBC's Horizon programme, "The Hunt for AI". Here, mathematician Marcus du Sautoy sat in a room and compared Chinese script put through a letterbox with a book of common phrases, successfully returning answers to questions. In this demonstration, the algorithm is far simpler than the one imagined by Searle. In reality, mimicking a language would require far more information and a far larger instruction book, and would require the person-powered algorithm to retain some memory, meaning it might take years for a human to process even the simplest idea. In the realm of a thought experiment, however, such a limitation is of little concern.
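
The Horizon demonstration amounts to a phrase-book lookup, and the need for memory can be sketched in the same hypothetical Python terms (the function names and rules below are invented for illustration): a stateless phrase book treats every question in isolation, whereas anything resembling conversation needs the algorithm to carry some state between questions.

    from typing import List

    def phrase_book(message: str) -> str:
        """Stateless lookup, as in the Horizon-style demonstration."""
        return {"你好": "你好"}.get(message, "请再说一遍。")  # "Hello" -> "Hello"; otherwise "Please say that again."

    def stateful_room(history: List[str], message: str) -> str:
        """A still purely syntactic rule that also consults the conversation so far."""
        if message in history:
            return "你已经问过这个问题了。"  # "You already asked that."
        history.append(message)
        return phrase_book(message)

    history: List[str] = []
    print(stateful_room(history, "你好"))  # "你好"
    print(stateful_room(history, "你好"))  # "你已经问过这个问题了。"

Neither version understands anything; the point is only that the instruction book and the memory both grow very quickly, which is why the person-powered version would be so slow.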

In contrast to the Turing Test


The aim of the Chinese Room thought experiment is to demonstrate that computers cannot truly comprehend what they are saying even if they do pass the Turing Test — because they lack any concept of semantics. The Turing Test, originally called "the imitation game", seeks to see whether a computer, engaged in a natural language conversation with a human, can imitate a human well enough that a second human observer cannot tell the difference.[note 2]

The Turing Test comes from a concept within computer programming: code should pass certain tests in order to be considered functional. The tests are laid out in advance, and the code is considered to "work" if it passes all of them without error, regardless of what it actually does behind the scenes. An analogy is made with artificial flight, which was achieved when people stopped trying to build a replica of a bird and simply built a machine that would pass the required test: namely, flying. In the case of the Turing Test, this principle is applied to sentience and intelligence. An outward display of intelligence and the ability to communicate are pretty much the only evidence we can get from other humans that they are conscious ("Hey, you can totally trust that I'm not a figment of your imagination or a computer program!")[note 3], unless we are medics with access to brain-imaging machinery, so the Turing Test simply holds machines to the same standard.
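
The "code works if it passes the tests" idea can be illustrated with a small hypothetical example (the functions below are invented for the purpose). A behavioural test inspects only outputs, so two implementations that work in completely different ways behind the scenes both count as "working":

    # Two ways to "add": real arithmetic versus a cached answer table.
    def add_by_arithmetic(a: int, b: int) -> int:
        return a + b

    CACHED_ANSWERS = {(2, 2): 4, (3, 5): 8}

    def add_by_lookup(a: int, b: int) -> int:
        return CACHED_ANSWERS[(a, b)]

    def behavioural_test(add) -> bool:
        """Laid out in advance; checks only the answers, never the method."""
        return add(2, 2) == 4 and add(3, 5) == 8

    assert behavioural_test(add_by_arithmetic)
    assert behavioural_test(add_by_lookup)

By this standard both implementations "work", despite having nothing in common internally; the Turing Test applies the same standard to conversation.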

Searle's Chinese Room reverses the test-focused nature of the Turing Test, suggesting that true understanding is not discernible on the basis of an external (and apparently superficial) result. Instead, it questions what must be going on inside the machine's brain to see if it really does understand. The problem here is that such a thing is difficult — all we have to prove a human is conscious is their ability to imitate a human, as cutting open a brain and having a poke around doesn't, in itself, prove anything.[note 4] Turing did design the imitation game to avoid this sort of difficulty, drawing inspiration from the test-focused concepts within computer science.[note 5]

Problems

There are a few problems with this interpretation of the result, however, as well as with the form of the thought experiment itself.

Consciousness and emergence

A computer that passes the Turing Test is no more alive when it is switched off than its code is alive when it's printed onto paper and left stored in a room, much in the same way that a heap of neurons on its own isn't alive (most people agree that a corpse that hasn't yet rotted to dust isn't actually alive). It takes all of the parts working together to make consciousness. When a hypothetical Turing-Test-compliant computer is switched on and its program executed, it produces a result indistinguishable from human consciousness. Because of this, asking a man in a room whether he understands Chinese while following an algorithm misses the point of how consciousness operates and where it actually stems from. In the isolated room, it's not the man that needs to comprehend or understand Chinese, but the code and the instructions. The man is simply a tool for executing the program, much in the same way that blood supply and electrical conduction by potassium and sodium ions are tools for executing the functions of the human brain. It is the algorithm itself, combined with the operations of the man executing it, that understands Chinese.

Searle responded to these criticisms by suggesting an extension in which the man completely internalizes the necessary algorithms, remembering and executing them within his own head. In this case, no external activity is doing the understanding; everything happens inside the human brain, yet the man in the thought experiment still doesn't understand Chinese.[3] However, critics are quick to point out that regardless of where the algorithm is executed (with pens and paper, on a computer, or in someone's head), it is still the algorithm, combined with the ability to execute it, that does the work and the understanding. Thus, Searle's refutation doesn't actually answer this criticism at all; indeed, it's probably indicative that Searle didn't really understand the criticism in the first place. The Chinese Room ignores any emergent properties of consciousness: that it exists as part of the ordering and execution of the algorithm within a computer, or within the vast network of neurons in a human brain.

It's the algorithm, stupid

The experiment really proves little overall. The man is going through the same algorithms executed by a computer that can "understand" Chinese; he may be doing them manually (or even in his own head), but he is still performing them in the same manner. The pens, paper, and filing cabinets needed to do this can be thought of as a form of "help". We can give him more help with a calculator, then a robot assistant to go through the filing cabinets and flip the pages in the instruction book. We can keep adding "help" and automation gradually until the man is essentially typing into a computer to get the result, and at no point do we cross a line where the work suddenly shifts from the man to the computer. The man in the room is now talking to a Turing-Test-passing computer! A similar gradual process applies to Searle's internalization variant, although this would raise the issue of a separate Turing-Test-compliant computer inside the man's head and whether it would cause a bit of a schizophrenic episode (though as this is a thought experiment, such a thing would likely be impossible in reality and isn't really a problem).

But basically, this pretty much brings us back to whether the man, the agent that simply executes the emergent programme, needs to understand Chinese at all. For the thought experiment to conclude that computers cannot be conscious, this needs to be demonstrated. The man himself has no more need to understand the Chinese than the atoms in his body need to understand English when he describes the odd day he's been having, pushing strange symbols around in a locked room, to his friends down the pub.
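
The gradual-automation point can be put in similarly hypothetical code terms (the rules here are invented placeholders). The same rulebook can be walked through entry by entry "by hand", standing in for the man with his filing cabinets, or executed in a single automated step; the room's answers are identical either way, which is why swapping the executor never changes where the "understanding" sits.

    # The same rulebook, two executors.
    RULES = {"问题A": "回答A", "问题B": "回答B"}  # "question A" -> "answer A", etc.

    def executed_by_machine(message: str) -> str:
        """The fully automated room: one dictionary lookup."""
        return RULES.get(message, "？")

    def executed_by_hand(message: str) -> str:
        """The man in the room: scan the rulebook entry by entry."""
        for pattern, reply in RULES.items():
            if pattern == message:
                return reply
        return "？"

    # However the crank is turned, the room behaves the same.
    for msg in ["问题A", "问题B", "问题C"]:
        assert executed_by_hand(msg) == executed_by_machine(msg)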

Response vs Initiative

There remains, of course, the fact that a man inside the Chinese Room, conversing with a fluent speaker of Chinese, would only be able to respond. He could receive input and produce an accurate response, and could even give an output in the expectation of a particular input, yet he could not, for example, ask a series of questions of his own in order to obtain particular information. He would be unable to ask where he is, as he would not know the translation; likewise, he would not know how to ask for food, or drink, or to be released. In this way the experiment does make a distinct point: a consciousness has to be able to learn, and to sustain itself. The man could arrange questions until food was provided, and then, through testing, associate that question with food; he would then have learned a question in Chinese that produces food as an answer, and would know something about that part of the Chinese language. This contradicts the experiment's conditions, as he would in fact begin to understand Chinese; thus a system must have the capacity to learn, rather than just to respond.
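
The learning step described above can be sketched as a trial-and-error loop. Everything here (the candidate phrases and the stand-in "environment") is invented for illustration, but it captures the idea: once a symbol becomes reliably associated with food, the man has started to attach meaning to it, which is exactly what the original set-up forbids.

    # Trial and error: keep producing symbols until one reliably yields food,
    # then remember the association. The "environment" is a toy stand-in for
    # the world outside the room.
    import random

    def environment(phrase: str) -> str:
        return "food" if phrase == "请给我食物" else "nothing"  # "Please give me food."

    candidates = ["你好", "请给我食物", "再见"]  # "hello", "please give me food", "goodbye"
    learned = {}

    while "food" not in learned.values():
        phrase = random.choice(candidates)
        if environment(phrase) == "food":
            learned[phrase] = "food"  # this squiggle now means something to him

    print(learned)  # {'请给我食物': 'food'}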

Want to read this in another language?

If you are looking for this article in Portuguese, see Quarto chinês.


Notes

  1. Assuming an English-speaking audience, and thereby introducing a cultural bias, of course.
  2. Given the aims of the Turing Test, there are certain additional constraints. In particular, the human observer should not make a judgment based on the computer's external appearance (which might make it obviously not human), so the computer and human interlocutor are meant to be hidden from view and communicate remotely.
  3. Introverts are more likely to think either of these things than extroverts.
  4. It also tends to preclude the continued functionality of said brain.
  5. Of course, the success of Searle's argument might not have enormous practical consequences for the computer scientist. Just as making a bird might be a bad idea if you only care about getting off the ground, making a computer with understanding might be a bad idea if all you care about is getting certain external behaviors. It would still be a mistake, however, to conclude that you are a bird, just because you can fly.

References

  1. Cole, David. The Chinese Room Argument. Stanford Encyclopedia of Philosophy. February 20, 2020.
  2. Less Wrong — Guessing the Teacher's Password.
  3. Searle, John. Minds, Brains, and Programs. Behavioral and Brain Sciences. 1980.