Tuesday, June 18, 2013

The Guy in the Chinese Room

Imagine that we invent a machine that can communicate perfectly in Chinese and passes the Turing Test, meaning the machine is able to convince a human Chinese speaker that it is a live Chinese speaker. Does the machine literally "understand" Chinese, or is it just simulating this ability?

John Searle created a thought experiment to answer this question. It goes something like this: suppose someone, let's say an English speaker, is locked in a room with an instruction book written in English. Every time a message in Chinese is slipped under the door, he looks up the corresponding response in the book, writes it down, and sends it back out. The instructions are so complete that his answers convince a native Chinese speaker that the guy in the room knows Chinese, although he has no clue what he is saying.
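In software terms, the room behaves like a giant lookup table. Here is a minimal sketch of that idea in Python; the rulebook entries and the RULEBOOK and respond names are my own illustrative assumptions, not anything from Searle's argument.

```python
# A toy "Chinese Room": each incoming message is mechanically mapped
# to a reply via a rulebook, with no understanding involved anywhere.

RULEBOOK = {
    "你好": "你好！",            # "Hello" -> "Hello!"
    "你会说中文吗？": "会。",     # "Do you speak Chinese?" -> "Yes."
}

def respond(message: str) -> str:
    """Look up the reply for a message; the operator never interprets it."""
    return RULEBOOK.get(message, "请再说一遍。")  # "Please say that again."

if __name__ == "__main__":
    print(respond("你好"))  # prints 你好！ without "understanding" it
```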

Searle says there is no difference between him and that machine, and he obviously does not understand Chinese. Therefore, he argues, the machine does not understand the conversation either. In a more general context, he says:
"computational models of consciousness are not sufficient by themselves for consciousness. The computational model for consciousness stands to consciousness in the same way the computational model of anything stands to the domain being modelled. Nobody supposes that the computational model of rainstorms in London will leave us all wet. But they make the mistake of supposing that the computational model of consciousness is somehow conscious. It is the same mistake in both cases." [John R. Searle,  Consciousness and Language, p. 16]

I believe there is a mistake in his example. Suppose there is a tiny guy in my head looking up responses in an instruction book and sending them out to you. Does that make me less conscious than you? As others have pointed out before, how can you say that I think the same way you do, and does the method of understanding matter?

The problem with Searle's example is that he compares a part of one system (the guy in the room) with the whole of another system (a human). Consciousness is a very abstract characteristic of things, and we do not have an accurate definition for it. And if a computational model were actually used to make us all wet, why would you think you are not really wet? I am not trying to define consciousness here, but my guess is that we call a thing conscious because it has a functionality or behavior that emerges from abstract relationships among its parts. It does not matter whether those parts are made of organic or inorganic materials: a human with an artificial eye, hand, ear, heart, or maybe someday brain is still considered a conscious being. Actually, I think "The Chinese Room" is a fitting title for Searle's thought experiment, because it is the room that is able to speak Chinese, not the guy in it.


[Added in July 2020] This is the whole point of Turing's test. We were mixing up two things, and I think he separated them with his test. One is how to categorize something, in this case an abstract, non-trivial characteristic of a more concrete object, like consciousness. The other is being able to describe how it happens. Turing emphasized that if something passes his test, he can call it conscious without even knowing how it achieved that. You or John Searle may say, "I do not agree that this specific behaviour is consciousness." But Searle's counterexample is an incomplete comparison, as mentioned above.

Another example I can think of is flying. Let's say you see a UFO flying in the air. (Yep, an Unidentified damn Flying Object.) You don't say, "I am not convinced that it is flying, because I do not know whether its method of achieving that is the same as an airplane's or a bird's."

Pouya Bisadi
