Talk:Chinese room

This Philosophy-related article has not received a brainstar for quality. Please consider expanding the article appropriately. See RationalWiki:Article rating for more information.

Nice start. More will surely come. ħumanUser talk:Human 04:42, 7 November 2009 (UTC)

I was thinking quite a bit yesterday about why the argument is total bollocks (it mostly is, IMO). Now I'm in a pickle about whether to write these thoughts down or whether to play Tomb Raider. Decisions, decisions... Scarlet A.pngmoral 19:53, 7 November 2009 (UTC)

Not sure about some of this

Though it seems that Searle must be wrong, I think some of our counter-arguments are flawed as well. For example, this paragraph:

First, the thought-experiment ignores the emergent nature of consciousness, which is the most widely accepted idea about what the phenomenon is. A computer that passes the Turing Test is no more alive when it is switched off than the code is alive when it's printed onto paper and left stored in a room. When the computer is switched on and the program is executed, however, it produces a result indistinguishable from human consciousness. Therefore, in the isolated room, it's not the man that needs to be comprehending or understanding Chinese, but the algorithm itself when combined with the operations of the man. The man no more needs to understand Chinese than the carbon atoms in his shoulder need to understand English when he describes to his wife the odd day he's just had pushing Chinese symbols around.
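
(For concreteness, the "algorithm" in that paragraph amounts to nothing more than a lookup from input symbol strings to output symbol strings, with no meaning attached at any point. Below is a toy sketch of such a rulebook; the rules are invented for illustration and are not anything Searle or the article actually specifies.)

```python
# A toy stand-in for the room's rulebook: input symbol strings map to output
# symbol strings by pure pattern-matching. The rules below are invented for
# illustration; neither Searle nor the article specifies any actual rules.
RULEBOOK = {
    "你好吗": "我很好",            # "How are you?" -> "I'm fine" (the operator need not know this)
    "你叫什么名字": "我没有名字",   # "What's your name?" -> "I don't have a name"
}

def operate(symbols: str) -> str:
    """Apply the rulebook mechanically; fall back to a stock reply if nothing matches."""
    return RULEBOOK.get(symbols, "请再说一遍")   # "Please say that again"

if __name__ == "__main__":
    # The "room" answers in Chinese, yet nothing in this process attaches
    # meaning to any symbol, which is exactly the intuition Searle exploits.
    print(operate("你好吗"))
```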

The problem with this is that the system could be simplified: the man could theoretically memorize the algorithm so that he is the entire program, from start to finish, but he still would not understand Chinese. I suppose someone like Daniel Dennett would argue that this still creates a virtual mind that has understanding, but I'm not sure if this is true. (And if Dennett is right, why is the "virtual mind" unrelated to the man's mind?)
Thoughts, anyone? Tetronian you're clueless 20:50, 23 November 2009 (UTC)

Well, you did answer your question with Dennett's idea. Even after memorising the algorithms, the man is still using the algorithms rather than "his own" algorithms. And it's the algorithms that generate the consciousness emergently when they are executed, regardless of what (or who) is executing them. The man still doesn't have to comprehend anything, even if he does memorise it and execute the program mentally. Really, asking the man to comprehend Chinese is like asking his tongue to comprehend English, or his feet to comprehend a map of a place he's walking. Scarlet A.pngmoral 20:55, 23 November 2009 (UTC)
I accept emergence, but I'm struggling to comprehend Dennett's idea. I accept that virtual minds exist, but as Searle says, it is only "virtual." Is there a distinction between a simulated mind and a real one? Or is Dennett right in saying that our minds are virtual as well? Tetronian you're clueless 20:59, 23 November 2009 (UTC)
Well, the thing about the Chinese Room idea is that it concludes that there are virtual minds and "real" minds, but that's also one of the premises; if you don't accept that the man is "real" and the machine "virtual" in the first place, its conclusion is nonsense. But the thing is, if there were virtual and real minds, what would the difference be? You can't directly detect it; a brain scan isn't going to detect the picture of what you're imagining any more than staring at a CPU is going to tell you what program is running. So you have to test what the effects are. Hence the idea behind the Turing Test: if someone interacts with an individual that simulates a person perfectly, then you've got the exact same evidence that the mind is "real" that you have that a person's mind is "real". So as you can't detect a difference, you may as well say that there is no difference. Scarlet A.pngmoral 21:12, 23 November 2009 (UTC)
So, to quote our article: "Finally, the entire concept begs the question that human consciousness is special and different." I see your point there: as the WP article says, "simulation is as good as the real thing." So I suppose you could say that the Turing Test is valid, but not for the reasons that Turing intended. Very interesting. But couldn't it be argued that your point begs the question as well? You are assuming that human consciousness is not special or different, whereas Searle assumes that it is. Even if we concede that Dennett is right about there being a virtual mind at work here, I don't see how you could prove that all virtual minds are alike.
BTW, your sig just said "I'll ruffle your philosopher." Very appropriate, haha. Tetronian you're clueless 21:19, 23 November 2009 (UTC)
I don't think it is begging the question that "virtual" and "real" minds are the same; in fact, that's not really what is being claimed. I'm just saying that they have the same properties (judged via the Turing Test or something similar), so it's far more presumptuous to say that they're different entities than to say that they're the same. For example, an electron has the same properties as any other electron. Are they the same electron? They can't be distinguished with any known experiment, so for all intents and purposes they are identical (to be fair, that example isn't as good as I thought it was, but I'm putting it out there anyway). It doesn't particularly matter if something is "different" if you can't detect that difference - indeed, the fact that you can't detect a difference is what makes something identical, so they're really only "different" in a metaphysical or philosophical sense. Scarlet A.pngmoral 21:37, 23 November 2009 (UTC)
I still think that a virtual mind created by a running computer program would be radically different from a human one, but I see your point. Here's the thing, though: isn't the Turing Test irrelevant? According to Dennett, something as radical as this creates a virtual mind, but there is no way it would pass the Turing Test. Sure, Searle's hypothetical man does pass the test, but that doesn't seem to matter much.
I guess my argument is simply an argument from incredulity, which is obviously a fallacy but is still hard to dispel. I fail to see how Searle's setup creates a virtual mind, but emergence tells me that it does anyway. Tetronian you're clueless 21:44, 23 November 2009 (UTC)
If you're not 100% convinced, I'm happy to work on some "pros" to the argument; most of that part came from the fact that I had a long walk home just after I read the initial description of the experiment when it was posted up elsewhere. While I'm all for one-sided articles, a bit more coherent discussion first is always good. Scarlet A.pngmoral 21:47, 23 November 2009 (UTC)

(undent) I like this from WP on the China Brain:

Our intuition that this is impossible is just a bias against non-neuron minds, furthered by the implausibility of the scenario. There is a natural desire for us to locate the mind at a singularity because the mind feels to us like it is just one thing. Functionalist philosophers of mind endorse the idea that something like the China brain can realise a mind, and that neurons are, in principle, not the only material that can create a mental state.

It's very much what I think when I see the setup for the Chinese Room. People like the nice narrative of a "special" human mind, so they set up a bias that says "me: alive; computer: not alive", which is all well and good based on common sense and intuition, but there's no real reason that should be the case in reality. What if we could simulate a brain perfectly in a computer, and then transfer a person's consciousness into it? It would then not only pass the Turing Test with respect to replicating a person, but with respect to replicating a specific person. I think that would be a very interesting extension to ponder. Scarlet A.pngmoral 21:52, 23 November 2009 (UTC)

That is a good point, though I'm not sure exactly how you would transfer consciousness. Although I do remember reading an article somewhere about simulating brains by simulating the interactions of neurons using a supercomputer. In any case, I'm going to do some reading on this as well; I don't think I really know enough to truly form an opinion. Tetronian you're clueless 21:58, 23 November 2009 (UTC)
Well, I nearly put "transfer of consciousness" in quotes because it's more a thought experiment than anything practical; it's nice sci-fi but I'm not convinced it'll happen realistically. Anyway, it's very intuitive to think of a mind as something special and unique, and very difficult to think that China could simulate a brain and produce an emergent consciousness. But think of it this way: you know that you are conscious and have a mind (more accurately, I know that I am); you don't have the same proof for other people besides their assurances that they are alive and conscious - you can infer that because they look like you, they think like you, but that's not very good reasoning; it's certainly not airtight. So if all you really have are assurances and you're happy to accept that, does it matter if those assurances come from a human being, a computer, mainland China doing something very weird or half a boiled cabbage? Scarlet A.pngmoral 22:12, 23 November 2009 (UTC)

Why Chinese?

I'm not quite sure how this relates to the argument, but I'm just gonna throw this in here anyway: unlike other written languages, it's possible to learn to read & write Chinese without understanding the spoken language(s) at all, because it's a pictographic script (the symbols represent words & concepts, not sounds). I don't know whether this is why Searle chose Chinese, or whether it's just coincidental. WéáśéĺóíďWeaselly.jpgMethinks it is a Weasel 21:27, 23 November 2009 (UTC)

I think it's just coincidental. Speaking is not really the important point; understanding is. Tetronian you're clueless 21:30, 23 November 2009 (UTC)
I think the primary reason for Chinese is that it is dramatically different (at least in appearance) from English. It's arbitrary, of course, but using Chinese emphasises the point slightly more. Scarlet A.pngmoral 21:30, 23 November 2009 (UTC)
Probably the nature of the alphabet used. With French or German, say, you could probably roughly guess what the words are. Unless you know Chinese, there's no way you can assign a correct meaning to the symbols. He probably could have just as easily used Cyrillic, Greek or Hebrew. --PsygremlinSiarad! 21:34, 23 November 2009 (UTC)
Nope. Korean might work instead of Chinese, but all three of the examples Psygremlin gives use related alphabets, and they're also related to Latin, i.e. this alphabet. The letters A, Alpha and Aleph aren't similar by eerie coincidence but because they're all descended from an earlier symbol which already represented roughly this vowel. As a result, Searle's argument would be confused, because the person is clearly doing some of the heavy lifting by recognising some of the symbols and some of what they mean, even if Searle argues that they don't understand the entire language.
What Weaseloid wrote isn't really true, by the way. A very long time ago a predecessor of Chinese probably did mostly work this way, but Chinese doesn't. Pictographs aren't expressive enough to do a full-blown language; they're easy enough for "Toilets" vs "Trains" but not great on "Discipline" vs "Enthusiasm", so no extant written languages rely on pictographs. The Han (Chinese) characters are instead loosely logographic: typically one or two characters represent a word, and the specific word is communicated by combining a "radical", which suggests the general topic, with other elements which are included for their sound alone. For example (lazily stealing from Wikipedia), 嘿 indicates amusement (like "Ha!") and is assembled from a radical 口, which suggests an exclamation of some sort, and 黑, which means black. But this word 嘿 doesn't mean anything to do with black; that component is just there to indicate that 嘿 sounds like the word for black, as distinct from, say, 哎, which incorporates the same radical (so we can guess it's also an exclamation) but incorporates strokes from a character meaning "big", not because it's an interjection about enormousness, but to suggest that this word (roughly meaning "What?!") sounds a bit like the word meaning big. Tialaramex (talk) 02:31, 19 August 2014 (UTC)

Problems with problems.

The second paragraph of the Problems section claims that the thought experiment fails to take into account the possibility that the computer/algorithm system (or man/algorithm system) is conscious and understands. Searle explicitly acknowledged and offered a rebuttal to this claim. Whether his rebuttal works or not is not really my point. Rather, it is disingenuous to include the criticism without commenting on his rebuttal. Phiwum (talk) 12:24, 18 October 2010 (UTC)

Do you have a link to the rebuttal? I haven't read it before and it would be interesting to see how he gets around it. Scarlet A.pngmoral 13:02, 18 October 2010 (UTC)
It's in "Minds, Brains and Programs", the first reply (the systems reply). A PDF of the penultimate draft can be found online at http://www.google.com/url?url=http://citeseerx.ist.psu.edu/viewdoc/download%3Fdoi%3D10.1.1.83.5248%26rep%3Drep1%26type%3Dpdf&rct=j&sa=U&ei=D0a8TIDCFMKx8Ab4pOSVBA&ved=0CBUQFjAA&q=searle+minds+brains+and+programs&usg=AFQjCNEgNveKIhcMoEKFtTvcu3ARlQyr_w&cad=rja, but hopefully editors of this page have a copy at hand anyway.
His basic idea is to let the man internalize the algorithm, so that there is no algorithm distinct from the person. Then the distinction between the man and the man/algorithm system is redundant. I've heard others try to argue that the algorithm nonetheless remains distinct, as if it were a platonic entity itself, independent of its tokens/implementations. Again, I'm not necessarily defending Searle's reply (nor rejecting it), but rather want to point out that the objection raised here has been explicitly considered in the original article. Phiwum (talk) 13:11, 18 October 2010 (UTC)
I see, but I agree with his critics that the algorithm is distinct. It also still begs the question of some sort of dualism for his reply to that issue to even make sense. He may have a point in the difference between knowing that "squoggle" follows "squiggle" rather than that "squiggle squoggle" means "hamburger", but the thought experiment itself doesn't interrogate the algorithm as to whether it knows this fact or not. I wouldn't ask a CPU if it understood the HTML that composed this RationalWiki page - because of course it doesn't understand the HTML of this page. But the program running on that CPU does. I can't see a difference between the man totally internalizing the method and totally placing it inside a computer. His argument also seems to stem from some personal incredulity about emergent consciousness: it's totally possible for a mess of neurons to produce consciousness, but it isn't possible for an equally complex array of paper and pens to produce consciousness? We could compare the China Brain thought experiment there. Overall, it's worth mentioning some of his responses, but judging by that page, I don't think he's fully grasped the criticism in the first place. Scarlet A.pngmoral 13:35, 18 October 2010 (UTC)
Anyway, thanks for the info. I will sit and process it into something more coherent than the above paragraph. Scarlet A.pngmoral 13:37, 18 October 2010 (UTC)
To be fair to the point of the thought experiment, it is a purely negative result. He doesn't have to explain how neurons can produce consciousness in order to argue that purely formal symbol manipulation is insufficient to do the same. (He did write about a positive account of consciousness in later works, but I don't recall even the broad picture of that.)
As well, this notion that an algorithm exists and is conscious in some abstract sense — that is, that the algorithm itself, independent of any implementations, is the sort of thing that can understand Chinese — well, that's just not a position I'd like to have to defend. Abstract entities aren't the sorts of things we usually regard as conscious.
If I were to work on criticisms of Searle, I think I'd find published criticisms and summarize them here rather than try for an original response. Shouldn't be too hard to find. As I recall, the "combination reply" was largely regarded as his weakest response back when I was studying this stuff. Phiwum (talk) 13:51, 18 October 2010 (UTC)
The trouble with the published criticisms is that they're already cataloged on the monstrously large Wikipedia article on the subject. It's nice to list them as external links but from the perspective of an RW article, it doesn't have to compete with Wikipedia and, really, it shouldn't.
However, I think that Searle has to explain what is the difference between interacting neurons and a pen-and-paper algorithm in order for the experiment to show what it intends to show. Otherwise all it does is assume that what is inside the man's head is different to what is going on in the room to conclude that what is going on inside the man's head is different to what is going on in the room. To support the conclusion that the human brain is different to an algorithm you either need to pre-assume something like dualism or do some special pleading on behalf of the brain - which is what it is when someone says that abstract things like algorithms don't have consciousness. Scarlet A.pngmoral 15:46, 18 October 2010 (UTC)
I suppose you and I disagree on this point. In order to show that formal symbol manipulation is insufficient for understanding, I don't see any reason he needs a positive account of what neurons do. It's clear (to me) that I'm conscious and I presume that it's because of my neural activity. If his argument regarding symbol manipulation works, then I can conclude that whatever my neurons are doing, it's something besides (or in addition to) mere symbol manipulation.
I reckon that's how the argument is intended to be taken. Phiwum (talk) 21:02, 18 October 2010 (UTC)
But it's far from clear to me that you're truly conscious. If I ask a Turing Test-passing computer if it's conscious, it replies "yes, of course I am". I say "prove it" and it will say "well, prove to me that you're conscious". Thus the Turing Test provides a proxy for consciousness based on the end result being the same - it's the exact same quality and level of evidence that we apply to humans as we apply to a machine. Searle's Chinese Room tries to refute the use of the Turing Test by assuming that the human brain is special, and the problem is it circularly concludes that the human brain is still special compared to a machine. I suppose you could say that the main criticisms of Searle's thought-experiment assume the brain isn't special - and then conclude that it still isn't special. However, it's a simpler set of assumptions this way. And also, based on the evidence we have, the brain is entirely materialistic, without any special soul-like entity preventing a machine from doing the same. As there's no clear distinction between artificial and natural and, in principle, no clear distinction between a brain and a computer that can perfectly simulate a brain, you have to assume some form of non-materialist dualism to conclude that Searle's thought-experiment correctly refutes the use of the Turing Test. Its criticism of the Turing Test is valid to a point, but to accept it, you have two options: assume that no one is conscious (or at least that we can never prove that anyone else is) or go for Biblical levels of special pleading for the brain (i.e., invoking dualism). Scarlet A.pngmoral 14:21, 19 October 2010 (UTC)

While I agree with most of the above criticisms of the experiment as criticisms of the possibility of artificial intelligence, I had always taken the Chinese Room experiment to be proving a much narrower point, namely just that the Turing Test itself, which only works on the level of language and semantic understanding, is insufficient for establishing artificial intelligence/consciousness. I.e. because the person in the Chinese Room can pass the Turing Test, we need something better than a purely input-output, language-based test for assessing the consciousness of machines. I'm too tired to do it right now, but it would be nice if this could be pointed out in a nicely worded way (i.e. just that maybe the experiment can be read in a slightly narrower way), especially since at the moment the criticisms section looks a bit too biased in favour of AI/Singularity/computers-can-do-everything views. — Unsigned, by: 110.20.5.233 / talk / contribs

You can't say "we need something better [...] for assessing the consciousness of machines" without also having to say "we need something better for assessing the consciousness of other humans". Scarlet A.pngpostate 17:51, 27 March 2014 (UTC)
Indeed, this was the thrust of my discussions with Stevan Harnad (founder of the journal where Searle published this idea) back when I was a postgraduate student. Harnad believes, and he thinks Searle would concede, that in practice if an AI seems to us to be conscious, it's because it's conscious and not because of some bizarre trick invoked by a philosopher. The Chinese Room argument depends upon Searle asserting that he does not understand Chinese, and the only problematic case with the AI would be if it stubbornly insisted it was not conscious. A bunch of seemingly conscious aliens who insist that nope, they're actually just sophisticated automata and not ethically equivalent to us would be very confusing for us. For example, can we slice the aliens open to check inside them? It might seem as though we can: they believe they're just automata, and it would be ethically OK to risk damaging the inside of a mere machine. But what if they are, as we suspect, wrong? Then we're perhaps injuring or even killing another person, just to prove some sort of point. Well that's clearly not OK! Tialaramex (talk) 03:18, 19 August 2014 (UTC)

Another country heard from

I hope I’m not just talking to myself – I notice that the most recent post here is some years old. Still, in the hope that someone will read this, here goes: I’d really like to contribute to this article, but I have some problems with doing so. It seems to me to be largely a polemic against the CR argument rather than a balanced discussion. Certainly there is no neutral point of view as in Wikipedia, even within the brief summed up at the top of the RationalWiki main page. I presume nobody is claiming that Searle’s argument is ‘pseudoscience’ or part of the ‘anti-science movement’, let alone a ‘crank idea’ or part of ‘authoritarianism’ and ‘fundamentalism’.

Now I think that there may be some misunderstanding or misrepresentation of Searle, and some arguments that are mistaken or maybe invalid. As I say, I’d like to put my thoughts forward, but I’m having trouble seeing how to do this without drastic restructuring of the article, which could come over as a hostile act. That’s not my intention. Perhaps I could intersperse material sympathetic to Searle, or responses to some of the argumentation in the existing article, to create a kind of Socratic dialogue?

I’d like comments in response to this, before I spend some time drafting my contribution. Thoughts, please?

Graham — Unsigned, by: Gwarner99 / talk / contribs

Strong AI Folly Nailed Dead-to-Rights By Searle's Chinese Room

RationalWiki, who are you kidding with this smoke-and-mirrors? See a detailed response at Essay:The Death Knell of Dualism?. More Than Magnetic Ink (talk) 00:02, 17 October 2022 (UTC)

Even though I respect noted Sex-Pest John Searle’s thought experiment, I think there are multiple grounds for rejection from this essay — especially in its characterization of consciousness and its commitment to the existence of volition. There are physicalist positions that are not reducible to the kinds of computational theories Searle’s argument undermines and are not characterized by emergentism. I personally reject both in favour of a connectionist theory of mind, which is accepted within the philosophy of mind as not subject to Chinese Room objections or objections to emergentism. If your entire characterization of “materialism” rests upon only those two positions, then it’s arguably not representative of various physicalist stances like my own and those of many contemporary cognitive scientists. It would read to some as a strawman, since most contemporary advocates for AI do not work from the perspective of the mind as a Turing machine or even of consciousness itself as an emergent property. Consciousness isn’t an essentialist concept that necessarily entails volition or qualia. - Only Sort of Dumb (talk) 00:19, 17 October 2022 (UTC)
Either way, we'll find out if consciousness can be exhibited by non-organic beings or not soon enough. Vee (talk) 01:12, 17 October 2022 (UTC)

Circular Assumptions

This section of the article is problematic, for multiple reasons:

  • In order to accept the conclusion that a machine cannot comprehend the same way as humans do This is already a misrepresentation of the argument; Searle takes the brain to be a thinking machine; he denies that it is a wholly computer-like machine.
  • you must assume that there is something different about the man and that he must somehow understand Chinese throughout the process There is something different about a man, namely, that his brain has a different material constitution from a standard man-made computer (and from the Chinese Room). The assumption that a man can understand Chinese is taken to be obvious, and the counterclaim that no person can understand Chinese to be obviously absurd. This is not unreasonable on its face, so it is a strange choice for critique.
  • Compare a brain running on connected neurons and a computer running on silicon that completely and perfectly simulates a brain This sort of seems to beg the question against Searle.
  • the Turing Test is valid The Turing Test, taken as a test of machine intelligence, is sort of terrible. Arguably, for some subjects, ELIZA passed the test back in the '60s, and was far from intelligent (see the sketch after this list).
  • assumes some non-materialist dualism first Searle does not take his argument to establish dualism.
  • That understanding, some semantic "magic", can happen only in organic brains is nothing but bio-chauvinism that one has no right to assert. Searle does not assert it; he provides an argument that he takes to establish that there is a significant difference between the brain and a standard computer, namely, the Chinese Room argument.
  • we have no proof that this exists in others, so are reliant on projecting our consciousness onto people we can interact with. That this heuristic for assessing consciousness should only apply to human beings and cannot under any circumstances apply to computers is just special pleading. This seems to confuse absolute proof with evidence, and to suggest again that Searle is just asserting his position rather than providing an argument for it. While a number of critiques have been raised, this section of the article does not really articulate any of them. Searle is arguing that symbol manipulation is insufficient for understanding. If that is true and computers do nothing but symbol manipulation, then they are incapable of understanding. Searle does not take this to establish dualism; he takes it to indicate that the brain does something that cannot be wholly reduced to symbol manipulation, even if it does only physical things.
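
Since ELIZA is invoked above as evidence of how weak the test is, here is a minimal sketch of the pattern-and-reflect trick that kind of program relies on; the patterns are invented for illustration and are not Weizenbaum's actual script:

```python
import re

# ELIZA-style pattern-and-reflect rules. These are made up for illustration;
# Weizenbaum's real script was larger, but it worked on the same principle of
# matching a keyword and echoing part of the user's input back as a question.
RULES = [
    (re.compile(r"\bi feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmy (.+)", re.IGNORECASE), "Tell me more about your {0}."),
]

def respond(text: str) -> str:
    """Return the first matching canned reflection, or a stock prompt."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please go on."

if __name__ == "__main__":
    print(respond("I feel that nobody understands the Chinese Room."))
    # -> "Why do you feel that nobody understands the Chinese Room?"
```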

Undoubtedly, I am also not representing Searle with complete accuracy and charity, because it has been a long time since I read his paper, and I suspect I am going rather beyond the scope of what Searle states explicitly. But I know that I am closer than this section of the article is, and I frankly see no reason to try to keep the section, given how far off target it is. 𝒮𝑒𝓇𝑒𝓃𝑒 talk 00:53, 17 October 2022 (UTC)