What is the Chinese Room? (...And There's Zombies In There?!) - Part I

Hello, and welcome to a new philosophical exploration! This one is near and dear to my heart, because it discusses two of my favorite things, and it grew out of one of the wildest ideas I had while taking my Epistemology class in undergrad! In this article, I will explain the thought experiment that forms the first half of my thesis: Searle's famous 'Chinese Room Experiment'. In the later articles in this series, I will explain the second half, something called "philosophical zombies" (I know, it sounds so cool!), and draw a very explosive inference from the two. But for now, let's dive into Searle's thought experiment.

1. Introduction:

After the famous ‘Turing Test’ asked whether intelligence can be achieved by a machine, Berkeley philosopher John Searle posited an argument that has since become one of the most discussed in the field’s history. Called the ‘Chinese Room Experiment’, it is meant to highlight a subtle difference between perceived and “realized” understanding, making the point that a machine can never be labelled intentional or responsible in the same way human minds can. This is a stark departure from Turing and the school of thought aligned with the “computational theory of mind” [1], which draws parallels between human thinking and algorithmic processing systems.

2. The Experiment:

I will simply present the argument and Searle’s conclusion in this section for the sake of brevity; we only need an understanding of the features of the Chinese Room for this essay and are not concerned with the replies or justifications of intelligence surrounding it. The Chinese Room argument is presented as follows: 

“Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese.” [2] 

From this, Searle comes to the following view: 

“In the Chinese case I have everything that artificial intelligence can put into me by way of a program, and I understand nothing; in the English case I understand everything, and there is so far no reason at all to suppose that my understanding has anything to do with computer programs; that is, with computational operations on purely formally specified elements…. these [programs] by themselves have no interesting connection with understanding. They are certainly not sufficient conditions, and not the slightest reason has been given to suppose that they are necessary conditions or even that they make a significant contribution to understanding.” [3] 

Searle makes it abundantly clear that he rejects functionalism as the right account of intelligence: the Turing Test counts the Chinese Room as intelligent because it assumes mental states, like understanding, can be demonstrated through their functional roles. According to Searle, understanding requires more than functional implementations to be realized [4]. The idea that mental states are more than purely functional states precisely violates the doctrine of functionalism. Now the question becomes: what more can we learn about “understanding” so that we may give a better account of the constitutive “mind” than the one functionalism provides? This argument serves as a starting point for an interesting intersection of cognitive science and epistemology. 
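To make the “program” half of the setup concrete, here is a minimal sketch of the room as pure symbol matching. The two-entry rule book and the symbols in it are invented placeholders standing in for Searle’s book of instructions, not anything from his paper; the point is only that every step is formal pattern-matching, with no step at which the meaning of the symbols is consulted:

```python
# A toy "Chinese Room": the rule book maps input symbol strings to output
# symbol strings. The entries here are invented examples for illustration.
RULE_BOOK = {
    "你好吗？": "我很好。",          # "How are you?" -> "I am fine."
    "你叫什么名字？": "我没有名字。",  # "What is your name?" -> "I have no name."
}

def chinese_room(symbols: str) -> str:
    """Look up the incoming symbols and pass back the prescribed output.

    Note that this is pure string matching: at no point does the function
    (or the man in the room) access what any symbol means.
    """
    return RULE_BOOK.get(symbols, "我不明白。")  # default: "I don't understand."
```

A questioner outside the room who only sees `chinese_room("你好吗？")` return `"我很好。"` might credit the room with understanding, yet the lookup clearly understands nothing — which is exactly the gap between behavior and understanding that Searle is pointing at.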

3. The Setup for Discussing Zombies:

That wasn't so complicated, was it? If you read more into the responses to Searle, you will see why this experiment became the spiritual successor to Turing's question of machine intelligence. But we are taking this in a different direction: we are leaving the computer science realm and going deeper into the idea of meta-consciousness and what it means to be a "human" rational agent (the human part is what differentiates us from some imaginable logical test-subject). To do this, we pick up a very interesting and famous concept in epistemology: the idea of 'philosophical zombies', or simply 'zombies' for our purposes.

The notion of “zombies” is a famous argument in the physicalism v. dualism debate in epistemology. My main thrust in this series of articles is to help you see that the fundamental ability of humans to be self-entrusted, moral decision-makers is in jeopardy if subjected to the Chinese Room. I will assert that the Chinese Room can not only confirm the conception of zombies, but also actualize zombies from human minds, thus transferring their burden of explaining consciousness onto us. This revelation corrupts our ability to deny zombie-like states of being and virtually eliminates the certainty with which we can claim consciousness and moral agency, something that nearly all of epistemology and ethics hinges on. Stay tuned for the next article!


REFERENCES:

[1] Cole, David. “The Chinese Room Argument.” Stanford Encyclopedia of Philosophy, Stanford University, 20 Feb. 2020, https://plato.stanford.edu/entries/chinese-room/. 

[2] Searle, John R. “Chinese Room Argument.” The MIT Encyclopedia of the Cognitive Sciences, edited by Robert A. Wilson and Frank C. Keil, 1999, http://web.mit.edu/morrishalle/pubworks/papers/1999_Halle_MIT_Encyclopedia_Cognitive_Sciences-paper.pdf. 

[3] Searle, John R. “Minds, Brains, and Programs.” Behavioral and Brain Sciences, vol. 3, no. 3, 1980, pp. 417–424, doi:10.1017/S0140525X00005756. 

[4] Searle, John R. “Is the Brain a Digital Computer?” Proceedings and Addresses of the American Philosophical Association, vol. 64, no. 3, 1990, pp. 21–37. JSTOR, https://doi.org/10.2307/3130074. Accessed 1 Oct. 2022.
