A Zombie Walks Into A Chinese Room... And Breaks Reality - Part III

Please read Part II (click!) and Part I (click!) first! This is going to be the final (and most exciting) article in the series, because it all comes together! Here, we will take the concepts of p-zombies and Searle's thought experiment about conscious machines and think about the philosophical interplay between them: what happens when we put a zombie inside the Chinese Room? I will then go on to defend the theory that follows from this logic.

Before that, we will explore another haunting idea: has the algorithm that this allegory is meant to represent been a zombie in disguise all along?!

1. Algorithms Are Moral p-Zombies:

We can now restate the Chinese Room in the context of the anti-physicalist arguments laid out earlier. If we can conceive of p-zombies, we can use the same arguments that refute physicalism to refute computationalism. Recall that understanding is a quale, belonging to the realm of semantics in Searle’s terminology, while an algorithm has only syntactic resources with which to produce understanding [6]. In this sense, an algorithm can be constructed with the same order of operations for solving a problem that a mind would employ – it would be syntactically identical to human thought [4] – yet we can agree that there is nothing it is like to be an algorithm, because there is no phenomenal feeling of consciousness to go along with it. The fact that we can conceive of such algorithms makes them equivalent to p-zombies.
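To see what "pure syntax" means in practice, here is a minimal sketch of a Chinese-Room-style program. It is my own hypothetical illustration (the RULEBOOK table, the phrases, and the chinese_room function are invented for this example, not drawn from Searle): the program maps input symbols to output symbols by lookup alone, and at no point in the process does anything resembling understanding occur.

```python
# A minimal sketch of pure syntactic manipulation, in the spirit of the
# Chinese Room. The rule table and phrases below are hypothetical
# illustrations, not taken from any source cited in this post.

# The "rulebook": maps input symbol strings to output symbol strings.
# To the program these are opaque tokens; it attaches no meaning to them.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I am fine, thanks."
    "你叫什么名字？": "我没有名字。",      # "What is your name?" -> "I have no name."
}

def chinese_room(question: str) -> str:
    """Answer by table lookup alone: syntax in, syntax out, zero semantics."""
    return RULEBOOK.get(question, "对不起，我不明白。")  # fallback: "Sorry, I don't understand."

if __name__ == "__main__":
    # Prints a fluent reply with no understanding anywhere behind it.
    print(chinese_room("你好吗？"))
```

However fluent the replies, there is nothing it is like to be this lookup table, which is exactly the equivalence to p-zombies claimed above.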

Véliz [12] comes to this revelation in a very different manner in her seminal work on the ontology of machine morality. She first defines the concept of the “moral zombie” – a being that makes the same moral decisions as a (rational) human mind, but without consciousness. Consequently, moral zombies do not possess intentionality or responsibility for their moral decisions – they “would not rejoice at saving a life or suffer guilt from taking one.” She argues that any algorithm constructed for making ethical choices is a moral zombie, but that it could never become a moral agent, since algorithms are, prima facie, not sentient. Sentience allows conscious experience of feelings like pleasure, pain, and fear, which she maintains are necessary for moral agency.

Still, I find that our two methods of reaching this algorithm/zombie equivalence complement each other: my method makes it clearer why the term ‘zombie’ is technically appropriate in a computationalist context rather than a physicalist one, while Véliz’s addition of the morality concern makes the connection between algorithmic and human decision-making easier to digest, as we will further see. For now, we have established that algorithms are essentially p-zombies: devoid of consciousness, yet acting in ways morally indistinguishable from ours. Searle, I would speculate, would find no reason to disagree that algorithms lack consciousness and consequently lack moral agency.

2. The Chinese Room Makes Us Moral p-Zombies:

In this vein, let us apply this inference to the Chinese Room experiment. Whoever is in the Chinese Room must answer all questions via pure symbolic manipulation, i.e., syntactic operations [1]. The person in the room has thus completely constricted their consciousness – there can be no phenomenal feeling behind any of the answers, even those moral in nature, because the person does not understand the semantics of the answers they relay as output. Additionally, being isolated in the room, the person is denied the conscious feelings necessary to be a moral agent [12]. We must thus conclude that the person in the room is a moral p-zombie.

However, the thought experiment hinges on the man having the capability to understand; Searle wants to show that the man fails to understand Chinese, and this effort would be pointless if the man is a p-zombie who does not possess the attributes of consciousness and understanding in the first place. The fault then lies in the formulation of the experiment itself: Searle’s man in the Chinese Room fails to understand Chinese because he cannot understand anything; he is not conscious. Searle’s conclusion that “the man in the Chinese Room cannot gain understanding” gives us no new information about the world given that the man is a p-zombie; it is a purely analytic statement in Kantian terms, equivalent to saying something like “the ice in the freezer cannot be liquid water.” Searle’s reasoning requires that the man entering the room is not a p-zombie; otherwise, he would be defending his view of a lack of understanding via an argument that already presupposes a lack of understanding.

In essence, Searle’s Chinese Room is constructed to be applied to mind-machine comparisons even though it compares one moral zombie (algorithms) to another moral zombie under the guise of the latter being a “mind”. The experiment only works if the man in the room cannot be reduced to a zombie, yet the Room is intrinsically designed such that any sentient creature is virtually “p-zombified” inside it. If the Room is able to do this, however, we run into an even bigger realization – the Room is a bridge from the conscious to the non-conscious. More importantly, the Room is a bridge in the reverse direction as well, which means that Levine’s epistemic gap [10] has been crossed from the physical to the phenomenal. The uncrossability of this gap previously ensured that minds stayed perpetually conscious and zombies stayed perpetually unconscious.

Now that it has been bridged, we can no longer believe in that certainty of perpetual consciousness. Since we need perpetual consciousness to hold on to moral agency, we come to the scary inference that moral agency is not possible for human minds, because all of us are vulnerable to moral zombification in the Chinese Room. Moral agency is normatively a binary property – if we are to maintain agency for every decision, it is imperative to fully eliminate the possibility of losing sentience. The Chinese Room is the counterexample that keeps that possibility greater than zero, throwing all minds into the abyss of hesitation over whether that phenomenal feeling truly exists behind every moral action.

3. Further Discussion (Conclusion):

I will now address a few potential responses to this theory. Firstly, there is admittedly some vagueness in saying that a mind is “zombified”. How can we be sure that this is exactly what happens? One may argue that a man entering the room merely acts on so-called zombie mechanisms but does not “become” one. To this, I will give an answer from modal theory, using Divers’s account [13] of the dynamic between contingency, possibility, and necessity. I request the asker to assume that their point is right. Still, we can both agree that the creature in the room operates independently of any need for consciousness. Clearly, for all intents and purposes of the Room, consciousness is a contingency, i.e., it rules out necessity and requires only possibility. What about non-consciousness? That is a possibility, and a possibility requires either contingency or necessity. So, non-consciousness holds a greater degree of truth, since we have not ruled out necessity for it, while we have done exactly that in the case of consciousness. Thus, the claim that the creature is non-conscious, i.e., that the creature is truly a zombie, holds a greater degree of truth than the claim that the creature is conscious, since the latter does not even have the chance to be a contender for necessity. The modal bookkeeping is made explicit below.
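Here is the same argument in standard possible-worlds notation; rendering Divers’s definitions into the diamond/box operators is my own gloss, not a quotation from him:

```latex
% C = "the creature in the Room is conscious"
% Definitions (my gloss on Divers's dynamic):
%   Contingent(p) := \Diamond p \wedge \Diamond\neg p   (rules out \Box p)
%   Possible(p)   := \Diamond p                         (entailed by contingency or by necessity)
\[
\begin{aligned}
&\text{Premise: } \mathrm{Contingent}(C)
  && \text{consciousness is a mere contingency inside the Room}\\
&\Rightarrow\; \neg\Box C \,\wedge\, \Diamond\neg C
  && \text{necessity of consciousness is ruled out}\\
&\text{Meanwhile, } \Diamond\neg C \text{ holds and } \Box\neg C \text{ is not ruled out}
  && \text{non-consciousness may yet be necessary}
\end{aligned}
\]
```

Consciousness is thus capped at contingency, while non-consciousness remains a live candidate for necessity; that asymmetry is the “greater degree of truth” the paragraph above appeals to.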

Second, one can argue that it does not follow that the man is a p-zombie just because he acts without moral agency or understanding for some operations; doing some things unconsciously does not imply that he completely lacks moral agency or understanding in those instances. I find this to be the most realistic reply, and there is some intuitive justification behind it – I classified the jump from conscious to non-conscious as an epistemic gap that must be unbridgeable, even though we do see real instances (and can easily conceive) of someone possessing consciousness but doing something unconsciously. However, the Room is special because the jump is also reversible when the man exits the room; this means that he acquired something that gave him the ability to be aware of his consciousness, something that was taken away when he entered the Room. It is this aspect of the Room that allows us to come to the equivalence inference, and it is central to my theory.

Beyond moral concerns, the Room has transformed from a simple thought experiment into a highly disruptive example that breaks epistemological beliefs in radical ways. The Room, as it happens, has become an example of a conscious mind “knowing what it is like to be a zombie”. And if there is supposedly nothing it is like to be a zombie, then the man has just actualized what it is like to be non-conscious. The Room is a chamber where we can view the jump from the phenomenal to the physical and back, but the fact that we cannot explain it means that we are indefinitely uncertain about which identities of the man are non-conscious (“zombified”) and which are conscious. This, in turn, casts doubt not only upon the moral agency of minds but upon consciousness itself as a perpetual attribute of minds. From this point onwards, a rabbit hole of potentially breaking consciousness’s circular paradox and uncovering the truth behind phenomenal feeling awaits.

That was it. Thank you all so much for sticking through my ramblings; I hope you were able to appreciate the point I tried to make. I absolutely love how the chain of thought from two very popular philosophical theories is able to systematically break down our self-conception as moral, rational, and autonomous agents. See you on my next post!


REFERENCES:

[1] Cole, David. “The Chinese Room Argument.” Stanford Encyclopedia of Philosophy, Stanford University, 20 Feb. 2020, https://plato.stanford.edu/entries/chinese-room/.

[4] Searle, John R. “Is the Brain a Digital Computer?” Proceedings and Addresses of the American Philosophical Association, vol. 64, no. 3, 1990, pp. 21–37. JSTOR, https://doi.org/10.2307/3130074.

[6] Searle, John. “Why Dualism (and Materialism) Fail to Account for Consciousness.” Questioning Nineteenth Century Assumptions About Knowledge, III: Dualism, edited by Richard E. Lee, SUNY Press, 2010, pp. 5–48.

[10] Levine, Joseph. Purple Haze: The Puzzle of Consciousness. Oxford University Press, 2001.

[12] Véliz, Carissa. “Moral Zombies: Why Algorithms Are Not Moral Agents.” AI & Society, vol. 36, 2021, pp. 487–497, https://doi.org/10.1007/s00146-021-01189-x.

[13] Divers, John. Possible Worlds. Routledge, 2002.

