“I’m not so sure, actually, that AI has knowledge, let alone consciousness,” I demurred. A computer programmer and I were recently discussing the merits of undergraduate studies when he speculated that artificial intelligence (AI), bearing these key cognitive traits, may be poised to upend higher education as we know it. “Well,” he responded, unfazed by my disagreement with his characterization of the technology, “ever since AI passed the Turing test…”
It’s commonplace to hear the language of consciousness applied to computing technology, especially AI: neural networks, machine learning, artificial intelligence, automated reasoning, knowledge engineering, emotion AI. This isn’t surprising, though, given AI’s (seeming) ability to approximate various functions of human consciousness. No harm, no foul. After all, we use language figuratively all the time. The problem arises when people believe AI literally has consciousness in the same sense in which human persons are conscious. People often point to Turing tests to support this idea. Contrary to popular belief, though, passing a Turing test does not establish that AI is conscious (or much else of interest). This should matter to Christians, because attributing genuine consciousness to AI seriously demeans the imago Dei.
Turing Test
Alan Turing (1912–1954) was a British mathematician, known for his wartime work at Bletchley Park breaking the Enigma cipher. Turing is also widely recognized as the father of modern computer science. His famous article, “Computing Machinery and Intelligence” (1950),1 asks the question, “Can machines think?” To get at an answer, Turing proposes the “imitation game.”2
The game itself is simple. We have two rooms. In the first room we place a person and a machine; in the second, an investigator. Unable to see into the first room, the investigator knows the person and the machine simply as ‘X’ and ‘Y’ and passes written questions into the first room, directed to X or Y (for example, “Does X play chess?”). The person aims to help the investigator correctly identify which of X and Y is the machine, while the machine aims to trick the investigator into mistaking machine for human. The object of the game is for the investigator, on the basis of the answers returned, to identify correctly which is the person and which is the machine. Hence, for a machine (or AI) to “pass the Turing test” is for it to function in such a way that humans cannot recognize it as non-human.
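For readers who think in code, the structure of the game can be put in a short Python sketch. This is a toy of my own, not anything from Turing’s paper: the answer functions and statistics are invented purely to show the shape of the protocol.

```python
# A toy model of the imitation game (all names and answers hypothetical).
import random

def human_answer(question: str) -> str:
    # Stand-in for the hidden person, who answers honestly.
    return f"My honest answer to {question!r}"

def machine_answer(question: str) -> str:
    # Stand-in for the machine, which imitates the person's style.
    return f"My honest answer to {question!r}"

def play_round(questions: list[str]) -> bool:
    # Randomly assign the person and the machine to the labels X and Y.
    labels = {"X": human_answer, "Y": machine_answer}
    if random.random() < 0.5:
        labels = {"X": machine_answer, "Y": human_answer}

    # The investigator sees only the transcripts that come back.
    transcripts = {name: [fn(q) for q in questions] for name, fn in labels.items()}

    # Because these transcripts are indistinguishable, the investigator can
    # do no better than a coin flip -- which is exactly what "passing" means.
    guess = random.choice(list(transcripts))
    return labels[guess] is machine_answer  # True iff the machine was identified

trials = 1000
correct = sum(play_round(["Does X play chess?"]) for _ in range(trials))
print(f"Machine correctly identified in {100 * correct / trials:.1f}% of rounds")
```

When the investigator’s success rate hovers near fifty percent, the machine has won the game.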
For his part, Turing believed “that in about fifty years’ time it will be possible to programme computers…to make them play the imitation game so well that an average interrogator will not have more than 70 per cent. chance of making the right identification after five minutes of questioning.”3 Was Turing right? More or less. One recent study, conducted by researchers at the University of California San Diego, evaluated three systems (ELIZA, GPT-3.5, and GPT-4). The report, published under the title “People Cannot Distinguish GPT-4 from a Human in a Turing Test,” claims to provide the first robust empirical evidence that an artificial system passes an interactive Turing test. The study found that human participants “were no better than chance at identifying GPT-4 after a five minute conversation, suggesting that current AI systems are capable of deceiving people into believing that they are human.”4 If this is great news for Turing, it may be less so for teachers grading student essays late in the year.
Tracing Consciousness
But so what? Suppose we stipulate that AI is regularly mistaken for a conscious human. Would that establish that AI is, in fact, conscious in the same way humans are? Not at all. To see why, let’s reflect briefly on human consciousness.
Considerations of consciousness (and the philosophy of mind generally) can get fairly technical, but for our purposes a few basic observations will suffice.5 We are each, as persons, directly familiar with our own individual consciousness. I experience my consciousness, but obviously I cannot experience yours. And vice versa. I am directly familiar with what it is like to be me, but I am not — indeed, cannot be — directly familiar with what it is like to be you. And again, vice versa. This is because what it is like to be someone is accessible only via that person’s first-person, inner perspective. We each are the unique subjects of our conscious experiences, and in the absence of subjects there cannot be consciousness.
Each of us knows via first-person experience that there are various states of consciousness. We refer colloquially to being in a “semi-conscious state” when we’re half asleep or distracted, but that’s not the sort of state I mean. I’m referring instead to what philosophers call mental states. We experience sensations — being in pain, for example (“My toe hurts”). We also experience desires (“I’d really like to get out of attending that meeting”), beliefs (“I believe the party is at 6:00 P.M.”), thoughts (“I love my wife and son”), understanding, and others, all of which are impossible for AI.
Let’s focus on thoughts. Thoughts are about something (perhaps even something fictitious); they can be true or false; and they can logically imply further thoughts. As I type this, I can form thoughts about what I’m typing. I notice I can form thoughts about the appearance of the letters on the screen (“Gee, I don’t like that font”), but I can also form thoughts about the meaning conveyed by what I’m typing (this paragraph is about one’s thoughts). We can use thoughts to have the mental state of understanding, and that’s pretty extraordinary. Again, these states are mental; they are not physical (e.g., brain) states.
Touring the Chinese Room
The suggestion that AI can form thoughts and have understanding depends on a radically different view: that humans’ (physical) brains are what have mental states; humans do not have (nonphysical) minds. “Mental,” on this suggestion, does not mean nonphysical. The suggestion is that mental states are to be understood as functions, and AI can certainly exhibit functions. To get the idea, think in terms of input → programming (plus enormous data, if you like) → output. That is fundamentally how AI works; on this view, the mind is to the brain what the programming is to the AI.6 When fed input, AI produces output indistinguishable from that of human consciousness, and so AI is said to have understanding (consciousness). In a word, AI is a “mind” in the same sense you are.
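The functionalist picture can be seen in miniature in the sketch below. It is my own toy, with invented questions and canned replies; no real AI system works from so small a table, but the input → programming → output shape is the same.

```python
# A toy of the functionalist model: input -> programming -> output.
# The "programming" is a lookup table standing in for a trained model;
# all questions and replies here are invented for the illustration.

RESPONSES = {
    "Does 2 + 2 = 4?": "Yes, it does.",
    "Do you understand me?": "Of course I understand you.",
}

def ai(user_input: str) -> str:
    # From the outside, only this input-output behavior is ever visible.
    return RESPONSES.get(user_input, "What an interesting question!")

print(ai("Do you understand me?"))  # a fluent reply -- but is it understanding?
```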
Yet, as (atheist) philosopher John Searle explains, the input → programming → output model cannot establish understanding. No matter how sophisticated the programming may be, functioning in a certain way is not identical to understanding. To see why, let’s imagine what Searle calls the Chinese room.
Suppose you have no knowledge of the Chinese language. Chinese characters are, to you, “just so many meaningless squiggles.” Now suppose you’re given a handful of Chinese writings and then locked in a room. Shortly, a second batch of Chinese writings is slid into the room beneath the door. Meanwhile, the room contains a rulebook, written in English. The rulebook tells you how to correlate symbols (e.g., when you see a squiggle symbol, put it with a squoggle symbol). You’ve no idea what the symbols mean, but you find you’re able to locate symbols in the writings that match these squiggles and squoggles and get on with the correlations. Later a third batch of Chinese writings appears beneath the door, along with further English instructions. These instructions enable you to correlate this batch with the first two batches and then to pass your latest correlations back under the door. Unbeknownst to you, the people giving you these writings “call the first batch ‘a script,’ they call the second batch ‘a story,’ and they call the third batch ‘questions.’ Furthermore, they call the symbols [you] give them back in response to the third batch ‘answers to the questions,’ and the set of rules in English…they call ‘the program.’”7
It’s easy to imagine that after a while you’d become really good at following the instructions for manipulating the Chinese symbols and the programmers would become so good at writing programs that someone outside the room would be unable to distinguish your answers from those of a native Chinese speaker. Ta-da — Turing test passed! Except you still don’t understand Chinese.
If you still don’t understand Chinese, then what exactly have you become really good at in the Chinese room? The answer is that you’ve become good at a purely syntactical operation: manipulating the symbols based entirely on their shapes (e.g., squiggle and squoggle) and the order in which they appear. The instructions in the rulebook concern nothing beyond this syntax. Can AI perform this syntactical operation? Yes, perhaps even better than you can. In following the rulebook, though, are you not thinking “about” the Chinese symbols? Yes, in a sense you are — but only in the sense in which I formed thoughts about not liking the font on my computer screen. The manipulation of symbols in keeping with a syntax, after all, has (literally) no meaning. In order to understand Chinese (or anything else), you must be able to think “about” the meaning of the symbols.8 This is what Searle calls “semantic” understanding, and it cannot be achieved merely through complicated syntactical operations.
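To make the point vivid, the rulebook itself can be rendered in a few lines of Python. This is my own toy with invented symbols; Searle’s article contains no code. Notice that every rule mentions only shapes, never meanings:

```python
# The "rulebook": pure syntax. Each rule correlates one symbol shape with
# another; no entry records what any symbol means.

RULEBOOK = {
    "squiggle": "squoggle",  # "when you see a squiggle, pass back a squoggle"
    "squoggle": "squiggle",
}

def room(incoming: list[str]) -> list[str]:
    # Follow the rules mechanically, shape by shape, just as the person
    # locked in the room does. No step consults meaning.
    return [RULEBOOK[symbol] for symbol in incoming]

print(room(["squiggle", "squoggle"]))  # ['squoggle', 'squiggle']
```

A machine executing room() and a person following the paper rulebook are doing exactly the same thing, and neither thereby understands Chinese.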
In the Chinese room experiment, you are in the place of AI. If you can follow the formal rules spelled out in the rulebook, after all, then surely an AI can, too. You’ve got the batches of writing (inputs); you’ve got ideal programming; and you’ve generated the expected outputs. Yet you lack any understanding whatsoever of Chinese. As Searle concludes, since “the program is defined in terms of computational operations on purely formally defined elements” (i.e., input → programming → output, which is how AI functions), the experiment reveals that mere program functioning cannot yield understanding.9 AI can produce an impressive simulation of human consciousness. But an impressive simulation of understanding is no more conscious than a computer simulation of a rainstorm is wet.10
Thrice Blessed Souls
Scripture teaches that humankind bears the image of God. So Genesis 1:27: “God created humankind in his own image, in the image of God he created them, male and female he created them” (NET). How magnificent that among creatures we alone bear the image of the Creator!
In Christian thinking an important aspect of the imago Dei is the (nonphysical) mind and consciousness. As the thirteenth-century philosopher and theologian Thomas Aquinas explains: “Since human beings are said to be in the image of God in virtue of their having a nature that includes an intellect, such a nature is most in the image of God in virtue of being most able to imitate God.”11 Aquinas goes on to explain that “only in rational creatures is there found a likeness of God which counts as an image….As far as a likeness of the divine nature is concerned, rational creatures seem somehow to attain a representation of [that] type in virtue of imitating God not only in this, that he is and lives, but especially in this, that he understands.”12 In short, Aquinas affirms an essential connection between the imago Dei and having minds capable of consciousness.
John Calvin similarly explains that the image of God “extends to the whole excellence by which man’s nature towers over all kinds of living creatures….And although the primary seat of the divine image was in the mind or heart, or in the soul and its powers, yet there was no part of man…in which some sparks did not glow.”13 Contemporary Christian philosopher Alvin Plantinga agrees: “God has created us human beings in his own image: this centrally involves our resembling God in being persons — that is, beings with intellect and will. Like God, we are the sort of beings who have beliefs and understanding: we have intellect.”14
There are tremendously promising applications of AI in today’s world, and it is not my purpose to denigrate AI. Personally, I’m excited about (some of) AI’s potential, especially in the healthcare sector. But AI does not have consciousness, and Turing tests aren’t even relevant to establishing consciousness, as we’ve seen. Can we apply the language of consciousness figuratively to AI? Sure. The problem arises when people believe AI has consciousness in the same sense in which human persons are conscious. Such a view diminishes what it means to be a human and demeans the image of God.15
R. Keith Loftin, PhD, is Professor of Philosophy and Chair of the Politics, Philosophy, Economics department at Dallas Baptist University.
NOTES
1. A. M. Turing, “Computing Machinery and Intelligence,” Mind 59, no. 236 (1950): 433–60, https://doi.org/10.1093/mind/LIX.236.433.
2. Turing, “Computing Machinery and Intelligence,” 433.
3. Turing, “Computing Machinery and Intelligence,” 442.
4. Cameron R. Jones and Benjamin K. Bergen, “People Cannot Distinguish GPT-4 from a Human in a Turing Test,” arXiv, Cornell University, May 9, 2024, https://arxiv.org/html/2405.08007.
5. For fuller treatment, see J. P. Moreland, The Soul (Chicago: Moody Publishers, 2014).
6. John R. Searle, “Minds, Brains, and Programs,” Behavioral and Brain Sciences 3, no. 3 (1980): 421.
7. Searle, “Minds, Brains, and Programs,” 418.
8. E. J. Lowe, An Introduction to the Philosophy of Mind (New York: Cambridge University Press, 2000), 214–217.
9. Searle, “Minds, Brains, and Programs,” 418.
10. John Searle, Minds, Brains, and Science (Cambridge, MA: Harvard University Press, 1984), 37.
11. Thomas Aquinas, Summa Theologica Ia q. 93 a. 4.
12. Aquinas, Summa Theologica Ia q. 93 a. 6.
13. John Calvin, Institutes of the Christian Religion, 1.15.3.
14. Alvin Plantinga, Warranted Christian Belief (New York: Oxford University Press, 2000), 204.
15. Thanks to John M. DePoe for helpful input on this article.