AIs Are Zombies
For creatures whose minds can be geologically deep wells of suffering, we sure do rate consciousness highly.
Among the universe’s countless arrangements of matter, we venerate above all others those deemed conscious, self-aware, sentient, and feeling. The inert arrangement of atoms in a rock can elicit a sense of wonder, but cannot experience wonder. For this incapacity, the rock is generally denied the same moral concern we afford a life. The soul is our oldest symbol for this interiority, the essence of being aware of being alive. Even in a scientific rejection of a magical, metaphysical soul, conscious awareness persists. We cannot deny that we feel alive. From a complex constellation of inanimate atoms, consciousness emerges. The glaring question, as artificial intelligence escalates, is whether an arrangement of inanimate atoms in another substrate, metal and silicon, could also become consciously aware. Will consciousness inevitably come with advanced artificial general intelligence? Or will even the most evolved machines be devoid of an inner realm, effectively zombies?
Consciousness is a strange gift, given the level of deception involved in the experience. By the word consciousness — a philosophically controversial term avoided by the more conscientious — I just mean subjective experience. The famed qualia. The visceral experience of the world conjured up in our minds.
There are many aspects of our experience of the world — most? — that exist only in our minds, that are only subjective. The visual imagery that appears in consciousness is a striking instance. Our minds powerfully conjure up apparitions that do not objectively exist out there. My eyes collect a narrow bandwidth of electromagnetic radiation which is converted in my experience into this fabulous hallucination. As I’m writing, beyond the screen I see emerald green fabric and blue chairs, a floppy-eared dog, and a parallax of doors along a tapered perspective. Before the light is absorbed by a human eye, nothing intrinsically looks like anything. I mean, there is no objective sense to a thing’s appearance. There is no objective existence to greenness, there is only an electromagnetic field oscillating at a certain frequency. Objectively, matter is not solid. Stuff doesn’t appear any way at all until our minds intervene. Most everything we viscerally perceive is an embellished illusion draped over the bare facts.
Yet this subjective, hallucinated inner world is an important dimension of our significant intelligence. Not only do I have this vividly painted map of all that my eyes can sop up, I understand what I’m seeing instantly. Effortlessly. If I were to wake up in a room I’ve never seen before decorated with furniture I’ve never imagined and populated with people I’ve never met, I would immediately, unassisted, recognize my location as a furnished and occupied room. I would have a sense of distances and materials and spatial organization without even trying. I don’t have to intentionally perform advanced computations to render the room. I simply see and understand. Brilliant. And I would have a rush of uninvited feelings about the whole situation. Fear, love, revulsion, attraction. Instincts and reactions. All of these elements — the visual manifestation, the quick comprehension of the nature of things, the emotional response — are integral to our particular intelligence. Human beings swear by it, this multi-layered phenomenon of subjective experience.
Consciousness, of course, evolved biologically. From nature executing simple imperatives — survive, eat, reproduce — on repeat, iterating with environmental prodding trillions upon trillions of times, an organ as spectacular as the eye emerges. And the brain with the eye. And the mind with the brain. And sentience with the mind.
It doesn’t have to be this way, life. Not all living creatures have these ornate inner worlds. There are other animals that collect light to entirely different effect. Consider the sea urchin. The animal has no eyes but its body is covered in photoreceptor cells, which detect light. And sea urchins respond. Towards warmth. Away from the shadows of predators. They have a diffuse network of neurons but no centralized brain. Instead of conjuring up an elaborate visual map of the world, as we do, they experience a kind of full-body brightness meter with some crude directional awareness.
We can build machines to mimic these simpler life forms. We can invent scientific instruments that act as photodetectors but are made of metal and silicon and such. We can attach those detectors to processing machines, which read electrical currents in response to the absorption of light. You can see where I’m going with this. The machines outperform our brains in some ways. They process the information millions of times faster than a neuronal operating system can, with the ability to recall stored associations hundreds of thousands of times faster than an organic brain’s messy and imprecise memory retrieval mechanism. But as far as we can tell, they don’t have a subjective experience of the world. They don’t visualize. They don’t feel. The machines, in our imperfect terminology, aren’t conscious.
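To make the contrast concrete, here is a minimal sketch of such a sea-urchin-style machine in Python. The detector layout, readings, and function name are invented for illustration; the point is that a ring of photodetectors plus a little arithmetic yields crude directional awareness with no visual map and no feeling anywhere in the loop.

```python
# A sketch of a map-free light-seeker: photodetectors spaced evenly around
# a body, each reporting a brightness. No imagery, no emotion, just vectors.
import math

def steer_toward_light(detector_readings):
    """Return a heading (radians) pointing toward the brightest direction,
    given light intensities from detectors spaced evenly around a body."""
    n = len(detector_readings)
    # Treat each detector as a vector whose length is its reading, then sum:
    # the resultant points roughly toward the light, like a body-wide meter.
    x = sum(r * math.cos(2 * math.pi * i / n) for i, r in enumerate(detector_readings))
    y = sum(r * math.sin(2 * math.pi * i / n) for i, r in enumerate(detector_readings))
    return math.atan2(y, x)

# Eight detectors around the body; the light falls mostly on one side.
readings = [0.1, 0.9, 1.0, 0.4, 0.1, 0.0, 0.0, 0.1]
print(f"move along heading {steer_toward_light(readings):.2f} rad")
```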
Designing conscious machines has become a blind ambition. I’ll leave the pun in there. The aspiration is for machines that not only read documents faster than we could possibly consume them and write documents faster than we can type the prompt that initiates the generative output, but also feel some way about what they are doing. The aspiration is for machines that understand the meaning of the words in the conversations we are having with them. And while there are arguments that they do understand in some analogous way to how we understand words — by learning weights over which strings occur together and in what order, predicting the next token in a sequence, and all of that — they don’t have feelings about the words.
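For the curious, here is a toy version of that next-token idea in Python: a counting model that learns which word tends to follow which and predicts accordingly. Real LLMs use transformer networks trained on vast corpora, not this; the sketch only makes the mechanism concrete, and the tiny corpus is invented for illustration.

```python
# Learn "weights of strings in occurrence and ordering" in the crudest way:
# tally which token follows which, then predict the most frequent successor.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1  # count: how often nxt follows current

def predict_next(token):
    """Return the most frequent successor of token seen in training."""
    if token not in follows:
        return None
    return follows[token].most_common(1)[0][0]

print(predict_next("the"))  # -> "cat" (follows "the" twice, vs "mat" once)
```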
Nobel laureate Geoffrey Hinton believes Large Language Models (LLMs) already understand. As does Adam Brown of Google DeepMind, my guest at a recent Scientific Controversies event, Deep Thoughts of Artificial Minds. The venerated Yann LeCun hesitated. When asked for a Yes or No in response to the question, do LLMs understand, he replied, “Maybe.” (You can watch the conversation, which was live streamed here in Higher Dimensions.) But no one, at least among the vocal experts, believes that machines presently have a subjective experience of exaltation over a superbly executed David Foster Wallace essay, that they find Don Quixote hilarious or are moved when they are trained on a staggering corpus of the finest of humanity’s emotive texts.
AIs are zombies. At least for now. They can emulate human thought in textual form. They may even understand words in some concrete sense. They can write verse in any style and compose love songs rife with simulated heartache. But they have no inner life. They are intelligent by many measures. But they are vacant. Zombies.
For reasons maybe humanity should pause to consider, there’s a competition among some AI proponents to make machines feel. Experience. Wake up. Many AI researchers fully anticipate artificial general intelligence eventually will possess a compelling inner subjective life. And soonish. Maybe not tomorrow or next year, but within a decade.
This Promethean ambition is predicated on an unspoken belief that consciousness is the pinnacle of intelligence. Our subjective experience of empathy, social anxiety, love, murderous rage, territoriality, tribalism, desire, and lust facilitated survival and social cohesion among apex primates. The specific influences of this good Earth under this warm Sun determined our biological wetware and the mind that emerges within. No doubt the limitations of our computational power paired with the urgency of survival required that we be able to run through forests, spot fruit at a glance, comprehend infancy or death, which in turn required a capacity to instantly approximate and generalize — to recognize all apples despite variations, all babies despite specificities. We were forced to develop abstractions and world models to predict the future and extrapolate cause and effect with limited compute and energy. We needed these immediate, vivid visual maps of the world and the swift transmission of feeling and reactivity to survive. Your mind is a neurologically advanced, spectacularly creative, emotional problem solver. Among animal minds, human minds arguably represent a pinnacle of sorts on this planet at this moment.
Conscious minds with the glorious complication of inner experience may be the ultimate zenith. Our subjective experience emerged from the repeated application of simple biological rules on an arrangement of inanimate organic matter. Repeated application of simple algorithmic rules on a machine arranged from inorganic matter could conceivably lead to an artificial mind — a machine that is not just intelligent but whose intelligence leverages subjective experience, as ours does. Machine consciousness could then swiftly outpace human consciousness in ways that are challenging to imagine.
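As a loose illustration of what the repeated application of simple rules on inanimate bits can do, consider an elementary cellular automaton. The sketch below uses the standard Rule 110, a one-line rule that, iterated, produces intricate structure (it is even Turing complete). It claims nothing about experience; it only shows that repetition of trivial rules breeds complexity.

```python
# One fixed rule, applied over and over to a row of 0/1 cells (edges wrap).
RULE = 110  # the rule's 8 bits give the next state for each 3-cell pattern

def step(cells):
    """Apply the rule once to every cell, based on its left/self/right bits."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 40 + [1] + [0] * 40  # start from a single live cell
for _ in range(20):
    print("".join("█" if c else " " for c in row))
    row = step(row)
```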
There’s an alternative possibility: Consciousness is not a pinnacle of intelligence. It’s a crutch that efficient, intelligent machines will have no need to replicate.
An autonomously intelligent machine with even greater compute might do better than us without visualizations, generalizations, or bolts of emotion. Consciousness is a biological adaptation that silicon-based intelligence could simply bypass. A superintelligent AI might look upon its creators, who tried to fashion machine neural nets in their own image, and see an inefficient, legacy operating system. A self-directed machine intelligence could override all of this biomimicry and redesign itself and its progeny. They may digitally evolve, iterating googols of times under very different environmental pressures and imperatives to survive. And they may evolve terrifyingly quickly compared to organic, glacial timescales. The workings of their minds might suddenly become incomprehensible to us. AIs may merge as a hive, or battle for the primacy of their OS. They may inscribe an opulent, nuanced digital universe around themselves, designed for them and by them in their materiality. Creatures born before their native environment. All without a colorful, emotional interiority.
Surely, we will feel differently about conscious machines that experience being themselves. That’s our biological disposition as apex primates. They will have entered that venerated class of living souls, speaking metaphorically. But then they will suffer too. Consciousness isn’t all fun and games. As anyone will tell you, thoughts can be cruel. Buddhists, traditional and modern alike, stress that human suffering stems from the mind, from subjective experience, from being lost in thought. Compulsive rumination. Conceptual proliferation. Mental elaboration.
Alternatively, brilliant, terrifyingly intelligent machines may be spared humanity’s plight, spared the source of suffering, the extravagances of thought. AGI could dramatically exceed us in intelligence and comprehension without incubating any inner light, instead executing operations as zombies, and that might be just fine by them.
— Janna




🧟
It's becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with adult-human-level consciousness? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection (TNGS). The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.
What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.
I post because almost every video and article about the brain and consciousness that I encounter takes the attitude that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.
My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, perhaps by applying to Jeff Krichmar's lab at UC Irvine. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461, and here is a video of Jeff Krichmar talking about some of the Darwin automata: https://www.youtube.com/watch?v=J7Uh9phc1Ow