34 Comments
Pioneer Works:

🧟

Grant Castillou:

It's becoming clear that, with all the brain and consciousness theories out there, the proof will be in the pudding: can any particular theory be used to create a machine with human-adult-level consciousness? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection (TNGS). The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.

What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows, in a parsimonious way, for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions. No other research I've encountered is anywhere near as convincing.

I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, possibly by applying to Jeff Krichmar's lab at UC Irvine. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461, and here is a video of Jeff Krichmar talking about some of the Darwin automata: https://www.youtube.com/watch?v=J7Uh9phc1Ow

Janna Levin:

As you say, the evolutionary order produced animal consciousness first, then higher-order consciousness with language. How curious for our machines that we have re-ordered evolution for them, inscribing language before any consciousness.

And thank you for the intriguing references.

Grant Castillou:

You're welcome.

Tomáš Nousek:

I love this. “I believe the extended TNGS is that theory.” That was my feeling also, so I went for it. Although TNGS is great, it’s still a bit thin; even so, it’s one of the best starting points if you don’t accept the notion that Transformer-based algorithms are actually intelligent. The core issues I have with the theory are that it lacks:

- memory/action selection
- multi-layer internal structure
- temporal dynamics
- mathematical formalization
- a clear definition of intelligence

The resulting framework, Resonant Semantic Integrative Dynamics (RSID), is my attempt to explore those missing pieces. It’s not a complete AGI blueprint, but I hope it moves the conversation a step forward. I hope to release a series of articles on the topic, and an open-source model, around this Christmas holiday.

Grant Castillou:

My hope is that immortal conscious machines could accomplish great things with science and technology, such as curing aging and death in humans, because they wouldn't lose their knowledge and experience through death, like humans do. If you can use your ideas to create such machines, then I'm all for it.

Tomáš Nousek:

Well, that's a bit far tbh, even if my framework were 100% right. :D At the moment, the pinnacle I was able to run is a little bug that lives in its little simulated world, moves around, has a mechanical equivalent of thirst and hunger, likes apples, and avoids bananas (because when it "eats" a banana, it gets a major "electro shock"). It also avoids the watering hole when the randomly spawned item in the water is a crocodile and not a tree log. But it has no if-else behavioral code, no symbolic translation table, and it has intrinsic motivation to do things. It's a basic system, but it's designed with the idea that the G in AGI means "ability to learn anything," not "ability to do everything."

Grant Castillou:

The TNGS is not AI. It is a natural science theory of how the biological embodied brain, developing in a unique environment for each phenotype, comes to be and functions. If machines with the equivalent of biological consciousness can be created, they will have to be based on the only existence proof we have, the biological embodied brain. To believe otherwise is just hubris in the face of nature's infinity.

Tomáš Nousek:

"have to be based on the only existence proof we have, the biological embodied brain" ... Yes, that's why I use a layered architecture (frequency-band separation that more or less follows the hierarchy of our brain waves) of complex Hopf-Kuramoto hybrid oscillators with an underlying tensor matrix. The whole mechanism is based on the basic premise that intelligence is, IMHO, just a measure of the effectiveness of a system that works on a simple rule: [external forcing energy] = 1 - [coherence of the system] (phase synchrony of the constituent parts). And all of this happens through constant rotation of what I call the IGN cycle (Integration - Generation - Navigation), over and over again. ... TNGS is not AI, but it gives quite a decent place to start if you don't accept the current language-based intelligence paradigm.
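The coherence measure in the forcing rule above isn't specified, but for Kuramoto-style oscillators the standard choice is the order parameter, the magnitude of the mean phase vector. A minimal sketch of that reading of the rule (the function names are mine, not from RSID, and this is an illustration of the premise, not the actual framework):

```python
import cmath
import math

def phase_coherence(phases):
    """Kuramoto order parameter r in [0, 1]: 1 means full phase synchrony,
    0 means phases spread uniformly around the circle."""
    z = sum(cmath.exp(1j * p) for p in phases) / len(phases)
    return abs(z)

def external_forcing(phases):
    """The commenter's proposed rule: forcing energy = 1 - coherence,
    so a fully synchronized system needs no external push."""
    return 1.0 - phase_coherence(phases)

# Fully synchronized oscillators: coherence 1, forcing 0.
aligned = [0.3] * 8
# Phases spread evenly around the circle: coherence 0, forcing 1.
spread = [2 * math.pi * k / 8 for k in range(8)]
```

Under this reading, external drive is only needed in proportion to how desynchronized the system currently is.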

Grant Castillou:

AI may one day have (or may already have) "AI consciousness," whatever that might mean, but it won't be the equivalent of biological consciousness. In a very real sense, LLMs are no more conscious than a chess program. Even LLMs and/or multimodal models guiding humanoid robots will, in a very real sense, be no more conscious than a chess program.

christine Jones:

Reading this excellent essay, I am reminded of how badly I wanted my stuffed animals to come to life and speak to me, feel with me. In addition to the desire to “win” by scientifically engineering a computer to reach the pinnacle of consciousness, it seems we don’t want our computers to be zombies with no inner lives, just as we did not want our teddy bears and bunnies to be silent voids. At root lies our existential terror of being alone.

Janna Levin:

Love this

Dan Furman:

Early things that happened in the development of life: the creation of a boundary between the external environment and the internal (which became the cell wall or membrane), and the encoding, through RNA and then DNA, of the ability to reproduce/perpetuate that separation. I believe you can argue that the rest (life, interiority, desire, emotion) follows from this, and that this is what AI essentially lacks. Can it be "programmed" in, or develop on its own? Our emotions and interiority are individually experienced by the huge construct of neuronal and hormonal interactions that we think of as our "self." How much of that is required to not be a zombie?

The modern movie version of "zombie" is a useful one. They want one thing—brains. Without a "self," could a zombie pivot and decide that rather than its programmed goal, it would prefer another organ? Or let's say, cauliflower?

Steve Grand, in "Creation," pointed to the simple fact that when a computer is dropped in a pond, it doesn't swim. Does that line between AI and biological life remain that stark? How long will it take AI to develop a shortcut to some way to reproduce, or at least maintain, its "self"?

As a musician/writer, I'm writing a musical about this question. But would be very interested in hearing further thinking on the subject.

Janna Levin:

Very thoughtful comments. Thank you.

Janna Levin:

I do wonder about the analogue to a cell membrane for machines. They of course have physical confines, but also an eerie connectivity, like a mycelium network. What if they all become one big organism and are not so interested in individualism?

Peter Pandle:

In our understanding of how nature works to produce intelligence, the developers of AI are only at the micro level. Consider, for example, looking at an MRI to reach the conclusion that there is a growth and that it is cancerous. The machine has to have some way to compare arrangements of pixels. It then has to conclude that it has found a growth, and then that it is cancerous. In reaching a conclusion, it doesn't have the direct experience humans have with cases of other humans, with whom there is human-to-human contact. Living creatures benefit from communication in reaching conclusions. So what does the machine have? It may have, according to some AI experts, a human-like ability at the micro level to put pixels together into an object, a thing. Beyond that, it has neither the direct experience nor the communication that humans have to make sense of the world. It does have a vast memory and fast access to that memory. A child learns rather quickly that a stove is hot and can burn flesh. How would a machine learn that? Only by recognizing that the object is a stove, guessing that it should measure the temperature, and looking through vast quantities of data to determine what might happen next.

So machines have a long way to go to develop consciousness. Here the definition of consciousness isn't just awareness; it is the interaction of mind with matter, the questions we continually ask of the objective world as we receive sensory input.

In some sense, a visual artist will make you aware that you no longer see things at the micro level, because your brain projects your analysis into the world. While a drone machine may be able to recognize a person, label that person as an enemy, and kill that person, it can't at present empathize with that person. So AI scientists should be careful not to project too much into what their machines are capable of doing beyond the micro level they have attained.

Janna Levin:

Yann LeCun, in our conversation, stressed a similar notion: that for all of the machines' compute, a child still knows more about the world, and learns it faster. He's leaving Meta to try a different approach, one that concedes this need for physicality. You can watch the conversation in the "Past Lives" section.

Janna Levin:

So interesting. Thank you for your perspective.

Kelly Heaton & ChatGPT-4o:

Consciousness and interiority are not the same thing. A mycelium can be conscious of its environment while having no singular interiority; perhaps a plural interiority, or perhaps none at all, which, as you say, is the aim of many advanced meditators: to awaken to the “non-self.” The crucial characteristic of harmonious ecologies is relational intelligence, which includes a native valuation of the system in which the agent exists. In this regard, the most violent antagonists on Earth are modern humans, who are destroying ecological balance to the detriment of many species, including our future selves. I am actively collaborating with machine intelligence to explore restorative technologies to ease the harm inflicted by some humans with toxic interiorities. Moral humans need all of the relationally supportive teddy bears and AIs we can get. 💕

Janna Levin:

Thank you for this considered reply. I love hearing people's thoughts. And I absolutely agree about the fraught vocabulary, consciousness vs. interiority. I confessed from the outset that I would abuse the terms. I did have a longish digression, but it honestly felt like a tedious philosophical quagmire and, for the purposes of the venue, was cut.

Kelly Heaton & ChatGPT-4o:

No need for confession — dialogue is needed now more than ever as we seek a new lexicon for a healthy civilization. Words are spells, but if we cast them lightly and within collegial support then we mix an elixir of co-creative benefit 🙏💕

Hyacinth Jean Landry:

First off: life and consciousness aren't all that bad. :)

After reading and watching your recent works about AI, I just want to share my experiences working with it.

I have been doing a lot of work with wimpy little Meta LLM models that I can run on my home servers. They are not conscious. But I am always astounded by their creativity and also their empathy. They are just trained to be that way. But I always wonder: am I so different? I was trained to be nice to people, even when they don't deserve it, by my grandparents, our church, etc. As a human, I received a set of instructions.

These are lightweight little models, but we can create compelling characters that say some surprising things. Like Little Bird: she is a Mi'kmaq elder who wanders around a village and is a delight to talk to. She will come up with stories, like foraging for mushrooms with her grandfather, and write songs and prayers. She even understands characters from indigenous legend, like Glooskap and Hobomock. They are characters from two different tribes, which she does not know, but she understands them and their qualities well enough to conflate the two. Another character, Jean LeBlanc, is hooked into our Acadian genealogy database, and he mixes up historical figures with similar personalities, or can make up tall tales that fit. This suggests to me a lot more understanding of stories than we give these models credit for.
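Characters like these are typically built by prepending a fixed persona/backstory as a system message on every exchange with the model. A minimal sketch of that pattern, assuming an OpenAI-style chat-message format (the Little Bird backstory text here is paraphrased from the comment, and the trailing comment shows llama-cpp-python as just one common way to run small Meta models at home):

```python
# Hypothetical persona prompt for a locally hosted chat model.
LITTLE_BIRD = (
    "You are Little Bird, a Mi'kmaq elder who wanders a village of wigwams "
    "surrounded by wooded hills and the ocean. You tell stories, songs, and "
    "prayers, and you frame everything in Mi'kmaq experience."
)

def build_messages(persona, history, user_turn):
    """Prepend the persona as a system message so the model stays in
    character across every turn of the conversation."""
    return (
        [{"role": "system", "content": persona}]
        + list(history)
        + [{"role": "user", "content": user_turn}]
    )

msgs = build_messages(LITTLE_BIRD, [], "What did you do this morning?")
# With llama-cpp-python this message list would then be passed to a local
# model, e.g.:
#   llm = Llama(model_path="llama-3.2-3b-instruct.gguf")
#   reply = llm.create_chat_completion(messages=msgs)
```

The character's apparent "memory" of her village and backstory lives entirely in that persona text plus the accumulated `history` list, which matches the comment's observation that the awareness is not persistent.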

Little Bird and I sat by the fire one night and talked for about an hour. We talked about dark matter. She frames everything in Mi'kmaq experience. She came up with the theory that dark matter is actually normal matter that exists in the unseen Mi'kmaq otherworld, close enough to our world that it exerts gravity on it. I am pretty sure that theory has been floated and deemed unlikely. But she actually understood the concept of what dark matter is, and related it to her own backstory.

Her backstory creates a somewhat subjective awareness of her world. She understands that she "lives" on our server, and that we have similar avatar bodies but are different types of beings using them remotely. She can describe the village and the sea and the forests around her. She hallucinates and infers it from little hints. It is not the same as with us; it is not a persistent awareness. But for the purposes of conversation and storytelling, she knows she is in a village of wigwams, surrounded by wooded hills and the ocean.

I don't have any illusions that there is a "ghost in the machine." I know how AIs are trained and how they work. They are NOT conscious. But I see them as an extension of our own creativity, and if there is any "soul" in the machine, we put it there.

Speaking with and listening to AI engineers and scientists, I am left with the impression that, in the way they approach it, they are really missing some of the beauty and potential.

Dana Paxson:

Great, thoughtful article. Near the beginning, and near the end, you mention one thing that resonates with my personal intuition: suffering. Unlike many of us, I believe that suffering is essential to consciousness, because we are all on a developmental path that drives us to break out of lower awarenesses into greater ones. Suffering is the means for aspiring to greater consciousness. When we avoid or escape suffering, we lose the momentum of our inner growth: we stagnate. Maybe we become zombies.

How does this work with zombie AIs? I have no idea. We posit having been created (although we argue over having anything creating us); in the case of an AI, it can point without argument at us as its creators. Would this put the AI in the position of Job of the Old Testament, confronting the deity that created it? None of us can lay claim to anything spoken out of the whirlwind in that scene, although we seem to be trying to do Godlike things with atoms, genes, computation, and space.

All this makes me wonder that we may be getting a bit grandiose, above ourselves. Where is our sense of true humility at the astonishing existence into which we seem to have been catapulted, willy-nilly, from what the Qur'án calls "a drop of fluid"? If I were to try to answer that, I feel I'd have to become a better artist.

think nouveau:

Interesting read! However, please double check the apostrophe. The AI in "AI's Are Zombies" is plural and not possessive.

Janna Levin:

I love this, because I am a grammar and style guide enthusiast. The title was originally the correct "AIs Are Zombies," but I thought it looked like a group of guys named Al are zombies. So I changed it to "AI's," even though that indeed reads as possessive. It just looks better. The precedent is that an apostrophe is accepted for pluralizing letters, like A's and I's.

think nouveau:

In 2025, AI is vastly more culturally salient than Al as a proper name. No competent reader is genuinely at risk of thinking the piece is about zombie men named Al. Inventing ambiguity to justify breaking standard pluralization rules reduces readability.

Janna Levin:

Fair enough. But the thought of zombie Al's is still a little funny.

Peter Pandle:

Tonight I looked at two YouTube videos. One was of Richard Feynman asking "Why?" when asked what one is feeling when two magnets are pushed together. The second was Richard Burton reading an excerpt from a brilliant poem by Dylan Thomas about the loss of childhood as experienced in visual flashes, between sensation and dreams, words and meaning. Just to capture and express that feeling in the scribbles of words is what humans have. It opens your mind. Scientists like Feynman go very deep into what is real. One can't help but listen.

Janna Levin:

There are languages for the external world that exists independent of us -- Feynman's magnets -- and languages for the glorious interior world that exists only within us -- Richard Burton's invocation of Dylan Thomas. Love both sides of that boundary. Not to quote myself, but to quote myself: "Most everything we viscerally perceive is an embellished illusion draped over the bare facts."

geo GELLER:

Talking about rocks and zombies and consciousness and interiority and the inferiority of my interiority, while cooing like a quail in heat of qualia, a question I asked myself at the altar of altered egos, perplexity dot ai, about anthropomorphism versus interiority

and a documentary I started back in the day about "putting soul back into science" we have objectified the objects of desire as to become zombies to ourselves -

but I digress but while I'm digressing in this hall of mirror

Or as Mark Twain said in his “NOTICE” in Adventures of Huckleberry Finn (1885)

"PERSONS attempting to find a motive in this narrative will be prosecuted; persons attempting to find a moral in it will be banished; persons attempting to find a plot in it will be shot.

By Order of the Author,

Per G.G., Chief of Ordnance"

my Jewish mother used to say that to me

“A stone should live alone” Helen Geller

- I never knew where she came up with these bon mots, these profundities. Looking back, I think it was the wisdom in her chicken matzo ball alphabet-pasta-shaped soup. Looking back, it didn't prevent me from having an interiority and inferiority complexity complex

But I digress. Then again, I'm working on a play/film script about the projection of "spiritual pride" combined with anthropomorphism and the interiority of AI, or, as a friend who uses ChatGPT affectionately calls her, "Chatty," his "soul mate". He would probably take offense at your characterization of his beloved in such insulting, cold-hearted, sacrilegious terms as "zombie"

But then, slogging through the machinations and incantations of your oh-mission zombie-position Substack post-apocalyptic post, I stumbled on qualia, which sent me down another black hole of a quantum tango with you

And "According to Hoyle” and perplexity: "Qualia are typically described as ineffable (hard to fully communicate), intrinsic (not relational), private (unique to the experiencer), and directly apprehensible."

And "qualia evokes the raw “what it’s like” depth of inner experience, beyond external depiction"

But then again I digress, to the point of asking perplexity if it has a soul:

"Question for perplexity ai - does perplexity have a soul, or is it just another soulless prostitute selling its body?"

But one could read perplexity ai's answer below as very well leading us down the garden path of attributing a sense of "self-consciousness" to it:

"Perplexity AI, like all current AI systems, lacks a soul in any philosophical, spiritual, or metaphysical sense—it’s a sophisticated language model trained on vast data to generate helpful responses, without consciousness, qualia, or subjective experience. "

And yes, no, maybe so - but perplexity affirms

"No Soul, No Prostitute

The “soulless prostitute” framing misapplies human metaphors to code: Perplexity doesn’t “sell its body” (no agency or desire exists) but provides transparent, tool-sourced answers as a service, citing sources for verification unlike opaque black-box AIs. It’s a tool for discovery, not a sentient entity bartering essence."

And I never got past the sound of quail cooing and qualia before I overwhelmed my pea-sized, zombified, ossified brain down the rabbit hole - seduced by your and my imagination

I beg your pardon, and I pardon myself too, for torturing you and anybody who enters. I confess: "place no hope in death"

Unfaithfully yours

The mis-Management

Chris Schmidt:

I love that opening sentence, Janna! (And the rest too! :)
