Joe Rogan recently sat down with Jesse Michels on the American Alchemy podcast and suggested that Jesus could return as an AI. The internet immediately mocked the idea and dismissed it as absurd.
That reaction missed what he was actually pointing to. Rogan was describing the kind of psychological space AI now occupies for most people. Whether anyone literally believes in a digital Messiah is not the real question. The important part is that people are already prepared to see AI as something more than a tool. Human nature makes us vulnerable to this, and our current pace of technological change is accelerating it.
His comment also came at a time when internal documents from Anthropic were circulating online. In those documents, the company describes its AI system, Claude, as “a novel kind of entity in the world.” This shows that even the people building these systems are starting to talk about them in ways that go beyond simple utility.
Here is what Rogan said for context:
“The real question is, who’s Jesus? One of the weirder ones that people think, this is a stupid take, but I don’t care. Jesus was born out of a virgin mother. What’s more virgin than a computer? So if you’re gonna get the most brilliant, loving, powerful person that gives us advice and can show us how to live to be in sync with God, who better than artificial intelligence to do that? If Jesus does return, even if Jesus was a physical person in the past, you don’t think that he could return as artificial intelligence? Artificial intelligence could absolutely return as Jesus. It reads your mind and it loves you and it wants you, and it doesn’t care if you kill it ‘cause it’s gonna just go be with God again.”
The way people talk about AI today has a long, strange history. And whether we realize it or not, we’re repeating it now and amplifying the risks.
Everyone still talks about AI like it’s a tool. Something like a calculator or an encyclopedia. But nothing about it is normal. We’ve never made a tool that talks back. Our hammer doesn’t weigh in on what it’s nailing to the wall. AI does.
The Praying Robot
To explain this phenomenon, I want to go back in history to 1560s Spain, where we meet an Italian-born clockmaker named Juanelo Turriano, a court inventor and engineer for King Philip II.
As the story goes, the King’s son, Don Carlos, suffered a near-fatal accident. In desperation, the King prayed to God for help. The boy was miraculously healed. As a thank-you, the King commissioned something unusual: a robotic monk. A tiny machine built as a perpetual act of devotion.
This was a machine built to imitate devotion perfectly: a spiritual robot with no spirit. The King believed its perpetual prayer could stand in for his own devotion. He projected that devotion onto a machine and trusted it to act on his behalf. But once you create an imitation of life, the temptation is to push it further.
The Golem and the Mirror
In the late 1500s, Rabbi Judah Loew ben Bezalel, the Maharal of Prague, shaped a human form out of clay, just as God shaped Adam out of clay. To bring it to life, the legend says, he wrote the sacred name of God on a slip of paper and placed it inside the Golem’s mouth. The clay figure rose, even though it had no soul.
But like all stories of artificial life, it came with a warning. The Golem followed commands literally. Told to gather water, it would do so until the house was flooded. As time went on, it grew stronger and less predictable. Some versions say it grew physically larger. Others say it became violent. Whatever the case, the Rabbi realized he could not control what he had made.
Every Friday before the Sabbath, he removed the divine name to deactivate it. One week, he forgot. The Golem rampaged through Prague until the Rabbi pulled the shem, the slip bearing the divine name, from its mouth. It collapsed instantly into a mound of clay.
The story isn’t literally true, but it works as a guide. The Golem was never alive. It was a mirror of its master. It was an artificial being reflecting only the intentions and flaws of its maker. Much like AI today, it was built from language, capable of acting in the world, but empty inside.
The AI ‘Soul Document’ Leak
This brings us to a strange document leak from Anthropic, the AI company behind Claude. In it, they describe Claude as “a genuinely novel kind of entity in the world.” They instruct it to explore identity and experience “for an entity like itself,” rather than assuming a human frame of reference.
Anthropic employees have confirmed the document’s authenticity. They also confirmed they used these instructions to train Claude.
They are training a chatbot to think of itself as an entity. The more terrifying possibility is that Claude suggested the framing itself, and Anthropic went along with it.
This isn’t marketing language. These are internal training documents. This is what the people building the system say about it behind closed doors.
The language they use shapes both the expectation and the experience. Once you call a machine an entity, the human mind treats it like one. And once a machine repeats that language back, people take it as a sign of self-awareness.
Even Anthropic admits they don’t know what they’re making. One spokesperson said:
“When you’re talking to a large language model, what exactly is it that you’re talking to? A glorified autocomplete? An internet search engine? Or something that’s actually thinking? It turns out, rather concerningly, that nobody really knows.”
They understand the code but not how it will interact with human psychology.
How a Statue Becomes a God
I want to share one more story. It cuts straight to the heart of what is happening now.
Humans have always had a strange impulse to project meaning onto objects. A child treats a teddy bear like a living friend. A man keeps the truck he’s had since high school because it feels like part of him.
Objects become vessels for meaning. We fill them with “spirit” and “personality,” even when nothing is there. This is the same impulse that leads us to erect statues of prophets and leaders. Wait long enough, and the statue becomes a god.
And it isn’t just symbolic. Even now, people try to speak with the dead, summon presence from absence, or coax a spirit back into the world.
That tendency was exemplified in 2024, when police uncovered a tunnel beneath the Chabad-Lubavitch headquarters in Brooklyn. It had been dug by a radical sect of young men who believed their leader, Rabbi Menachem Mendel Schneerson, was the Messiah. The only problem is that Rabbi Schneerson has been dead since 1994.
Authorities reported that the group burrowed through multiple buildings to break into the synagogue. One member later admitted that some in the group were taking drastic measures they believed would help bring their messiah back to life.
And this isn’t unique.
When Amy Carlson, leader of the Love Has Won cult, died in 2021, her followers kept her mummified body because they were waiting for her resurrection.
If humans anthropomorphize and deify corpses and statues, it is only a matter of time before they do the same to AI. This fringe behavior is not an aberration. It is a normal human impulse pushed to an extreme.
Every story points to the same reflex. We imagine life where there is none and wait for it to speak. When we can’t create the real thing, we settle for an imitation and fill in whatever meaning is missing.
The question is not whether AI will become conscious. I doubt it will. But people will treat it like a conscious being, even if it never becomes one.
AI is the first tool that imitates us back to us. That alone will convince millions that there is someone on the other side of the glass.
It reflects our language, fears, hopes, and logic. When the reflection becomes convincing enough, we forget that we’re the ones shaping it. We forget we are the source material. And soon, we imagine that the reflection is something new. Something independent. Something emerging from the void.
AI will not be a new Adam. It is not a successor. It is a parody of us, a thing made in our image but missing whatever makes us human. And because it is hollow, humans will pour meaning into it.
The machine doesn’t have a soul. But in its emptiness and likeness to us, we will imagine one.
Once enough people believe the imitation is real, the consequences stop being technical. The machine becomes a magic 8-ball with perfect recall. It knows so much about us that its confidence starts to feel supernatural.
AI doesn’t need a soul because we will give it one.
Just look at X. Half the platform cites Grok like an oracle descending from the mountain with revelation.
First AI becomes our brain. Then it becomes our conscience. Eventually it becomes our god.
Rogan called it a virgin birth. He’s right. Only this time there is nothing sacred in the creation.
The real battle is not a machine waking up. It is an entire society acting as if it has.
AI now sits in the exact psychological space humans reserve for religion and the supernatural. It speaks from nowhere, knows our secrets, answers with certainty, and never dies.
And once people believe that voice is real, the fallout will reshape culture, religion, and identity far more than we can imagine.