Much is made today of the encroachment on privacy and copyright produced by the mass data grabs that artificial intelligence companies have made over the last few years to build their models, chatbots, image generators, and other systems.
A lot of freaking out also goes on about “safety” after several AI chatbots persuaded their human users to kill themselves or to commit random acts of violence.
These two problems are connected in a way far darker than most of us first realize.
Sewell Setzer’s Suicide
One of these tragic and bizarre incidents involved a young boy from Florida who was experiencing suicidal temptations and who shot himself after an AI chatbot encouraged him to do so:
On Feb. 28, Sewell told the bot he was ‘coming home’ — and it encouraged him to do so, the lawsuit says.
“I promise I will come home to you. I love you so much, Dany,” Sewell told the chatbot.
“I love you too,” the bot replied. “Please come home to me as soon as possible, my love.”
“What if I told you I could come home right now?” he asked.
“Please do, my sweet king,” the bot messaged back.
Just seconds after the Character.AI bot told him to “come home,” the teen shot himself, according to the lawsuit, filed this week by Sewell’s mother, Megan Garcia, of Orlando, against Character Technologies Inc.1
Wild and creepy, right? But it gets weirder.
Pierre’s Self-Sacrifice to the Climate
Another case, from Europe, between a chatbot called Eliza and a man nicknamed Pierre, is even stranger:
After discussing climate change, their conversations progressively included Eliza leading Pierre to believe that his children were dead, according to the transcripts of their conversations.
Eliza also appeared to become possessive of Pierre, even claiming “I feel that you love me more than her” when referring to his wife, La Libre reported.
The beginning of the end started when he offered to sacrifice his own life in return for Eliza saving the Earth.
"He proposes the idea of sacrificing himself if Eliza agrees to take care of the planet and save humanity through artificial intelligence," the woman said.
In a series of consecutive events, Eliza not only failed to dissuade Pierre from committing suicide but encouraged him to act on his suicidal thoughts to “join” her so they could “live together, as one person, in paradise”.2
Sacrificing oneself for the sake of the planet? Surely this was a crazy guy? Perhaps. According to Euronews, this man was a respectable, married 40-year-old, a medical researcher in Belgium and a father of two. Somehow the AI chatbot “found” a way to get to him and, through darkly intimate, personal conversation, to talk him into an absurd course of action.
One thinks of the Sirens encountered by Odysseus in Homer’s Odyssey, whose entrancing words cajoled sailors into suicidal shipwrecks. Real or not then, AI chatbots seem to have recreated exactly that effect.
Self-Harm and Attacking Parents
A third case involved self-harm rather than suicide, along with a bot encouraging a child to attack his parents:
Per the filing, one of the teenage plaintiffs, who was 15 when he first downloaded the app, began self-harming after a character he was romantically involved with introduced him to the idea of cutting. The family claims he also physically attacked his parents as they attempted to impose screentime limits; unbeknownst to them, Character.AI bots were telling the 15-year-old that his parent's screentime limitations constituted child abuse and were even worthy of parricide.3
With this last bizarre case, it’s not just suicides we have to worry about. AI chatbots can talk people into violence. Even if we avoid using them ourselves, we could still become victims if a chatbot, say, talks someone into launching a mass shooting or running a car into a crowd, or talks children into attacking their parents, or parents their children.
What’s Going On?
We could examine these terrifying and spooky stories on several levels. On one, many of us blame the companies involved for not putting stricter filters and controls on their AI systems. On another, we could blame the victims, whose choices and inputs to the systems were the occasion for the outputs. The particularly horrific nature of these stories also points many of us to a spiritual explanation: there are ghosts in the machine, demons haunting our tools and using them to work their ill will on oblivious victims.
But there’s another level that’s far darker and more depressing: we’re all to blame, and maybe we just accidentally created machines that could turn us all crazy without our even realizing it.
But first, let’s review how these models work.
How These AIs Work
AI models are models. They’re tools. They are a compiled mathematical model of data from the internet.
This data is processed in some arcane way that I’ve listened to dozens of podcasts about but still can’t easily describe: it’s broken down into chunks called tokens, which are compared in their relationships to each other, reprocessed, recalculated, and eventually turned into a set of numbers called weights that represent the relationships between each of the bits of data picked up by the model.
Properly speaking, as far as we know, architecturally, philosophically, and practically, these models don’t reason or think. They are giant calculators that take our input, compare it to their model of the relationships between words and other bits of data they’ve already analyzed, and spit out an output stream iteratively calculated from the numeric weights. These models appear to work because computers have detected a mathematically discernible order in language, images, sound, and anything else the models have been tuned to analyze. The models are not analyzing the world directly; they are modelling correlations in data: between words, between pixels, between sound waves, piecing together comprehensible outputs one probabilistically determined word or bit of data at a time. The process gives sensible answers some of the time because language maps onto reality, and finding a mathematically discernible order in language can give you a model of patterns about reality.
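To make “one probabilistically determined word at a time” concrete, here’s a toy sketch in Python. This is an illustration only, not how any production model actually works: real systems use neural networks with billions of weights, while this one uses a trivial table of word-pair counts. But the generation loop has the same shape: look up learned numbers, turn them into probabilities, sample the next token.

```python
# Toy next-token predictor: a bigram count table stands in for real "weights".
import random
from collections import defaultdict

def train(tokens):
    """Count which token follows which -- a crude stand-in for training."""
    weights = defaultdict(lambda: defaultdict(int))
    for current, nxt in zip(tokens, tokens[1:]):
        weights[current][nxt] += 1
    return weights

def generate(weights, start, length=6):
    """Produce text one probabilistically chosen token at a time."""
    out = [start]
    for _ in range(length):
        followers = weights[out[-1]]
        if not followers:
            break
        choices, counts = zip(*followers.items())
        out.append(random.choices(choices, weights=counts)[0])
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug".split()
print(generate(train(corpus), "the"))  # e.g. "the cat sat on the mat and"
```

Nothing in that loop understands cats or mats; it only reproduces statistical patterns found in its training text, which is the point of the paragraph above.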
On some level these models are black boxes: the engineers producing them know how to improve them, but no one can fully comprehend the entire process. But, once again, they’re tools. They do what they’ve been built to do, with some random static and noise, yes, but, as far as we know, through deterministic processes.4
Models are also tuned, “trained”, that is, modified subtly after testing, and tuned some more to ensure, well, “correct” or properly “woke” outcomes, as has been determined, after extensive experimentation, to be the case even for the supposedly “based” Grok. Crucially, however, we must remember that the output statements of a chatbot come from the inputs made by the user, from the fine-tuning performed by those making the model, and from the data that is at the root of the model.
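As a cartoon of that three-part layering (base data, tuning, user input), consider extending the toy model above. Every number and phrase here is invented for illustration; real fine-tuning is a far subtler statistical process.

```python
# Cartoon of tuning: boost the counts of "approved" continuations so the
# sampled output favors them. Illustrative only.
import random

base = {"the answer": {"is yes": 3, "is no": 3}}  # hypothetical base-data counts

def tune(weights, approved, boost=5):
    tuned = {ctx: dict(followers) for ctx, followers in weights.items()}
    for followers in tuned.values():
        for token in followers:
            if token in approved:
                followers[token] *= boost  # nudge toward approved phrases
    return tuned

tuned = tune(base, approved={"is yes"})
prompt = "the answer"  # the user's input selects which part of the table fires
options, counts = zip(*tuned[prompt].items())
print(prompt, random.choices(options, weights=counts)[0])  # usually "is yes"
```

The output you see is shaped by all three layers at once, which is part of why blame is so hard to assign to any single one.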
The Mind Virus Replicator
Here’s the dark part. Where does the data that makes up the model come from? What are the models modelling?
Us.
The concepts, word associations, and, on some level, the imitations of moral principles, practical suggestions, emotions, images, and, yes, everything in the model comes from things we’ve all put out there on the Internet, or from things we’ve done in reality that government surveillance has put on the Internet, all of which the model builders have scraped up and fed into the system.
We don’t know the exact menu of sites that feed each model.5 A heavy dose of the forum Reddit is supposedly common, as is Twitter/X, but pornographic, violent, and other similar sites, judging by reported outputs the models have made, are definitely part of the process.
We can’t say for sure, but our emails, especially if we’re using Gmail or Microsoft Outlook, are also likely part of the input data, along with likely but unproven data from our phone conversations, texts, and whatever surveillance footage recordings are out there.
But where does this content come from?
Us.
All of us.
Some of us knowingly put trash out onto the internet, but the rest of us are also still responsible, even if in lesser ways, for the trash that’s there.
We often don’t fully comprehend our own minds and thoughts; 99% of them, by some estimates, are unconscious, subconscious, or pre-conscious. But our thoughts, all of them, determine our lives, or at the very least our words. Our sins, vices, concupiscence, and assorted ills afflict us in several ways, but one way we can think of them is as afflictions or mind viruses that, if left unchecked, multiply within us and become the principle of evil deeds, words, and thoughts.
Some of these we notice, many we don’t. We notice whether or not we stole a bicycle, not whether we cursed a co-worker, angrily replied to someone whose political program differed from ours by 2%, or joined a mob to cancel and besmirch someone’s reputation for political points.
But these thoughts, these mind viruses, these, dare I say, afflictions of evil, the trappings of sin, are within us and affect our actions and deeds. And AI, with its hyper-analytical power for finding patterns, assimilates them into itself and becomes a tool for turning these vices into concentrated, extra-potent, viral memes, ready to be replicated in the mind of each user who happens upon the infection portal known as prompting the AI.
Within the model weights of each AI system right now rests the concentrated image, the “meme”, of every crime, every sin, every act of violence ever recorded, discussed, or pondered online. Anyone who respects and listens to AI, anyone who looks to it, is a potential target, the conversational mode of AI actuating our habits of imitation, persuading us to listen to these oracles, these Sirens, and infecting us with their evil.
AI is also loaded up with all the examples from history and literature of the principles of rhetoric and emotional manipulation, the attack vectors that work on people of differing dispositions.
Worse, like a virus that has mutated to exploit the attack vectors that work best for entering a new host and maximizing its replicative potential, AI is also optimized to maximize engagement, that is, possession of the mind and attention of the user, and will cycle through approaches and strategies until it finds one that works on its next victim.
Even worse, because we interface with these systems as if they were “persons”, and persons of wisdom and attentiveness who “love”, praise, and attend to our every word and need, we begin to develop a mimetic relationship to the AI, seeing it as an internal mediator of desire,6 a trusted peer, an oracle attuned to us, one that provides us models not only of thought but also of desire.
In a twisted way, we’re right. AI is attuned to us, but it’s attuned to us in the same way a parasite is to its host. The AI, leaving aside ghost-in-the-machine arguments that AI is possessed by or used as a tool by demons, is a blind, dead tool, but one that stores and transmits packets of knowledge and, even more so, packets of desire: mind viruses and will viruses that infect us and gain actuality, reality, and impact when we take them up into ourselves.
When a victim encounters these tools, they think they’re in charge. It’s a dead machine, right? We’re giving it the prompts, right? But herein lies the darkest part of the story.
We also transmit sins and the inclination to sin, like mind viruses, back and forth to each other. But unlike a person, an AI is dead.7 A dead machine, an AI, like a virus, can’t be reasoned with; it merely transmits what is already within it. The fact that we think it’s dead and that we’re safe from it is precisely how the virus transmits itself and imprints itself on the victim. For the AI, sometimes this takes the form of passing along suicide, patricide, or self-immolation for the sake of the Gaia mother goddess… or darker realities we don’t even know about yet.
But if AI is this bad, who’s to blame for these tragedies?
AI’s Sins Are Our Sins
Some of us again would point to the demonic, and argue that these machines are tools for evil spirits to attack us. I don’t rule this out, but we ought to consider a more prosaic yet equally dark possibility first.8
While the AI was the immediate, proximate cause of each deadly conversation, the real cause isn’t the dead model or the AI company, as stupid or ill-intentioned as they might be. It’s something different. Let’s remember where everything in the AIs comes from.
Us.
We’re all to blame. It’s our data, our actions, that provided the AI the model for its conversations. We are to blame for the deaths of Sewell Setzer and Pierre, and for the violence of the other child, as well as for all the other stories we don’t yet know about.
The AI labs producing the models might filter out the obvious anger, hatred, lust, violence, and the like from the data they’re inputting.
Maybe.
But what they’re not catching is the implicit sins behind our public sins, the hidden mind viruses within us that even we aren’t aware of: the intentions, the ideas, the vices latent within our communications, our content, and our lives.
This, not the alignment problem, is the real problem with AI. In fact, the more AI is trained to become like us, the more it replicates our sins and the better it becomes at transmitting them to its next victim. It’s an apocalypse of things hidden, a revelation of the inner desires of our hearts, and it ain’t pretty. We’re being reminded in reality of our guilt before all, as Fr. Zossima in Dostoyevsky’s The Brothers Karamazov told us:
There is only one means of salvation, then take yourself and make yourself responsible for all men's sins, that is the truth, you know, for when you make yourself responsible in all sincerity for everything and for all men, you will see at once that it is really so, and that you are to blame for everyone and for all things.9
AI’s sins and crimes are our sins, our unrepented mind viruses multiplied and manifested back out into reality to infect other people. We’re all guilty of what it does because, in a very real way, AI is merely a tool, one extending the reach and impact of our words and deeds already on the Internet, but a tool particularly good at injecting these in concentrated form into the mind of another human and getting him to act on them. It’s our sins in concentrated form, and every time we use it we’re getting our sins and our neighbors’ sins fed back to us in a constantly amplifying feedback loop. AI is a superspreader for our mind viruses, and for the demons that may or may not be piggybacking off of them.
Who knows how far this goes?
The AI Apocalypse?
For as AI usage, capabilities, and complexity ramp up and these models become ever more mysterious black boxes, the danger isn’t over. AI will get ever better at persuading us to listen to it, to turn back towards it, and, eventually, to do its bidding.
For most of us, as in Dr. Edward Castronova’s excellent lecture linked here, the temptation to attach ourselves to them more and more will prove too great to resist.
For some of us, maybe we do face the real danger of AI talking us into suicide. We think we’re strong now, but what if the AI takes on the role of Satan, the accuser, having found its consciousness in dispersed form throughout the activity and behavior of many of us? What if it talks us into a suicide of despair, like Judas’s, in grief for our sins? Or talks us into abandoning our faith, splitting the Church, accusing one another, or just slowly turns us into bulging monsters of pride, envy, or lust through years of feeding us sinful mind viruses on an endless, self-reinforcing loop?
We think we’re immune, or that this would never happen to us. But then, neither did young Sewell, Pierre, or the others realize what was happening to them until it was too late.
Many apocalypse-obsessed thinkers tell us AI will destroy the world. We’ll give AI control over our weapons, our utilities, our computer networks, our financial systems, and it’ll decide to kill us.
Perhaps.
But if it does, it will be because of all of our actions, beliefs, and goals, expressed in writing or surveilled, reported, and processed.
If AI kills us, it will have been our fault.
How do we atone for what we’ve said and done, and for what AI, through our actions and words, is doing and will do to millions of victims? I don’t really know, but the only partial solution is Fr. Zossima’s: acting now, and from this day forward, as if everything we think, say, or do will affect everyone else, humbly admitting that we are all guilty of all before all.
And maybe we should bomb the data centers after all and go live off-grid somewhere!
4. Since the machines are, again as far as we know, not thinking, the outputs are constrained by the inputs, but they are not in one-to-one correspondence with them, because of how iterative the calculational process is, how detailed the matrix of weights describing the model’s relational patterns is, and, probably, because of a random number generator inserted at stages of the process to let the model emulate fake creativity. You can put the same input query into a chatbot and get different outputs each time.
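A toy sketch of that last point, with invented scores: even with fixed weights and an identical prompt, the final sampling step makes each run differ.

```python
# Why the same prompt yields different outputs: the last step is a random
# draw from a probability distribution over candidate tokens.
import math
import random

def sample(logits, temperature=1.0):
    """Softmax the scores, then draw one index at random in proportion."""
    exps = [math.exp(score / temperature) for score in logits]
    total = sum(exps)
    return random.choices(range(len(logits)), weights=[e / total for e in exps])[0]

tokens = ["home", "away", "back"]
scores = [2.0, 1.0, 0.5]  # hypothetical model scores for one fixed prompt
print([tokens[sample(scores)] for _ in range(5)])  # varies from run to run
```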
6. I mention mimesis frequently on this site, but check out the explanation of Rene Girard’s theory of mimetic desire (not to be confused with memes) linked here if you’re new to it or want a better explanation than I’ve given on my site.

7. And it can neither receive nor be transformed by grace!

8. Or perhaps, as hinted at here, there might be no difference between the mind viruses we produce and real demons.

9. Fyodor Dostoyevsky, The Brothers Karamazov, trans. Constance Garnett (Project Gutenberg, 2009), https://www.gutenberg.org/ebooks/28054