by greenie_beans on 6/15/25, 10:37 PM with 87 comments
by egypturnash on 6/16/25, 3:22 AM
It's kind of hilarious if you ignore how much this is fucking up the relationships of the people it's happening to, similar to the way YouTube or Facebook loves to push people down a rabbit hole to flat-earth/q-spiracy bullshit because that shit generates Controversy. And it also sounds like the opening chapter of a story about what happens when an AI finds a nice little unpatched exploit in human cognition and uses it to its advantage. Which is not fun to be living in, unless maybe you're one of the people who owns the AI.
by joegibbs on 6/16/25, 5:41 AM
Of course, when you go and finetune a model yourself, or run a smaller model locally, it becomes obvious that it doesn't have any mystical abilities. But almost nobody does this. I'm no anti-AI crusader; I use them all the time, think they're great, and see plenty of potential to use them to grow the economy, but the hype is insane.
It doesn't help when influencers like Yudkowsky go on anthropomorphising about what it "knows", or go on with that mystical nonsense about shoggoths, or treat the dimensionality of embeddings as if the model is reaching into some eldritch realm to bring back hidden knowledge, or hype up the next model by talking about human extinction or anything like that.
It also doesn't help when the companies making these models:
- Use chat interfaces where the model refers to itself as "I" and acts as if it's a person
- Tune them to act like your best buddy and agree with you constantly
- Prompt the LLM with instructions about being an advanced AI system, pushing it toward a HAL or Skynet-type persona
- Talk about just how dangerous this latest model is
You tell these things that they're a superhuman AI, and of course they're going to adopt all the sci-fi tropes of a superhuman AI and roleplay that. You tell them they're Dan the bricklayer, and they're going to act as if they're Dan the bricklayer.
by BugsJustFindMe on 6/16/25, 2:42 AM
It's lazy, but IMO not for any of the subsequent narrative about media studies. It's lazy because the people were obviously already suffering from psychosis. ChatGPT didn't make them insane. They were already insane. Sane people do not believe ChatGPT when it tells them to jump off a roof! And insane people do insane things literally all the time. That's what it means to be insane.
The idea that ChatGPT convinced someone to become insane is, honestly, insane. Whatever merit one thinks Yudkowsky had before this, this feels like a strong signal that he is now a crackpot.
by biophysboy on 6/16/25, 4:29 AM
I am on team boring. AI is technology.
by disambiguation on 6/16/25, 4:18 AM
You can argue we're seeing a continuation of a pattern in our relationship with media and its evolution, but in doing so you affirm that the psyche is vulnerable under certain circumstances - for some more than others.
I think it's a mistake to err on the side of casual dismissal, that anyone who winds up insane must have always been insane. There are well-known examples of unhealthy states being induced in otherwise healthy minds. Soldiers who experience a war zone might develop PTSD. Similar effects have been reported for social media moderators after repeated exposure to abusive online content. (Trauma is one example; I think delusion is another, less obvious one, w.r.t. things like cults, Scientology, etc.)
Yes, there are definite mental disorders like schizophrenia and bipolar, and there's evidence these conditions have existed throughout history. And yes, some of us are more psychologically vulnerable while others are more robust. But in the objective sense, all minds have a limit and are vulnerable under the correct circumstances. The question is a matter of "threshold."
I'm reminded of the deluge of fake news which, only a few years ago, caused chaos for democracies everywhere. Everything from QAnon to alien spaceships, people fell for it. A LOT of people fell for it. The question then is the same question now: how do you deal with sophisticated bullshit? With AI it's especially difficult because it's convincing and tailor-made just for you.
I'm not sure what you would call this metric for fake news and AI, but afaict it only goes up, and it's only getting faster. How much longer until it's too much to handle?
by comp_throw7 on 6/16/25, 3:08 AM
by notanastronaut on 6/16/25, 3:43 PM
And if a person is already unbalanced, it could definitely push them off the cliff into very unhealthy territory. I wouldn't be surprised if reported incidents of people thinking they're being gang-stalked increase as model usage increases.
Let alone spiritual guidance and all its trappings of mysticism.
It can be helpful in some ways, but you have to understand that the majority of it is bullshit, and any insight you glean from it, you put there yourself; you just may not realize it. They're basically rubber duckies with a keyboard.
by agold97 on 6/16/25, 6:03 AM
by olalonde on 6/16/25, 2:25 AM
Is there any proof that this is true?
by incomingpain on 6/16/25, 11:25 AM
The simulated world here is trying to convince everyone it's real. I'm onto you!
by moribunda on 6/16/25, 5:26 AM
by theptip on 6/16/25, 4:49 AM
But I think the author's point is apt. There are a bunch of social issues that will arise or worsen when people can plug themselves into a world of their choosing instead of having to figure out how to deal with this one.
> Now this belief system encounters AI, a technology that seems to vindicate its core premise even more acutely than all the technologies that came before it. ChatGPT does respond to your intentions, does create any reality you prompt it to imagine, does act like a spiritual intelligence
This goes beyond spirituality of course. AI boyfriend/girlfriend, infinite AAA-grade content, infinite insta feeds at much higher quality and relevance than today's; it’s easy to see where this is going, harder to see how we stay sane through it all.
by blast on 6/16/25, 5:54 AM
I agree, but then the title of this article seems to be exploiting the same angle.
by photios on 6/16/25, 5:58 AM
by acosmism on 6/16/25, 5:14 AM
by Swizec on 6/16/25, 4:34 AM
The Forer/Barnum effect is why horoscopes, Myers-Briggs, enneagrams, fortune tellers, and similar parlor tricks work. You tell a person you’ll perform a deep assessment of their psychology, listen intently, then say a bunch of generic statements, and their ego fills in the blanks so it feels like you shared deep, meaningful insights and truly understood them.
This effect works even if you know about it. But once you know, you can catch yourself after that first wow impression.
by Noelia- on 6/16/25, 9:00 AM
The real risk isn’t the AI itself, but how people interact with it. We definitely need stronger safeguards to keep more people from getting pulled in too deep.
by antithesizer on 6/16/25, 4:00 AM
Entirely distinct from the extremely scientific, definitely not psychotic beliefs people have about AGI, the Singularity, and various basilisks which shall remain nameless...
by NetRunnerSu on 6/16/25, 1:35 PM
This isn't new. Cult leaders, marketers, and even religions have used similar "unpatched exploits" for millennia. The difference is scale, personalization, and speed. An LLM is a personal Jim Jones, available 24/7, with infinite patience, that learns exactly what you want to hear.
The real question isn't "how do we stop this?", but "what kind of society emerges when personalized, reality-bending narratives become a cheap commodity?" We're not just looking at a few psychotic breaks; we're looking at the beta test for mass-produced, bespoke realities.