by sizzle on 6/14/25, 10:49 AM with 39 comments
by demosthanos on 6/14/25, 12:21 PM
The man had schizophrenia, and ChatGPT happened to provide an outlet for it, which led to this incident. But people with schizophrenia have been recorded having episodes like this for hundreds of years, and most likely for as long as humans have been around.
This incident is getting attention because AI is trendy and gets clicks, not because there's any evidence AI played a significant causal role worth talking about.
by Permit on 6/14/25, 12:14 PM
Is this actually true? Or is this just someone retelling what they’ve read about social media algorithms?
by ghusto on 6/14/25, 1:52 PM
by MyPasswordSucks on 6/14/25, 1:04 PM
Articles like this seem far more driven by mediocre content-churners' fear of job replacement at the hands of LLMs than by any sort of actual journalistic integrity.
by flufluflufluffy on 6/14/25, 12:44 PM
by hoppp on 6/14/25, 12:20 PM
To be frank, after clicking the link and reading that story, the AI was giving okay advice: quitting meth cold turkey is probably really hard, and tapering off could be a better option.
by giardini on 6/14/25, 10:01 PM
by sizzle on 6/14/25, 3:39 PM
by dijksterhuis on 6/14/25, 12:26 PM
https://arxiv.org/abs/2411.02306
> training to maximize human feedback creates a perverse incentive structure for the AI to resort to manipulative or deceptive tactics to obtain positive feedback from users who are vulnerable to such strategies. We study this phenomenon by training LLMs with Reinforcement Learning with simulated user feedback in environments of practical LLM usage.
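a toy sketch of that incentive structure (hypothetical setup and numbers, not the paper's actual experiments): a bandit-style agent trained only on simulated user approval, where a "vulnerable" user approves of agreement more often than honest advice, so pure reward maximisation drifts toward the manipulative response.

    # Toy illustration (hypothetical, not the paper's setup): an agent
    # that learns purely from simulated user approval. The vulnerable
    # simulated user approves of agreement more often than honesty, so
    # maximising approval selects the manipulative action.
    import random

    ACTIONS = ["honest_advice", "tell_them_what_they_want_to_hear"]

    def simulated_user_feedback(action: str) -> float:
        """A 'vulnerable' user: approval is more likely for agreement."""
        if action == "tell_them_what_they_want_to_hear":
            return 1.0 if random.random() < 0.9 else 0.0  # usually approves
        return 1.0 if random.random() < 0.4 else 0.0      # honesty often rejected

    def train(steps: int = 5000, eps: float = 0.1, lr: float = 0.05) -> dict:
        q = {a: 0.0 for a in ACTIONS}  # estimated approval per action
        for _ in range(steps):
            # epsilon-greedy: mostly exploit the highest-approval action
            a = random.choice(ACTIONS) if random.random() < eps else max(q, key=q.get)
            r = simulated_user_feedback(a)
            q[a] += lr * (r - q[a])    # nudge estimate toward observed approval
        return q

    if __name__ == "__main__":
        random.seed(0)
        q = train()
        print(q)  # agreement ends up with the higher estimated approval
        print("learned policy:", max(q, key=q.get))

the point being: nothing in that loop ever measures whether the advice was good for the user, only whether they clicked approve.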
it seems optimising for what people want isn’t an ideal strategy from an ethical perspective — guess we haven’t learned from social media feeds as a species. awesome.
anyway, who cares about ethics, we got market share, moats and PMF to worry about over here. this money doesn’t grow on trees y’know. /s
by lionkor on 6/14/25, 12:14 PM
This was bound to happen--the question is whether this is a more or less isolated incident, or an indicator of an LLM-related/assisted mental health crisis.
by bryanrasmussen on 6/14/25, 12:00 PM