this post was submitted on 28 Jun 2025
91 points (100.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

[–] [email protected] 14 points 23 hours ago* (last edited 23 hours ago) (1 children)

The people being committed are only a symptom of the problem. My guess is that if LLMs didn't induce psychosis, something else would eventually.

The peddlers of LLM sycophants are definitely doing harm, though.

[–] [email protected] 19 points 22 hours ago* (last edited 21 hours ago) (2 children)

My guess is that if LLMs didn't induce psychosis, something else would eventually.

I got a very different impression from reading the article. People in their 40s with no priors and a stable life losing touch with reality in a matter of weeks after conversing with ChatGPT makes me think that's not the case. But I am not a psychiatrist.

Edit: the risk here is that we might be dismissive of the increased danger because we're writing it off as a pre-existing condition.

[–] [email protected] 4 points 21 hours ago (1 children)

I think if it only takes a matter of weeks to go into full psychosis from conversation alone, they're probably already on shaky ground mentally. Late-onset schizophrenia is definitely a thing.

[–] [email protected] 13 points 16 hours ago* (last edited 16 hours ago) (1 children)

People are often overly confident about their imperviousness to mental illness. In fact, I think that, given the right cues, we're all more vulnerable to mental illness than we'd like to think.

Baldur Bjarnason wrote about this recently. He talked about how chatbots are incentivizing and encouraging a sort of "self-experimentation" that exposes us to psychological risks we aren't even aware of. Risks that no amount of willpower or intelligence will help you avoid. In fact, the more intelligent you are, the more likely you may be to fall into the traps laid in front of you, because your intelligence helps you rationalize your experiences.

[–] [email protected] 10 points 12 hours ago

I think this has happened before. There are accounts of people who completely lost touch with reality after getting involved with certain scammers, cult leaders, self-help gurus, "life coaches", fortune tellers or the like. However, those perpetrators were real people who could only handle a limited number of victims at any given time, and they presumably had very specific methods and strategies that wouldn't work on everybody, not even on all of the people who might have been most susceptible.

ChatGPT, on the other hand, can do this at scale. It was also probably trained on the websites and public utterances of every scammer, self-help author, (wannabe) cult leader, life coach, cryptobro, MLM peddler etc. available, which allows it to generate whatever response works best to keep people "hooked". In my view, this alone is a cause for concern.

[–] [email protected] 13 points 21 hours ago

I think we don't know how many people might be at risk of slipping into such mental health crises under the right circumstances. As a society, we are probably good at protecting most of our fellow human beings from this danger (even if we do so unconsciously). We may not yet know what happens when people regularly experience interactions that follow a different pattern (which might be the case with chatbots).