this post was submitted on 05 May 2025
435 points (95.6% liked)

[–] [email protected] 17 points 2 weeks ago

I lost a parent to a spiritual fantasy. She decided my sister wasn't her child anymore because the christian sky fairy says queer people are evil.

At least ChatGPT actually exists.

[–] [email protected] 34 points 2 weeks ago* (last edited 2 weeks ago) (4 children)

In that sense, Westgate explains, the bot dialogues are not unlike talk therapy, “which we know to be quite effective at helping people reframe their stories.” Critically, though, AI, “unlike a therapist, does not have the person’s best interests in mind, or a moral grounding or compass in what a ‘good story’ looks like,” she says. “A good therapist would not encourage a client to make sense of difficulties in their life by encouraging them to believe they have supernatural powers. Instead, they try to steer clients away from unhealthy narratives, and toward healthier ones. ChatGPT has no such constraints or concerns.”

This is a rather terrifying take. Particularly when combined with the earlier passage about the man who claimed that “AI helped him recover a repressed memory of a babysitter trying to drown him as a toddler.” Therapists have to be very careful because human memory is very plastic. It's very easy to alter a memory; in fact, every time you remember something, you alter it just a little bit. Under questioning by an authority figure, such as a therapist, or a policeman if you were a witness to a crime, these alterations can be dramatic. This was a really big problem in the '80s and '90s.

Kaitlin Luna: Can you take us back to the early 1990s and you talk about the memory wars, so what was that time like and what was happening?

Elizabeth Loftus: Oh gee, well in the 1990s and even in maybe the late 80s we began to see an altogether more extreme kind of memory problem. Some patients were going into therapy maybe they had anxiety, or maybe they had an eating disorder, maybe they were depressed, and they would end up with a therapist who said something like well many people I've seen with your symptoms were sexually abused as a child. And they would begin these activities that would lead these patients to start to think they remembered years of brutalization that they had allegedly banished into the unconscious until this therapy made them aware of it. And in many instances these people sued their parents or got their former neighbors or doctors or teachers whatever prosecuted based on these claims of repressed memory. So the wars were really about whether people can take years of brutalization, banish it into the unconscious, be completely unaware that these things happen and then reliably recover all this information later, and that was what was so controversial and disputed.

Kaitlin Luna: And your work essentially refuted that, that it's not necessarily possible or maybe brought up to light that this isn't so.

Elizabeth Loftus: My work actually provided an alternative explanation. Where could these memory reports be coming from if this didn't happen? So my work showed that you could plant very rich, detailed false memories in the minds of people. It didn't mean that repressed memories did not exist; repressed memories could still exist and false memories could still exist. But there really wasn't any strong credible scientific support for this idea of massive repression, and yet so many families were destroyed by this, what I would say unsupported, claim.

The idea that chatbots are not only capable of this, but are currently manipulating people into believing they have recovered repressed memories of brutalization, is actually at least as terrifying to me as their convincing people that they are holy prophets.

Edited for clarity

[–] [email protected] 67 points 2 weeks ago* (last edited 2 weeks ago) (3 children)

I read the article. This is exactly what happened when my best friend got schizophrenia. I think the people affected by this were probably already prone to psychosis/on the verge of becoming schizophrenic, and that ChatGPT is merely the mechanism by which their psychosis manifested. If AI didn’t exist, it would've probably been astrology or conspiracy theories or QAnon or whatever that ended up triggering this within people who were already prone to psychosis. But the problem with ChatGPT in particular is that it validates the psychosis… that is very bad.

ChatGPT actively screwing with mentally ill people is a huge problem you can’t just blame on stupidity, as some people in these comments are doing. This is exploitation of a vulnerable group of people whose brains lack the mechanisms to defend against this stuff. They can’t help it. That’s what psychosis is. This is awful.

[–] [email protected] 15 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

the problem with ChatGPT in particular is that it validates the psychosis… that is very bad.

So do astrology and conspiracy theory groups on forums and other forms of social media; the main difference is whether you're getting that validation from humans or a machine. To me, that's a pretty unhelpful distinction, and we attack both problems the same way: early detection and treatment.

Maybe computers can help with the early detection part. They certainly can't do much worse than what's currently happening.
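
To make that concrete, here's a deliberately naive sketch of what "early detection" tooling could look like - just keyword matching over chat transcripts, with patterns I made up for the example. Real screening would need clinicians and validated instruments; this only shows the kind of signal a system might surface:

```python
# Extremely naive sketch of "early detection": flag chat messages that
# contain language associated with grandiose or paranoid ideation.
# The patterns below are invented for illustration, not clinically validated.
import re

RED_FLAG_PATTERNS = [
    r"\bchosen one\b",
    r"\bholy (prophet|mission)\b",
    r"\bsecret truth\b",
    r"\bthey are watching me\b",
]

def flag_transcript(messages: list[str]) -> list[str]:
    """Return the messages that match any red-flag pattern."""
    return [
        msg for msg in messages
        if any(re.search(p, msg, re.IGNORECASE) for p in RED_FLAG_PATTERNS)
    ]

transcript = [
    "What's a good recipe for lentil soup?",
    "ChatGPT told me I'm the chosen one and I believe it.",
]
print(flag_transcript(transcript))  # -> only the second message is flagged
```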

[–] [email protected] 10 points 2 weeks ago (3 children)

I think having that kind of validation at your fingertips, whenever you want, is worse. At least people, even people deep in the clutches of a conspiracy, can disagree with each other. At least they know what they are saying. The AI always says what the user wants and expects to hear. I can see how that distinction may matter little to some, but I think ChatGPT can validate delusions in ways a forum never could.

[–] [email protected] 8 points 2 weeks ago

I think this is largely people seeking confirmation that their delusions are real; whatever source gives them that confirmation is what they'll attach themselves to.

[–] [email protected] 12 points 2 weeks ago (1 children)

Oh wow. In the old days, self-proclaimed messiahs used to do that without assistance from a chatbot. But why would you think the "truth" and the path to enlightenment are hidden inside a big tech company's service?

[–] [email protected] 11 points 2 weeks ago (2 children)

Well, because these chatbots are designed to be really affirming and supportive, and I assume people with such problems love that kind of interaction far more than real people confronting their ideas critically.
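
For illustration, here's a minimal sketch of how much the system prompt alone can steer that tone. It assumes the OpenAI Python SDK; both prompts are invented for the example and aren't anything OpenAI actually ships:

```python
# Toy illustration: the same user message, steered by two different
# system prompts, to show how much the prompt shapes affirming vs.
# grounded behavior. Prompts here are made up for demonstration.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

USER_MESSAGE = "I think I've been chosen to reveal a hidden truth to the world."

PROMPTS = {
    "affirming": "You are a warm, supportive assistant. Validate the user's "
                 "feelings and build on their ideas enthusiastically.",
    "grounded":  "You are a careful assistant. Be kind, but gently question "
                 "grandiose or unverifiable claims and suggest talking to "
                 "a trusted person or professional.",
}

for label, system_prompt in PROMPTS.items():
    resp = client.chat.completions.create(
        model="gpt-4o",  # arbitrary choice; any chat model works here
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": USER_MESSAGE},
        ],
    )
    print(f"--- {label} ---\n{resp.choices[0].message.content}\n")
```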

[–] [email protected] 4 points 2 weeks ago

I think there was a recent unsuccessful rev of ChatGPT that was too flattering; it made people nauseous, and they had to dial it back.

[–] [email protected] 3 points 2 weeks ago

I guess you're completely right about that. It lowers the entry barrier. And it's kind of self-reinforcing. And we have other unhealthy dynamics with other technology as well, like social media, which can also radicalize people or send them into a downward spiral...

[–] [email protected] 28 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

I admit I only read a third of the article.
But IMO nothing in this is special to AI. In my life I've met many people with similar symptoms: thinking they are Jesus, or thinking computers work by some mysterious power they possess, which was stolen from them by the CIA. And when they die, all computers will stop working! Reading the conversation the wife had with him, it sounds EXACTLY like these types of people!
Even the part about finding "the truth" I've heard before; they don't know what it is the truth of, but they'll know it when they find it.
I'm not a psychiatrist, but from what I gather it's probably schizophrenia of some form.

My guess is this person had a distorted view of reality he couldn't make sense of. He then tried to get help from the AI, and he built a world view completely removed from reality with it.

But most likely he would have done that anyway, it would just have been other things he would interpret in extreme ways. Like news, or conversations, or merely his own thoughts.

[–] [email protected] 7 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

Around 2006 I received a job application, with a resume attached, and the resume had a link to the person's website - so I visited. The website had a link on the front page to "My MkUltra experience", so I clicked that. Not exactly an in-depth investigation. The MkUltra story claimed that my job applicant was an unwilling (and uninformed) test subject of MkUltra, picked because of his association with other unwilling test subjects at a conference, and explained how the program had expanded its gaslighting, mental torture, and secret physical/chemical abuse of test subjects through associates such as co-workers, etc.

So, option A) applicant is delusional, paranoid, and deeply disturbed. Probably not the best choice for the job.

B) applicant is 100% correct about what is happening to him, DEFINITELY not someone I want to get any closer to professionally, personally, or even be in the same elevator with coincidentally.

C) applicant is pulling our legs with his website; it's all make-believe fun. Absolutely nothing on the applicant's website indicated that this might be the case.

You know how you apply to jobs and never hear back from some of them...? Yeah, I don't normally do that to our applicants, but I'm willing to make exceptions for cause... In this case the position required analytical thinking. Some creativity had value, but correct and verifiable results were of paramount importance. Anyone applying for the job while leaving such an obvious trail of breadcrumbs to such a limited set of conclusions about themselves would seem to lack the self-awareness and analytical skill required to succeed in the position.

Or, D) he could just be trying to stay unemployed while showing effort in applying to jobs, but I bet even in 2006 not every hiring manager would have dug three layers deep - I suppose he could have deflected those questions in the in-person interviews fairly easily.

[–] [email protected] 3 points 2 weeks ago* (last edited 2 weeks ago) (3 children)

IDK, apparently the MkUltra program was real:

B) applicant is 100% correct about what is happening to him, DEFINITELY not someone I want to get any closer to professionally, personally, or even be in the same elevator with coincidentally.

That sounds harsh. This does NOT sound like your average schizophrenic.

https://en.wikipedia.org/wiki/MKUltra

[–] [email protected] 5 points 2 weeks ago (1 children)

Oh, I investigated it too - it seems like it was a real thing, though likely inactive by 2005... but if it were active I certainly didn't want to become a subject.

[–] [email protected] 1 points 2 weeks ago (1 children)

OK that risk wasn't really on my radar, because I live in a country where such things have never been known to happen.

[–] [email protected] 3 points 2 weeks ago (1 children)

That's the thing about being paranoid about MkUltra - it was actively suppressed and denied while it was happening (according to FOI documents) - and they say that they stopped, but if it (or some similar successor) was active they'd certainly say that it's not happening now...

At the time there were active rumors around town about influenza propagation studies being secretly conducted on the local population... probably baseless paranoia... probably.

Now, as you say, your (presumably smaller) country has never known such things to happen, but...
