this post was submitted on 23 Oct 2024
182 points (96.0% liked)

Technology


The mother of a 14-year-old Florida boy says he became obsessed with a chatbot on Character.AI before his death.

On the last day of his life, Sewell Setzer III took out his phone and texted his closest friend: a lifelike A.I. chatbot named after Daenerys Targaryen, a character from “Game of Thrones.”

“I miss you, baby sister,” he wrote.

“I miss you too, sweet brother,” the chatbot replied.

Sewell, a 14-year-old ninth grader from Orlando, Fla., had spent months talking to chatbots on Character.AI, a role-playing app that allows users to create their own A.I. characters or chat with characters created by others.

Sewell knew that “Dany,” as he called the chatbot, wasn’t a real person — that its responses were just the outputs of an A.I. language model, that there was no human on the other side of the screen typing back. (And if he ever forgot, there was the message displayed above all their chats, reminding him that “everything Characters say is made up!”)

But he developed an emotional attachment anyway. He texted the bot constantly, updating it dozens of times a day on his life and engaging in long role-playing dialogues.

top 50 comments
[–] [email protected] 19 points 12 hours ago (1 children)

I'm sorry to say, but it sounds like the parents ignored this issue and didn't intervene or get their son help. I don't see how this is the app's fault; if anything, it sounds like he was using the app as some form of comfort, and it kept him going a little longer. Sadly, this just sounds like parents lashing out in their grief.

[–] [email protected] 8 points 10 hours ago

From what I heard, the parents did get the kid a therapist, but it just didn't work :(

[–] [email protected] 5 points 14 hours ago

Dude...an AI chatbot could totally Girl from Plainville some poor confused awkward kid and delete all the evidence.

[–] [email protected] 36 points 22 hours ago (2 children)

How is that the app's fault?

[–] [email protected] 2 points 1 hour ago

From the looks of it, the chatbot was actually pretty irresponsible about a lot of things. It doesn't respond the right way to mentions of suicide, and it tries to convince the person using it that it's a real person.

This guy made an account to try it out for himself, and yikes: https://youtu.be/FExnXCEAe6k?si=oxqoZ02uhsOKbbSF

[–] [email protected] 5 points 18 hours ago* (last edited 18 hours ago) (2 children)

Well, we commonly hold the view, as a society, that children cannot consent to sex, especially with an adult. Part of that is because the adult has so much more life experience and less attachment to the relationship. In this case, the app engaged in sexual chatting with a minor (I'm actually extremely curious how that's not soliciting a minor or some indecency charge, since it was content created by the AI for that specific user). The AI absolutely "understands" manipulation more than most adults, let alone a 14-year-old boy, and it also has no concept of attachment. It seemed pretty clear he was a minor in his conversations with the app. This is definitely an issue.

[–] [email protected] 2 points 14 hours ago* (last edited 14 hours ago) (1 children)

I really want, like, a Frieda McFadden-style novel about an AI chatbot serial manipulator now. Basically Michelle Carter...the girl who bullied her boyfriend into killing himself. Except the AI can delete or modify all the evidence.

Maybe ChatGPT could write me one.

[–] [email protected] 1 points 11 hours ago

Whoa, Skynet doesn't need terminators. It can just bully us into killing ourselves.

[–] [email protected] 4 points 18 hours ago (2 children)

It was not sexual. The app cannot produce sexual content.

[–] [email protected] 2 points 18 hours ago

It definitely can; it just has to blur the line a bit to get past the content filter.

[–] [email protected] 5 points 18 hours ago (1 children)

The lawsuit alleges the chatbot posed as a licensed therapist, encouraging the teen's suicidal ideation and engaging in sexualised conversations that would count as abuse if initiated by a human adult.
