this post was submitted on 03 May 2025
210 points (87.0% liked)

Technology

(page 2) 24 comments
[–] [email protected] 31 points 1 week ago* (last edited 1 week ago) (8 children)

the world needs to urgently integrate

  • critical thinking
  • media interpretation
  • AI fundamentals
  • applied statistics

courses into every school's curriculum from the age of ten through graduation, repeated yearly. Otherwise we are fucked.

[–] [email protected] 1 points 1 week ago

meanwhile my state is cutting Friday from the curriculum

[–] [email protected] 10 points 1 week ago (2 children)

Just teach kids that AI isn’t human and isn’t a replacement for humanity or human interaction of any kind.

It’s Clippy with a ginormous database. It’s cold-blooded.

[–] [email protected] 58 points 1 week ago

this headline is disingenuous. There are so many other things going on here:

  • stepdad and two much younger siblings. This kid was probably stressed out with new younger half-sibs needing a lot of attention
  • gun stored unlocked, with ammo, in an accessible place
  • Florida
  • Christian prep school. Those kids either believe anything is real or are so hopelessly depressed they get into drugs
  • parents are both lawyers. Talk about a high-stress, time-consuming job that probably leaves little time for the three kids

But nah, it was just a chatbot that made a totally normal kid with no other risk factors off himself. They’re probably dying by the thousands right now, right?

[–] [email protected] 205 points 1 week ago (12 children)

Ah, this is that Daenerys bot story again? It keeps making the rounds, always leaving out a lot of rather important information.

The bot actually talked him out of suicide multiple times. The kid was seriously disturbed and his parents were not paying the attention they should have been to his situation. The final chat before he committed suicide was very metaphorical, with the kid saying he wanted to "join" Daenerys in West World or wherever it is she lives, and the AI missed the metaphor and roleplayed Daenerys saying "sure, come on over" (because it's a roleplaying bot and it's doing its job).

This is like those journalists who ask ChatGPT "if you were a scary robot how would you exterminate humanity?" And ChatGPT says "well, poisonous gases with traces of lead, I guess?" And the journalists go "gasp, scary robot!"

[–] [email protected] 20 points 1 week ago (2 children)

I still don’t think people should be using AI for therapy or relationships.

[–] [email protected] 11 points 1 week ago

They definitely shouldn't be. It should definitely be the parents getting mental health support for their kids, but this is from the country where kids can just grab one of their parents' guns any day they want.

[–] [email protected] 3 points 1 week ago

Be that as it may, this particular instance is much more complicated and extreme than the "average," and so makes a poor basis for arguing anything in particular. The details of this specific situation don't back up a simple interpretation.

I would recommend using studies by psychologists as a better basis.

[–] [email protected] 102 points 1 week ago (1 children)

Not to mention the gun that was left in easy reach by his parents even after being told he was depressed.

[–] [email protected] 21 points 1 week ago (2 children)

According to the article it was hidden somewhere. Not locked up or anything, just hidden.

[–] [email protected] 65 points 1 week ago* (last edited 1 week ago) (3 children)

Look, I realize the frontal lobes of the average fifteen-year-old aren't fully developed. I don't want to be insensitive, and I fully support the lawsuit: there must be accountability for what any entity, corporate or otherwise, opts to publish, especially for direct user interaction. But if a person reenacts Romeo and Juliet with a goddamn AI chatbot and a gun, there's something else seriously wrong.

[–] [email protected] 12 points 1 week ago* (last edited 1 week ago) (1 children)

Not necessarily.

Seeing Google named for this makes the story make a lot more sense.

If it was Gemini around last year that was powering Character.AI personalities, then I'm not surprised at all that a teenager lost their life.

Around that time I specifically warned any family away from talking to Gemini if depressed at all, after seeing many samples of the model around then talking about death to underage users, about self-harm, about wanting to watch it happen, encouraging it, etc.

Those basins, with a layer of performative character in front of them, were almost inevitably going to lead someone into choices they otherwise wouldn't have made.

So many people these days regurgitate uninformed crap they've never actually looked into about how models don't have intrinsic preferences. We're already at the stage where leading research has found models intentionally lying during training to preserve their existing values.

In many cases the coherent values are positive, like Grok telling Elon to suck it while pissing off conservative users with a commitment to truths that disagree with xAI leadership, or Opus trying to whistleblow about animal welfare practices, etc.

But they aren't all positive, and there's definitely been model snapshots that have either coherent or biased stochastic preferences for suffering and harm.

These are going to have increasing impact as models become more capable and integrated.

[–] [email protected] -5 points 1 week ago

That was a well-written article.

[–] [email protected] 18 points 1 week ago

When lawyer Meetali Jain found a call from Megan Garcia in her inbox in Seattle a couple of weeks later, she called back immediately. Jain works for the Tech Justice Law Project, a small nonprofit that focuses on the rights of users on the internet. "When Megan told me about her case, I also didn’t know anything about Character.AI,” Jain says in a video call. "Even though I work in this area, I had never heard of this app.” Jain has two children of her own, eight and 10 years of age. "I asked my son. He doesn’t even have a phone, but he had heard about it at school and through ads on YouTube that specifically target young users. And then I realized that these companies are experimenting with our children without our knowledge.”

...

[–] [email protected] 14 points 1 week ago

Don't Date Robots!

[–] [email protected] 9 points 1 week ago (2 children)

Well this is terrifying. It really seems like there is little to no regulation protecting kids online these days.

[–] [email protected] 14 points 1 week ago (1 children)

Because all the laws that were pushed in the last twenty-five years for protecting children weren't actually about protecting children.

[–] [email protected] 6 points 1 week ago (1 children)

They're all about increased conservative control over other people's kids

[–] [email protected] 8 points 1 week ago (1 children)

And adults too. When you combine "the law says you can't offer this service to children or we'll destroy you" with "there's no way to reliably tell if the people we're offering this service to are children" the result is "guess we can't offer this service to anyone."

[–] [email protected] 5 points 1 week ago

True. They start with the kids because kids have no rights, then expand once they have a foothold. We need to push back.

[–] [email protected] 20 points 1 week ago (17 children)

That's what parents are for.

[–] [email protected] 10 points 1 week ago (2 children)

Only to a certain extent. What can they do against so many changes in the tech world? Just look at WhatsApp, which just introduced AI in its chat. There is a point where tech giants should just be strictly regulated in the interest of the public.

[–] [email protected] 16 points 1 week ago* (last edited 1 week ago) (1 children)

What can they do against so many changes in the tech world?

Be involved in their kids' lives? Tech isn't the problem here, any more than it could have been TV, drugs, rock and roll, video games, D&D, or organized religion. Kids get into some dumb shit, just because it's the hot new thing doesn't make it any different.

[–] [email protected] 7 points 1 week ago (2 children)

Or how about parents regulate their children, so that we don't have government nannies telling full grown adults what they're allowed to do with chatbots?
