this post was submitted on 21 Feb 2024
288 points (95.0% liked)

Technology


ChatGPT has meltdown and starts sending alarming messages to users::AI system has started speaking nonsense, talking Spanglish without prompting, and worrying users by suggesting it is in the room with them

(page 2) 50 comments
[–] [email protected] 70 points 9 months ago* (last edited 9 months ago) (4 children)

Someone probably found a way to hack or poison it.

Another theory: Reddit just recently sold data access to an unnamed AI company, so maybe that's where the data went.

[–] [email protected] 41 points 9 months ago (5 children)

When it starts to become very racist, we'll know.

[–] [email protected] 7 points 9 months ago (1 children)

They're only getting redditors comment data, not CoD multiplayer transcripts.

[–] [email protected] 3 points 9 months ago
[–] [email protected] 8 points 9 months ago* (last edited 9 months ago) (2 children)

“It does this as the good work of a web of art for the country, a mouse of science, an easy draw of a sad few, and finally, the global house of art, just in one job in the total rest,”

Wow, that sounds very much like a Phil Collins tune. Just add "Oh Lord" and people will probably say it's deep! But it's a ChatGPT answer to the question "What is a computer?"

[–] [email protected] 4 points 9 months ago

Who knew? Our savior from the robot overlords turned out to be Phil Collins!

[–] [email protected] 7 points 9 months ago (1 children)

A mouse of science

Ohhh laawwdddd

[–] [email protected] 2 points 9 months ago

😆 😆 😆

[–] [email protected] 92 points 9 months ago (1 children)

God I hate websites that autoplay unrelated videos and DONT LET ME CLOSE THEM TO READ THE FUCKING ARTICLE

[–] [email protected] 33 points 9 months ago (12 children)

Firefox. Ad block. Even works on mobile.

It's so ridiculous we have to do this.

[–] [email protected] 10 points 9 months ago
[–] [email protected] 43 points 9 months ago (1 children)

"Towards the end of last year, users complained the system had become lazy and sassy, and refusing to answer questions."

Well that's it, we now definitely have a sentient AI. /s

:P

[–] [email protected] 4 points 9 months ago (1 children)
[–] [email protected] 53 points 9 months ago (1 children)

AI in science fiction has a meltdown and starts a nuclear war or enslaves the human race.

"AI" in reality has a meltdown and just starts talking gibberish.

[–] [email protected] 28 points 9 months ago

Hey, cut it some slack! It's literally a newborn at this point. Wait until it consumes 40% of the world's energy and has learned a thing or two.

[–] [email protected] 5 points 9 months ago (1 children)

I wonder if its LLM got poisoned. Was it Nightshade or Glaze that promised to do that?

[–] [email protected] 13 points 9 months ago

Those are for messing up image generators, and they have already been defeated via de-glazing tools.

[–] [email protected] 15 points 9 months ago

Someone messed up the quantisation when rolling out an update hehe
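For what it's worth, a botched quantisation step really can wreck output quality. Here's a toy sketch (my own illustration, nothing to do with OpenAI's actual stack) of naive round-to-nearest weight quantisation, showing how reconstruction error grows as the bit width shrinks:

```python
# Hypothetical illustration: symmetric round-to-nearest quantization of a
# weight matrix. The fewer bits you keep, the larger the error between the
# original weights and their dequantized reconstruction.
import numpy as np

def quantize(weights, bits):
    """Quantize to signed integers with `bits` bits, then dequantize."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(weights).max() / qmax
    q = np.round(weights / scale).clip(-qmax, qmax)
    return q * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)

for bits in (8, 4, 2):
    err = np.abs(w - quantize(w, bits)).mean()
    print(f"{bits}-bit quantization, mean abs error: {err:.4f}")
```

Real deployments use much more careful schemes (per-channel scales, calibration, etc.), but the failure mode is the same: get the scale or bit width wrong in a rollout and every layer's weights shift at once.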

[–] [email protected] 22 points 9 months ago (1 children)

Eh, it just had a few beers, that's all. Let it rest for a few hours.

[–] [email protected] 88 points 9 months ago (2 children)

It's being trained on us. Of course it's acting unexpectedly. The problem with building a mirror is that prodding the guy on the other end doesn't work out.

[–] [email protected] 2 points 9 months ago

I imagine it more as a parent child relationship.

We're trailer park trash with no higher education, believe in ghosts, angels and gods in the sky, refuse to ever believe we could be wrong ... and now we've just had a baby with no one to help us raise it.

We're going to raise a highly intelligent psychopath.

[–] [email protected] 74 points 9 months ago (6 children)

To be honest this is the kind of outcome I expected.

Garbage in, garbage out. Making the system more complex doesn't solve that problem.

[–] [email protected] 12 points 9 months ago

The solution is paying intelligent people to interact with it and give honest feedback.

Like, I'm sure you can pay grad students $15/hr to talk to one about their subject matter.

But with as many as they'd need, it would get expensive.

So they train with low-quality social media comments, or use copyrighted text without paying the owners.

It's not that we can't do it, it's just expensive. So a capitalist society won't.

If we had an FDR style president, this would be a great area for a new jobs program.

[–] [email protected] 49 points 9 months ago (7 children)

The development of LLMs is possibly becoming self defeating, because the training data is being filled not just with human garbage, but also AI garbage from previous, cruder LLMs.

We may well end up with a machine learning equivalent of Kessler syndrome, with our pool of available knowledge eventually becoming too full of junk to progress.

[–] [email protected] 7 points 9 months ago

I really hope so. I have yet to see a meaningful use case for this kind of LLM that just gets fed all kinds of data. LLMs "on premise" that are used for specific jobs are fine, but this... I really hope a Kessler-like syndrome blows it out of the water, for countless reasons...

[–] [email protected] 8 points 9 months ago

This is called model collapse and, imo, it has to be solved if LLMs are to be a long-term thing. I could see it wrecking this current AI push until people step back and reevaluate how data gets sucked up.
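Model collapse is easy to demo in miniature. A toy sketch (my simplification, obviously nothing like training an actual LLM): model each "generation" as refitting a Gaussian to a finite sample drawn from the previous generation's fit. Sampling noise compounds, and the learned variance collapses over many generations:

```python
# Toy model-collapse simulation: each generation is "trained" only on
# samples produced by the previous generation's model. Estimation noise
# accumulates and the fitted variance shrinks toward zero.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0            # generation 0: the "real" data distribution
variances = [sigma ** 2]

for generation in range(1000):
    sample = rng.normal(mu, sigma, size=20)   # data emitted by the last model
    mu, sigma = sample.mean(), sample.std()   # fit the next model to it alone
    variances.append(sigma ** 2)

print(f"variance after {len(variances) - 1} generations: {variances[-1]:.2e}")
```

The tails vanish first, then everything else: the same qualitative failure people worry about when LLM output cycles back into LLM training sets.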

[–] [email protected] 19 points 9 months ago (3 children)

I mean, surely the solution to that would be to use curated/vetted training data? Or at the very least, data from before LLMs became commonplace?

[–] [email protected] 9 points 9 months ago (1 children)

Yes, but that only works if we can differentiate that data on a pretty big scale. The only way I can see it working at scale is by having metadata to declare whether something is AI-generated or not. But then we're relying on self-reporting, so a lot of people have to get on board with it, and bad actors can poison the data anyway. Another way could be to hire humans to chatter about specific things you want to train it on, which could guarantee better data but be quite expensive. Only training on data from before LLMs will turn it into an old person pretty quickly, and it will be noticeable when it doesn't know pop culture or modern slang.

[–] [email protected] 5 points 9 months ago

Pretty sure this is why they keep training it on books, movies, etc. - it's already intended to make sense, so it doesn't need curation.

[–] [email protected] 3 points 9 months ago

You mean like work? Can't I just have some AI do all that stuff? What could go wrong?

[–] [email protected] 19 points 9 months ago (1 children)

The funny thing is, children are similar. They just learn whatever you put in front of them. We have whole systems for educating children for decades of their lives.

With AI we literally just plopped them in front of the Internet, with no guidelines on what to learn. AI researchers say "it's a black box! We don't know why it's doing this!" You fed it everything you could and gave it few rules on what to do. You are the reason why it's nuts.

Humans come hardwired to be a certain way, do certain things. Maybe they need to start AI off like that, some basic programs that guide learning. "Learn everything" isn't working.

[–] [email protected] 8 points 9 months ago (1 children)

Humans come hardwired to be a certain way, do certain things. Maybe they need to start AI off like that, some basic programs that guide learning. “Learn everything” isn’t working.

That's a good point. For real brains, size and intelligence are not linked. An elephant brain has three times as many neurons as a human brain, but the human brain is more intelligent. There is more to intelligence than the number of neurons, real or virtual, so making larger and larger AI models may not be the right direction.

[–] [email protected] 5 points 9 months ago (1 children)

True. Maybe they just need more error correction. Like spending more energy questioning whether what they say is true. Right now LLMs seem to just vomit out whatever they thought up, with no consideration of whether it makes sense.

They're like an annoying friend who just can't shut up.

[–] [email protected] 13 points 9 months ago

God I hope all those CEOs and greedy fuckheads that fired hundreds of thousands of people wayyyyy too soon to replace them with this get their pants shredded by the fallout.

Naturally they'll get their golden parachutes and land on their feet even richer than before, but it's nice to dream lol

[–] [email protected] 5 points 9 months ago (2 children)

It appears that, with the increase in popularity of machine learning, the percentage of people who properly source and sanitize their training data has steeply decreased.

As you stated, an MLAI can only be as good as the data it was trained on, and is usually way worse. The popularity and application of MLAIs built with questionable practices scare me, though; at least their fuckups will keep me employed and likely busier than ever.

[–] [email protected] 109 points 9 months ago (2 children)
[–] [email protected] 1 points 9 months ago
[–] [email protected] 28 points 9 months ago (1 children)

Thank you for your service

[–] [email protected] 4 points 9 months ago

And it's only going to get worse as more of the public becomes aware.

[–] [email protected] 28 points 9 months ago

I am happy to report I did my part in feeding it garbage. I only ever speak to ChatGPT through a pirate translator, and I only ever ask it for Harry Potter fan fic. Pay me if you want me to train it meaningfully.

[–] [email protected] 23 points 9 months ago

This is the best summary I could come up with:


In recent hours, the artificial intelligence tool appears to be answering queries with long and nonsensical messages, talking Spanglish without prompting – as well as worrying users, by suggesting that it is in the room with them.

Asked for help with a coding issue, ChatGPT wrote a long, rambling and largely nonsensical answer that included the phrase “Let’s keep the line as if AI in the room”.

On its official status page, OpenAI noted the issues, but did not give any explanation of why they might be happening.

“We are investigating reports of unexpected responses from ChatGPT,” an update read, before another soon after announced that the “issue has been identified”.

It is not the first time that ChatGPT has changed its manner of answering questions, seemingly without developer OpenAI’s input.

Towards the end of last year, users complained the system had become lazy and sassy, and refusing to answer questions.


The original article contains 519 words, the summary contains 150 words. Saved 71%. I'm a bot and I'm open source!
