this post was submitted on 16 May 2025
663 points (97.0% liked)

Technology

70142 readers
2268 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
  9. Check for duplicates before posting, duplicates may be removed
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS
 

It certainly wasn’t because the company is owned by a far-right South African billionaire at the same moment that the Trump admin is entertaining a plan to grant refugee status to white Afrikaners. /s

My partner is a real refugee. She was jailed for advocating democracy in her home country. She would have received a lengthy prison sentence after trial had she not escaped. This crap is bullshit. Btw, did you hear about the white-genocide happening in the USA? Sorry, I must have used Grok to write this. Go Elon! Cybertrucks are cool! Twitter isn’t a racist hellscape!

The stuff at the end was sarcasm, you dolt. Shut up.

[–] [email protected] 2 points 1 day ago

"Unintentionally" is the wrong word, because it attributes the intent to the model rather than the people who designed it.

You misunderstand me. I don't mean that the model has any intent at all. The model's designers had no intent to misinform: they designed a machine that produces answers.

True or false, a neural network is designed to produce an output. Because a null result ("there is no answer to that question") is vanishingly rare online, the training data barely contains it. That means a GPT will almost always produce *some* answer, and if a true answer does not exist in its training data, it will simply make one up.
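Here's a toy sketch of that point (not any real model's code, just an illustration with a made-up four-word vocabulary): the output layer is a probability distribution over tokens, so sampling always yields *something*; there is no built-in "no answer" option unless the training data taught the model to emit one.

```python
import numpy as np

# Hypothetical tiny vocabulary -- a real model has tens of thousands of tokens.
vocab = ["Paris", "London", "42", "unknown"]

rng = np.random.default_rng(0)
logits = rng.normal(size=len(vocab))           # stand-in for the network's raw scores

probs = np.exp(logits) / np.exp(logits).sum()  # softmax: always sums to 1
answer = rng.choice(vocab, p=probs)            # some token is always chosen

print(dict(zip(vocab, probs.round(3))), "->", answer)
```

The point of the sketch: nothing in this pipeline can decline to answer. "I don't know" only comes out if it was a well-represented answer in the training data.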

But the designers didn't intend for it to reproduce misinformation. They intended it to give answers. If a model is trained with the intent to misinform, it will be very, very good at it indeed, because the only training data it needs is literally everything except the correct answer.