Unintentionally is the right word, because the people who designed it did not intend for it to produce bad information. They chose an approach that results in bad information because of the data they chose to train on and the steps they took throughout the process.
Honestly, a lot of the issues stem from null results only existing in the gaps between information (unanswered questions, questions closed as unanswerable, searches that return no results, etc.), and thus being nearly nonexistent in the training data. Models are therefore predisposed toward giving an answer of some kind, and if one doesn't exist they'll "make one up."
"Make one up" is itself a misnomer, because the model can't look for an answer and then decide to fabricate one when it can't find it. It just gives an answer that sounds plausible, and if the correct answer is well represented in its training data, that's the answer that will seem most plausible.
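To make that concrete, here's a toy sketch (invented numbers, not taken from any real model): next-token sampling always yields *some* continuation, and there's no separate "look up the answer, fail, then fabricate" step anywhere in the loop.

```python
# Toy sketch: sampling always produces an answer of some kind.
# The probabilities below are made up for illustration.
import random

# Hypothetical next-token distribution after the prompt
# "The capital of Atlantis is" -- no true answer exists, but
# plausible-sounding names still carry most of the probability mass.
next_token_probs = {
    "Poseidonia": 0.41,  # sounds plausible, so it scores highest
    "Athens":     0.32,
    "Atlantis":   0.22,
    "unknown":    0.05,  # "null" phrasings are rare in training text
}

def sample(probs):
    """Draw one token proportionally to its probability."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print(sample(next_token_probs))  # almost always a confident-sounding name
```

The point of the sketch: "I don't know" is just another string, and it only wins if text like it was common in training. Nothing in the sampling procedure checks whether an answer is true or even exists.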
Incorrect. The people who designed it did not set out with the goal of producing a bot that regurgitates true information. If that's what they wanted, they'd never have used a neural network architecture in the first place.