this post was submitted on 14 Jul 2024
485 points (96.5% liked)

Technology

AI is overhyped and unreliable -Goldman Sachs

https://www.404media.co/goldman-sachs-ai-is-overhyped-wildly-expensive-and-unreliable/

"Despite its expensive price tag, the technology is nowhere near where it needs to be in order to be useful for even such basic tasks"

@[email protected]

[–] [email protected] 16 points 3 months ago

I think the main problem is applying LLMs outside the domain of "complete this sentence." They're fine for what they are, and trained on huge datasets they obviously appear impressive, but the model doesn't know whether it's right or wrong, and the evaluation metrics are fundamentally different. In most traditional applications of neural networks, you have datasets with right and wrong answers. That's not how LLMs are trained, because there is no single "right" answer to "tell me a joke." So training has to be based on what would plausibly fill in the blank. That could be an actual joke, a bad joke, or a completely different topic; the training data makes no distinction. The biases, the incorrect answers, all the faults of this massive dataset are baked into the model, and there's no fixing that.

This makes LLMs fundamentally different, in application, evaluation, and training, from other neural networks that are actually effective at what they do, like image processing and identification. The scope of what LLMs attempt with a finite dataset is unrealistic and entirely unconstrained, whereas more "traditional" neural networks are kept narrow in scope precisely because of this issue.
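The "fill in the blank" point can be sketched in a few lines. This is a hypothetical toy illustration (the vocabulary, probabilities, and function names are made up, not any real LLM API): next-token training scores the model against whatever token actually followed in the corpus, so a good joke and a bad joke produce the same kind of loss signal, with no separate notion of correctness.

```python
import math

# Toy "model output": a probability for each possible next token.
# Purely illustrative numbers, not from any real model.
probs = {"a": 0.1, "joke": 0.4, "funny": 0.3, "bad": 0.15, "<end>": 0.05}

def next_token_loss(predicted_probs, actual_next_token):
    """Cross-entropy for one step of 'complete this sentence' training.

    The target is simply whatever token came next in the training text;
    there is no separate label saying the continuation was right or wrong.
    """
    return -math.log(predicted_probs[actual_next_token])

# The same model distribution scored against two different corpus continuations:
print(round(next_token_loss(probs, "funny"), 3))  # 1.204
print(round(next_token_loss(probs, "bad"), 3))    # 1.897
```

Both calls are equally valid training signals: the model is pushed toward whatever appeared in the data, whether that continuation was a good joke, a bad joke, or off-topic entirely. A supervised classifier, by contrast, would be trained against an explicit right/wrong label for each example.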