[–] [email protected] 2 points 2 days ago (2 children)

LLMs have flat-out made up functions that don't exist when I've used them for coding help. It wasn't useful, didn't point me in a good direction, and wasted my time.
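A hypothetical Python sketch of that failure mode (the URL is a placeholder and the broken call is invented for illustration; `requests.fetch_json` does not exist in the requests library, which is exactly the kind of thing that gets hallucinated):

```python
import requests

url = "https://api.example.com/items"  # placeholder URL

# What an LLM might confidently suggest (hypothetical): this function
# does not exist in requests and raises AttributeError at runtime.
# data = requests.fetch_json(url)

# The call that actually exists:
data = requests.get(url, timeout=10).json()
print(data)
```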

[–] [email protected] 1 points 2 days ago

Sure, they certainly can hallucinate things. But some models are way better than others at a given task, so it's important to find a good fit and to learn to use the tool effectively.

We have three different models at work, and they behave quite differently and are good at different things.

[–] [email protected] 2 points 2 days ago

You need to actively keep the relevant code in context.

I use it to describe code from shitty undocumented libraries, and my local models can explain the code well enough in lieu of actual documentation.
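A minimal sketch of that workflow, assuming a local OpenAI-compatible server (llama.cpp, Ollama, etc.) on localhost; the endpoint, model name, and file path are all placeholders, not anything from this thread:

```python
from pathlib import Path

from openai import OpenAI  # pip install openai

# Assumed: a local OpenAI-compatible server on localhost:8080 and a
# model name it recognizes. Both are placeholders.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

# Keep the relevant code in context: paste the undocumented source
# directly into the prompt instead of hoping the model knows it.
source = Path("vendor/undocumented_lib.py").read_text()

response = client.chat.completions.create(
    model="local-model",
    messages=[
        {"role": "system", "content": "Explain code precisely; say so if unsure."},
        {"role": "user", "content": f"Explain what this module does:\n\n{source}"},
    ],
)
print(response.choices[0].message.content)
```

Pasting the actual source into the prompt grounds the answer, so the model describes the code in front of it instead of guessing at an API it has never seen.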