[–] [email protected] 0 points 2 days ago (1 children)

So asking family and friends about things you don't know isn't worth it? Reading personal blogs isn't worth it?

As long as you go in knowing what it offers, it can be a great tool, like those little summaries on search results. I use them to find information when the search engine isn't going to be a great option, or at my day job to generate code for a poorly documented library. I tend to use it a handful of times per week, if that, and only for things that I know how to verify (e.g. manually test, confirm w/ official sources, etc.).

[–] [email protected] 4 points 2 days ago (1 children)

As I said in another comment, searching for what's right or wrong takes a lot of time. Using a tool that's untrustworthy isn't great. For simple shit, maybe? But then simple shit is covered by most other places.

[–] [email protected] -3 points 2 days ago (1 children)

Then you're using it wrong.

LLMs shouldn't be used to search for what's right or wrong; they should be used to quickly get a broad view of a given topic, or to provide a starting point for code or text projects that you can refine later.

For example, I wanted to use a new library in a project, but the documentation was a bit cryptic and the examples weren't quite right either. So I asked an LLM tuned for coding tasks to generate some example code for our use case (I described the features I wanted to use), and the code worked; I only needed to tweak some stuff, like I would with any example. I used the LLM because I knew there would be a bunch of public code projects using this library across a variety of use cases, and that it would probably do a reasonable job of picking out a decent one to build from. I was right.
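
For what it's worth, the whole interaction boils down to something like this. It's just a rough sketch using the OpenAI Python client; the library name, model name, and prompt are placeholders, not the actual project:

```python
# Rough sketch: ask a chat-completion endpoint for example code that uses a
# library, based on a plain-language description of the features you need.
# "somelib" and the model name are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Show a short, working Python example that uses the `somelib` library to "
    "open a connection, subscribe to updates, and log each message. "
    "Keep it under 40 lines."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whatever coding model you have
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)  # review and tweak before committing
```

Then you treat the output like any random example off the internet: run it, fix what's off, keep what's useful.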

On another occasion, I needed to do research on an unfamiliar topic, so I asked the LLM to provide a few examples in that domain with brief descriptions. That generated a ton of useful keywords (ones that would've taken hours of searching to come up with on my own), which I then used to find more reputable sources, so I got from a pretty vague notion of what I wanted to learn to reliable information quickly.

LLMs have a lot of limitations, but if they're used to accomplish common tasks quickly, they can be incredibly useful. I don't think they'll replace anyone's job (unless your job is already pointless), traditional search engines (as much as Google wants them to), or anything like that, but they are useful tools that can make some of the more annoying parts of my job more efficient.

[–] [email protected] 2 points 2 days ago (2 children)

LLMs have flat out made up functions that don't exist when I've used them for coding help. Was not useful, did not point me in a good direction, and wasted my time.

[–] [email protected] 1 points 2 days ago

Sure, they certainly can hallucinate things. But some models are way better than others at a given task, so it's important to find a good fit and to learn to use the tool effectively.

We have three different models at work, and they work a lot differently and are good at different things.

[–] [email protected] 2 points 2 days ago

You need to actively have the relevant code in context.

I use them to describe code from shitty undocumented libraries, and my local models can explain the code well enough in lieu of actual documentation.
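
Roughly what that looks like on my end, as a minimal sketch: this assumes a local Ollama server on the default port, and the model name and file path are just placeholders.

```python
# Rough sketch: put the relevant source file in context and ask a local model
# to explain it. Assumes an Ollama server on localhost:11434; the model name
# and file path are placeholders.
import requests

with open("vendor/somelib/client.py") as f:
    source = f.read()

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "qwen2.5-coder",  # placeholder; any local coding model
        "prompt": "Explain what this module does and how I'd call it:\n\n" + source,
        "stream": False,
    },
    timeout=300,
)

print(resp.json()["response"])
```

The point is that the model is reading the actual code you hand it, not guessing from memory, so the explanation stays grounded in what's in front of it.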