this post was submitted on 13 May 2025

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

[–] [email protected] 0 points 1 week ago (53 children)

> In my workflow there is no difference between LLMs and fucking grep for me.

Well, grep doesn't hallucinate things that are not actually in the logs I'm grepping, so I think I'll stick to grep.

(Or ripgrep rather)
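
(Side note for anyone following along: the point about grep is determinism. A search can only ever return lines that literally exist in the file. A toy Python version of that behavior, just to make it concrete; the `minigrep` name and CLI are made up for illustration:)

```python
# minigrep.py -- a toy, deterministic grep: output can only ever be
# lines that literally exist in the input file, never invented ones.
import re
import sys

def grep(pattern: str, path: str) -> None:
    regex = re.compile(pattern)
    with open(path, encoding="utf-8", errors="replace") as fh:
        for lineno, line in enumerate(fh, start=1):
            if regex.search(line):
                print(f"{path}:{lineno}:{line}", end="")

if __name__ == "__main__":
    # usage: python minigrep.py PATTERN FILE
    grep(sys.argv[1], sys.argv[2])
```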

[–] [email protected] 0 points 1 week ago (41 children)

Hallucinations become almost a non-issue when working with newer models, custom inference, multi-shot prompting, and RAG

But the models themselves fundamentally can't write good, new code, even if they're perfectly factual
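
(Decoding the buzzwords for anyone who hasn't met them: RAG means retrieving relevant text and pasting it into the prompt so the model answers from supplied material rather than from memory, and multi-shot prompting means including worked examples in the prompt. A toy sketch of the prompt-assembly half; the corpus, word-overlap scoring, and example Q/A pairs are all invented for illustration, and a real pipeline would use vector embeddings plus an actual model call:)

```python
# Toy RAG + multi-shot prompt assembly. Everything here is illustrative:
# real retrieval uses embeddings, not bag-of-words word overlap.

CORPUS = [
    "grep searches files for lines matching a regular expression.",
    "ripgrep (rg) is a faster, gitignore-aware grep replacement.",
    "LLMs generate text token by token from learned probabilities.",
]

# The "multi-shot" part: worked question/answer pairs shown to the model.
FEW_SHOT = [
    ("Q: What does cat do?", "A: It prints file contents to stdout."),
    ("Q: What does wc -l do?", "A: It counts lines in its input."),
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k corpus snippets sharing the most words with the query."""
    q_words = set(query.lower().split())
    return sorted(
        CORPUS, key=lambda doc: -len(q_words & set(doc.lower().split()))
    )[:k]

def build_prompt(query: str) -> str:
    context = "\n".join(retrieve(query))
    shots = "\n".join(f"{q}\n{a}" for q, a in FEW_SHOT)
    # Grounding instruction: answer only from the retrieved context,
    # which is what's supposed to cut down on hallucination.
    return (
        "Answer using ONLY the context below.\n"
        f"Context:\n{context}\n\n{shots}\nQ: {query}\nA:"
    )

print(build_prompt("What is ripgrep?"))
```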

[–] [email protected] 0 points 1 week ago (3 children)

If LLM hallucinations ever become a non-issue, I doubt I'll first be hearing about it from a deeply nested, buzzword-laden Lemmy post.

[–] [email protected] 0 points 1 week ago (2 children)

You need to run the model yourself and heavily tune the inference, which is why you haven't heard of it: most people think shitGPT is all there is to LLMs. How many people even have the hardware to do that, anyway?

I run my own local models with my own inference setup, which really helps. There are online communities you can join (won't link because Reddit) where you can learn how to do it too; no need to take my word for it.
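
(For the curious, "run the model yourself and tune the inference" usually looks something like the sketch below. This assumes the llama-cpp-python bindings and a GGUF model file already on disk; the model path and parameter values are placeholders, not a recipe:)

```python
# Sketch of local inference with tuned sampling, assuming llama-cpp-python
# and a downloaded GGUF model. Path and values are placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/some-model.gguf",  # placeholder path
    n_ctx=4096,  # context window size
)

out = llm(
    "Summarize this log line: ERROR disk full on /dev/sda1",
    max_tokens=256,
    temperature=0.2,     # low temperature: fewer wild guesses
    top_p=0.9,           # nucleus sampling cutoff
    repeat_penalty=1.1,  # discourage repetitive loops
)
print(out["choices"][0]["text"])
```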

[–] [email protected] 0 points 1 week ago

You run CanadianGirlfriendGPT, got it.

[–] [email protected] 0 points 1 week ago

ah yes, the problem with ~~crypto~~LLMs is all the shit~~coins~~GPTs

did it sting when the crypto bubble popped? is that what made you like this?
