TechTakes

[–] [email protected] 0 points 2 weeks ago (41 children)

Hallucinations become almost a non-issue when you're working with newer models, custom inference, multi-shot prompting, and RAG

But the models themselves fundamentally can't write good, new code, even if they're perfectly factual
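For anyone unfamiliar with the jargon, here's a minimal sketch of what "multi-shot prompting and RAG" amount to in practice: retrieve a relevant source passage, prepend a few worked examples, and only then ask the question. Everything below (the toy word-overlap retriever, the placeholder docs) is illustrative, not any particular library's API.

```python
# Toy sketch of multi-shot prompting + RAG prompt assembly.
# The retriever and documents are deliberately trivial placeholders.
from collections import Counter

DOCS = [
    "RAG grounds the model by pasting retrieved source text into the prompt.",
    "Few-shot prompting shows the model worked examples of the task first.",
]

FEW_SHOT = [
    ("Q: What does RAG stand for?", "A: Retrieval-augmented generation."),
]

def retrieve(query: str, docs: list[str]) -> str:
    """Return the doc sharing the most words with the query (toy retriever)."""
    q = Counter(query.lower().split())
    return max(docs, key=lambda d: sum((q & Counter(d.lower().split())).values()))

def build_prompt(question: str) -> str:
    """Prepend retrieved context and worked examples before the real question."""
    shots = "\n".join(f"{q}\n{a}" for q, a in FEW_SHOT)
    context = retrieve(question, DOCS)
    return f"Context: {context}\n\n{shots}\n\nQ: {question}\nA:"

print(build_prompt("How does RAG reduce hallucinations?"))
```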

[–] [email protected] 0 points 2 weeks ago (3 children)

If LLM hallucinations ever become a non-issue, I doubt a deeply nested, buzzword-laden Lemmy post will be where I first hear about it.

[–] [email protected] 0 points 2 weeks ago (2 children)

You need to run the model yourself and heavily tune the inference, which is why you haven't heard of it: most people think using shitGPT is all there is to LLMs. How many people even have the hardware to do that anyway?

I run my own local models with my own inference setup, which really helps. There are online communities you can join (won't link because Reddit) where you can learn how to do it too, so there's no need to take my word for it
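To make "run the model yourself and tune the inference" concrete, here's a minimal sketch using the Hugging Face transformers library. The model name is a placeholder, not a real checkpoint, and the sampling values are just illustrative starting points rather than a recipe.

```python
# Minimal sketch of local inference with hand-tuned sampling parameters.
# "some/local-model" is a placeholder -- swap in whatever your hardware can run.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "some/local-model"  # placeholder, not a real model repo
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("Summarize what RAG does.", return_tensors="pt")
output = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.6,         # lower temperature = more conservative output
    top_p=0.9,               # nucleus sampling cutoff
    repetition_penalty=1.1,  # discourage degenerate loops
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```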

[–] [email protected] 0 points 2 weeks ago

You run CanadianGirlfriendGPT, got it.
