[email protected] 0 points 8 hours ago (1 children)

Nah, we're up to running Qwen3 and DeepSeek R1 locally on accessible hardware at this point, so we already have what you describe. Ollama is the app.
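
To be concrete: once a model is pulled, Ollama serves an HTTP API on localhost (port 11434 by default), so talking to it is just a request to your own machine. A minimal sketch using nothing but the standard library; the model tag is an assumption, use whatever `ollama list` shows on your box:

```python
# Minimal sketch: one non-streaming prompt to a local Ollama server.
# Assumes `ollama pull deepseek-r1` (or similar) has already been run
# and the server is listening on the default port.
import json
import urllib.request

def ask_local_model(prompt: str, model: str = "deepseek-r1") -> str:
    """Send a prompt to the local Ollama /api/generate endpoint."""
    payload = json.dumps({
        "model": model,     # assumed tag; check `ollama list`
        "prompt": prompt,
        "stream": False,    # one JSON object back instead of a stream
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

print(ask_local_model("In one sentence, what is an NPU?"))
```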

The problem continues to be that LLMs are not suitable for many applications, and where they are useful, they are sloppy and inconsistent.

My laptop is one of the ones the article is talking about. It has an AMD NPU; the chip is a 780M APU that also runs games about as well as an older budget graphics card. It handles running local models really well for its size and power draw. Running local models is still lame as hell, though, and it's not how I end up using the hardware. 😑

[email protected] 0 points 6 hours ago

Does Ollama accept custom parameters now?
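
What I mean is overriding sampling settings per request instead of taking the model's defaults. As far as I can tell the HTTP API takes an "options" object for this (the same knobs a Modelfile's PARAMETER lines set), something like the sketch below, though I haven't verified which keys a given build honors:

```python
# Rough sketch of per-request parameter overrides via Ollama's HTTP API.
# The "options" keys mirror Modelfile PARAMETER names; which ones a given
# build honors is an assumption worth checking against the docs.
import json
import urllib.request

payload = json.dumps({
    "model": "qwen3",           # assumed tag; use what `ollama list` shows
    "prompt": "Explain in two sentences why local inference is slow.",
    "stream": False,
    "options": {
        "temperature": 0.2,     # lower = more deterministic output
        "top_p": 0.9,           # nucleus sampling cutoff
        "num_ctx": 8192,        # context window, in tokens
    },
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```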

I wasn't talking about their effectiveness, though. Yeah, they're sloppy as hell, but I'd rather trust a sloppy tool I set up at home and use myself than have someone I don't trust running their sloppy tools in my home, tinkering with my property without permission when I'm not looking, and changing their terms and prices every day.

But granted, your point is a really good one. These AI-ready laptops don't give you the bang for your buck you'd expect. We're all better off taking good care of our older hardware and waiting longer for components that are a true improvement before replacing anything.