this post was submitted on 07 May 2025
TechTakes
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
Gen AI should be private, secure, local, and easier for its users to train to fit their own needs. The closest thing to this at the moment seems to be Kobold.
Nah, we're already up to running Qwen3 and DeepSeek R1 locally on accessible hardware, so we have access to what you describe. Ollama is the app.
The problem continues to be that LLMs are not suitable for many applications, and even where they are useful, they are sloppy and inconsistent.
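For anyone who wants to try the local route: Ollama serves a small REST API on localhost (port 11434 by default), so once a model is pulled you can talk to it from a few lines of stdlib Python. A minimal sketch, assuming an Ollama server is running and `ollama pull qwen3` has been done; the model name and prompt are just placeholders:

```python
import json
import urllib.request

# Default endpoint for Ollama's local server.
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_request(model: str, prompt: str) -> dict:
    """Build the JSON payload for Ollama's /api/generate endpoint.

    "stream": False asks for the whole completion in a single
    JSON object instead of a stream of chunks.
    """
    return {"model": model, "prompt": prompt, "stream": False}


def generate(model: str, prompt: str) -> str:
    """Send a prompt to a locally running Ollama server and return the text."""
    payload = json.dumps(build_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # The non-streaming reply carries the completion in "response".
        return json.loads(resp.read())["response"]


if __name__ == "__main__":
    # Only works with a local Ollama server up and the model pulled.
    print(generate("qwen3", "Why run models locally?"))
```

Nothing leaves the machine; the request never touches a third-party API, which is the whole point of the private/local setup described above.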
My laptop is one of the ones they're talking about in the article. It has an AMD NPU, and its 780M APU also runs games about as well as an older budget graphics card. It handles running local models really well for its size and power draw. Running local models is still lame as hell, though, so that's not how I end up using the hardware. 😑
I'm not sure what Kobold is? However, there are loads of different models you can run locally; Hugging Face has a good selection: https://huggingface.co/models
I agree that you really want to run AI for personal use yourself, which does require certain minimum hardware specs. If these new laptops with "AI" emblazoned all over them don't offer specialised processing units for AI, then what do they offer? Surely they can't just be boasting about being able to call some third party's API?
no thx