
TechTakes

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

Local AI kind of sucks right now, and everything is massively overbranded as "AI-ready" these days.

There aren’t a lot of compelling local use cases, and the memory constraints of local hardware mean you end up with fairly weak models.

You need a high-end, high-memory local setup to get decent token rates, and I’m finding that right now 30-70B models are the minimum viable size.
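
A rough back-of-envelope for the memory constraint: weight footprint is roughly parameter count times bytes per parameter, plus overhead for the KV cache and activations. A minimal sketch, assuming ~15% overhead (an assumption, not a measured figure):

```python
# Rough VRAM estimate for hosting a local LLM: weights are
# (billions of params) x (bytes per param), plus ~15% overhead
# for KV cache and activations (assumed, not measured).

def estimate_vram_gb(params_billion: float, bits_per_param: float,
                     overhead: float = 0.15) -> float:
    weight_gb = params_billion * bits_per_param / 8  # bits -> bytes
    return weight_gb * (1 + overhead)

for size in (7, 30, 70):
    for bits in (16, 8, 4):
        print(f"{size}B @ {bits}-bit: ~{estimate_vram_gb(size, bits):.0f} GB")
```

Even at 4-bit quantization, a 70B model lands around 40 GB, which is why nothing in the consumer single-GPU range comfortably fits it.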

That doesn’t compare with the speed of online models running on GPUs that cost more than luxury cars.
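
A sketch of why those GPUs pull so far ahead: single-stream decode is mostly memory-bandwidth-bound, since every generated token requires reading all the weights once, so tokens/sec tops out near bandwidth divided by weight size. Bandwidth figures below are approximate public specs, used only for illustration:

```python
# Memory-bound decode ceiling: tokens/sec ~ bandwidth / weight size.
WEIGHTS_GB = 70 * 4 / 8  # 70B model at 4-bit quantization ~ 35 GB

bandwidth_gb_s = {  # approximate public specs (assumption)
    "RTX 4090 (consumer)": 1008,
    "M2 Ultra (Mac Studio)": 800,
    "H100 SXM (datacenter)": 3350,
}

for gpu, bw in bandwidth_gb_s.items():
    print(f"{gpu}: ~{bw / WEIGHTS_GB:.0f} tokens/sec ceiling")
```

By this estimate a datacenter card has roughly a 3-4x per-stream advantage before you even count batching, tensor parallelism across multiple cards, or the much larger models the hosted services can run.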