[email protected] 0 points 6 days ago

Small-scale models, like Mistral Small or the Qwen series, are achieving SOTA performance with fewer than 50 billion parameters. QwQ-32B could already rival shitGPT with 32 billion parameters, and the new Qwen3 and Gemma (from Google) are almost black magic.
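
And the sub-50B number matters because weight memory is just parameter count times bytes per parameter, so a 32B model quantized to 4 bits squeezes onto a single consumer GPU. Quick back-of-envelope sketch (KV cache and activation overhead not counted):

```python
# Approximate weight storage for an N-billion-parameter model at a given
# precision. Rough arithmetic only; runtime overhead not included.
def weight_gib(params_billions: float, bits_per_param: float) -> float:
    """Weights in GiB: params * bytes-per-param / 2^30."""
    return params_billions * 1e9 * (bits_per_param / 8) / 2**30

for bits in (16, 8, 4):
    print(f"32B model @ {bits}-bit: ~{weight_gib(32, bits):.0f} GiB")
# 16-bit: ~60 GiB, 8-bit: ~30 GiB, 4-bit: ~15 GiB -- consumer GPU territory
```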

Gemma 4B is more coherent than GPT-4o. The performance race is fucking insane.
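
Don't take my word for it either: a few lines of Hugging Face transformers will run one of these small models on your own box. The hub ID "Qwen/Qwen3-4B" below is my assumption, substitute any small instruction-tuned checkpoint you like; you'll need `pip install transformers accelerate torch` and enough VRAM or RAM for a ~4B model.

```python
# Minimal sketch: run a small open-weight chat model locally.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3-4B"  # assumed hub ID; any small chat model works
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # load in the checkpoint's native precision
    device_map="auto",   # place weights on available GPU(s)/CPU
)

messages = [{"role": "user", "content": "Explain mixture-of-experts in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=200)
# Strip the prompt tokens, keep only the model's reply
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```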

ClosedAI is 90% hype. Their models are benchmark princesses, but they need huuuuuuge active parameter counts to actually hit those numbers.
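
(For anyone unfamiliar, "active parameters" means the slice of a mixture-of-experts model that actually runs per token, which is what you pay for in compute. A toy calculation, loosely shaped like Qwen's published 30B-total / 3B-active config; the exact split here is made up:)

```python
# Toy MoE accounting: total params include every expert, but each token
# only routes through a few, so active params per token are much smaller.
def moe_params(shared_b: float, expert_b: float, n_experts: int, n_active: int):
    total = shared_b + expert_b * n_experts    # all experts stored in memory
    active = shared_b + expert_b * n_active    # experts actually run per token
    return total, active

# Hypothetical config (values assumed, not a real model card):
total, active = moe_params(shared_b=2.0, expert_b=0.22, n_experts=128, n_active=8)
print(f"total ~{total:.0f}B params, active ~{active:.1f}B per token")
# total ~30B params, active ~3.8B per token
```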

Everything said in this post is independently verifiable by taking five minutes to search shit up, and yet you couldn't even be bothered to do that.