Let's go! Lossless CPU inference

[email protected] 5 points 6 days ago* (last edited 6 days ago)

It's a massive performance upgrade, which would make current-sized models better and tiny phone-sized models viable. The only problem is that models need to be retrained to use it, and AFAIK no one significant has done that yet.
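
For context, the technique in the title appears to be BitNet-style 1.58-bit (ternary) weights: every weight is -1, 0, or +1, so the matrix multiplies that dominate inference collapse into plain additions and subtractions, which CPUs handle far more cheaply than floating-point multiplies. Here's a minimal numpy sketch of the idea (purely illustrative; the shapes and names are made up and nothing here comes from the post itself):

```python
import numpy as np

# Illustrative sketch only: with ternary weights in {-1, 0, +1},
# a matrix-vector product needs no multiplications at all.
rng = np.random.default_rng(0)
x = rng.standard_normal(8).astype(np.float32)         # activations
w = rng.integers(-1, 2, size=(4, 8)).astype(np.int8)  # ternary weights

# Conventional path: full floating-point matmul.
y_matmul = w.astype(np.float32) @ x

# Multiplication-free path: add activations where w == +1,
# subtract them where w == -1, skip them where w == 0.
y_addsub = np.where(w == 1, x, 0.0).sum(axis=1) - np.where(w == -1, x, 0.0).sum(axis=1)

assert np.allclose(y_matmul, y_addsub)  # identical outputs, no multiplies
```

This is also why retraining is the catch: snapping an already-trained FP16 model's weights to {-1, 0, +1} after the fact wrecks accuracy, so the ternary constraint has to be baked in during training from the start.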