this post was submitted on 25 Mar 2025
Technology
DeepSeek is an absolutely massive model; it's not the one people will be running locally. Rather, look at Qwen/QwQ, Gemma, and a number of other smaller ones.
No, people who want something approaching ChatGPT but local want to run at least DeepSeek V3 32B.
Qwen at least fares much worse for my usage, as do the DeepSeek V3 variants under 32B.
I run deepseek-r1:14b locally. It needs to go into RAM and runs slower, but it's still a reasonably good speed, fast enough to keep up with reading the output. I should try a larger one at some point, but the larger ones are quite a bit to download. I usually run the ~7B sizes, since those fit in VRAM and run way faster.
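For anyone curious what that setup looks like in practice: the `deepseek-r1:14b` tag follows Ollama's naming scheme, so here's a minimal sketch assuming the Ollama Python client and a local Ollama server with the model already pulled. The prompt string is just an illustration, not from the thread.

```python
# Minimal sketch, assuming the Ollama Python client (pip install ollama)
# and a local Ollama server where the model has already been pulled,
# e.g. via `ollama pull deepseek-r1:14b`.
import ollama

# Stream tokens as they're generated, so reading keeps pace with output
# even when the model runs from system RAM instead of VRAM.
stream = ollama.chat(
    model="deepseek-r1:14b",
    messages=[{"role": "user", "content": "Why is a model faster when it fits in VRAM?"}],
    stream=True,
)

for chunk in stream:
    print(chunk["message"]["content"], end="", flush=True)
```

Swapping the tag for a smaller one like a ~7B model is the VRAM-friendly option mentioned above.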
The hell is V3 32B? Are you talking about a distill?
They probably confused the R1 Qwen distill with something else. AFAIK there is no 32B model from DeepSeek directly.