this post was submitted on 12 Mar 2025

LocalLLaMA

Welcome to LocalLLaMA! This is a community to discuss local large language models such as LLaMA, DeepSeek, Mistral, and Qwen.

Get support from the community! Ask questions, share prompts, discuss benchmarks, get hyped at the latest and greatest model releases! Enjoy talking about our awesome hobby.

As ambassadors of the self-hosting machine learning community, we strive to support each other and share our enthusiasm in a positive, constructive way.


GGUF quants are already up, and llama.cpp was updated today to support it.
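For anyone following along, running one of the GGUF quants with an up-to-date llama.cpp build looks roughly like this. The model filename below is hypothetical; substitute whichever quant you actually downloaded:

```shell
# Sketch of a typical llama.cpp invocation (filename is a placeholder).
# -m   path to the GGUF model file
# -ngl number of layers to offload to the GPU (99 = as many as possible)
# -c   context window size in tokens
# -p   prompt to run
llama-cli \
  -m ./gemma-3-12b-it-Q4_K_M.gguf \
  -ngl 99 \
  -c 8192 \
  -p "Write a haiku about local LLMs."
```

Make sure you are on a build from after the Gemma support landed, since older builds will refuse to load the new architecture.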

top 3 comments
[email protected] 1 point 6 days ago

I tested these out and found they are really bad at longer contexts... at least at settings that can sanely fit on most GPUs.

Seems the Gemma family is mostly for short-context work, still.

[email protected] 1 point 1 week ago

I'm especially interested in its advanced OCR capabilities. Will be testing this one out in LM Studio.

[email protected] 3 points 1 week ago

I'm happy for the Gemma enjoyers who get something out of it. I hear the real-world domain knowledge is good. I've never tried the Gemma models myself. Apparently they're very heavily censored, and anything Google puts out just gives me an icky feeling by association. Anyone remember when they still had "Don't Be Evil" as a motto? Good times.