I find Gemma to be too censored; I'm not using it until someone releases a cleaned-up version.
Phi, on the other hand, outputs crazy stuff too often for my liking. Maybe I need to tune some inference parameters.
Gemma seems very heavily censored to avoid anything remotely controversial. I gave up on it after trying it a few times. Phi-2 has other issues, but overall seems much better.
Interesting, but that's not what I'm getting at all from Gemma and Phi on Ollama.
Then again, on a second attempt I get wildly different results for both of them. It might be a matter of advanced settings, like temperature, but single examples don't seem indicative of one being better than the other at this kind of question.
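If you want the runs to be comparable, one thing that helps is pinning the sampling parameters instead of relying on defaults. Here's a minimal sketch using Ollama's local REST API, assuming the default localhost:11434 endpoint and that the models are already pulled under the tags `gemma` and `phi` (the prompt is just a placeholder):

```python
# Minimal sketch: pin temperature and seed via Ollama's local REST API
# so repeated runs of the same prompt are comparable between models.
# Assumes Ollama is running on the default port and the models are
# already pulled as "gemma" and "phi".
import requests

PROMPT = "Explain why the sky is blue in two sentences."

def ask(model: str, prompt: str) -> str:
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": model,
            "prompt": prompt,
            "stream": False,
            "options": {
                "temperature": 0.2,  # lower temperature = less run-to-run variation
                "seed": 42,          # fixed seed for repeatable sampling
            },
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

for model in ("gemma", "phi"):
    print(f"--- {model} ---")
    print(ask(model, PROMPT))
```

With temperature and seed fixed like this, differences between the two models are at least not just sampling noise, though a single prompt still isn't much of a benchmark.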