this post was submitted on 10 Apr 2024
1 point (100.0% liked)
LocalLLaMA
Community to discuss LLaMA, the large language model created by Meta AI.
This is intended to be a replacement for r/LocalLLaMA on Reddit.
you are viewing a single comment's thread
view the rest of the comments
That's huge. I'm guessing we'll need to use a giant swap file?
You're right, but the model is also not quantized, so the weights are likely stored as 16-bit floats. If you quantize it you get a substantially smaller model that runs faster, though it may be somewhat less accurate.
Given that the 4-bit quantized 8x7B model comes down to 4.1 GB, this one might be roughly 3 times larger? So maybe 12 GB? Let's see.