[–] [email protected] 0 points 5 months ago (4 children)

That would actually be insane. Right now, I still need my GPU and about 8-10 gigs of VRAM to run a 7B model tho, so idk how that's supposed to work on a phone. Still, being able to run a model that's as good as a 70B model but with the speed and memory usage of a 7B model would be huge.

[–] [email protected] 0 points 3 weeks ago

Slowly, is how

[–] [email protected] 0 points 4 weeks ago* (last edited 4 weeks ago) (1 children)

I'm even more excited for running 8B models at the speed of 1B! Laughably fast ok-quality generations in JSON format would be crazy useful.

Also yeah, that 7B on mobile was not the best example. Again, probably 1B to 3B is the sweet spot for mobile (I'm running Qwen2.5 0.5B on my phone and it works really well for simple JSON; a rough sketch of that kind of use is below).

EDIT: And imagine the context lengths we would be able to run on our GPUs at home! What a time to be alive.
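For illustration, here is a minimal sketch of the kind of simple JSON extraction a sub-1B model can handle, using llama-cpp-python; the GGUF file name and the prompt are placeholders, not the commenter's actual setup.

```python
# Sketch: simple JSON extraction with a small GGUF model via llama-cpp-python.
# The model file name below is a placeholder for whatever small instruct model you use.
import json
from llama_cpp import Llama

llm = Llama(model_path="qwen2.5-0.5b-instruct-q4_k_m.gguf", n_ctx=2048)

prompt = (
    "Extract the fields as JSON with keys 'name' and 'city'.\n"
    "Text: Alice moved to Berlin last year.\n"
    "JSON:"
)

out = llm(prompt, max_tokens=64, temperature=0.0, stop=["\n\n"])
text = out["choices"][0]["text"]

try:
    print(json.loads(text))  # e.g. {"name": "Alice", "city": "Berlin"}
except json.JSONDecodeError:
    print("Model did not return valid JSON:", text)
```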

[–] [email protected] 0 points 4 weeks ago (1 children)

Being able to run 7B quality models on your phone would be wild. It would also make it possible to run those models on my server (which is just a mini pc), so I could connect it to my Home Assistant voice assistant, which would be really cool.

[–] [email protected] 0 points 3 weeks ago (1 children)

Something similar to this already kinda exists on HF with the 1.58-bit quantisation, which seems to get very similar performance to the original Llama 3 8B model. That's essentially a two-bit quantisation with reasonable performance!
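For anyone curious, here is a rough numpy sketch of the idea behind "1.58-bit" (ternary) weights, loosely following the BitNet b1.58 absmean scheme; it only illustrates the concept and is not the code behind the HF quants mentioned above.

```python
# Sketch: ternary ("1.58-bit") weight quantisation with a per-tensor absmean scale.
import numpy as np

def ternary_quantize(w: np.ndarray, eps: float = 1e-8):
    """Map weights to {-1, 0, +1} plus one float scale per tensor."""
    scale = np.mean(np.abs(w)) + eps          # absmean scale
    q = np.clip(np.round(w / scale), -1, 1)   # ternary values
    return q.astype(np.int8), scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(4, 8).astype(np.float32) * 0.02
q, scale = ternary_quantize(w)

# log2(3) ≈ 1.58 bits of information per weight, hence the name.
print("unique values:", np.unique(q))                              # [-1  0  1]
print("mean abs error:", np.abs(w - dequantize(q, scale)).mean())
```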

[–] [email protected] 0 points 3 weeks ago

That's really interesting, gonna try it out and see how well it runs

[–] [email protected] 0 points 5 months ago (1 children)

I have never worked on machine learning. What does the B stand for? Billion? Bytes?

[–] [email protected] 0 points 5 months ago

I think it's how many billion parameters the model has
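For reference, the parameter count roughly fixes the memory footprint: weight memory ≈ parameters × bytes per weight, plus some overhead for the KV cache and activations. A quick back-of-the-envelope sketch:

```python
# Back-of-the-envelope weight memory, ignoring KV cache and activation overhead.
def weight_gb(params_billion: float, bits_per_weight: float) -> float:
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

for bits in (16, 8, 4):
    print(f"7B at {bits}-bit: ~{weight_gb(7, bits):.1f} GB")
# 7B at 16-bit: ~14.0 GB
# 7B at  8-bit:  ~7.0 GB
# 7B at  4-bit:  ~3.5 GB
```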

[–] [email protected] 0 points 5 months ago (1 children)

I only need ~4 GB of RAM/VRAM for a 7B model; my GPU only has 6 GB of VRAM anyway. 7B models are smaller than you think, or you have a very inefficient setup.

[–] [email protected] 0 points 5 months ago (1 children)

That's weird, maybe I actually am doing something wrong. Could it be because I'm using GGUF models?

[–] [email protected] 0 points 5 months ago

Llama 2 GGUF with 2-bit quantisation only needs ~5 GB of VRAM; 8-bit needs >9 GB. Anything in between is possible. There are even 1.5-bit and 1-bit options (not GGUF, AFAIK). Generally, fewer bits means worse results though.
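As a rough illustration of why fewer bits cost quality, here is a toy per-tensor symmetric round-trip quantisation in numpy; real GGUF k-quants are block-wise and more elaborate, but the trend of growing error at lower bit widths is the same.

```python
# Toy symmetric quantisation round trip: reconstruction error grows as bits shrink.
import numpy as np

def roundtrip_error(w: np.ndarray, bits: int) -> float:
    levels = 2 ** (bits - 1) - 1              # e.g. 127 usable levels at 8-bit
    scale = np.max(np.abs(w)) / levels
    q = np.clip(np.round(w / scale), -levels, levels)
    return float(np.abs(w - q * scale).mean())

w = np.random.randn(1_000_000).astype(np.float32) * 0.02
for bits in (8, 4, 2):
    print(f"{bits}-bit mean abs error: {roundtrip_error(w, bits):.6f}")
```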