This post was submitted on 20 Apr 2024
1 point (100.0% liked)

LocalLLaMA

2249 readers

Community to discuss LLaMA, the large language model created by Meta AI.

This is intended to be a replacement for r/LocalLLaMA on Reddit.

founded 1 year ago

Consider this hypothetical scenario: if you were given $100,000 to build a PC/server to run open-source LLMs like LLaMA 3 for single-user purposes, what would you build?

top 9 comments
[–] [email protected] 0 points 6 months ago

4 of whatever modern GPU has the most VRAM currently. (So I can run 4 personalities at the same time; see the sketch at the end of this list.)

Whatever the best AMD EPYC CPU currently is.

As much ECC RAM as possible.

Waifu themes all over the computer.

Linux, LTS edition.

A bunch of NVMe SSDs configured redundantly.

And 2 RTX 4090s. (One for the host and one for me)
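Rough sketch of the "four personalities at once" part: one server process per GPU, each pinned to its own card. The server binary (llama.cpp's llama-server here), model files, and ports are illustrative; any single-GPU server would work the same way.

```python
# Launch one model server per GPU, each process seeing only its own card via
# CUDA_VISIBLE_DEVICES and listening on its own port. Names are illustrative.
import os
import subprocess

procs = []
for gpu in range(4):
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu))
    procs.append(subprocess.Popen(
        ["llama-server", "-m", f"personality_{gpu}.gguf",
         "--port", str(8080 + gpu), "--n-gpu-layers", "999"],
        env=env,
    ))

for p in procs:
    p.wait()
```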

[–] [email protected] 0 points 7 months ago

I run it all locally on my laptop. It was about $30k new, but you can get them used now, years later, for about $1k to $2k.

[–] [email protected] 0 points 7 months ago

A used mini PC and a nice boat.

[–] [email protected] 0 points 7 months ago

I'm not an expert in any of this, so just wildly speculating in the middle of the night about a huge hypothetical one-person AI lab:

Super high-end equipment would probably quickly eat such a budget (2-5x H100?), but a 'small' rack of 20-25 ordinary GPUs (P40s) with 8 GB+ of VRAM each, combined with a local petals.dev setup, would be my quick choice.
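Roughly, the client side of such a petals.dev setup could look like the sketch below, with each rack node serving model blocks via `python -m petals.cli.run_server <model>`. The model name is illustrative and the API is the petals package's documented client interface.

```python
# Petals client sketch: the heavy transformer blocks run on the GPU rack;
# this client holds only the small input/output layers and streams
# activations to the swarm over the network.
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

MODEL = "meta-llama/Meta-Llama-3-70B-Instruct"  # illustrative

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoDistributedModelForCausalLM.from_pretrained(MODEL)

inputs = tokenizer("A $100k single-user LLM server should", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0]))
```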

However, it's hard to compete with the cloud on power efficiency, so running it would quickly burn through whatever is left of the budget on power alone. All non-sensitive traffic should probably go to something like Groq's cloud, and the rest to private servers.
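A rough sketch of that split, assuming Groq's OpenAI-compatible endpoint and a local OpenAI-compatible server (for example llama.cpp on port 8080); the model names and the sensitivity check are placeholders, not a fixed scheme.

```python
# Route non-sensitive prompts to the hosted Groq API and keep everything else
# on the private server. Both speak the OpenAI-compatible chat API, so only
# the base URL, key, and model name differ.
import os
from openai import OpenAI

groq = OpenAI(base_url="https://api.groq.com/openai/v1", api_key=os.environ["GROQ_API_KEY"])
local = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

def ask(prompt: str, sensitive: bool) -> str:
    client, model = (local, "llama-3-70b") if sensitive else (groq, "llama3-70b-8192")
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```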

An alternative solution is to go for an NPU setup (TPU, LPU, whatever-PU), and/or even a small power generator (wind, solar, digester/burner) to drive it. A cluster of 50 Orange Pi 5B (RK3588) boards with 32 GB of RAM each is within budget (50 x 6 = 300 TOPS in theory, running with 1.6 TB of total RAM on about 500 W). AFAIK the underlying software stack isn't there yet for small NPUs, but more and more frameworks besides CUDA keep popping up (ROCm, Metal, OpenCL, Vulkan, ...), so one for NPUs will probably appear soon.
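The back-of-the-envelope numbers behind that cluster, spelled out (the per-board wattage is just what the ~500 W total implies, not a measurement):

```python
# Rough cluster totals using the figures quoted above.
boards = 50
tops_per_board = 6        # RK3588 NPU, theoretical
ram_per_board_gb = 32
watts_per_board = 10      # implied by the ~500 W total

print(f"{boards * tops_per_board} TOPS (theoretical)")   # 300 TOPS
print(f"{boards * ram_per_board_gb / 1000:.1f} TB RAM")  # 1.6 TB
print(f"{boards * watts_per_board} W")                   # 500 W
```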

Transformers rely heavily on multiplication, but BitNet doesn't (addition only), so perhaps models will move to less power-intensive hardware and model frameworks in the future?
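To illustrate the addition-only point: with BitNet-style ternary weights in {-1, 0, +1}, a matrix-vector product reduces to sign-dependent additions. This is a toy sketch of the idea, not the actual BitNet kernel.

```python
# Toy illustration: a "matmul" against ternary weights {-1, 0, +1} is just
# additions and subtractions of the activations; no multiplies needed.
def ternary_matvec(weights, x):
    out = []
    for row in weights:               # one output unit per weight row
        acc = 0.0
        for w, xi in zip(row, x):
            if w == 1:
                acc += xi             # +1 -> add
            elif w == -1:
                acc -= xi             # -1 -> subtract
            # 0 -> skip entirely
        out.append(acc)
    return out

W = [[1, -1, 0], [0, 1, 1]]           # ternary weight matrix (2 x 3)
x = [0.5, 2.0, -1.0]                  # activations
print(ternary_matvec(W, x))           # [-1.5, 1.0]
```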

Last on my mind atm: you would probably also not spend all the money on inference/training compute. Any decent cognitive architecture around a model (agent networks) needs support functions: tool servers, home-served software for agents (fora/communication, scraping, modelling, code testing, statistics, etc.), basically versions of the tools we ourselves use for projects and communication/cooperation in an organization.

[–] [email protected] 0 points 7 months ago (3 children)

Why in the world would you need such a large budget? A Mac Studio can run the 70B variant just fine at $12k.
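For reference, a quantized 70B on Apple Silicon is roughly a few lines with something like llama-cpp-python and its Metal backend; the model path and quant level here are illustrative.

```python
# Sketch: a quantized Llama 3 70B on a Mac via llama-cpp-python, with all
# layers offloaded to the GPU (Metal). Path and quant are illustrative.
from llama_cpp import Llama

llm = Llama(
    model_path="Meta-Llama-3-70B-Instruct.Q4_K_M.gguf",  # ~40 GB GGUF
    n_gpu_layers=-1,   # offload everything to Metal
    n_ctx=8192,
)
out = llm("Q: What would you build with $100k? A:", max_tokens=64)
print(out["choices"][0]["text"])
```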

[–] [email protected] 0 points 7 months ago* (last edited 7 months ago)

If possible, to run the upcoming Llama 400B one. But this is just hypothetical.

[–] [email protected] 0 points 7 months ago

Depends on what you're doing with it, but prompt/context processing is a lot faster on Nvidia GPUs than on Apple chips, though if you're using the same prefix all the time (so it can be cached) it's a bit less of an issue.

The time to first token is a lot faster on datacenter GPUs, especially as context length increases, and consumer GPUs don't have enough VRAM.
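One way to see that effect yourself: time the first streamed token at increasing prompt lengths against any OpenAI-compatible endpoint, local or hosted. The base URL and model name below are placeholders.

```python
# Measure time-to-first-token vs. prompt length against an OpenAI-compatible
# server (e.g. a local llama.cpp or vLLM instance).
import time
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

for n_words in (100, 1000, 4000):
    prompt = "word " * n_words + "\nSummarize the above."
    start = time.time()
    stream = client.chat.completions.create(
        model="local-model",
        messages=[{"role": "user", "content": prompt}],
        stream=True,
        max_tokens=32,
    )
    next(iter(stream))    # block until the first chunk arrives
    print(f"{n_words} words -> TTFT {time.time() - start:.2f}s")
```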

[–] [email protected] 0 points 7 months ago (1 children)

So the answer would be "an alibi for the other $88k"

[–] [email protected] 0 points 7 months ago

I'll take 'Someone got seed funding and now needs progress to unlock the next part of the package' for $10, please, Alex.