this post was submitted on 20 Apr 2024
LocalLLaMA
Community to discuss about LLaMA, the large language model created by Meta AI.
This is intended to be a replacement for r/LocalLLaMA on Reddit.
founded 1 year ago
I'm not an expert in any of this, so this is just wild middle-of-the-night speculation about a huge hypothetical one-person AI lab:
Super high-end equipment would probably eat such a budget quickly (2-5 × H100?), so a 'small' rack of 20-25 normal GPUs (P40?) with 8 GB+ VRAM each, combined with a local petals.dev setup, would be my quick choice.
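To make the pooling idea concrete: petals.dev splits a model's transformer blocks across machines. Here's a toy round-robin block assignment over a rack of cards — the function and worker names are made up for illustration, this is not the actual Petals API:

```python
# Toy sketch: assign transformer blocks round-robin to GPU workers,
# the way a petals.dev-style setup pools many small cards.
# Names and numbers are illustrative, not the real Petals API.

def assign_blocks(num_blocks: int, workers: list[str]) -> dict[str, list[int]]:
    """Round-robin model blocks across workers."""
    plan: dict[str, list[int]] = {w: [] for w in workers}
    for block in range(num_blocks):
        plan[workers[block % len(workers)]].append(block)
    return plan

# 80 transformer blocks (a 70B-class model) over 20 P40-class cards
workers = [f"p40-{i:02d}" for i in range(20)]
plan = assign_blocks(80, workers)
print(len(plan["p40-00"]))  # 4 blocks per card
```

Each card only needs to hold its own slice of the layers, which is why lots of 8 GB+ cards can jointly serve a model none of them could fit alone.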
However, it's hard to compete with the cloud on power efficiency, so electricity costs would quickly eat into the rest of the budget. All non-sensitive traffic should probably go to something like Groq cloud, with the rest kept on private servers.
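Rough arithmetic on the power point — the price per kWh below is an assumed example value, not a figure from the post:

```python
# Back-of-envelope: annual electricity cost of a ~500 W rig running 24/7.
# The electricity price is an assumed example; it varies a lot by region.
power_kw = 0.5
hours_per_year = 24 * 365        # 8760 h
price_per_kwh = 0.30             # assumed example price
annual_kwh = power_kw * hours_per_year
annual_cost = annual_kwh * price_per_kwh
print(annual_kwh)                # 4380.0 kWh/year
print(round(annual_cost, 2))     # 1314.0 per year at the assumed price
```

At multi-kW rack power levels this multiplies accordingly, which is why offloading non-sensitive traffic to the cloud can pay off.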
An alternative is to go for an NPU setup (TPU, LPU, whatnot-PU), and/or even a small power generator (wind, solar, digester/burner) to drive it. A cluster of 50 Orange Pi 5B (RK3588) boards with 32 GB RAM each is within budget (50 × 6 TOPS, so ~300 TOPS in theory, with 1.6 TB of total RAM at ~500 W). Afaik the underlying software stack isn't there yet for small NPUs, but more and more frameworks besides CUDA keep popping up (ROCm, Metal, OpenCL, Vulkan, ...), so one for NPUs will probably appear soon.
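Sanity-checking the cluster numbers from the paragraph above (50 boards, ~6 TOPS and 32 GB each):

```python
# Sanity check on the RK3588 cluster back-of-envelope numbers.
boards = 50
tops_each = 6        # theoretical NPU TOPS per RK3588
ram_gb_each = 32     # RAM per board

total_tops = boards * tops_each
total_ram_gb = boards * ram_gb_each
print(total_tops)     # 300 TOPS, theoretical peak
print(total_ram_gb)   # 1600 GB = 1.6 TB aggregate RAM
```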
Transformers rely heavily on multiplications, but BitNet doesn't (only additions), so perhaps models will move to less power-intensive hardware and model frameworks in the future?
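A tiny sketch of that idea: with ternary weights in {-1, 0, +1} (as in BitNet b1.58-style models), a matrix-vector product reduces to additions and subtractions of activations — no multiplications at all. This is a toy illustration, not any real BitNet kernel:

```python
# BitNet-style ternary matvec: weights are in {-1, 0, +1}, so each dot
# product is just adds/subtracts of activations -- no multiplies needed.
def ternary_matvec(weights: list[list[int]], x: list[float]) -> list[float]:
    out = []
    for row in weights:
        acc = 0.0
        for w, xi in zip(row, x):
            if w == 1:
                acc += xi    # addition instead of w * xi
            elif w == -1:
                acc -= xi    # subtraction instead of w * xi
            # w == 0: contributes nothing, skip entirely
        out.append(acc)
    return out

W = [[1, -1, 0],
     [0, 1, 1]]
print(ternary_matvec(W, [2.0, 3.0, 4.0]))  # [-1.0, 7.0]
```

Adders are much cheaper than multipliers in silicon, which is the power-efficiency argument in a nutshell.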
Last on my mind atm: you would probably also not spend all the money on inference/training compute. Any decent cognitive architecture around a model (agent networks) needs support functions: tool servers, self-hosted software for the agents (forums/communication, scraping, modelling, code testing, statistics, etc.). Basically versions of the tools we ourselves use for projects and communication/cooperation in an organization.
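For the support-function side, here's a minimal sketch of the kind of tool registry an agent setup might expose — all names are made up for illustration, this is not any real framework's API:

```python
# Minimal tool registry for an agent setup: agents look up tools by
# name and call them. Purely illustrative, not a real framework.
from typing import Callable

TOOLS: dict[str, Callable[..., str]] = {}

def tool(name: str):
    """Decorator that registers a function as a named tool."""
    def wrap(fn: Callable[..., str]) -> Callable[..., str]:
        TOOLS[name] = fn
        return fn
    return wrap

@tool("echo")
def echo(text: str) -> str:
    return text

@tool("word_count")
def word_count(text: str) -> str:
    return str(len(text.split()))

# An agent would dispatch by tool name:
print(TOOLS["word_count"]("local ai lab for one person"))  # 6
```

Real versions of this (scrapers, code runners, statistics services) are ordinary server software, which is why some of the budget goes to plain infrastructure rather than accelerators.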