this post was submitted on 31 Mar 2024
LocalLLaMA
Community to discuss about LLaMA, the large language model created by Meta AI.
This is intended to be a replacement for r/LocalLLaMA on Reddit.
Your best bet would probably be to get a used office PC to put the card in. You'll likely have to replace the power supply and maybe swap the storage, but given what proper external GPU enclosures go for, the price might not end up much different. Some frameworks don't support loading a model directly to the GPU, so make sure you have more RAM than VRAM.
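If you want a quick sanity check before buying anything, something like this works (a rough sketch, assuming psutil and a CUDA build of PyTorch, neither of which is mentioned above):

```python
# Rough sketch: compare total system RAM against the GPU's VRAM, since some
# frameworks stage the whole model in RAM before copying it to the GPU.
# Assumes psutil and a CUDA-enabled PyTorch install.
import psutil
import torch

total_ram = psutil.virtual_memory().total                      # bytes of system RAM
total_vram = torch.cuda.get_device_properties(0).total_memory  # bytes of VRAM on GPU 0

print(f"RAM:  {total_ram / 2**30:.1f} GiB")
print(f"VRAM: {total_vram / 2**30:.1f} GiB")

if total_ram <= total_vram:
    print("Less RAM than VRAM: loading big models may fail or swap heavily.")
```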
An ARM SoC won't work in most cases due to a lack of bandwidth and software support. The only board I know of that can do it is the RPi 5, and that's still mostly a proof of concept.
In general I wouldn't recommend a Titan X unless you already have one, because it's been deprecated in CUDA, so getting modern libraries to work will be a pain.
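If you already have the card, you can see what you're up against with something like this (a rough sketch, again assuming a CUDA build of PyTorch; the original Maxwell Titan X is compute capability 5.2, which recent CUDA releases list as deprecated):

```python
# Rough sketch: print the GPU's compute capability and the architectures this
# PyTorch build was compiled for. If your sm_XY isn't in the build-target list,
# the prebuilt kernels won't run on the card and you'd have to build from source.
import torch

major, minor = torch.cuda.get_device_capability(0)
print(f"Device:             {torch.cuda.get_device_name(0)}")
print(f"Compute capability: {major}.{minor}")
print(f"Build targets:      {torch.cuda.get_arch_list()}")  # e.g. ['sm_80', 'sm_86', ...]
```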
Omg I spent too much on this... thanks for the heads up, that is a major fuckup