this post was submitted on 19 May 2024

LocalLLaMA

Community to discuss LLaMA, the large language model created by Meta AI.

This is intended to be a replacement for r/LocalLLaMA on Reddit.

So I'm playing around with Open WebUI running llama3:70b and I like it a lot.
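
For reference, here's roughly how I poke the backend directly: Open WebUI mostly just fronts an Ollama server on its default port 11434, and you can hit that API yourself. A minimal sketch (assuming the default localhost:11434 endpoint and the llama3:70b tag; the response field names are from my reading of the Ollama API docs):

```python
import requests

# Assumes Ollama is running locally on its default port with llama3:70b
# pulled; Open WebUI is essentially a frontend for this same API.
OLLAMA_URL = "http://localhost:11434/api/generate"

payload = {
    "model": "llama3:70b",
    "prompt": "Explain what a homelab is in one sentence.",
    "stream": False,  # return a single JSON object instead of a stream
}

resp = requests.post(OLLAMA_URL, json=payload, timeout=600)
resp.raise_for_status()
data = resp.json()

print(data["response"])

# eval_count / eval_duration (nanoseconds) give a rough tokens-per-second
# figure, handy for comparing CPU-only runs against a future GPU setup.
tokens_per_s = data["eval_count"] / (data["eval_duration"] / 1e9)
print(f"~{tokens_per_s:.2f} tokens/s")
```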

I used to have a paid ChatGPT subscription with OpenAI, but since I've swapped basically all of my internet services to self-hosted alternatives, I cancelled that as well.

The problem is that the way I run it (on a Lenovo ThinkCentre with 6 cores and 64 GB of RAM) is painfully slow, since the machine has no dedicated GPU (only onboard graphics). Otherwise it works great as a homelab server for all kinds of things, running Ubuntu Server.
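
From what I understand the slowness is mostly memory bandwidth, not core count: every generated token has to stream the whole quantized model through RAM, so tokens/s is roughly bandwidth divided by model size. A quick back-of-envelope sketch (the bandwidth and model-size numbers are ballpark assumptions, not measurements):

```python
# Back-of-envelope: token generation is roughly memory-bandwidth bound,
# so tokens/s is about (memory bandwidth) / (bytes read per token),
# and bytes per token is roughly the size of the quantized weights.

def tokens_per_second(model_size_gb: float, bandwidth_gb_s: float) -> float:
    """Very rough upper bound on generation speed."""
    return bandwidth_gb_s / model_size_gb

# Ballpark figures (assumptions, not measurements):
llama3_70b_q4_gb = 40    # ~40 GB for a 4-bit quant of a 70B model
llama3_8b_q4_gb = 5      # ~5 GB for a 4-bit quant of an 8B model

ddr4_dual_channel = 40   # GB/s, typical desktop DDR4
gpu_24gb_card = 900      # GB/s, e.g. a used 3090-class card

print(f"70B on CPU: ~{tokens_per_second(llama3_70b_q4_gb, ddr4_dual_channel):.1f} tok/s")
print(f"8B  on CPU: ~{tokens_per_second(llama3_8b_q4_gb, ddr4_dual_channel):.1f} tok/s")
print(f"8B  on GPU: ~{tokens_per_second(llama3_8b_q4_gb, gpu_24gb_card):.0f} tok/s")
```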

Now I'm thinking about how to get a well-performing local LLM going in the most cost-efficient way possible. Should I get another computer with a graphics card running headless solely for Open WebUI (and what would "good" specs be), or could I maybe hook up one of those external GPU enclosures with a card to the existing machine?
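
My rough understanding is that the spec that matters most is VRAM: the quantized weights (plus some KV-cache headroom) need to fit on the card, or inference falls back to partly running on the CPU again. A sizing sketch I've been using to think about it (the parameter counts and the 20% overhead factor are just my guesses):

```python
# Rough VRAM sizing: quantized weight size plus some headroom for the
# KV cache and runtime buffers. The 1.2x overhead factor is a guess.

def vram_needed_gb(params_billion: float, bits_per_weight: float,
                   overhead: float = 1.2) -> float:
    weights_gb = params_billion * bits_per_weight / 8
    return weights_gb * overhead

for name, params in [("llama3:8b", 8), ("llama3:70b", 70)]:
    for bits in (4, 8):
        print(f"{name} at {bits}-bit: ~{vram_needed_gb(params, bits):.0f} GB VRAM")

# Rule of thumb from this: an 8B model at 4-bit fits comfortably on a
# 12 GB card, while a 70B model at 4-bit wants ~40+ GB, i.e. two 24 GB
# cards or partial CPU offload, even through an eGPU enclosure.
```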
