I have a mini PC with an AMD Ryzen 7 5700U where I host some services, including Ollama and Open WebUI.

Unfortunately, ROCm support isn't quite there yet, and support for mobile GPUs even less so.

Surprisingly, prompts do work when Ollama is configured to use the CPU, but the speed is just... well, not good.

So, what would be a cheap and energy-efficient setup to run some kind of LLM for personal use while still getting decent speed?

I was thinking about getting an eGPU enclosure, but I'm not sure how solid that setup would end up being.
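For anyone curious, this is roughly how I'm forcing CPU-only generation right now. A minimal sketch assuming Ollama's default API on localhost:11434; the model name is a placeholder for whatever you've pulled:

```sh
# Ask Ollama to generate with zero layers offloaded to the GPU,
# i.e. pure CPU inference. "llama3" is a placeholder model name.
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Why is the sky blue?",
  "options": { "num_gpu": 0 }
}'
```

Setting `num_gpu` to 0 in the request options tells Ollama not to offload any layers, which sidesteps the ROCm issues entirely, just slowly.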

[–] [email protected] 2 points 5 months ago

You could try llama.cpp; I think it's tuned to run well on CPUs.
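A minimal sketch of building and running it CPU-only, assuming a recent checkout (where the CLI binary is named llama-cli) and a GGUF model you've already downloaded; the model path and thread count are placeholders:

```sh
# Build llama.cpp from source (the CPU backend is the default)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release

# Run a prompt; -t sets the number of CPU threads to use
./build/bin/llama-cli -m /path/to/model.gguf -p "Hello" -t 8
```

Matching -t to your physical core count is usually a good starting point on a CPU like the 5700U.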

[–] [email protected] 2 points 5 months ago

You could also try the ROCm fork of KoboldCpp

KoboldCpp bundles a web interface on top of llama.cpp, and it's generally pretty easy to get running.
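Something like this should get it serving locally; a sketch assuming the single-file koboldcpp.py launcher and a GGUF model (the model path is a placeholder, and the port and context size are just common defaults):

```sh
# Launch KoboldCpp with its bundled web UI on http://localhost:5001
# --threads controls CPU threads; --contextsize sets the context window
python koboldcpp.py --model /path/to/model.gguf --threads 8 --contextsize 4096 --port 5001
```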
