
I've just re-discovered ollama, and it's come a long way: it has reduced the very difficult task of locally hosting your own LLM (and getting it running on a GPU) to simply installing a deb! It also works on Windows and Mac, so it can help everyone.
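
If you haven't tried it: once the ollama service is running, it exposes a local HTTP API on port 11434, so anything on your machine or network can use it. Here's a rough sketch of querying it from Python with nothing but the standard library (the model name is just an example; pull whichever one you like first with `ollama pull`):

```python
# Minimal sketch: query a locally running ollama server over its HTTP API.
# Assumes ollama is installed and a model has already been pulled, e.g.
# `ollama pull llama3.2` (the model name here is illustrative).
import json
import urllib.request

payload = {
    "model": "llama3.2",          # any model you've pulled locally
    "prompt": "Why self-host an LLM?",
    "stream": False,              # one JSON object instead of a stream
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",  # ollama's default endpoint
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```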

I'd like to see Lemmy become useful for specific technical sub-branches, instead of people having to hunt for the best existing community, which is subjective and makes information hard to find. So I created [email protected] for everyone to discuss, ask questions, and help each other out with ollama!

So please join, subscribe, and feel free to ask questions, post tips and projects, and help out where you can!

Thanks!

[–] [email protected] 1 points 1 day ago (1 children)

My Home Assistant instance runs on Unraid, and I have an old NVIDIA Quadro P5000. I really want to run a vision model so it can describe who's at my doorbell.

[–] [email protected] 1 points 10 hours ago* (last edited 6 hours ago)

Oh actually that's a great card for LLM serving!

Build and run the llama.cpp server from source; it has better support for Pascal cards than anything else:

https://github.com/ggml-org/llama.cpp/blob/master/docs/multimodal.md


Gemma 3 is a hair too big for the P5000's 16GB (the weights come to roughly 17-18GB), so I'd start with InternVL3 14B at Q5_K_XL: https://huggingface.co/unsloth/InternVL3-14B-Instruct-GGUF

Or Mistral Small 3.2 24B at IQ4_XS for more 'text' intelligence than vision: https://huggingface.co/unsloth/Mistral-Small-3.2-24B-Instruct-2506-GGUF

I'm a bit 'behind' on the vision model scene, so I can look around more if those don't feel sufficient, or walk you through setting up the llama.cpp server. Basically, it provides an endpoint you can hit with the same API as ChatGPT.
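
To give you an idea, here's a rough sketch of what hitting that endpoint from Python could look like for your doorbell use case, assuming llama-server was started with a vision model and its mmproj file (per the multimodal doc above) on the default port 8080; the snapshot path and prompt are placeholders:

```python
# Hedged sketch: ask a llama.cpp server (started with a vision model and
# --mmproj, default port 8080) to describe a doorbell snapshot through its
# OpenAI-compatible chat endpoint. File path and port are assumptions.
import base64
import json
import urllib.request

with open("doorbell.jpg", "rb") as f:  # hypothetical camera snapshot
    image_b64 = base64.b64encode(f.read()).decode("ascii")

payload = {
    "messages": [{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe the person at the door."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
}

req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    reply = json.loads(resp.read())
    print(reply["choices"][0]["message"]["content"])
```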