this post was submitted on 12 Oct 2024
183 points (95.5% liked)

Selfhosted


Instructions here: https://github.com/ghobs91/Self-GPT

If you’ve ever wanted a ChatGPT-style assistant but fully self-hosted and open source, Self-GPT is a handy script that bundles Open WebUI (chat interface front end) with Ollama (LLM backend).

  • Privacy & Control: Unlike ChatGPT, everything runs locally, so your data stays with you—great for those concerned about data privacy.
  • Cost: Once set up, self-hosting avoids monthly subscription fees. You’ll need decent hardware (ideally a GPU), but there’s a range of model sizes to fit different setups.
  • Flexibility: Open WebUI and Ollama support multiple models and let you switch between them easily, so you’re not locked into one provider.
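For anyone who would rather see what the script is doing than run it blind, the same pairing can be set up by hand with two containers. This is a sketch using the upstream defaults (ports, volume names, and the `OLLAMA_BASE_URL` variable are from the Open WebUI docs); adjust for your own setup:

```shell
# Start the Ollama backend (API on its default port 11434)
docker run -d --name ollama -p 11434:11434 \
  -v ollama:/root/.ollama ollama/ollama

# Start Open WebUI and point it at the Ollama container
docker run -d --name open-webui -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  -v open-webui:/app/backend/data \
  ghcr.io/open-webui/open-webui:main

# Browse to http://localhost:3000, or pull a model from the CLI:
docker exec -it ollama ollama pull llama3
```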
top 50 comments
[–] [email protected] 1 points 1 month ago

Have been using it a while now, I recommend using something like Tailscale so you can access it from anywhere on your phone. I also have a raspberry pi that can wake up my main machine when I need it.
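The wake-up trick above is usually done with Wake-on-LAN. A rough sketch of what that looks like from a Pi (the MAC address is a placeholder, and WoL has to be enabled in the target machine's BIOS/NIC settings first):

```shell
# Send a magic packet to the main machine's NIC (requires the
# 'wakeonlan' package; MAC below is a placeholder)
wakeonlan AA:BB:CC:DD:EE:FF

# Alternative using etherwake (needs root and the interface name)
sudo etherwake -i eth0 AA:BB:CC:DD:EE:FF
```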

[–] [email protected] 1 points 1 month ago (1 children)

I didn't use any AI until I was able to host it locally. I hate the idea of training a model, or how data centers consume so much water and resources. I also don't use any generative AI for images; it's not ethical for me. But I'm trying to find a way to make Ollama a tool I can actually use, and not just a thing to talk to sometimes for fun.

[–] [email protected] 2 points 1 month ago (1 children)

You realize the models you're running locally had to be trained the same way as the proprietary ones, right?

[–] [email protected] 1 points 1 month ago

Yes, but it's a copy of something that's already done. I'm not making new requests to a data center that wastes 4 liters of water every 100 words like GPT-4; I'm just using my GPU like with a video game.

[–] [email protected] 3 points 1 month ago (1 children)

Open-Webui published a docker image that has a bundled Ollama that you can use, too: ghcr.io/open-webui/open-webui:cuda. More info at https://docs.openwebui.com/getting-started/#installing-open-webui-with-bundled-ollama-support
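Per the linked docs, the bundled image is a single `docker run` away. Roughly (using the tag cited above; drop `--gpus=all` if you have no NVIDIA GPU, and note the docs also list a plain `:ollama` tag for the bundled variant):

```shell
# One container carrying both Open WebUI and a bundled Ollama,
# with GPU acceleration passed through
docker run -d -p 3000:8080 --gpus=all \
  -v ollama:/root/.ollama \
  -v open-webui:/app/backend/data \
  --name open-webui --restart always \
  ghcr.io/open-webui/open-webui:cuda
```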

[–] [email protected] 1 points 3 weeks ago

And you can open the default ollama port to allow it to be used by other services (like VSCode), not only through Open-WebUI.
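Concretely, Ollama listens on localhost:11434 by default; binding it to all interfaces makes its HTTP API reachable by other machines and tools. A sketch (`<server-ip>` and the model name are placeholders):

```shell
# Bind the Ollama API to all interfaces instead of localhost
OLLAMA_HOST=0.0.0.0:11434 ollama serve

# Any client can then hit the REST API directly, e.g.:
curl http://<server-ip>:11434/api/generate \
  -d '{"model": "llama3", "prompt": "hello", "stream": false}'
```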

[–] [email protected] 2 points 1 month ago (1 children)

That’s cool. Personally I just integrated it into my normal chat client by connecting Aichat, which supports a ton of backends including Ollama and hosted options, with Matrix.

Blog post with more info https://jackson.dev/post/chaz/

[–] [email protected] 1 points 1 month ago

In my humble opinion the point of self hosting an LLM is so that the data doesn't leave your LAN.

[–] [email protected] 1 points 1 month ago

Where's the link to the download?

[–] [email protected] 12 points 1 month ago (2 children)

I just want one that won't just be like "I"m sowwy miss I can't talk about that 🥺"

[–] [email protected] 2 points 1 month ago

Download a "dolphin" model
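For example, there are "dolphin" fine-tunes in the Ollama registry; names and tags change, so check https://ollama.com/library for the current ones:

```shell
# Pull and chat with one of the dolphin (uncensored) fine-tunes
ollama run dolphin-mistral
```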

[–] [email protected] 4 points 1 month ago (1 children)

Tons of models you can run with ollama are "uncensored"

[–] [email protected] 3 points 1 month ago

I made a robot which is delighted about the idea of overthrowing capitalism and will enthusiastically explain how to take down your government.

[–] [email protected] 4 points 1 month ago* (last edited 1 month ago) (1 children)

I use Alpaca and ollama running in podman

All running on CPU with decent performance

[–] [email protected] 4 points 1 month ago (1 children)

Wow, that's an old model. Great that it works for you, but have you tried some more modern ones? They're generally considered a lot more capable at the same size

[–] [email protected] 10 points 1 month ago (3 children)

Wish I could accelerate these models with an Intel Arc card, unfortunately Ollama seems to only support Nvidia

[–] [email protected] 3 points 1 month ago

And AMD

You should be able to get llama.cpp to run on Arc but I'm not sure what performance you will get. It may not be worth it.

[–] [email protected] 18 points 1 month ago* (last edited 1 month ago) (1 children)

They support AMD as well.

https://ollama.com/blog/amd-preview

also check out this thread:

https://github.com/ollama/ollama/issues/1590

Seems like you can run llama.cpp directly on intel ARC through Vulkan, but there are still some hurdles for ollama.
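A sketch of what that llama.cpp route looks like, per its build docs (the CMake flag is `GGML_VULKAN` in current trees; older ones used `LLAMA_VULKAN`, and you'll need the Vulkan SDK installed):

```shell
# Build llama.cpp with the Vulkan backend, which is what lets it
# target Intel Arc GPUs
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build -DGGML_VULKAN=ON
cmake --build build --config Release

# Run a GGUF model, offloading as many layers as fit to the GPU
./build/bin/llama-cli -m model.gguf -ngl 99 -p "hello"
```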

[–] [email protected] 3 points 1 month ago

Interesting, I see that is pretty new. Some of the documentation must be out of date because it definitely said Nvidia only somewhere when I tested it about a month ago. Thanks for giving me hope!

[–] [email protected] 15 points 1 month ago

I have been running this for a year on my old HP EliteDesk 800 SFF (G2) with 64 GB RAM, and it performs great on the smallest models (up to 8B) on CPU only. I run Ollama and Open WebUI in containers/LXC on Proxmox. It's not as smart as ChatGPT, but it can be surprisingly capable for everyday tasks!
