OpenWebUI is a superb front-end. It supports just about any backend you can think of (including Ollama for locally hosted LLMs) and has some really nice features, like pipelines, that let you extend its functionality however you need. It definitely has the "copy code" feature built in, and it outputs Markdown for regular documentation purposes.
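For anyone curious what a pipeline actually looks like: it's basically a Python class that the OpenWebUI Pipelines server loads and exposes as a selectable model. A minimal sketch is below, written from memory of the examples in the open-webui/pipelines repo, so treat the exact hook names and the `pipe` signature as assumptions rather than gospel:

```python
# Minimal OpenWebUI pipeline sketch (hook names and pipe() signature assumed
# from the Pipelines examples; check the open-webui/pipelines repo for the
# authoritative version).
from typing import Generator, Iterator, List, Union


class Pipeline:
    def __init__(self):
        # Name shown in the OpenWebUI model picker.
        self.name = "Example Passthrough Pipeline"

    async def on_startup(self):
        # Runs when the Pipelines server starts (e.g. open connections).
        pass

    async def on_shutdown(self):
        # Runs when the Pipelines server shuts down.
        pass

    def pipe(
        self, user_message: str, model_id: str, messages: List[dict], body: dict
    ) -> Union[str, Generator, Iterator]:
        # Called for every chat turn routed to this pipeline; whatever you
        # return (a string, or a generator of chunks) becomes the reply.
        return f"You said: {user_message}"
```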
Thanks for the tip about OpenWebUI. After watching this video about its features, I want to learn more.
Would you mind sharing a little bit about your setup? For example, do you have a home lab or do you just run OpenWebUI w/ Ollama on a spare laptop or something? I thought I saw some documentation suggesting that this stack can be run on any system, but I'm curious how other people run it in the real world. Thanks!
Sure, I run OpenWebUI in a Docker container from my TrueNAS SCALE home server (it's one of their standard packages, so basically a 1-click install). From there I've configured API use with OpenAI, Gemini, Anthropic and DeepSeek (part of my job involves evaluating the performance of these big models for various in-house tasks), along with pipelines for some of our specific workflows and MCP via mcpo.
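In case it helps anyone setting up the same thing: several of those providers (DeepSeek in particular) expose OpenAI-compatible endpoints, which is what OpenWebUI's "OpenAI API" connections talk to, so you can sanity-check a key and base URL outside OpenWebUI with the standard `openai` client. Quick sketch, with the base URL, model name and key as placeholders you'd swap for your own:

```python
# Sanity-check an OpenAI-compatible endpoint before adding it to OpenWebUI.
# DeepSeek is shown as an example; base_url/model/api_key are placeholders,
# not values from this thread.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.deepseek.com/v1",  # any OpenAI-compatible endpoint
    api_key="sk-...",                        # your provider API key
)

resp = client.chat.completions.create(
    model="deepseek-chat",
    messages=[{"role": "user", "content": "Say hello in one short sentence."}],
)
print(resp.choices[0].message.content)
```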
I previously had my Ollama installation in another Docker container but didn't like having a big GPU in my NAS box, so I moved it to its own box. I am mostly interested in testing small/tiny models there. I again have Ollama running in a Docker container (just the official Docker image), but this time on a Debian bare-metal server, and I configured another OpenWebUI pipeline to point to that (OpenWebUI lets you select which LLM(s) you want to use on a conversation-by-conversation basis, so there's no problem having a bunch of them hooked up at the same time).
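If it's useful for testing that kind of split setup: Ollama listens on port 11434 and has a simple HTTP API, so you can confirm the GPU box is reachable and serving models before pointing OpenWebUI at it. Rough sketch below; the hostname and model tag are just placeholders for whatever you actually run:

```python
# Quick reachability/inference check against a remote Ollama server.
# Hostname and model tag are placeholders, not details from the thread.
import requests

OLLAMA_URL = "http://ollama-box.lan:11434"  # assumed address of the GPU server

# List the models the server has pulled.
tags = requests.get(f"{OLLAMA_URL}/api/tags", timeout=10).json()
print([m["name"] for m in tags.get("models", [])])

# Run a one-off, non-streaming generation against a small model.
resp = requests.post(
    f"{OLLAMA_URL}/api/generate",
    json={"model": "llama3.2:1b", "prompt": "Say hi in five words.", "stream": False},
    timeout=120,
)
print(resp.json()["response"])
```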
Thank you, this is really helpful to inform my setup!
OpenWebUI is also my go-to. It works nicely with RunPod's vLLM template, so I can run local models but also use heavier ones at minimal cost when it suits me.