this post was submitted on 19 May 2025
Free Open-Source Artificial Intelligence
I don't have many specific requirements, and GPT4All has mostly been working well for me so far. That said, my latest use case for GPT4All is planning a new Python-based project with examples as code snippets, and it lacks one specific quality-of-life feature: a "Copy Code" button.

There is an open issue on GPT4All's GitHub, but since there's no guarantee that feature will ever be implemented, I thought I'd take this opportunity to explore whether there are other tools like GPT4All that offer a ChatGPT-like experience in a local environment. I'm neither a professional developer nor a sysadmin, so a lot of self-hosting guides go over my head — that's what drew me to GPT4All in the first place, since it's very accessible to non-developers like myself. That said, I'm open to suggestions and willing to learn new skills if that's what it takes.

I'm running on Linux w/ AMD hardware: Ryzen 7 5800X3D processor + Radeon RX 6750 XT.

Any suggestions? Thanks in advance!

[–] [email protected] 14 points 3 days ago (2 children)

OpenWebUI is a superb front-end that supports just about any backend you can think of (including Ollama for locally hosted LLMs), and it has some really nice features like pipelines that can extend its functionality however you might need. It definitely has the "copy code" feature built in, and it outputs markdown for regular documentation purposes.

[–] [email protected] 2 points 2 days ago* (last edited 2 days ago) (1 children)

Thanks for the tip about OpenWebUI. After watching this video about its features, I want to learn more.

Would you mind sharing a little bit about your setup? For example, do you have a home lab or do you just run OpenWebUI w/ Ollama on a spare laptop or something? I thought I saw some documentation suggesting that this stack can be run on any system, but I'm curious how other people run it in the real world. Thanks!

[–] [email protected] 5 points 2 days ago (1 children)

Sure, I run OpenWebUI in a Docker container on my TrueNAS SCALE home server (it's one of their standard packages, so basically a one-click install). From there I've configured API use with OpenAI, Gemini, Anthropic, and DeepSeek (part of my job involves evaluating the performance of these big models for various in-house tasks), along with pipelines for some of our specific workflows and MCP via mcpo.
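For anyone not on TrueNAS, a roughly equivalent plain-Docker setup is a sketch like the following, based on OpenWebUI's published image; the host port and volume name here are just example choices, not requirements:

```shell
# Run OpenWebUI from the official image, persisting chats and
# settings in a named volume. The UI is then reachable at
# http://<server>:3000 (the container listens on 8080 internally).
docker run -d \
  --name open-webui \
  -p 3000:8080 \
  -v open-webui:/app/backend/data \
  --restart always \
  ghcr.io/open-webui/open-webui:main
```

API keys for providers like OpenAI or Anthropic can then be added from the web UI's connection settings rather than baked into the container.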

I previously had my Ollama installation in another Docker container, but I didn't like having a big GPU in my NAS box, so I moved it to its own machine. I'm mostly interested in testing small/tiny models there. I again have Ollama running in a Docker container (just the official Docker image), but this time on a bare-metal Debian server, and I configured OpenWebUI to point to that as well (OpenWebUI lets you select which LLM(s) you want to use on a conversation-by-conversation basis, so there's no problem having a bunch of them hooked up at the same time).
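A minimal sketch of that split, assuming the official Ollama image (`<gpu-box>` is a placeholder for the GPU machine's hostname, and the model name is just an example):

```shell
# On the GPU box: run Ollama's official image, persisting models
# in a named volume, and pull a small model into it.
# (For AMD cards like the OP's RX 6750 XT, Ollama also publishes a
# ROCm variant, ollama/ollama:rocm, which additionally needs
# --device /dev/kfd --device /dev/dri passed through.)
docker run -d \
  --name ollama \
  -p 11434:11434 \
  -v ollama:/root/.ollama \
  ollama/ollama
docker exec -it ollama ollama pull llama3.2
```

On the OpenWebUI side, the remote instance can then be added either in the UI's connection settings or by setting the container's documented `OLLAMA_BASE_URL` environment variable to `http://<gpu-box>:11434`.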

[–] [email protected] 1 points 2 days ago

Thank you, this is really helpful to inform my setup!

[–] [email protected] 5 points 3 days ago

OpenWebUI is also my go-to. It works nicely with RunPod's vLLM template, so I can run local models but also use heavier ones at minimal cost when it suits me.