this post was submitted on 09 Jul 2025

Selfhosted


Hey! I have been using Ansible to deploy Docker containers for a few services on my Raspberry Pi for a while now and it's working great, but I want to learn MOAR and I need help...

Recently, I've been considering migrating to bare metal K3S for a few reasons:

  • To learn and actually practice K8S.
  • To have redundancy and to try HA.
  • My RPis are all already running on MicroOS, so it kind of makes sense to me to try other SUSE stuff (?)
  • Maybe eventually being able to manage my two separate server locations with a neat k3s + Tailscale setup!

Here is my problem: I don't understand how things are supposed to be done. All the examples I find feel wrong. More specifically:

  • Am I really supposed to have a collection of small yaml files for everything, that I use with kubectl apply -f ?? It feels wrong and way too "by hand"! Is there a more scripted way to do it? Should I stay with everything in Ansible ??
  • I see little to no example on how to deploy the service containers I want (pihole, navidrome, etc.) to a cluster, unlike docker-compose examples that can be found everywhere. Am I looking for the wrong thing?
  • Even the official docs seem broken. Am I really supposed to run many helm commands (some of which just fail) and try to get SSL certs just to have Rancher and its dashboard ?!

I feel that having a K3S + Traefik + Longhorn + Rancher on MicroOS should be straightforward, but it's really not.

It's very much a noob question, but I really want to understand what I am doing wrong. I'm really looking for advice and especially configuration examples that I could try to copy, use and modify!

Thanks in advance,

Cheers!

top 50 comments
[–] [email protected] 11 points 3 days ago* (last edited 3 days ago) (2 children)

You have a lot of responses here, but I'll tell you what k8s actually is, since a lot of people seem to get this wrong.

Just like k8s, docker has many tools. Docker is packaged in a way that makes it look like just one tool; that's Docker Desktop. Under the hood there is Docker Engine, which is really a runtime, an image-management service, and an API. You can look into this more if you want. There are also containerd, runc, and cri-o. These were all created so that different implementations can talk to this API in a standard way and work.

Moving on to k8s. K8s is a way to scale these containers, running them in different ways and scaling horizontally. There are even ways to scale nodes vertically and horizontally to allow for more or fewer resources to place these containers on. This means k8s is very event-driven and utilizes a lot of APIs to communicate and take action.

You said that you are doing kubectl apply constantly and that it feels wrong. In reality, this is correct. Under the hood you are talking to the k8s control plane, which takes that manifest and stores it. Other services communicate with the control plane to figure out what they have to do. In fact, you can apply a whole directory of manifests, so you don't have to specify each file individually.
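For illustration, here's a hedged sketch of what one of those small manifests might look like (the names, namespace, and image tag are placeholders picked from the services OP mentioned, not anything from this thread):

```yaml
# manifests/navidrome.yaml -- one small manifest among many
apiVersion: apps/v1
kind: Deployment
metadata:
  name: navidrome
spec:
  replicas: 1
  selector:
    matchLabels:
      app: navidrome
  template:
    metadata:
      labels:
        app: navidrome
    spec:
      containers:
        - name: navidrome
          image: deluan/navidrome:latest   # illustrative image reference
          ports:
            - containerPort: 4533
```

With every manifest collected in one directory, kubectl apply -f manifests/ submits them all in one command (add -R to recurse into subdirectories).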

Again, there are many tools you can use to manage k8s. It is an orchestration system to manage pods and run them. You get to pick what tool you want to use. If you want something you can drive from a git repo, you can use something like ArgoCD or Flux. This is considered gitops and is more declarative. If you need a templating implementation, there are many, like helm, Jsonnet, and kustomize (although the last is not a full templating language). These can help you define your manifests in a more repeatable and meaningful way, but you can always apply them using the same tools (kubectl, argocd, flux, etc.)

There are many services that can run in k8s that will solve one problem or another and these tools scale themselves, since they mostly all use the same designs that keep scalability in mind. I kept things very simple, but try out vanilla k8s first to understand what is going on. It's great that you are questioning these things as it shows you understand there is probably something better that you can do. Now you just need to find the tools that are right for you. Ask what you hate or dislike about what you are doing and find a way to solve that and if there are any tools that can help. https://landscape.cncf.io/ is a good place to start to see what tools exist.

Anyway, good luck on your adventure. K8s is an enterprise tool after all and it's not really meant for something like a home lab. It's an orchestration system and NOT a platform that you can just start running stuff on without some effort. Getting it up and running is day 1 operations. Managing it and keeping it running is day 2 operations.

[–] [email protected] 1 points 3 days ago

I see, that makes sense actually! Thanks for the message!

I saw the landscape website before, that's a LOT of projects! =O

[–] [email protected] 2 points 3 days ago

I would add that you can run kubectl apply on directories and/or have multiple YAML documents in the same YAML file (separated with ---; it's part of the YAML standard).
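For example, a single file can carry several resources (the names and the env var here are purely illustrative):

```yaml
# one file, two resources, separated by "---" (plain YAML multi-document syntax)
apiVersion: v1
kind: Namespace
metadata:
  name: media
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: navidrome-env
  namespace: media
data:
  ND_LOGLEVEL: "info"
```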

[–] [email protected] 2 points 4 days ago (1 children)

I use Kube every day for work but I would recommend you not use it. It's complexity that answers problems you don't care about. How about Docker Swarm, or Podman services?

[–] [email protected] 3 points 3 days ago (2 children)

I disagree, it is great to use. Yes, some things are more difficult but as OP mentioned he wants to learn more, and running your own cluster for your services is an amazing way to learn k8s.

[–] [email protected] 4 points 3 days ago

If possible: do that on company time. Let the boss pay for it.

[–] [email protected] 3 points 3 days ago

The more I think about it the more I think you are right.

[–] [email protected] 9 points 4 days ago

You're right to be reluctant to apply everything by hand. K3s has a built-in feature that watches a directory and applies the manifests automatically: https://docs.k3s.io/installation/packaged-components

This can be used to install Helm charts in a declarative way as well: https://docs.k3s.io/helm
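As a hedged sketch (the chart and values shown are illustrative; check the chart's own docs), a HelmChart manifest dropped into k3s's watched directory looks roughly like this:

```yaml
# /var/lib/rancher/k3s/server/manifests/cert-manager.yaml
# k3s watches this directory and applies anything placed in it.
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: cert-manager
  namespace: kube-system
spec:
  repo: https://charts.jetstack.io
  chart: cert-manager
  targetNamespace: cert-manager
  createNamespace: true
  valuesContent: |-
    # chart values go here, verbatim, as the chart expects them
    installCRDs: true
```

The nice part is that this is just another manifest, so the same declarative workflow covers both raw resources and Helm releases.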

If you want to keep your solution agnostic to the kubernetes environment, I would recommend that you try ArgoCD (or FluxCD, but I never tried it so YMMV).

[–] [email protected] 5 points 4 days ago

I've thought about k8s, but there is so much about Docker that I still don't fully know.

[–] [email protected] 5 points 4 days ago (1 children)

And this is why I do not like K8s at all. The only reason to use it is to have something on your CV. Besides that, Docker Swarm and Hashicorp Nomad feel a lot better and are a lot easier to manage.

[–] [email protected] 6 points 3 days ago (1 children)

I personally feel like K8s has a purpose but not in homelab since our infrastructure is usually small. I don't need clever load-balancing or autoscaling for most of my work.

[–] [email protected] 4 points 3 days ago* (last edited 3 days ago)

Of course it is overkill for a homelab. The other features you mentioned can be achieved with Nomad or Swarm as well. And with Nomad you don’t even have to use the Docker engine.

Just ask yourself the following question: why is helm so popular? Why do I need a third party scripting language just for K8s?

You clearly will feel that K8s did many things right... ten years ago. But we have learned from that. And operations costs are exploding everywhere I see K8s in use (with or without Helm). Weird side effects, because at this layer you almost have an infinite number of edge cases.

That’s why I'm moving away from K8s: to make very large and complex platforms manageable for a small operations team. The DevOps engineers obviously don’t like that, because it is a major skill on the job market. In the end, I have to prioritize, and all I can do is spread awareness that K8s was great at some point, as was Windows 98 SE.

[–] [email protected] 18 points 4 days ago* (last edited 4 days ago) (1 children)

I've actually been personally moving away from Kubernetes for this kind of deployment, and I am a big fan of using Ansible to deploy containers as podman systemd units. You have a series of systemd .container files like the one below:

[Unit]
Description=Loki

[Container]
Image=docker.io/grafana/loki:3.4.1

# Use volume and network defined below
Volume=/mnt/loki-config:/mnt/config
Volume=loki-tmp:/tmp/loki
PublishPort=3100:3100
AutoUpdate=registry

[Service]
Restart=always
TimeoutStartSec=900

[Install]
# Start by default on boot
WantedBy=multi-user.target default.target

You use Ansible to write these into your /etc/containers/systemd/ folder. For example, the file above gets written as /etc/containers/systemd/loki.container.

Your Ansible script will then call systemctl daemon-reload, and then you can systemctl start loki to finish the example.
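A minimal sketch of those Ansible tasks might look like this (task names and the source path are assumptions, following the loki example above):

```yaml
# Deploy a quadlet and start the generated systemd service.
- name: Install the loki quadlet
  ansible.builtin.copy:
    src: loki.container
    dest: /etc/containers/systemd/loki.container
    mode: "0644"

- name: Reload systemd so the generated loki.service unit appears
  ansible.builtin.systemd:
    daemon_reload: true

- name: Start the container
  ansible.builtin.systemd:
    name: loki
    state: started
```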

[–] [email protected] 1 points 3 days ago

Never heard about this way to use podman before! Thanks for letting me know!

[–] [email protected] -4 points 4 days ago (1 children)
[–] [email protected] 1 points 3 days ago (1 children)

How about I'll do it anyway? <3

[–] [email protected] 2 points 2 days ago

If you're genuinely interested then fair enough. Just saying it's not the only option, as a lot of people seem to think these days, and for personal projects I think it's bonkers.

[–] [email protected] 9 points 4 days ago (1 children)

I’m the creator of the Ansible k3s playbooks, even if I'm no longer an active maintainer. But there’s a big community: give it a try and contribute! https://github.com/k3s-io/k3s-ansible

[–] [email protected] 2 points 3 days ago

I'll check it out! Thanks!

[–] [email protected] 4 points 4 days ago (1 children)

Firstly, I want to say that I started with podman (an alternative to docker) and Ansible, but I quickly ran into issues. The last issue I encountered, and the last straw, was that when creating a container, Ansible would not actually change the container unless I used Ansible to destroy and recreate it.

Without quadlets, podman manages its own state, which has issues, and that was the entire reason I was looking into alternatives to podman for managing state.

More research: https://github.com/linux-system-roles/podman: I found an Ansible role to generate podman quadlets, but I don’t really want to include other people's Ansible roles in my existing ones. Also, it takes Kubernetes YAML as input, which is very complex for what I am trying to do. At that point, why not just use a single-node Kubernetes cluster and let Kubernetes manage state?

So I switched to Kubernetes.

To answer some of your questions:

Am I really supposed to have a collection of small yaml files for everything, that I use with kubectl apply -f ?? It feels wrong and way too “by hand”! Is there a more scripted way to do it? Should I stay with everything in Ansible ??

So what I (and the industry) use is called "GitOps". Essentially, you have a git repo, and the software automatically pulls the repo and applies the configs in it.

Here is my gitops repo: https://github.com/moonpiedumplings/flux-config. I use FluxCD for GitOps, but there are other options like Rancher's Fleet or the most popular ArgoCD.
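For a rough idea of what FluxCD needs, these are the two objects that point a cluster at a git repo. This is a hedged sketch: the repo URL, path, and intervals are placeholders, not the layout of the repo linked above.

```yaml
# Tell Flux which repo to watch...
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: homelab
  namespace: flux-system
spec:
  interval: 5m
  url: https://github.com/example/flux-config
  ref:
    branch: main
---
# ...and which directory of manifests to reconcile from it.
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: homelab
  path: ./apps
  prune: true   # delete cluster resources removed from the repo
```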

As a tip, you can search GitHub for pieces of code to reuse. I usually search path:*.y*ml plus some keywords to find appropriate pieces of yaml.

I see little to no example on how to deploy the service containers I want (pihole, navidrome, etc.) to a cluster, unlike docker-compose examples that can be found everywhere. Am I looking for the wrong thing?

So the first issue is that Kubernetes doesn't really have "containers". Instead, the smallest controllable unit in Kubernetes is a "pod", which is a collection of containers that share a network namespace. Of course, pods for selfhosted services like the type this community is interested in will rarely have more than one container in them.
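A multi-container pod makes the shared networking concrete. In this hedged sketch (images and names chosen only for illustration), the sidecar could reach nginx on localhost:80 because both containers share the pod's network namespace:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  containers:
    - name: app
      image: nginx:alpine
      ports:
        - containerPort: 80
    - name: sidecar
      image: busybox
      # same network namespace as "app", so nginx is on localhost:80
      command: ["sh", "-c", "sleep infinity"]
```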

There are ways to convert a docker-compose to a kubernetes pod.

But in general, Kubernetes doesn't use compose files for premade services, but instead helm charts. If you are having issues installing specific helm charts, you should ask for help here so we can iron them out. Helm charts are pretty reliable in my experience, but they do seem to be more involved to set up than docker-compose.

Even the official docs seem broken. Am I really supposed to run many helm commands (some of which just fail) and try to get SSL certs just to have Rancher and its dashboard

So what you're supposed to do is deploy an "ingress" (k3s comes with traefik by default), and then use cert-manager to automatically get Let's Encrypt certs for ingress objects.

Actually, traefik comes with its own way to get SSL certs (in addition to ingresses and cert-manager), so you can look into that as well, but I decided to use the standardized ingress + cert-manager method because it is also compatible with other ingress software.
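To make the ingress + cert-manager combination concrete, here's a hedged sketch; the hostname, issuer name, and backend service are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: navidrome
  annotations:
    # cert-manager watches this annotation and provisions the cert
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: traefik
  rules:
    - host: music.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: navidrome
                port:
                  number: 4533
  tls:
    - hosts:
        - music.example.com
      secretName: navidrome-tls   # cert-manager stores the cert here
```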

Although it seems complex, I've come to really, really love Kubernetes because of features mentioned here. Especially the declarative part, where all my services can be code in a git repo.

[–] [email protected] 1 points 3 days ago

Thanks for the detailed reply! You're not the first to mention gitops for k8s, it seems interesting indeed, I'll be sure to check it!

[–] [email protected] 2 points 4 days ago* (last edited 4 days ago)

For question 1: You can have multiple resource objects in a single file, each resource object just needs to be separated by ---. The small resource definitions help keep things organized when you're working with dozens of precisely configured services. It's a lot more readable than the other solutions out there.

For question 2, unfortunately Docker Compose is much more common than Kubernetes. There are definitely some apps that provide kubernetes documentation, especially Kubernetes operators and enterprise stuff, but Docker-Compose definitely has bigger market share for self-hosted apps. You'll have to get experienced with turning a docker compose example into deployment+service+pvc.
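As a hedged sketch of that translation (all names are placeholders): a compose ports: entry roughly becomes a Service, and a compose named volume roughly becomes a PersistentVolumeClaim, with the Deployment referencing both:

```yaml
# compose "ports: 4533:4533" becomes a Service...
apiVersion: v1
kind: Service
metadata:
  name: navidrome
spec:
  selector:
    app: navidrome
  ports:
    - port: 4533
      targetPort: 4533
---
# ...and a compose named volume becomes a PersistentVolumeClaim.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: navidrome-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 5Gi
```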

Kubernetes does take a lot of the headaches out of managing self-hosted clusters though. The self-healing, smart networking, and batteries-included operators for reverse-proxy/database/ACME all save so much hassle and maintenance. Definitely install ingress-nginx, cert-manager, ArgoCD, and CNPG (in order of difficulty).

Try to write yaml resources yourself instead of fiddling with Helm values.yaml. Usually the developer experience is MUCH nicer.

Feel free to take inspiration/copy from my 500+ container cluster: https://codeberg.org/jlh/h5b/src/branch/main/argo

In my repo, custom_applications are directories with hand-written/copy-pasted yaml files auto-synced via ArgoCD Operator, while external_applications are helm installations, managed via ArgoCD Operator Applications.
