this post was submitted on 12 Dec 2024
46 points (81.9% liked)

Selfhosted


Hello! πŸ˜€
I want to share my thoughts on Docker and maybe discuss it a bit!
I started my homelab a few months ago, and like any good "homelabbing guy" I absolutely loved using Docker. Simple to deploy and everything. Sadly, my mind has been changing lately... I recently switched to LXC containers to make backups easier, and the experience is pretty great; the only downside is that not every piece of software is available natively outside of Docker πŸ™ƒ
I also switched to get more control, since Docker can make it difficult to set up things the devs didn't really plan for.
So those are my thoughts: I'm slowly going to leave Docker for a more old-school way of hosting services. Don't get me wrong, Docker is awesome for some use cases; the main ones are that it's really portable and simple to deploy, with no hundreds of dependencies, etc. And through this I think I've figured out where Docker is actually useful: not for every single homelab setup, and mine isn't one of them.

Maybe I'm doing something wrong, but I'll let you talk about it in the comments, thx.

(page 2) 31 comments
[–] [email protected] 2 points 1 week ago* (last edited 1 week ago)

Docker is good when combined with gVisor runtime for better isolation.

What is gVisor? gVisor is an application kernel, written in memory-safe Go, that emulates most system calls and massively reduces the attack surface of the host kernel. This matters because with plain Docker the host and the container share the same kernel, and Docker runs rootful: root inside a Docker container is the same as root on the host once a sandbox escape is found. That can happen when a container image requires unsafe permissions such as access to the Docker socket. gVisor protects against privilege escalation by only using root at startup and never handing root over to the guest.
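
In practice, opting a container into gVisor is a per-service setting. Here's a rough compose sketch (it assumes runsc is installed and registered as a runtime in /etc/docker/daemon.json; the service name, image and port are placeholders):

```yaml
# Hypothetical example: run one container under gVisor's runsc runtime
# instead of the default runc. Assumes runsc is already installed and
# registered as a runtime in /etc/docker/daemon.json.
services:
  web:
    image: nginx:alpine   # placeholder image
    runtime: runsc        # sandbox this container with gVisor
    ports:
      - "8080:80"
```

Since runsc doesn't implement every kernel feature, containers that need things like Docker socket access may not run under it, which is why it's opt-in per service.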

Sydbox OCI runtime is also cool and faster than gVisor (both are quick)

[–] [email protected] 31 points 1 week ago

I'm actually doing the opposite :)

I've been using VMs, LXC containers and Docker for years. In the last 3 years or so, I've slowly moved to just Docker containers. I still have a few VMs, of course, but they only run Docker :)

Containers are a breeze to update; there's no dependency hell, no separate VMs for each app...

More recently, I've been trying out kubernetes. Mostly to learn and experiment, since I use it at work.

[–] [email protected] 2 points 1 week ago

I like reminding people that with every new technology, the old one is still around. The new gets most of the attention, but the old is still kicking. (We still have wire wrapped programs kicking around.)

You are all good. Spend your limited attention on other things.

[–] [email protected] 7 points 1 week ago (2 children)

I run Podman via home-manager configs. I could run the services natively, but currently I have a separate user for each service that runs its Podman containers. This way each service is securely isolated from the others and from the rest of the system. Maybe if/when NixOS supports good SELinux rules I'll switch back to running things natively.

[–] [email protected] 9 points 1 week ago (1 children)

Are you using docker compose files? Backup should be easy: you have your compose files that configure the containers, and those can easily be committed somewhere or backed up.

Data should be volume mounted into the container, and then the host disk can be backed up.
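
As a rough sketch of that layout (service name, image and paths are placeholders, adjust to whatever you actually run): the compose file gets committed to git, and the bind-mounted directories are covered by whatever already backs up the host disk:

```yaml
# Hypothetical example: the compose file is committed to git, while the
# app's config and data sit in plain host directories that a normal host
# backup (rsync, restic, ZFS snapshots, ...) picks up.
services:
  app:
    image: ghcr.io/example/app:latest   # placeholder image
    volumes:
      - ./config:/config   # bind mount: app config lives next to the compose file
      - ./data:/data       # bind mount: app data lives on the host disk
    restart: unless-stopped
```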

The only app that I've had to fight docker on is Seafile, and even that works quite well now.

[–] [email protected] 1 points 1 week ago (2 children)

Using docker compose, yeah. I find it hard to tweak the network and the app settings; it's like putting obstacles in my road.

[–] [email protected] 3 points 1 week ago* (last edited 1 week ago) (1 children)

Docker as a technology is a misguided mess but it is an effective tool.

Podman is a much better design that solves the same problem.

Containers can be used well or very poorly.

Docker makes it easy to ship something without knowing anything about systems engineering, which some see as an advantage, but I don't.

At my shop, we use almost no public container images because they tend to be a security nightmare.

We build our own images in-house with strict rules about what can go inside. Otherwise it would be absolute chaos.

[–] [email protected] 9 points 1 week ago* (last edited 1 week ago)

Its networking is a bit hard to tweak, but I also don't find I need to most of the time. And when I do, it's usually just setting the network to host and calling it done.
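
For reference, in a compose file that's a single line (a sketch with a placeholder image; host networking behaves this way on Linux hosts):

```yaml
# Hypothetical example: skip Docker's bridge network and NAT entirely and
# let the container share the host's network stack. No port mappings are
# needed (or honoured) in this mode.
services:
  app:
    image: nginx:alpine   # placeholder image
    network_mode: host    # container binds directly to host ports
```

The trade-off is that you lose per-container network isolation and port remapping, so it's best kept for services where the bridge network genuinely gets in the way.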

[–] [email protected] 2 points 1 week ago (2 children)

Yeah, when I got started I initially put everything in Docker because that's what I was recommended to do, but after a couple of years I moved everything out again because of the increased complexity, especially around networking, and because you now have to deal with the way Docker does things. I wasn't getting anything out of it that would make up for that.

When I moved everything out back then I was running Gentoo on my servers; by now it's NixOS, because of the declarative service configuration, which shines especially in a server environment. If you want easy service setup, which is what people usually say they like about Docker, it's definitely worth a try. It can be as simple as "services.foo.enable = true".

(To be fair, NixOS has complexity too, but most of it is in learning how the configuration language that builds your operating system works, not in the actual system itself, which is mostly standard apart from the store. A NixOS service module generates a normal systemd service plus, potentially, other files in the filesystem.)

[–] [email protected] 3 points 1 week ago

NixOS definitely deserves a try

[–] [email protected] 1 points 1 week ago (3 children)

I don't like Docker. It's hard to update containers, hard to modify specific settings, hard to configure network settings; overall I've had a bad experience. It's fantastic for quickly spinning things up, but for long-term use and customizing it to work well with all my services, I find it lacking.

I just create Debian containers or VMs for my different services using Proxmox. I have full control over all the settings, which I didn't have with Docker.

[–] [email protected] 9 points 1 week ago (4 children)

What do you mean it's hard to update containers?

[–] [email protected] 1 points 1 week ago

the good old way is not that bad

[–] [email protected] 3 points 1 week ago (1 children)

I love Docker, and backups are a breeze if you're using ZFS or BTRFS with volume sending. That's the one bummer about Docker: it relies on you to back things up instead of having its own native backup system.

[–] [email protected] 2 points 1 week ago (4 children)

What are you hosting on Docker? Are you configuring your apps afterwards? Did you use prebuilt images or build them yourself?

[–] [email protected] 3 points 1 week ago (4 children)

I use the *arr suite, a Project Zomboid server, a Foundry VTT server, Invoice Ninja, Immich, Nextcloud, qBittorrent, and Caddy.

I pretty much only use prebuilt images; I run them like appliances. Anything custom I'd run in a VM with snapshots, as my Docker skills don't run that deep.

[–] [email protected] 30 points 1 week ago (5 children)

It's hard for me to tell whether I'm just set in my ways because of how I used to do things, but I feel exactly the same.

I think Docker started as "we're doing things at massive scale, and we need to have a way to spin up new installations automatically and reliably." That was good.

It's now become "if I automate the installation of my software, it doesn't matter that the whole thing is a teetering mess of dependencies and scripted hacks, because it'll all be hidden inside the container, and also people with no real understanding can just push the button and deploy it."

I forced myself to learn how to use Docker to install a few things, found it incredibly hard to do anything of consequence to the software inside the container, and for my use case it added extra complexity for no reason, so I mostly abandoned it.

[–] [email protected] 2 points 1 week ago

I agree, Docker can be simple but it can be a real pain too. Good old scripts are the way to go in my opinion, but I kinda like LXC containers compared to Docker; the principle of containerization is surely great, just maybe not the way Docker does it... (maybe Distrobox could be good too 🀷)

Docker is absolutely good when you have to scale your environment, but I think you should build your own images and not use prebuilt ones.

[–] [email protected] 6 points 1 week ago (3 children)

I can recommend NixOS. It's quite simple if the application you want is already packaged in NixOS. Otherwise it requires quite some knowledge to get it to work anyway.

[–] [email protected] 15 points 1 week ago* (last edited 1 week ago) (1 children)

Yeah, it's either 4 lines and you've got some service running... or you need to learn a functional language, fight the software project to make it behave on an immutable filesystem, and google 2 pages of boilerplate code to package it... I've rarely had anything in between. πŸ˜†

[–] [email protected] 7 points 1 week ago

Hey now, you can also work through 20 pages of documentation and 10 pages of blogs/forums/GitHub^1^ and implement a whole Nix module, so that you only need to write a further 3 lines to activate the service.

^1^ Your brain can have a little source code, as a threat.

[–] [email protected] 1 points 1 week ago

One day I will try, this project seems interesting!

[–] [email protected] 10 points 1 week ago* (last edited 1 week ago) (2 children)

Honestly, after using Docker and containerization for more than a decade, my home setups are just YunoHost or bare metal (a small Pi) with some periodic backups. I care more about my own time now than about my home setup, and I want things to just be stable. It's been good for a couple of years now, with nothing needed beyond some quick updates. You don't have to deal with infra changes on updates, you don't have to deal with slowdowns, and everything works pretty well.

At work it's different: Docker, Kubernetes, etc. are awesome because they deal gracefully with dependencies, multiple deploys per day, and large infra. But I'll be the first to admit that takes a bit more manpower and monitoring systems that are much better than a small home setup's.

[–] [email protected] 5 points 1 week ago (1 children)

I tend to agree with your opinion, but lately YunoHost has quite a few broken apps, they're not very fast on updates, and there aren't many active developers. Hats off to them though, because they're doing the best they can!

[–] [email protected] 4 points 1 week ago (1 children)

I have to agree, the community seems to come and go. Some apps get daily updates and some have been updated only once. If I were to start a new server, I would probably still pick YunoHost, but remove some of the older apps and handle them as one-offs. The Lemmy one, for example, is stuck on a VERY old version. However, the GotoSocial app is updated every time there is an update in the main repo.

Still super good support for something that is free and open source. Stable too :) but sometimes stability means old.

[–] [email protected] 4 points 1 week ago (1 children)

I haven't really tried YunoHost; is it basically a simple self-hostable cloud server?

[–] [email protected] 3 points 1 week ago (1 children)

Basically. It's just a Debian server with some really good niceties.

[–] [email protected] 3 points 1 week ago (2 children)

Yeah, I think that in the end, even if it seems a bit "retro", the "normal install" with periodic backups/updates on a default VM (or even LXC containers) is the best to use, the most stable and configurable.

[–] [email protected] 1 points 1 week ago (1 children)

Do you use any sort of RAID? Recently I've been using an old SSD, but back 9-ish years ago I used to back everything up with a RAID system, and it took too much time to keep up.

[–] [email protected] 4 points 1 week ago (1 children)

I have a RAID 1 on the Proxmox host to back up VMs and their data.

[–] [email protected] 1 points 1 week ago

nice.

I need to get something dead simple/no cloud etc... Just shopping around.
