Almost none now that I automated updates and a few other things with Kestra and Ansible. I need to figure out alerting in Wazuh, and then it will probably drop to none.
Selfhosted
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.
Rules:
- Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.
- No spam posting.
- Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.
- Don't duplicate the full text of your blog or GitHub here. Just post the link for folks to click.
- Submission headline should match the article title (don't cherry-pick information from the title to fit your agenda).
- No trolling.
Resources:
- selfh.st Newsletter and index of selfhosted software and apps
- awesome-selfhosted software
- awesome-sysadmin resources
- Self-Hosted Podcast from Jupiter Broadcasting
Any issues on the community? Report it using the report flag.
Questions? DM the mods!
Very little. Thanks to Docker + Watchtower I don't even have to check for updates to software. Everything is automatic.
Very little. I have enough redundancy through regular snapshots and offsite backups that I'm confident enough to let Watchtower auto-update most of my containers once a week - the exceptions being pihole and Home Assistant. Pihole gets very few updates anyway, and I tend to skip the mid-month Home Assistant updates so that's just a once a month thing to check for breaking changes before pushing the button.
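A weekly Watchtower schedule with per-container opt-outs like this can be sketched using Watchtower's cron schedule and its exclusion label (container names and the exact schedule here are illustrative, not the commenter's actual setup):

```shell
# Run Watchtower once a week (Sunday 04:00); the schedule uses
# Watchtower's 6-field cron format (seconds first).
docker run -d --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e WATCHTOWER_SCHEDULE="0 0 4 * * 0" \
  containrrr/watchtower

# Opt a container out of auto-updates (e.g. Home Assistant),
# so it only updates when you do it by hand:
docker run -d --name homeassistant \
  --label com.centurylinklabs.watchtower.enable=false \
  ghcr.io/home-assistant/home-assistant:stable
```

The label-based opt-out is what makes the "auto-update everything except Pi-hole and Home Assistant" pattern work without separate tooling.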
Meanwhile my servers' host OSes are stable LTS distros that require very little maintenance in and of themselves.
Ultimately I like to tinker, but once I'm done tinkering I want things to just work with very little input from me.
I've got an RPi and other SBCs. Once a month I make a copy of the MicroSD card, since the data lives on the HDD.
Mostly nothing, except for Home Assistant, which seems to shit the bed every few months. My other services are Docker containers or Proxmox LXCs that just work.
It's as much or as little as you want to. If you don't want to change anything, you can use something like debian and only maintain once every 5 years (and you could even skip that).
I personally spend a little more, by choice, because I use gentoo. But if I'm busy, I can avoid maintenance by only running routine updates every couple of weeks or so.
For my local media server? Practically none. Maybe restart the system once a month if it starts getting slow. Clear the cache, etc.
When I hosted game servers: Depending on the game, you may have to fix something every few hours. Arma 3 is, by far, the worst. Which really sucks because the games can last really long, and it can be annoying to save and load with the GM tool thing.
Was that a mix of games being more involved and the way their server software was set up, from what you could tell, or...?
A bit of both. It really depends on the game. Some games are super simple, just launch an executable and hand out the IP. Others are needlessly complicated or just horribly coded. My example game is just an absolute mess all around even just as a player; running a server is no different. And since the actual game is all user-made, sometimes the problem is the server software, and sometimes it's how the mission you're running was coded. Sometimes it's both.
Depends on what you're doing. Something like keeping the base OS patched is pretty much nil effort. Some apps are more problematic than others: Home Assistant is always a pain to upgrade, while something like Postfix requires nearly zero maintenance.
Sometimes it's real easy: I'm taking a month off and nothing breaks. Then I have times where I want to add new services or optimize stuff, and that can take forever. Right now I'm building object storage behind a VPN.
If you’re not publicly exposing things? I can go months without touching it. Then go through and update everything in an hour or so on the weekend.
Like 1 hour every two months or so, I just run an ansible playbook and check everything is working ok
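The "run a playbook, check everything is fine" routine can be sketched with ad-hoc Ansible commands (the inventory group name `homelab` is an assumption):

```shell
# Upgrade all packages on every host in the group, with sudo.
ansible homelab -b -m ansible.builtin.apt \
  -a "upgrade=dist update_cache=yes"

# Quick health pass afterwards: systemd's overall state per host.
ansible homelab -b -m ansible.builtin.command \
  -a "systemctl is-system-running"
```

A real playbook would wrap the same modules with handlers for reboots, but the ad-hoc form shows the core of a bimonthly maintenance hour.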
I spend a huge amount of time configuring and setting up stuff, as it's my biggest hobby. But I've gotten good enough that when I set something up, it can stay for months without any maintenance. Most of what I do to keep things up is adding more storage if something turns out to be used more than planned.
It's very minimal in normal use, maybe like an hour or two a month at most.
Too much, just, too much
Maybe 1 hr every month or two to update things.
Things like my OPNsense router are best updated when no one else is using the network.
The docker containers I like to update manually after checking the release logs. Doesn't take long and I often find out about cool new features perusing the release notes.
Projects will sometimes have major updates that break things and I strongly prefer having everything super stable until I have time to sit down and update.
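The manual "read the release notes, then update" workflow for a Compose stack boils down to a few commands (the stack path is illustrative):

```shell
# After checking the release notes for breaking changes:
cd /opt/stacks/nextcloud   # illustrative stack directory
docker compose pull        # fetch newer images
docker compose up -d       # recreate only containers whose image changed
docker image prune -f      # drop the superseded images
```

Because `up -d` only recreates containers whose image or config changed, untouched services in the stack keep running.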
11 stacks, 30+ containers. Borg backups run automatically to various repositories. zfs-auto-snapshot also runs automatically to create rapid snapshots.
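The Borg-plus-zfs-auto-snapshot automation can be sketched as two cron entries (paths, repo host, and dataset names are assumptions; note that `%` is special in crontab, so Borg's plain `{now}` placeholder is used instead of a strftime format):

```shell
# Nightly offsite Borg archive of the app data directory.
0 3 * * * borg create --stats ssh://backup-host/./borg-repo::'{hostname}-{now}' /srv/appdata

# Hourly local ZFS snapshots, keeping the last 24.
0 * * * * zfs-auto-snapshot --quiet --syslog --label=hourly --keep=24 tank/appdata
```

The snapshots give fast local rollback; the Borg repositories cover actual disasters.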
I use unraid as a nas and proxmox for dockers and VMs.
Maybe 1-2 hours a week for ~23 Docker containers, 3 LXCs and Proxmox, so not much. Most of that time is spent SSH-ing in and doing minor updates. Running Debian on everything has been amazing. Stability is just phenomenal.
That must be why it stopped working ;-)
Does 48 hours not getting a reverse proxy working count?
It’s FreeNAS and I don’t really host anything but the Plex server… so 48 hours.
If deleting files counts 10 days a year, if not 1 day a year.
Minimal. I have to force myself to check the servers for updates at least once a week.
The main thing for me is that I automated podman and Docker updates with their respective auto-update mechanisms and use ntfy for push notifications, so if a service stops working and it recently had an update, I know it's an update issue.
I also have an uptime monitor with Uptime Kuma to watch the state of my services and catch them not working before I do, again with ntfy for push notifications.
I also have Grafana + Prometheus set up on my biggest server for monitoring, with Alertmanager + mail for alerting, to get notifications on even more errors.
So in general I only have to worry about the occasional error every few months, plus updates of the host system (Debian).
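A failed check surfacing as a phone notification via ntfy is a one-liner (the topic name and message are illustrative):

```shell
# Publish an alert to an ntfy topic; any subscriber to the topic
# gets a push notification. Title/Priority are ntfy publish headers.
curl -s \
  -H "Title: service down on $(hostname)" \
  -H "Priority: high" \
  -d "container exited after last auto-update, check the logs" \
  https://ntfy.sh/my-homelab-alerts
```

Uptime Kuma has a built-in ntfy notification type, so the same topic can collect alerts from both hand-rolled scripts and the uptime monitor.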
30 docker stacks
5mins a day involving updates and checking github for release notes
15 minutes a day "acquiring" stuff for the server
My mini-pc with Debian runs RunTipi 24/7 with Navidrome, Jellyfin and Tailscale. Once every 2-3 weeks I plug in the monitor to run updates and add/remove some media.
As a complete noob trying to make a TrueNAS server: none, and then suddenly lots when I don't know how to fix something that broke.
As others said, the initial setup may consume some time, but once it's running, it just works. I dockerize almost everything and have automatic backups set up.
A lot less since I started using docker instead of running separate vms for everything. Less systems to update is bliss.
Not much for myself, like many others. But my backups are manual. I have an external drive I back up to and then unplug, as I intentionally want to keep it completely isolated from the network in case of a breach. Because of that, maybe 10 minutes a week? I'm running Gentoo with tons of scripts and Docker containers that update automatically. The only time I need to intervene in the updates is when my script sends me a push notification about an eselect news item (like a major upcoming update) or a kernel update.
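The eselect-news check in a script like that can be sketched in a few lines; the push mechanism here is ntfy purely for illustration (the commenter's actual notification method isn't specified):

```shell
# If Gentoo has published unread news items (often announcing
# breaking changes), push a notification instead of auto-updating.
if [ "$(eselect news count new)" -gt 0 ]; then
  curl -s -d "New eselect news item(s) - review before updating" \
    https://ntfy.sh/my-gentoo-box
fi
```

Run from cron before the routine `emerge` job, this turns "silent automatic updates" into "automatic unless Gentoo says a human should look first".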
I also use custom monitoring software I wrote that ties into a MySQL db that Grafana connects to, covering general software and network alerts (new devices connecting to the network, suspicious DNS requests, suspicious ports, suspicious countries being reached out to like China, etc.) as well as hardware failures (like a RAID drive failing). So yeah, automate if you know how to script or program, and you'll be pretty much worry-free most of the time.
@[email protected] Not much tbh. I host email, a git server, ActivityPub, a change detector, healthchecks, libreddit and another dozen services on 3 different servers.
Every now and then I check manually the backups, because it is the sane thing to do. Also I try some new services on docker, but that is less and less common tbh.
I run two local physical servers, one production and one dev (and a third prod2 kept in case of a prod1 failure), and two remote production/backup servers all running Proxmox, and two VPSs. Most apps are dockerised inside LXC containers (on Proxmox) or just docker on Ubuntu (VPSs). Each of the three locations runs a Synology NAS in addition to the server.
Backups run automatically, and I manually run apt updates on everything each weekend with a single ansible playbook. Every host runs a little golang program that exposes the memory and disk use percent as a JSON endpoint, and I use two instances of Uptime Kuma (one local, and one on fly.io) to monitor all of those with keywords.
So -
- Weekly: 10 minutes to run the update playbook, and I usually ssh into the VPSs, have a look at the Fail2Ban stats and reboot them if needed. I also look at each of the Proxmox GUIs to check the backups have been working as expected.
- Monthly: stop the local prod machine and switch to the prod2 machine (from backups) for a few days. Probably 30 minutes each way, most of it waiting for backups.
- From time to time (if I hear of a security update), but generally every three months: Look through my container versions and see if I want to update them. They're on docker compose so the steps are just backup the LXC, docker down, pull, up - probs 5 minutes per container.
- Yearly: consider if I need to do operating systems - eg to Proxmox 8, or a new Debian or Ubuntu LTS
- Yearly: visit the remotes and have a proper check/clean up/updates
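The little metrics endpoint mentioned above is a Go program the commenter didn't share; the payload it exposes can be sketched in shell (key names are assumptions):

```shell
# Memory use percent from /proc/meminfo (Linux-specific).
mem_pct=$(awk '/MemTotal/ {t=$2} /MemAvailable/ {a=$2} END {printf "%d", (t-a)*100/t}' /proc/meminfo)

# Root filesystem use percent from GNU df, digits only.
disk_pct=$(df --output=pcent / | tail -1 | tr -dc '0-9')

# Emit the JSON an HTTP handler would serve.
printf '{"memory_used_percent": %s, "disk_used_percent": %s}\n' "$mem_pct" "$disk_pct"
```

Uptime Kuma's keyword monitor can then poll the endpoint and alert when the body stops matching, or when a scripted threshold check changes the payload.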
love fly.io
fun fact, lemdro.id is hosted entirely on fly.io