this post was submitted on 24 Apr 2024
110 points (99.1% liked)

Selfhosted

40006 readers
552 users here now

A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Rules:

  1. Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.

  2. No spam posting.

  3. Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.

  4. Don't duplicate the full text of your blog or GitHub here. Just post the link for folks to click.

  5. Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).

  6. No trolling.

Resources:

Any issues on the community? Report them using the report flag.

Questions? DM the mods!

founded 1 year ago

I recognize this will vary depending on how much you self-host, so I'm curious about the range of experiences from the few self-hosted things to the many self-hosted things.

Also, how would you compare it to the maintenance of your other systems (e.g. personal computer, phone, etc.)?

top 50 comments
[–] [email protected] 2 points 4 months ago

Almost none, now that I've automated updates and a few other things with Kestra and Ansible. I need to figure out alerting in Wazuh, and then it will probably drop to none.

[–] [email protected] 0 points 6 months ago

Very little. Thanks to Docker + Watchtower I don't even have to check for updates to software. Everything is automatic.
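
(For reference, a Watchtower setup really is just one extra container watching the Docker socket. A minimal compose sketch - illustrative, not necessarily this poster's exact config, and the schedule is my assumption:)

```yaml
# Hypothetical docker-compose.yml fragment: Watchtower polls registries
# and replaces running containers when a newer image is published.
services:
  watchtower:
    image: containrrr/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - WATCHTOWER_CLEANUP=true          # prune old images after updating
      - WATCHTOWER_SCHEDULE=0 0 4 * * *  # 6-field cron: daily at 04:00
    restart: unless-stopped
```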

[–] [email protected] 2 points 6 months ago

Very little. I have enough redundancy through regular snapshots and offsite backups that I'm confident letting Watchtower auto-update most of my containers once a week - the exceptions being Pi-hole and Home Assistant. Pi-hole gets very few updates anyway, and I tend to skip the mid-month Home Assistant updates, so that's just a once-a-month check for breaking changes before pushing the button.
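
(For anyone replicating this: Watchtower's opt-out is a per-container label, so "auto-update everything except Pi-hole and Home Assistant" looks roughly like this hypothetical fragment:)

```yaml
# Containers carrying this label are skipped by Watchtower and left
# for manual updates.
services:
  pihole:
    image: pihole/pihole
    labels:
      - com.centurylinklabs.watchtower.enable=false
```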

Meanwhile my servers' host OSes are stable LTS distros that require very little maintenance in and of themselves.

Ultimately I like to tinker, but once I'm done tinkering I want things to just work with very little input from me.

[–] [email protected] 2 points 6 months ago

I've got an RPi and other SBCs. Once a month I make a copy of the MicroSD card, since the data lives on the HDD.

[–] [email protected] 5 points 6 months ago

Mostly nothing, except for Home Assistant, which seems to shit the bed every few months. My other services are Docker containers or Proxmox LXCs that just work.

[–] [email protected] 6 points 6 months ago

It's as much or as little as you want it to be. If you don't want to change anything, you can use something like Debian and only do a big upgrade once every 5 years (and you could even skip that).

I personally spend a little more time, by choice, because I use Gentoo. But if I'm busy, I can keep maintenance down to just running routine updates every couple of weeks or so.

[–] [email protected] 3 points 6 months ago (1 children)

For my local media server? Practically none. Maybe restart the system once a month if it starts getting slow. Clear the cache, etc.

When I hosted game servers: Depending on the game, you may have to fix something every few hours. Arma 3 is, by far, the worst. Which really sucks because the games can last really long, and it can be annoying to save and load with the GM tool thing.

[–] [email protected] 1 points 6 months ago (1 children)

> When I hosted game servers: Depending on the game, you may have to fix something every few hours. Arma 3 is, by far, the worst. Which really sucks because the games can last really long, and it can be annoying to save and load with the GM tool thing.

Was that a mix of games being more involved and the way their server software was set up, from what you could tell, or...?

[–] [email protected] 2 points 6 months ago

A bit of both. It really depends on the game. Some games are super simple, just launch an executable and hand out the IP. Others are needlessly complicated or just horribly coded. My example game is just an absolute mess all around even just as a player; running a server is no different. And since the actual game is all user-made, sometimes the problem is the server software, and sometimes it's how the mission you're running was coded. Sometimes it's both.

[–] [email protected] 2 points 6 months ago

Depends what you're doing. Something like keeping the base OS patched takes pretty much nil effort. Some apps are more problematic than others: Home Assistant is always a pain to upgrade, while something like Postfix requires nearly zero maintenance.

[–] [email protected] 2 points 6 months ago

Sometimes it's really easy and I'm taking a month off and nothing breaks. Then I have times where I want to add new services or optimize stuff, and that can take forever. Right now I'm building object storage behind a VPN.

[–] [email protected] 4 points 6 months ago (1 children)

If you’re not publicly exposing things? I can go months without touching it. Then go through and update everything in an hour or so on the weekend.

[–] [email protected] 3 points 6 months ago

Like 1 hour every two months or so. I just run an Ansible playbook and check that everything is working OK.
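
(For anyone who hasn't tried this: the playbook can be tiny. A minimal sketch for Debian/Ubuntu hosts - the inventory group name is invented:)

```yaml
# update.yml - hypothetical example; run with: ansible-playbook update.yml
- hosts: homelab            # assumed inventory group
  become: true
  tasks:
    - name: Upgrade all apt packages
      ansible.builtin.apt:
        update_cache: true
        upgrade: dist

    - name: Check whether a reboot is required
      ansible.builtin.stat:
        path: /var/run/reboot-required
      register: reboot_required

    - name: Report hosts that need a reboot
      ansible.builtin.debug:
        msg: "Reboot required"
      when: reboot_required.stat.exists
```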

[–] [email protected] 5 points 6 months ago

I spend a huge amount of time configuring and setting up stuff, as it's my biggest hobby. But I've gotten good enough that when I set something up, it can stay for months without any maintenance. The most I do to keep things running is add more storage if something turns out to be used more than planned.

[–] [email protected] 3 points 6 months ago

It's very minimal in normal use, maybe like an hour or two a month at most.

[–] [email protected] 4 points 6 months ago

Too much, just, too much

[–] [email protected] 4 points 6 months ago* (last edited 6 months ago)

Maybe 1 hr every month or two to update things.

Things like my OPNsense router are best updated when no one else is using the network.

The Docker containers I like to update manually after checking the release notes. It doesn't take long, and I often find out about cool new features while perusing them.

Projects will sometimes have major updates that break things and I strongly prefer having everything super stable until I have time to sit down and update.

11 stacks, 30+ containers. Borg backup runs automatically to various repositories. zfs-auto-snapshot also runs automatically to create frequent snapshots.
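
(The comment doesn't say how the Borg runs are scheduled; borgmatic is one common way to drive Borg against several repositories on a timer. A rough, hypothetical config sketch - paths invented:)

```yaml
# Hypothetical /etc/borgmatic/config.yaml (borgmatic 1.8+ flat format)
source_directories:
  - /srv/appdata
repositories:
  - path: /mnt/backup/borg           # local repository
  - path: ssh://user@offsite/./borg  # offsite repository
keep_daily: 7
keep_weekly: 4
keep_monthly: 6
```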

I use Unraid as a NAS and Proxmox for Docker and VMs.

[–] [email protected] 3 points 6 months ago

Maybe 1-2 hours a week for ~23 Docker containers, 3 LXCs and Proxmox, so not much. Most of that time is spent SSHing in to do minor updates. Running Debian on everything has been amazing; stability is just phenomenal.

[–] [email protected] 1 points 6 months ago

That must be why it stopped working ;-)

Does 48 hours of not getting a reverse proxy working count?

It's FreeNAS and I don't really host anything but the Plex server… so 48 hours.

If deleting files counts, 10 days a year; if not, 1 day a year.

[–] [email protected] 5 points 6 months ago* (last edited 6 months ago)

Minimal. I have to force myself to check the servers for updates at least once a week.

The main thing for me is that I automated Podman and Docker updates with their respective auto-update mechanisms and use ntfy for push notifications, so if a service stops working and it had a recent update, I know it's an update issue.
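
(For the Podman half, the built-in mechanism is label-driven: containers managed as systemd units and carrying the label below get pulled and restarted by the `podman-auto-update.timer`. A hypothetical fragment - the service name and image are invented:)

```yaml
# Compose-style sketch. Note: Podman's auto-update only acts on
# containers run as systemd units; the label marks them as eligible.
services:
  myapp:
    image: docker.io/library/nginx:latest
    labels:
      - io.containers.autoupdate=registry
```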

I also have uptime monitoring with Uptime Kuma to catch services not working before I do, again with ntfy for push notifications.

I also have Grafana + Prometheus set up on my biggest server for monitoring, and alerting with Alertmanager + mail to get notified about even more errors.
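
(The Alertmanager-to-mail part is one small YAML file. An illustrative minimal config - hosts and addresses made up:)

```yaml
# Hypothetical alertmanager.yml: route every alert to a single
# email receiver.
route:
  receiver: mail
receivers:
  - name: mail
    email_configs:
      - to: admin@example.com
        from: alertmanager@example.com
        smarthost: smtp.example.com:587
        auth_username: alertmanager@example.com
        auth_password: changeme
```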

So in general I only have to worry about the occasional error every few months, and updates of the host system (Debian).

[–] [email protected] 2 points 6 months ago* (last edited 6 months ago)

30 Docker stacks

5 minutes a day for updates and checking GitHub for release notes

15 minutes a day "acquiring" stuff for the server

[–] [email protected] 3 points 6 months ago

My mini-pc with Debian runs RunTipi 24/7 with Navidrome, Jellyfin and Tailscale. Once every 2-3 weeks I plug in the monitor to run updates and add/remove some media.

[–] [email protected] 5 points 6 months ago

As a complete noob trying to build a TrueNAS server: none, and then suddenly lots when idk how to fix something that broke.

[–] [email protected] 7 points 6 months ago

As others said, the initial setup may consume some time, but once it's running, it just works. I dockerize almost everything and have automatic backups set up.

[–] [email protected] 8 points 6 months ago

A lot less since I started using Docker instead of running separate VMs for everything. Fewer systems to update is bliss.

[–] [email protected] 3 points 6 months ago

Not much for myself, like many others. But my backups are manual. I have an external drive I back up to and then unplug, as I intentionally want to keep it completely isolated from the network in case of a breach. Because of that, maybe 10 minutes a week? I'm running Gentoo with tons of scripts and Docker containers that update automatically. The only time I need to intervene in the updates is when my script sends me a push notification about an eselect news item (like a major upcoming update) or a kernel update.

I also use custom monitoring software I wrote that ties into a MySQL DB, which Grafana connects to, for general software and network alerts (new devices connecting to the network, suspicious DNS requests, suspicious ports, suspicious countries being reached out to, like China, etc.) or hardware failures (like a RAID drive failing)… So yeah, automate if you know how to script or program, and you'll be pretty much worry-free most of the time.

[–] [email protected] 1 points 6 months ago

@[email protected] Not much, tbh. I host email, a git server, ActivityPub, a change detector, healthchecks, Libreddit and another dozen services on 3 different servers.

Every now and then I manually check the backups, because that is the sane thing to do. I also try out new services in Docker, but that is less and less common, tbh.

[–] [email protected] 4 points 6 months ago* (last edited 6 months ago) (1 children)

I run two local physical servers, one production and one dev (and a third prod2 kept in case of a prod1 failure), and two remote production/backup servers all running Proxmox, and two VPSs. Most apps are dockerised inside LXC containers (on Proxmox) or just docker on Ubuntu (VPSs). Each of the three locations runs a Synology NAS in addition to the server.

Backups run automatically, and I manually run apt updates on everything each weekend with a single Ansible playbook. Every host runs a little golang program that exposes the memory and disk use percent as a JSON endpoint (see the sketch after the list below), and I use two instances of Uptime Kuma (one local, and one on fly.io) to monitor all of those with keywords.

So -

  • Weekly: 10 minutes to run the update playbook, and I usually SSH into the VPSs, have a look at the Fail2Ban stats and reboot them if needed. I also look at each of the Proxmox GUIs to check the backups have been working as expected.
  • Monthly: stop the local prod machine and switch to the prod2 machine (from backups) for a few days. Probably 30 minutes each way, most of it waiting for backups.
  • From time to time (if I hear of a security update), but generally every three months: look through my container versions and see if I want to update them. They're on docker compose, so the steps are just backup the LXC, docker down, pull, up - probs 5 minutes per container.
  • Yearly: consider whether I need to update operating systems - e.g. to Proxmox 8, or a new Debian or Ubuntu LTS.
  • Yearly: visit the remotes and do a proper check/clean-up/update.
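
For the curious, that stats endpoint is easy to replicate. A minimal, illustrative Go sketch - not the poster's actual program; Linux-only, with the port and JSON field names invented - that Uptime Kuma can watch with an HTTP(s) - Keyword monitor:

```go
// Hypothetical stats endpoint for Uptime Kuma keyword monitoring.
// Linux-only: memory comes from /proc/meminfo, disk from statfs(2).
package main

import (
	"bufio"
	"fmt"
	"log"
	"net/http"
	"os"
	"strconv"
	"strings"
	"syscall"
)

// memUsedPercent parses MemTotal and MemAvailable from /proc/meminfo.
func memUsedPercent() (float64, error) {
	f, err := os.Open("/proc/meminfo")
	if err != nil {
		return 0, err
	}
	defer f.Close()

	vals := map[string]float64{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text()) // e.g. "MemTotal: 16338524 kB"
		if len(fields) >= 2 {
			if n, err := strconv.ParseFloat(fields[1], 64); err == nil {
				vals[strings.TrimSuffix(fields[0], ":")] = n
			}
		}
	}
	total, avail := vals["MemTotal"], vals["MemAvailable"]
	if total == 0 {
		return 0, fmt.Errorf("MemTotal not found")
	}
	return 100 * (total - avail) / total, nil
}

// diskUsedPercent asks the kernel how full the filesystem at path is.
func diskUsedPercent(path string) (float64, error) {
	var st syscall.Statfs_t
	if err := syscall.Statfs(path, &st); err != nil {
		return 0, err
	}
	total := float64(st.Blocks) * float64(st.Bsize)
	free := float64(st.Bavail) * float64(st.Bsize)
	return 100 * (total - free) / total, nil
}

func main() {
	http.HandleFunc("/status", func(w http.ResponseWriter, r *http.Request) {
		mem, _ := memUsedPercent()
		disk, _ := diskUsedPercent("/")
		// A keyword monitor can then alert when `"ok":true`
		// disappears from the response body.
		ok := mem < 90 && disk < 90
		w.Header().Set("Content-Type", "application/json")
		fmt.Fprintf(w, `{"ok":%t,"mem_used_percent":%.1f,"disk_used_percent":%.1f}`,
			ok, mem, disk)
	})
	log.Fatal(http.ListenAndServe(":9090", nil))
}
```
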
[–] [email protected] 2 points 6 months ago

love fly.io

fun fact, lemdro.id is hosted entirely on fly.io
