this post was submitted on 17 Feb 2025
74 points (100.0% liked)

Selfhosted


Basically title. I'm in the process of setting up a proper backup for my configured containers on Unraid, and I'm wondering how often I should run my backup script. Right now I have a cron job set to run on Monday and Friday nights; is that too frequent? What's your schedule, and do you strictly back up your appdata (container configs), or is there other data you include in your backups?
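For reference, here's roughly what the cron entry looks like (the script path is illustrative, not my exact setup):

```
# run the backup script at 02:00 on Monday (1) and Friday (5)
0 2 * * 1,5 /boot/config/scripts/backup-appdata.sh
```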

19 comments
[–] hendrik@palaver.p3x.de 3 points 3 months ago* (last edited 3 months ago)

Most backup software allows you to configure backup retention. I think I went with a pretty standard scheme: one backup per day for a week. After that they get deleted, and it keeps just one per week of the older ones, for one or two months. After that it's down to monthly snapshots. I think that aligns well with what I need. Sometimes I find out something broke the day before yesterday, but I don't think I've ever needed a backup from exactly the 12th of December or anything like that, so I'm fine with them getting sparser over time. And I don't need full backups more often than necessary; an incremental backup will do unless there's some technical reason for full ones.
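Expressed as a retention policy in, say, restic (just one example; any tool with retention rules can do this, and the exact counts are illustrative):

```
# keep a week of dailies, ~2 months of weeklies, then monthlies
# (the monthly count here is arbitrary; mine just thin out over time)
restic forget --prune \
    --keep-daily 7 --keep-weekly 8 --keep-monthly 12
```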

But it entirely depends on the use case. Maybe for a server, or stuff you work on, you don't want to lose more than a day, while it can be perfectly alright to back up a laptop once a week, especially if you save your documents in the cloud anyway. Or maybe you're busy during the week and only mess with your server configuration on weekends; in that case you might be alright with taking a snapshot on Fridays. Idk.

(And there are incremental backups, full backups, and filesystem snapshots. On a desktop you could just use something like Time Machine... And you can do different filesystems at different intervals...)

[–] Dagamant@lemmy.world 3 points 3 months ago

Weekly full backup, nightly incremental
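One way such a split can be implemented, e.g. with GNU tar (the tool choice and paths are assumptions, not necessarily what's used here):

```
# weekly full: reset the state file so everything gets archived
rm -f /var/backups/appdata.snar
tar --create --gzip \
    --listed-incremental=/var/backups/appdata.snar \
    --file=/var/backups/full-$(date +%F).tar.gz /srv/appdata

# nightly incremental: with the state file present, tar archives
# only files changed since the previous run
tar --create --gzip \
    --listed-incremental=/var/backups/appdata.snar \
    --file=/var/backups/incr-$(date +%F).tar.gz /srv/appdata
```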

[–] metaStatic@kbin.earth 2 points 3 months ago

Thanks for reminding me to validate.

Daily here also.

[–] Jozav@lemmy.world 1 points 3 months ago

Using Kopia, backups are made multiple times per day to Google Drive; only changes are transferred.

Configurations are backed up once per week (plus manually as needed) and stored for 4 weeks. Website and Nextcloud data are backed up every hour and stored for a year (although I've only been doing this for 7 months so far).

Kopia is magic, recommended!
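The hourly part boils down to something like this (paths and retention numbers are illustrative, not the exact policy):

```
# assumes the Kopia repository is already created and connected;
# keep hourly snapshots for a year (24 * 365)
kopia policy set /srv/nextcloud --keep-hourly 8760

# run from cron every hour; only changed data is uploaded
kopia snapshot create /srv/nextcloud
```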

[–] slazer2au@lemmy.world 18 points 3 months ago (1 children)
[–] metaStatic@kbin.earth 4 points 3 months ago (2 children)
[–] slazer2au@lemmy.world 34 points 3 months ago (5 children)

That is what the B in RAID stands for.

[–] savvywolf@pawb.social 12 points 3 months ago

Daily backups here. Storage is cheap. Losing data is not.

[–] Darkassassin07@lemmy.ca 33 points 3 months ago* (last edited 3 months ago) (2 children)

I run Borg nightly, backing up the majority of the data on my boot disk, including docker volumes and configs plus a few extra folders.

Each individual archive is around 550 GB, but because of the de-duplication and compression only ~800 MB of new data is stored each day, taking around 3 minutes to complete the backup.

Borg's de-duplication is honestly incredible. I keep 7 daily backups, 3 weekly, 11 monthly, then one for each year beyond that. The 21 historical backups I have right now would be 10.98 TB of raw data; after de-duplication and compression they take up only 407.98 GB on disk.

With that kind of space savings, I see no reason not to keep such frequent backups. Hell, the whole archive takes up less space than one copy of the original data.
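That retention maps directly onto Borg's prune flags; roughly (repo path and source dirs are placeholders, not my exact setup):

```
# nightly archive; unchanged chunks are deduplicated away
borg create --stats --compression zstd \
    /mnt/backup/borg::'{hostname}-{now:%Y-%m-%d}' \
    /var/lib/docker /etc /home

# thin old archives down to the schedule above
# (--keep-yearly 99 stands in for "one per year, indefinitely")
borg prune /mnt/backup/borg \
    --keep-daily 7 --keep-weekly 3 --keep-monthly 11 --keep-yearly 99
```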

[–] nichtburningturtle@feddit.org 3 points 3 months ago (1 children)

Timeshift creates a btrfs snapshot on each boot for me, and my server gets nightly Borg backups.

[–] QuizzaciousOtter@lemm.ee 5 points 3 months ago (2 children)

Just a friendly reminder that BTRFS snapshots are not backups.

[–] nichtburningturtle@feddit.org 1 points 3 months ago (1 children)

Yes. That's why I sync my important files to my Nextcloud.

[–] tal@lemmy.today 3 points 3 months ago (2 children)

You're correct, and the person you're responding to is probably treating one as an alternative to the other.

However, filesystem snapshotting can theoretically be used to enable backups, because snapshots provide an instantaneous, consistent view of the filesystem. I don't know if there are backup systems that do this with btrfs today, but it would involve taking a snapshot and then having the backup system back up the snapshot rather than the live view of the filesystem.

Otherwise, stuff like drive images and database files that are being written to while the backup runs can end up as corrupted, inconsistent files in the backup.
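A minimal sketch of that snapshot-then-backup pattern on btrfs (paths and the copy step are hypothetical):

```
# freeze a consistent, read-only view of the live data
btrfs subvolume snapshot -r /data /data/.backup-snap

# back up the frozen view instead of the live filesystem
# (rsync is just a stand-in for whatever backup tool you use)
rsync -a /data/.backup-snap/ /mnt/backup/data/

# drop the snapshot once the backup is done
btrfs subvolume delete /data/.backup-snap
```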

[–] vividspecter@lemm.ee 3 points 3 months ago (1 children)

btrbk essentially works that way. It takes read-only snapshots on a schedule and uses btrfs send/receive to create backups (roughly sketched below).

There's also snapraid-btrfs, which uses snapshots to help minimise write-hole issues with snapraid by creating parity data from snapshots rather than from the raw filesystem.
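The send/receive part, under the hood, is essentially this (snapshot names and paths made up):

```
# first run: transfer the whole snapshot
btrfs send /data/.snapshots/mon | btrfs receive /mnt/backup

# later runs: send only the delta against the parent snapshot,
# which is what makes the incremental transfers cheap
btrfs send -p /data/.snapshots/mon /data/.snapshots/tue \
    | btrfs receive /mnt/backup
```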

[–] tal@lemmy.today 2 points 3 months ago* (last edited 3 months ago)

and uses btrfs send/receive to create backups.

I'm not familiar with that, but if it permits identifying data modified since a given time faster than scanning the filesystem for modified files (something a filesystem could potentially do), that could also be a useful backup enabler, since your scan-for-changes time no longer needs to be linear in the number of files in the filesystem. If you don't have that, your next best bet on Linux -- and this way would be filesystem-agnostic -- is going to require something like a daemon that uses inotify to build some kind of on-disk index of modifications since the last backup, plus a backup system that can understand that index.

looks at btrfs-send(1) man page

Ah, yeah, it does do that. Well, the man page doesn't say what time complexity it runs in, but I assume it's better than linear in the file count of the filesystem.

[–] JASN_DE@lemmy.world 5 points 3 months ago

Nextcloud data daily, same for the docker configs. Less important or rarely changing data once per week. Automatic sync to NAS and online storage; irregular, manual sync to an external disk.

7 daily backups, 4 weekly backups, "infinite" monthly backups retained (until I clean them up by hand).

[–] scrubbles@poptalk.scrubbles.tech 5 points 3 months ago

It boils down to how much you're willing to lose. Personally, I do weekly.

[–] darklamer@lemmy.dbzer0.com 21 points 3 months ago (1 children)
[–] IsoKiero@sopuli.xyz 4 points 3 months ago

Yep. Even if the data I'm backing up doesn't really change that often. Perhaps I should start backing up the files on my laptop and workstation too. Nothing too important is stored only on those devices, but reinstalling and reconfiguring everything would be a bit of a chore.
