this post was submitted on 17 Feb 2025
74 points (100.0% liked)

Selfhosted


Basically title. I'm in the process of setting up a proper backup for my configured containers on Unraid, and I'm wondering how often I should run my backup script. Right now I have a cron job set to run on Monday and Friday nights. Is this too frequent? What's your schedule, and do you strictly back up your appdata (container configs), or do you include other data in your backups?
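For context, the schedule part is just one crontab line; the script path and time below are placeholders:

    # m h dom mon dow  command
    # Run the appdata backup at 01:30 on Monday and Friday nights
    30 1 * * 1,5  /boot/config/scripts/backup-appdata.sh >> /var/log/backup-appdata.log 2>&1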

top 50 comments
[–] [email protected] 2 points 2 months ago* (last edited 2 months ago)

I'm always backing up with Syncthing in real time, but every week I do an off-site tarball backup that isn't part of the Syncthing setup.

[–] [email protected] 3 points 2 months ago

rsync from ZFS to an off-site Unraid box every 24 hours, five days a week. On the sixth day it does a checksum-based rsync, which obviously means more stress on the disks, so I only do that once a week. The seventh day is reserved for ZFS scrubbing, every two weeks.
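Roughly, the difference between the daily runs and the weekly checksum run looks like this (paths are placeholders); the daily pass compares size and mtime only, while the weekly pass re-reads every file on both sides:

    # Daily: fast delta based on file size + modification time
    rsync -aH --delete /tank/data/ offsite-unraid:/mnt/user/backup/data/

    # Weekly: checksum every file (much heavier on the disks)
    rsync -aH --delete --checksum /tank/data/ offsite-unraid:/mnt/user/backup/data/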

[–] [email protected] 3 points 2 months ago

I back up all of my Proxmox LXCs/VMs to a Proxmox Backup Server every night, and sync those backups to another PBS in another town. A second Proxmox backup runs every day at noon to my NAS. (I know, the 3-2-1 rule isn't quite met...)

[–] [email protected] 2 points 2 months ago

Assuming it is on: Daily

[–] [email protected] 3 points 2 months ago

I have:

  • Unraid backing up its USB flash drive
  • Unraid appdata getting backed up weekly by a Community Applications plugin (CA Appdata Backup), and I use rclone to back that up to an old Box account (100GB for life..). I did have it encrypted, but it seems I need to fix that..
  • A parity drive in my Unraid array (8TB)
  • I am trying to understand how to use rclone to back up my photos to Proton Drive, so that's next (see the sketch below).

Music and media aren't too important yet, but I would love some insight.
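On the Proton Drive point: recent rclone versions (1.64+, if I remember right) ship a protondrive backend, so something along these lines should work once the remote is configured; the remote name and paths here are placeholders:

    # One-time interactive setup: pick the "protondrive" backend, name it e.g. "proton"
    rclone config

    # Push photos up; "copy" never deletes on the destination, "sync" mirrors exactly
    rclone copy /mnt/user/photos proton:photos --progress

    # For encryption, layer a "crypt" remote on top of the proton remote and copy to that instead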

[–] [email protected] 4 points 2 months ago* (last edited 2 months ago)

Right now, I have a cron job set to run on Monday and Friday nights, is this too frequent?

Only you can answer this. How many days of data are you prepared to lose? What are the downsides of running your backup scripts more frequently?

[–] [email protected] 10 points 2 months ago* (last edited 2 months ago) (1 children)

Proxmox servers are mirrored zpools, not that RAID is a backup. Replication between Proxmox servers every 15 minutes for HA guests, hourly for less critical guests. Full backups with PBS at 5AM and 7PM, 2 sets apiece, with one set that goes off-site and is rotated weekly. Differential replication every day to zfs.rent. I keep 30 dailies, 12 weeklies, 24 monthlies, and infinite annuals.
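That retention scheme maps roughly onto PBS's keep-* prune options; a sketch from memory, with the backup group, user, and datastore names invented (there's no literal "infinite", so the yearly count is just set very high):

    # Prune one guest's backups down to 30 daily / 12 weekly / 24 monthly / long-tail yearly
    proxmox-backup-client prune vm/100 --repository backup@pbs@pbs-host:datastore1 \
        --keep-daily 30 --keep-weekly 12 --keep-monthly 24 --keep-yearly 100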

Periodic test restores of all backups at various granularities at least monthly or whenever I'm bored or fuck something up.

Yes, former sysadmin.

[–] [email protected] 2 points 2 months ago (1 children)

This is very similar to how I run mine, except that I use Ceph instead of ZFS. Nightly backups of the CephFS data with Duplicati, followed by staggered nightly backups of all VMs and containers to a PBS VM on the NAS. File backups from Unraid get sent up to CrashPlan.

Slightly fewer retention points to cut down on overall storage, and a similar test pattern.

Yes, current sysadmin.

[–] [email protected] 1 points 2 months ago* (last edited 2 months ago)

I would like to play with Ceph, but I don't have a lot of spare equipment anymore, and I understand ZFS pretty well and trust it. Maybe on the next cluster upgrade, if I ever do another one.

And I have an almost unhealthy paranoia after seeing so many shitshows in my career, so having a pile of copies just helps me sleep at night. The day I have to delve into the last layer is the day I build another layer, but that hasn't happened recently. PBS dedup is pretty damn good, so it's not much extra to keep a lot of copies.

[–] [email protected] 2 points 2 months ago

Daily, to all three of my locations:

  • local on the server
  • in-house but on a different device
  • offsite

But not all three destinations back up the same amount of data, due to storage limitations.

[–] [email protected] 3 points 2 months ago

I classify the data according to its importance (gold, silver, bronze, ephemeral). The regularity of the zfs snapshots (15 minutes to several hours) and their retention time (days to years) on the server depends on this. I then send the more important data that I cannot restore or can only restore with great effort (gold and silver) to another server once a day. For bronze, the zfs snapshots and a few days of storage time on the server are enough for me, as it is usually data that I can restore (build artifacts or similar) or is simply not that important. Ephemeral is for unimportant data such as caches or pipelines.
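A minimal sketch of what those tiers can look like as plain cron plus zfs snapshot (dataset names invented; in practice a tool like sanoid or zfs-auto-snapshot also handles the pruning/retention side):

    # gold: every 15 minutes, silver: hourly, bronze: daily
    */15 * * * *  zfs snapshot tank/gold@auto-$(date +\%Y\%m\%d-\%H\%M)
    0 * * * *     zfs snapshot tank/silver@auto-$(date +\%Y\%m\%d-\%H\%M)
    0 3 * * *     zfs snapshot tank/bronze@auto-$(date +\%Y\%m\%d-\%H\%M)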

[–] [email protected] 2 points 2 months ago (2 children)

I continuously back up important files/configurations to my NAS. That's about it.

IMO, people who mirror or back up their media are insane... It's such an incredible waste of space. Having a robust media library is nice, but there's no reason you can't just start over if you have data corruption or something. I have TBs and TBs of media that I could redownload in a weekend if something happened (if I even wanted to). No reason to waste backup space, IMO.

[–] [email protected] 2 points 2 months ago (1 children)

Maybe for common stuff, but some don't want 720p YTS or YIFY releases.
There are also releases that don't follow TVDB aired order (which Sonarr requires), and matching 500 episodes with deviating names manually isn't exactly what I call 'fun time'.
And there are rare releases that just aren't seeded anymore in that specific quality, or present on Usenet.

So yes: backing up some media files may be important.

[–] [email protected] 1 points 2 months ago (1 children)

Data hoarding random bullshit will never make sense to me. You're literally paying to keep media you didn't pay for, because you need the 4K version of Guardians of the Galaxy 3 even though it was a shit movie...

Grab the YIFY; if it's good, then get the 2160p version... There's no reason to data hoard like that. It's frankly just stupid considering you're paying to store this media.

[–] [email protected] 1 points 2 months ago (1 children)

This may work for you, and please continue doing that.

But I'll get the 1080p, moderate-bitrate version of whatever I want in the first place, rather than grabbing whatever I can just to fill up my disk.

And as I mentioned: matching 500 episodes (e.g. Looney Tunes and Disney shorts) manually isn't fun.
Much less if you also want the exact release of a certain piece of media (music, for example) and need to play detective on MusicBrainz.

[–] [email protected] 0 points 2 months ago (1 children)

Matching 500 episodes (e.g. Looney Tunes and Disney shorts) manually isn't fun.

With tools like TinyMediaManager, why in the absolute fuck would you do it manually?

At this point, it sounds like you're just bad at media management more than anything. 1080p H.265 video is at most 1.5-2GB per file. That means with even a modest connection speed (500Mbps, let's say) you can realistically download 5TB of data over 24 hours... You could redownload your entire media library in less than 4-5 days if you wanted to.

So why spend ~$700 on two 20TB drives, one used only for redundancy, when you can simply redownload everything you previously had (if you wanted to) for free? It'll just take a little bit of time.

Complete waste of money.

[–] [email protected] 0 points 2 months ago* (last edited 2 months ago)

I prefer Sonarr for management.
The problem is the auto-matching.
It just doesn't always work.
Practical example: Looney.Tunes.and.Merrie.Melodies.HQ.Project.v2022

Some episodes are either not in the correct order or their names deviate from how TVDB sorts them.
Your best regex/auto-matching can do nothing about it if Looney.Tunes.Shorts.S11.E59.The.Hare.In.Trouble.mkv should actually be named Looney.Tunes.Shorts.S1959.E11.The.Hare.In.A.Pickle.mkv to be imported automatically.

At some point fixing multiple hits becomes so tedious it's easier to just clear all auto-matches and restart fresh.

[–] [email protected] 2 points 2 months ago (1 children)

It becomes a whole different thing when you yourself are a creator of any kind. Sure, you can re-torrent TBs of movies, but you can't retake that video from 3 years ago. I have about 2TB of photos I took. I classify that as media.

[–] [email protected] 1 points 2 months ago

It becomes a whole different thing when you yourself are a creator of any kind.

Clearly this isn't the type of media I was referencing....

[–] [email protected] 7 points 2 months ago

I use Duplicati for my backups, and have backup retention set up like this:

Save one backup each day for the past week, then save one each week for the past month, then save one each month for the past year.

That way I have granular backups for anything recent, and the further back in the past you go, the less frequent the backups are, to save space.
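If I remember Duplicati's settings right, that tiering is its "custom backup retention" option, written as timeframe:interval pairs; the exact string below is from memory, so treat it as an approximation:

    # Keep one backup per day for a week, one per week for a month, one per month for a year
    --retention-policy="1W:1D,1M:1W,1Y:1M"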

[–] [email protected] 3 points 2 months ago

No backup for my media, only redundancy.

For my Nextcloud data, I back it up anytime I make major changes.

[–] [email protected] 2 points 2 months ago* (last edited 2 months ago)

Longest interval is every 24 hours, with some running more frequently, every 6 hours or so, like the ones for my game servers.

I have multiple backups (3-2-1 rule), 1 is just important stuff as a file backup, the other is a full bootable system image of everything.

With proper backup software, incremental backups don't use any more space unless files have changed, so there's no real downside to more frequent backups.

[–] [email protected] 2 points 2 months ago

I honestly don't have too much to back up, so I run one full backup job every Sunday for the different directories I care about. The jobs run a check on each directory and only back up changed or new files. I don't have the space to back up everything, so I only take the smaller, most important stuff. The backup software also allows live monitoring if I enable it, so I have that turned on for some of my jobs since I didn't see any reason not to. To save money, I reuse the NAS drives that report errors and get replaced with new ones. So far, so good.

Backup software is Bvckup2; Reddit was a huge fan of it years ago, so I gave it a try. It was super cheap for a lifetime license at the time, and it's super lightweight. Sorry, there is no Linux version.

[–] [email protected] 12 points 2 months ago

I do not as I cannot afford the extra storage required to do so.

[–] [email protected] 2 points 2 months ago

Depends on the application. I run a nightly backup of a few VMs because realistically they don't change much. On the other hand, I have containers that run critical (to me) systems, like my photo backup, and those are backed up twice a day.

[–] [email protected] 4 points 2 months ago

If you haven't tested your backups, you ain't got a backup.

[–] [email protected] 2 points 2 months ago

Daily backups. Currently using restic on my NixOS servers. To avoid data corruption, I make a ZFS snapshot at 2am, and after that restic backs up my mutable data dirs both to my local NAS and to Cloudflare R2. The NAS backup folder is synced to Backblaze nightly as well, for more of a cold store.
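A rough sketch of that snapshot-then-restic flow, with the dataset and repository paths as placeholders (R2 is S3-compatible, so it works as a second restic repository):

    # Freeze a consistent view of the data, then let restic read from the snapshot
    SNAP="nightly-$(date +%F)"
    zfs snapshot tank/data@"$SNAP"
    restic -r /mnt/nas/restic-repo backup "/tank/data/.zfs/snapshot/$SNAP"
    # Second copy to an S3-compatible bucket (endpoint and bucket are account-specific)
    # restic -r s3:https://<account-id>.r2.cloudflarestorage.com/backups backup "/tank/data/.zfs/snapshot/$SNAP"
    zfs destroy tank/data@"$SNAP"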

[–] [email protected] 4 points 2 months ago (1 children)

Local ZFS snap every 5 mins.

Borg backs everything up every hour to 3 different locations.

I've blown away Docker folders of config files a few times by accident. So far I've only had to dip into the ZFS snaps to bring them back.
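The hourly Borg runs look roughly like this for one of the locations (repo URL and paths invented); Borg deduplicates against earlier archives, so hourly runs stay cheap:

    # Hourly archive named by timestamp
    borg create --stats ssh://backup-host/./borg-repo::docker-{now:%Y-%m-%d_%H%M} /opt/docker
    # Thin out old archives so the repo doesn't grow forever
    borg prune --keep-hourly 24 --keep-daily 14 --keep-weekly 8 ssh://backup-host/./borg-repo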

[–] [email protected] 0 points 2 months ago* (last edited 2 months ago) (1 children)

Try ZFS send if you have ZFS on the other side. It's insane: no per-file IO, just the snapshot and the time for the network transfer of the delta.

[–] [email protected] 2 points 2 months ago

I would, but the other side isn't ZFS, so I went with Borg instead.

[–] [email protected] 2 points 2 months ago* (last edited 2 months ago)

Every hour. Could do it more frequently if needed.

It depends on how resource intensive the backup process is.

Consider an 800GB Immich instance.

Using Duplicity or rsync takes 1 hour per backup. 99% of the time is spent traversing the directory structure and checking which files have changed; 1% is spent transferring the differences to the backup. Any backup system that operates on top of the file system will take about this long. In addition, unless you're using something that can take snapshots of the filesystem, you have to stop Immich during the backup in order to avoid backing up an invalid app state.

Using ZFS send, on the other hand (with syncoid), takes less than 5 seconds to discover the differences, and the rest of the time is spent on the data transfer, at 100MB/s in my case. Since ZFS send is based on snapshots, I don't have to stop the service either.
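For anyone who hasn't seen it, syncoid is essentially automating this underlying mechanism (pool and dataset names invented):

    # First run: full send of a snapshot
    zfs snapshot tank/immich@2025-02-17
    zfs send tank/immich@2025-02-17 | ssh backup-host zfs receive backuppool/immich

    # Every run after that: send only the blocks that changed between the two snapshots
    zfs snapshot tank/immich@2025-02-18
    zfs send -i tank/immich@2025-02-17 tank/immich@2025-02-18 | ssh backup-host zfs receive backuppool/immich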

When I used Duplicity, I would back up once a week because the process was long and heavy on the disk array. Since I switched to ZFS send, I do it once an hour because there's almost no visible impact.

I'm now in the process of migrating my laptop to ZFS on root in order to be able to utilize ZFS send for regular full system backups. If successful, eventually I'll move all my machines to ZFS on root.

[–] [email protected] 2 points 2 months ago

And equally important, how do you do your backups? What system and to where?

[–] [email protected] 2 points 2 months ago

I tried Kopia, but it was unstable and janky, so now it's whenever I remember to manually run a bunch of rsync jobs. I back up my desktop to cold storage on the first of the month, so I should get in the habit of backing up my server to the NAS then as well.

[–] [email protected] 6 points 2 months ago (1 children)

Every hour, automatically

Never on my laptop, because I'm too lazy to set up a mechanism that detects when a backup is possible.

[–] [email protected] 4 points 2 months ago

I just tell it to back up my laptops every hour anyway. If it’s not on, it just doesn’t happen, but it’s generally on enough to capture what I need.

[–] [email protected] 1 points 2 months ago

@Sunny Backups are done weekly, using Restic (and with '--read-data-subset=9%' to verify that the backup data is still valid).

But that's in addition to doing nightly SnapRAID syncs for larger media, and Syncthing for photos & documents (which means I have copies on 2+ machines).
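For reference, that flag hangs off restic's check command; the repo path below is a placeholder:

    # Verify the repository structure and re-read a random 9% of the stored data
    restic -r /path/to/repo check --read-data-subset=9%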

[–] [email protected] 3 points 2 months ago

I have a cron job set to run on Monday and Friday nights. Is this too frequent?

Only you can answer that - what is your risk tolerance for data loss?

[–] [email protected] 2 points 2 months ago

Depends on the system, but weekly at least.
