
I know how RAID works and prevents data loss from disk failures. What I want to know is whether it's possible, and how easy it is, to recover data from the surviving RAID disks after a RAID controller failure or a whole-system failure. Can I simply attach one of the RAID 1 disks to a desktop system and read it as easily as a USB disk? I know getting data off the other RAID types won't be that simple, but is there a way to do it without rebuilding the whole RAID system? Thanks.

[–] [email protected] 2 points 1 week ago (2 children)

Just for drive redundancy it's awesome. One drive fails, you just pull it out, put in a new one, and let the array rebuild. I guess the upside of hardware RAID is that some controllers even allow you to swap a disk without powering down. Either way, you have minimal downtime.

I guess a better way would be to have multiple servers. Though with features like checksums in BTRFS, I guess a RAID is still better because it can protect against bitrot. And with a RAID of directly attached disks, it is generally easier to ensure consistency than across multiple servers.

[–] [email protected] 2 points 1 week ago* (last edited 1 week ago)

Btw: with regular Linux software RAID (mdraid), you can also swap drives without powering down. That all works fine while running, unless your motherboard's SATA controller craps out. But mdraid itself will handle it just fine.
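For anyone who hasn't done it, the whole swap is just a handful of mdadm commands. A rough sketch (array and device names here are examples, adjust to your setup):

```bash
mdadm --manage /dev/md0 --fail /dev/sdb1     # mark the dying member as failed
mdadm --manage /dev/md0 --remove /dev/sdb1   # detach it from the array
# ...physically swap the disk, partition the new one to match...
mdadm --manage /dev/md0 --add /dev/sdc1      # add the replacement; rebuild starts
cat /proc/mdstat                             # watch the resync progress
```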

[–] [email protected] 3 points 1 week ago (1 children)

Yeah, that's generally my conclusion as well. Just curious if someone had a better way that maybe I didn't know about.

[–] [email protected] 1 points 1 week ago (1 children)

A tool I've found way more useful than actual RAID is SnapRAID.

It just makes a giant parity file which can be used to validate, repair, and/or restore your data in the array without needing to rely on any hardware or filesystem magic. The validation bit is a big deal, because I can scrub all the data in the array and it'll happily tell me if something funky has happened.
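Day to day it's just a tiny config plus a couple of commands. A minimal sketch (every path and disk name here is a placeholder, adjust to your mounts):

```bash
# Minimal snapraid.conf: one parity disk, two data disks.
# Keep copies of the content file (the index) on more than one disk.
cat > /etc/snapraid.conf <<'EOF'
parity /mnt/parity1/snapraid.parity
content /var/snapraid/snapraid.content
content /mnt/disk1/snapraid.content
data d1 /mnt/disk1/
data d2 /mnt/disk2/
EOF

snapraid sync    # compute/update the parity after files change
snapraid scrub   # re-read the array and check it against parity (catches bitrot)
snapraid status  # show scrub coverage and any errors found
```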

It's been super useful on my NAS, where it's the only thing standing between my pile of random drives and data loss.

There's a very long list of caveats as to why this may not be the right choice for any particular use case, but for someone wanting to keep their picture and Linux ISO collection somewhat protected (use a 3-2-1 backup strategy, for the love of god), it's a fairly viable option.

[–] [email protected] 1 points 1 week ago (2 children)

Very cool, this is actually the sort of thing I was interested in. I'm looking at building a fairly heavy NAS box before long and I'd love to not have to deal with the expense of a full raid setup.

For stuff like shows/movies, how do they perform after recovery?

[–] [email protected] 2 points 1 week ago (1 children)

If you're doing it from scratch, I'd recommend starting with a filesystem that has checksumming and scrubs built in, e.g. BTRFS or ZFS.

The benefit of something like BTRFS is that you can always add disks down the line and convert it to a RAID array with a couple of commands.
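Something along these lines (mount point and device name are placeholders):

```bash
btrfs device add /dev/sdb /mnt/data   # grow the filesystem onto the new disk
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/data   # rewrite data+metadata as raid1
btrfs scrub start /mnt/data           # verify checksums, repair from the good copy
```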

[–] [email protected] 1 points 1 week ago

Yeah, it's been a long time since I've looked at any kind of RAID/storage/data-preservation stuff... like 256 GB spinning platters were the "hot new thing" last time I did.

I'm starting from scratch...in more ways than one lol

[–] [email protected] 1 points 1 week ago

I mean, recovery from parity data is how all of this works; snapraid just doesn't require you to have a controller, use a specific filesystem, have matching-sized drives, or anything else. Recovery is mostly like any other RAID option I've ever used.
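For the curious, replacing a dead data disk looks roughly like this (assuming it was named d1 in snapraid.conf; the names are placeholders):

```bash
# Mount an empty replacement disk at d1's old mount point first, then:
snapraid fix -d d1     # rebuild everything that lived on d1 from parity
snapraid check -d d1   # optionally re-verify the restored files
snapraid sync          # bring parity back in line with the array
```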

The only drawback is that the parity file ends up about as large as your biggest data disk, and you need to keep a couple of copies of the index (the content file), since if you lose the index or the parity data, no recovery for you.

In my case, I didn't care: I'm using the oldest drives I've got as the parity drives, and the newer, larger drives for the data.

If I were doing the build now and not 5 years ago, I might pick a different solution, but there's something to be said for an option that's dead simple (looking at you, zfs) and likely to be reliable because it's not doing anything fancy (looking at you, btrfs).

From a usage (not technical) standpoint, the closest commercial/prefab solution would probably be something like Unraid.