cross-posted from: https://programming.dev/post/9319044

Hey,

I am planning to implement authenticated boot, inspired by Pid Eins' blog. I'll be using pam_mount for /home/user, and I need to check the integrity of all partitions.

I have been using luks+ext4 till now. I am hesitant to switch to zfs/btrfs, afraid I might fuck something up. A while back I accidentally purged '/' while trying out Timeshift, which was my fault.

Should I use zfs/btrfs for /home/user? As for root, I'm considering luks+(zfs/btrfs) so that it can be restored to a blank state.
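For the "restorable to a blank state" part, what I'm picturing is roughly something like this (device and subvolume names are just placeholders, not a tested recipe):

```bash
# LUKS2 container with btrfs inside, plus a read-only "blank" snapshot
# of the root subvolume taken right after install.
cryptsetup luksFormat /dev/nvme0n1p2            # encrypt the root partition
cryptsetup open /dev/nvme0n1p2 cryptroot
mkfs.btrfs /dev/mapper/cryptroot
mount /dev/mapper/cryptroot /mnt
btrfs subvolume create /mnt/@                   # the actual root subvolume
btrfs subvolume snapshot -r /mnt/@ /mnt/@blank  # pristine state to restore later

# Resetting root would then be: boot a live USB, open the LUKS container,
# and replace @ with a writable snapshot of @blank.
btrfs subvolume delete /mnt/@
btrfs subvolume snapshot /mnt/@blank /mnt/@
```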

[–] [email protected] 3 points 7 months ago

Been using BTRFS on a couple of NAS servers for 4+ years. Also did raid1 BTRFS over two USB hard drives connected to a Pi4 (yes, this should be absolutely illegal).

The USB raid1 had a couple of checksum errors last year that were easily fixed via scrub, and the other two servers have been running without any issues. I assume it's been fine because they're all connected to a UPS and because I run weekly scrubs.
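The weekly scrub is nothing fancy, just a root crontab entry along these lines (mount points are whatever your filesystems live at):

```bash
# Scrub both btrfs filesystems every Sunday at 03:00.
# -B keeps the scrub in the foreground so cron can report its exit status.
0 3 * * 0  btrfs scrub start -B / && btrfs scrub start -B /mnt/storage
```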

I enjoyed CoW and snapshots so much that I've been using it on my main Arch install's (I use Arch btw) root drive and storage drives (in BTRFS raid1) for the last 4 months without issue.

[–] [email protected] 1 points 7 months ago

I've been running ZFS in the form of FreeNAS/TrueNAS in production environments for the past 12 years or so. Started with around 5TB and currently have nearly 300TB across several servers. Mostly NFS nowadays, but have shared out SMB and iSCSI.

No data loss. Drives have been easy to replace and resilver. We have had a couple of instances where a failing ZIL or L2ARC device crashed a storage server and took storage offline, but removing/replacing the log device got us up and running again without data loss.
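Getting back online was basically just a matter of dropping the bad log/cache device from the pool and adding a fresh one, something like this (pool and device names are only examples):

```bash
zpool status tank                          # find the faulted log or cache device
zpool remove tank nvme0n1                  # detach the failing SLOG/L2ARC device
zpool add tank log mirror nvme1n1 nvme2n1  # add a replacement, mirrored this time
```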

Btrfs I only have experience on home systems. It has reliably stored my data for several years now, but I'm about to put it to the test this weekend. I plan on adding 4x8TB disks to a 4TB mirror to turn it into a 20TB RAID10. Wish me luck!
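If it goes the way I expect, the expansion itself should just be a device add plus a balance that converts the profile, roughly (mount point and device names are examples):

```bash
# Add the four new 8TB disks to the existing mirrored filesystem...
btrfs device add /dev/sdc /dev/sdd /dev/sde /dev/sdf /mnt/pool
# ...then rewrite data and metadata into the raid10 profile across all drives.
btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt/pool
```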

[–] [email protected] 8 points 7 months ago

I think zfs is a pretty cool guy. Eh copy on write and doesn't afraid of anything

[–] [email protected] 4 points 7 months ago* (last edited 7 months ago)

I have had great luck with my users' home directories on ZFS. No issues in years. Used to have issues, and on those days I was glad root was on ext3.

I had issues with btrfs about 10 years ago. It is much better now.

Both experiences with Linux.

A separate ZFS dataset per user is really helpful for quotas and migration.
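Something along these lines per user, which gives you a quota per home directory and lets you send a single user's data to another machine when they move (pool name and sizes are just examples):

```bash
zfs create -o quota=200G -o mountpoint=/home/alice tank/home/alice       # per-user quota
zfs snapshot tank/home/alice@migrate                                     # point-in-time copy
zfs send tank/home/alice@migrate | ssh newhost zfs recv tank/home/alice  # move one user
```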

[–] [email protected] 18 points 7 months ago

Been using Btrfs for a year. I once had an issue where my filesystem went read-only; I went to the Btrfs subreddit, and after some troubleshooting it turned out my SSD was dying. I couldn't believe it at first, because my SMART report was perfectly clean and the SSD was only 2 years old, but a few hours later SMART began reporting thousands of dead sectors.

The bloody thing was better than SMART at detecting a dying SSD, lol.
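If anyone wants to check their own drives, the per-device error counters are what tipped me off, roughly:

```bash
btrfs device stats /     # read/write/flush/corruption/generation error counters
btrfs scrub start -B /   # re-verify all checksums; errors show up in dmesg and the stats
smartctl -a /dev/sda     # compare against what SMART itself claims
```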

[–] [email protected] 8 points 7 months ago

At some point, long ago, the Ubuntu installer offered to use ZFS for the boot and root partitions. That sounded like a good idea and worked great for a long time: automatic snapshots, options to restore state at boot, etc.

Until my generous boot partition started to run out of space with all the snapshots (which were set up automatically, with no obvious way to configure them). OK, no big deal: write a bash script that finds the old snapshots and deletes them whenever /boot fills up again.
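The script boiled down to something like this (bpool is the name the Ubuntu installer gives the boot pool; the keep-count is arbitrary):

```bash
#!/usr/bin/env bash
# Delete all but the 10 newest snapshots on the boot pool.
set -euo pipefail
zfs list -H -t snapshot -o name -s creation -r bpool \
  | head -n -10 \
  | xargs -r -n1 zfs destroy
```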

Then one day recently my laptop wouldn't boot anymore; GRUB could no longer read the ZFS boot pool. I managed to boot from a USB installation image, read the ZFS pools, and chroot in. I tried a lot of things, but in the end I killed ZFS, replaced it with ext4, and made it boot again.

Apparently I'm not the only one with this issue.

[–] [email protected] 1 points 7 months ago* (last edited 7 months ago)

There's no reason you couldn't; btrfs is pretty stable.

Edit: Going on five years of using btrfs on production servers (storing and processing data on a 24x7 basis).

[–] [email protected] 7 points 7 months ago (2 children)

I did my first BTRFS setup over the weekend. I followed the Arch wiki to set up what I thought was RAID 1, only to find out, nearly a TB of copying later, that it was splitting the data between the drives, not mirroring them (only the metadata was in RAID 1). One command later and I'd converted the filesystem to true RAID 1. I feel like any other system would require a total redo of the entire FS, but BTRFS did it flawlessly.
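For anyone curious, the one command was (if memory serves) a balance with a convert filter:

```bash
# Rewrite the existing data into the raid1 profile; metadata was already raid1.
btrfs balance start -dconvert=raid1 /mnt
```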

I'm still confused, however, as it seems RAID 1 only works with two drives from what I've read. Is that true? Why?

[–] [email protected] 1 points 7 months ago* (last edited 7 months ago)

IIRC they added raid1c3, raid1c4, etc. to make RAID 1 work with more than two copies.
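So on a reasonably recent kernel you can ask for three or four copies with the same kind of balance, something like:

```bash
# Keep three copies of data and metadata (needs at least three devices, kernel 5.5+).
btrfs balance start -dconvert=raid1c3 -mconvert=raid1c3 /mnt
```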

[–] [email protected] 7 points 7 months ago

That is not the case. In the context of btrfs, RAID-1 means "ensure that two copies of every data block are available in the running volume," not "ensure that every bit of both of these drives is identical at all times." For example, I have a btrfs volume in my server with six drives in it (14 TB each) set up as a RAID-1/1 (both data and metadata are mirrored). It doesn't really matter which two drives of the six have copies of a given data block, only that two copies exist at all.

Compare it to... three RAID-1 metadevices (mdadm), with LVM over top, and ext4 (let's say) on top of that. When a file is created in the filesystem (ext4), LVM ensures that it doesn't matter on which pair of drives it was written, and mdadm's RAID-1 functionality ensures that there are always two identical copies of the file (on two identical copies of a drive).

[–] [email protected] 1 points 7 months ago (1 children)

Btrfs is good for small systems with 1-2 disks. ZFS is good for many disks and benefits heavily from RAM. ZFS also has special devices.
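By "special" I mean the special allocation class, i.e. a fast vdev that holds metadata (and optionally small blocks), something like this (device names are illustrative):

```bash
# Add a mirrored special vdev for metadata / small blocks.
zpool add tank special mirror /dev/nvme0n1 /dev/nvme1n1
# Optionally route small file blocks there too, per dataset.
zfs set special_small_blocks=32K tank/data
```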

[–] [email protected] 1 points 7 months ago (1 children)

BTRFS is running just fine for my 8 disk home server.

[–] [email protected] 1 points 7 months ago (1 children)

That is not a recommended setup. RAID5 is not stable yet.

[–] [email protected] 1 points 7 months ago (1 children)

I never said anything about RAID5. I'm running RAID1.

[–] [email protected] 1 points 7 months ago

Oh, I misremembered... It's only 7 disks in BTRFS RAID1.

I have:

  • 12 TB
  • 8 TB
  • 6 TB
  • 6 TB
  • 3 TB
  • 3 TB
  • 2 TB

For a combined total of 40 TB raw storage, which in RAID1 turns into 20 TB usable.

[–] [email protected] 3 points 7 months ago* (last edited 7 months ago)

Linux does not support ZFS as well as operating systems like FreeBSD or OpenIndiana do, but I do use it on my Ubuntu box for my backup array. It is not the best setup: RAID-Z over USB is not at all guaranteed to keep your data safe, but it was the most economical thing I could build myself, and it gets the job done well enough, with regular scrubbing to give me peace of mind that I have at least one other reliable copy of my data. And I can write files to it quickly, and take snapshots of the state of the filesystem if need be.

I used to use Btrfs on my laptop and it worked just fine, but I did have trouble once when I ran out of disk space. A Btrfs filesystem puts itself into read-only mode when that happens, and that makes it tough to delete files to free up space. There is a magic incantation that can restore read-write functionality, but I never learned it at the time; I just decided to stop using it, because Btrfs is pretty clearly not for home PC use. Freezing the filesystem in read-only mode makes sense in a data-center scenario, but not for a home user who might want to erase some data and keep using the machine normally. I might consider using Btrfs in place of ZFS on a file server, though ZFS does seem to provide more features and to be somewhat better tested and hardened.
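From what I've read since, the usual escape hatch is along these lines (the temporary device is just an example, and I haven't verified this myself):

```bash
# Give btrfs some temporary breathing room on another device...
btrfs device add /dev/sdx1 /
# ...rewrite mostly-empty chunks so allocated-but-unused space is released...
btrfs balance start -dusage=10 /
# ...then drop the temporary device again.
btrfs device remove /dev/sdx1 /
```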

There is also bcachefs now as an alternative to Btrfs, but it is still fairly new and not widely supported by default installations. I don't know how stable it is or how well it compares to Btrfs, but I thought I would mention it.

[–] [email protected] 2 points 7 months ago (1 children)

Ext4 is weak on data integrity (it doesn't checksum file contents) and its performance isn't great. ext3 is just dated.

[–] [email protected] 1 points 7 months ago (1 children)

I've been running ext2/ext3/ext4 since 2002(?)... Never had a problem! But I've lost lots of data using reiser4 and xfs, especially when a blackout happens. If you don't have a UPS and aren't on a laptop, and you have data that matters to you, I'd stick with ext4. I actually didn't notice thaaaat much of a performance boost between any of these formats, whether on fast HDDs, SSDs, or NVMe!

[–] [email protected] 1 points 7 months ago* (last edited 7 months ago)

Like you say, ext4 is absolutely ancient at this point. I still use it for VMs as it has low overhead and no compression but for bare metal ext4 feels old.

XFS can't really be compared to btrfs or ZFS, as it is closer to ext4. If you're curious, Wikipedia has a table of filesystems and the features they provide. As far as XFS's reliability goes, I can't really say, as I just use ext4, btrfs, or ZFS.

[–] [email protected] 15 points 7 months ago

After 4 years on btrfs I haven't had a single issue; I never really think about it. Granted, I have a very basic setup. Snapper snapshots have saved me a couple of times, and that aspect of it is really useful.
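When it has saved me, it was just a matter of finding the right snapshot pair and undoing the change, more or less (snapshot numbers are examples):

```bash
snapper list               # find the pre/post snapshots around the breakage
snapper undochange 42..43  # revert the files changed between those two snapshots
# or, on a setup that boots from snapshots (openSUSE-style):
snapper rollback 42
```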

[–] [email protected] 4 points 7 months ago* (last edited 7 months ago) (1 children)

I haven't used them professionally but I've been using ZFS on my home router (OPNsense) and NAS (TrueNAS with RAID-Z2) for many years without problem. I've used Btrfs on laptops and desktops with OpenSUSE Tumbleweed for the past year and a bit, also without problem. Btrfs snapshots have saved me a couple of times when I messed something up. Both seem like solid filesystems for everyday use.

[–] [email protected] 2 points 7 months ago (1 children)

The two options are UFS and ZFS, and the documentation recommends ZFS as the more reliable of the two. I had UFS before, and after a power outage the router wouldn't boot, so I switched to ZFS. That was two or three years ago, and the router has stayed up since then (except one time when an SSD died, but that was a hardware failure).

[–] [email protected] 1 points 7 months ago

Honestly, I'm surprised UFS is still a thing. I guess it's useful for read-only flash.
