this post was submitted on 30 Jan 2024
77 points (100.0% liked)


cross-posted from: https://programming.dev/post/9319044

Hey,

I am planning to implement authenticated boot, inspired by Pid Eins' blog. I'll be using pam_mount for /home/user. I need to check the integrity of all partitions.

I have been using LUKS+ext4 until now. I am hesitant to switch to ZFS/Btrfs, afraid I might fuck up. A while back I accidentally purged '/' while trying out Timeshift, which was my fault.

Should I use ZFS/Btrfs for /home/user? As for root, I'm considering LUKS+(ZFS/Btrfs) so it can be restored to a blank state.
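Roughly the kind of setup I have in mind, as a sketch (device and subvolume names are just placeholders, and this wipes the partition):

```shell
# Encrypt the partition, then put btrfs on top of the mapper device
cryptsetup luksFormat /dev/nvme0n1p2        # prompts for a passphrase
cryptsetup open /dev/nvme0n1p2 cryptroot
mkfs.btrfs /dev/mapper/cryptroot

# Separate subvolumes, so '/' can later be rolled back to a blank
# state without touching /home
mount /dev/mapper/cryptroot /mnt
btrfs subvolume create /mnt/@
btrfs subvolume create /mnt/@home
umount /mnt

# Mount the root subvolume explicitly
mount -o subvol=@ /dev/mapper/cryptroot /mnt
```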

(page 2) 25 comments
[–] [email protected] 35 points 9 months ago (1 children)

Btrfs is default on OpenSUSE, has worked great for me for 7 years. No issues.

[–] [email protected] 13 points 9 months ago (2 children)

Same here, but for only 1 year on my main machine and 6 years on my laptop. I looove Snapper. It has saved my ass so many times.

[–] [email protected] 4 points 9 months ago (4 children)

Many, many years ago I set up btrfs in a RAID 1 config for the disks I write my backups to. Unfortunately, one of those disks went bad and ended up corrupting the whole array. Makes me wonder whether I set it up correctly or not.

Nowadays, I have the following disks in my system set up as btrfs:

  • My backups disk because of compression.
  • My OS drive because of Timeshift.
  • My home folder because it feels safer. COW feels like it'll handle power failures better, whilst there's also checksumming so I can identify corrupted files.
  • My SSD Steam library, spanning two drives, because life is short and I cba managing the two SSDs independently.

It's going fine, but it feels like I need to manually run a balance every once in a while when the disk fills up.
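For anyone hitting the same thing, a filtered balance usually reclaims the space; something like this (the usage thresholds are just a common starting point, not a recommendation from this thread):

```shell
# Rewrite only chunks that are at most 50% full, reclaiming
# allocated-but-mostly-empty space; adjust thresholds to taste
btrfs balance start -dusage=50 -musage=50 /mountpoint

# Check progress from another terminal
btrfs balance status /mountpoint
```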

I also like btrfs-assistant for managing the devices.

Out of interest, since I've not used the "recommended partition setup" for any install in a while now, is ext4 still the default on most distros?

[–] [email protected] 3 points 9 months ago (2 children)

My SSD Steam library over two drives because life is short and I cba managing the two ssds independently.

You do know that Steam handles multiple libraries transparently, even on removable drives?

[–] [email protected] 3 points 9 months ago (1 children)

Out of interest, since I've not used the "recommended partition setup" for any install in a while now, is ext4 still the default on most distros?

I recently installed Nobara Linux on an additional drive, because after 20 years I wanted to give Linux gaming another shot (it works a lot better than I had hoped, btw), and it defaulted to btrfs. I assume Fedora does too, because I can't imagine Nobara changed that part of the Fedora base for gaming purposes.

[–] [email protected] 4 points 9 months ago

Fedora does, with compression enabled. It's one of its largest divergences from Red Hat, since Red Hat doesn't support btrfs at all. openSUSE does as well.

[–] [email protected] 11 points 9 months ago* (last edited 9 months ago) (6 children)

My experiences:

ZFS: never even tried it, because it's not integrated into the mainline kernel (licensing).

Btrfs: IIRC I've tried it three times, several years ago now. On at least two of those tries, after maybe a month or so of daily driving, the fs suddenly went totally unresponsive, and because it held the entire system, I could only reboot. The FS was corrupted and wouldn't recover. There is no fsck. There is no recovery. Total data loss. Start again from the last backup. I hadn't seen that since ReiserFS around 2000. I found lots of posts with similar error messages. That took btrfs off the list of things I'll be using in production.

I like both from a distance, but still use ext*. I've never had total data loss that wasn't a completely electrically dead drive with any ext version I've used since 1995.

[–] [email protected] 8 points 9 months ago (1 children)

Ouch, that must have been a pain to recover from...

I've had almost the opposite experience to yours, funnily enough. Several years ago my HDDs would drop out at random during heavy write loads; after a while I narrowed the cause down to some dodgy SATA power cables, which sadly I could not replace at the time. Due to the hardware issue I could not scrub the filesystem successfully either. However, I managed to recover all my data to a separate BTRFS filesystem using a "restore" utility mentioned in the docs, and to the best of my knowledge all the recovered data was intact.

While that past error required a separate filesystem to perform the recovery, my most recent hardware issue with drives dropping out didn't need any recovery at all: after resolving the hardware issue (a loose power connection), BTRFS pretty much fixed itself during a scheduled scrub and spat out all the repairs in dmesg.

I would suggest enabling some kind of monitoring on BTRFS's error counters if you haven't, because the fs will do whatever it can to prevent interruption to operations. In my previous two cases, performance was pretty much unaffected, and I only noticed the hardware problems because the scheduled scrub & balance took longer or failed.

Don't run fsck: BTRFS essentially does this for itself during filesystem operations, such as a scrub or a file read. The provided btrfs check tool (the fsck) is for the internal B-tree structure specifically, AFAIK, and in repair mode it irreversibly modifies the filesystem internals in a way that can cause unrecoverable data loss if the user does not know what they are doing. Instead of running fsck, run a scrub: it's an online operation that can be done while the filesystem is still mounted.
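For anyone unfamiliar, the commands involved look roughly like this (the mountpoint is a placeholder):

```shell
# Online integrity check: verifies all checksums and repairs from a
# good copy where redundancy (e.g. RAID 1) exists
btrfs scrub start /mountpoint
btrfs scrub status /mountpoint

# Per-device error counters worth monitoring; persistent non-zero
# values usually point at failing hardware or cabling
btrfs device stats /mountpoint
```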

[–] [email protected] 4 points 9 months ago (2 children)

There is btrfs check --repair to fix corruption

[–] [email protected] 3 points 9 months ago (3 children)

My only complaint with btrfs, from when I used to run it, is that KVM disk performance was abysmal on it. Otherwise I had no issues with the fs.

[–] [email protected] 2 points 9 months ago (4 children)

Most of the tools now should be setting nocow for virtual disk images; performance these days isn't bad.
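Where a tool doesn't do it for you, the attribute can be set by hand; it only takes effect on empty files, so the usual trick is to set it on the parent directory before creating any images (the libvirt path here is just an example):

```shell
# NOCOW must be set before data is written; easiest on the directory,
# since new files created inside it inherit the attribute
mkdir -p /var/lib/libvirt/images
chattr +C /var/lib/libvirt/images

# Verify: the 'C' flag should appear in the attribute listing
lsattr -d /var/lib/libvirt/images
```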

[–] [email protected] 11 points 9 months ago* (last edited 9 months ago)

As a home user I'd recommend btrfs. It has mainline kernel support and is way easier to get operational than ZFS. If you don't need ZFS's more advanced RAID types or deduplication, btrfs can do everything you want. Btrfs is also a lot more resource-friendly; ZFS, especially with deduplication, takes a ton of RAM.

[–] [email protected] 9 points 9 months ago

Can't vouch for ZFS, but btrfs is great!

You can mount root, log, and home on different subvolumes; they act practically like separate partitions while still sharing the same pool of free space.

You can also take system snapshots while the system is running, with one command. No need to exclude the home or log directories, nor the pseudo filesystems (e.g. proc, sys, tmp, dev).
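As a rough sketch of what that looks like (the subvolume layout and snapshot path are just common conventions, not required names):

```shell
# Read-only snapshot of the root subvolume in one command; home and
# log live in their own subvolumes, so they're excluded automatically
btrfs subvolume snapshot -r / "/.snapshots/root-$(date +%F)"

# Snapshots appear as ordinary directories: browse them, copy files
# back out, or boot from one to roll the system back
btrfs subvolume list /
```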

[–] [email protected] 7 points 9 months ago (2 children)

@unhinge I run a simple 48TiB zpool, and I found it easier to set up than many suggest and trivial to work with. I don't do anything funky with it though, outside of some playing with snapshots and send/receive when I first built it.

I think I recall reading about some nuance around using LUKS vs ZFS's own encryption back then. Might be worth having a read around comparing them for your use case.

[–] [email protected] 1 points 9 months ago

If you happen to find the comparison, could you link it here?

[–] [email protected] 3 points 9 months ago

AFAIK OpenZFS provides authenticated encryption, while LUKS integrity is marked experimental (as of now, in the man page).

OpenZFS also doesn't re-encrypt dedup blocks if dedup is enabled (see Tom Caputi's talk), but dedup can simply be disabled.
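For comparison, enabling native encryption on a dataset looks roughly like this (pool and dataset names are placeholders):

```shell
# Native encryption is per-dataset; aes-256-gcm is an authenticated
# (AEAD) cipher, so integrity comes with it
zfs create -o encryption=aes-256-gcm \
           -o keyformat=passphrase \
           -o keylocation=prompt \
           tank/home

# Raw sends keep the data encrypted in transit and at rest on the
# receiver, without needing the key there
zfs send --raw tank/home@snap | ssh backuphost zfs receive pool/home
```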

[–] [email protected] 6 points 9 months ago (1 children)

My experience with btrfs is "oh shit I forgot to set up subvolumes". Other than that, it just works. No issues whatsoever.

[–] [email protected] 3 points 9 months ago

oh shit I forgot to set up subvolumes

lol

I'm also planning on using its subvolume and snapshot features. Since ZFS also supports native encryption, it'll be easier to manage subvolumes for backups.

[–] [email protected] 16 points 9 months ago (1 children)

Luks+btrfs with Arch as daily driver for 3 years now, mostly coding and browsing. Not a single problem so far :D

[–] [email protected] 4 points 9 months ago (2 children)

That sounds good.

Have you used the LUKS integrity feature? Though it's marked experimental in the man page.
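The feature I mean, roughly (this wipes the device; the device name and cipher choice are just the example from the docs):

```shell
# LUKS2 with authenticated encryption; cryptsetup(8) marks this
# experimental, and it costs extra space for the integrity tags
cryptsetup luksFormat --type luks2 \
    --cipher aes-gcm-random --integrity aead \
    /dev/sdX

# dm-integrity can also be used standalone, likewise experimental
integritysetup format /dev/sdX
```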

[–] [email protected] 2 points 9 months ago

I have the same use case as @[email protected]. I didn't test the integrity feature because it is my work machine and I'm not fond of doing experimental stuff on it.
