this post was submitted on 05 Feb 2024
46 points (97.9% liked)

Selfhosted

submitted 9 months ago* (last edited 9 months ago) by [email protected] to c/[email protected]
 

A year ago I set up an Ubuntu server with 3 ZFS pools. Normally I don't make copies of very large files, but today I was copying a ~30 GB directory and saw in rsync that the transfer doesn't exceed 3 MB/s (cp is also very slow).

What is the best file system that "just works"? I'm thinking of migrating everything to ext4

EDIT: I really like the automatic pool-recovery feature in ZFS; it has saved me from one hard drive failure so far.
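A quick way to tell whether the slowdown is the filesystem or the disk itself is to benchmark a raw sequential write and watch per-device I/O while it runs. A hedged sketch, assuming the pool is named `tank` (adjust names and paths to your setup):

```shell
# Raw sequential write throughput, bypassing rsync/cp entirely.
# conv=fsync forces the data to disk so caching doesn't hide slowness.
dd if=/dev/zero of=/tank/ddtest bs=1M count=1024 conv=fsync status=progress

# In a second terminal, watch per-vdev throughput while the copy runs;
# a single slow device dragging down the whole pool shows up here.
zpool iostat -v tank 2

# Clean up the test file afterwards.
rm /tank/ddtest
```

If the raw dd write is also stuck around 3 MB/s, the problem is below the filesystem (drive, cable, controller) rather than ZFS tuning.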

[–] [email protected] 2 points 9 months ago

Use ZFS send/receive instead of rsync. If it's still slow, it's probably SMR drives.
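If this refers to ZFS's native snapshot replication, a minimal sketch looks like the following (dataset names `tank/data` and `tank/copy` are hypothetical):

```shell
# Snapshot the source dataset, then stream it into a new dataset.
zfs snapshot tank/data@base
zfs send tank/data@base | zfs receive tank/copy

# Later, send only the changes since the last snapshot (incremental).
zfs snapshot tank/data@next
zfs send -i tank/data@base tank/data@next | zfs receive tank/copy
```

Because send/receive streams whole blocks sequentially, it usually outperforms rsync's per-file walk when copying within the same pool.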

[–] [email protected] 5 points 9 months ago (1 children)

ZFS is by far the best; just use TrueNAS, since Ubuntu is crap at supporting ZFS. Also, keep each of your pool's VDEVs only 6-8 drives wide.

[–] [email protected] 1 points 9 months ago

I was thinking about switching to Debian (everything I host runs in Docker, so that's why), but the weird thing is that it was working perfectly a month ago.

[–] [email protected] 6 points 9 months ago

I host my array of HDDs with btrfs; it works well and is Linux native.

[–] [email protected] 5 points 9 months ago (2 children)

Make sure you don't have SMR drives, if they are spinning drives. CMR drives are the only ones that should be used in a NAS, especially with ZFS. https://vermaden.wordpress.com/2022/05/08/zfs-on-smr-drives/
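One way to check for SMR on a Linux host, assuming a hypothetical device `/dev/sda`:

```shell
# Host-aware/host-managed SMR drives expose a zoned model to the kernel.
cat /sys/block/sda/queue/zoned   # "none", "host-aware", or "host-managed"

# Drive-managed SMR still reports "none" above, so also grab the model
# number and check it against the manufacturer's published CMR/SMR lists.
smartctl -i /dev/sda | grep -i model
```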

[–] [email protected] 1 points 9 months ago

It's an SSD, that's what worries me the most

[–] [email protected] 2 points 9 months ago (1 children)

From the article it looks like ZFS is the perfect file system for SMR drives, as it would try to cache random writes.

[–] [email protected] 2 points 9 months ago

Possibly, with tuning. OP would just have to be careful about resilvering. In my experience SMR drives really slow down when the CMR buffer is full.

[–] [email protected] 4 points 9 months ago (1 children)

Where are you copying to / from?

Duplicating a folder on the same NAS on the same filesystem? Or copying over the network?

For example, some devices have really fast file transfer until a buffer fills up, and then it crawls.

Rsync might not be the correct tool either if you're duplicating everything to an empty destination...?

[–] [email protected] 1 points 9 months ago (1 children)

Same NAS, same filesystem on an SSD without redundancy

[–] [email protected] 1 points 9 months ago

Still the same, or has it solved itself?

If it's lots of small files, rather than a few large ones? That'll be the file allocation table and/or journal...

A few large files? Not sure... something's getting in the way.

[–] [email protected] 8 points 9 months ago* (last edited 9 months ago)

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

LVM: (Linux) Logical Volume Manager for filesystem mapping
NAS: Network-Attached Storage
PSU: Power Supply Unit
SSD: Solid State Drive mass storage
ZFS: Solaris/Linux filesystem focusing on data integrity

5 acronyms in this thread; the most compressed thread commented on today has 9 acronyms.

[Thread #486 for this sub, first seen 5th Feb 2024, 15:05] [FAQ] [Full list] [Contact] [Source code]

[–] [email protected] 4 points 9 months ago (1 children)

ZFS should have better performance if you set it up correctly.
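A sketch of commonly recommended settings (pool and dataset names are hypothetical, and `ashift` can only be set at pool creation time):

```shell
# 4K-sector alignment; a wrong ashift is a classic ZFS performance killer.
zpool create -o ashift=12 tank mirror /dev/sda /dev/sdb

# Skip access-time writes and enable cheap inline compression.
zfs set atime=off tank
zfs set compression=lz4 tank

# Larger records suit big sequential files such as media or backups.
zfs create -o recordsize=1M tank/media
```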

[–] [email protected] 4 points 9 months ago (1 children)

That's exactly their gripe: out of the box performance.

[–] [email protected] -1 points 9 months ago (2 children)

If you set it up correctly

[–] [email protected] 1 points 9 months ago

I'll try to learn more about ZFS and set it up better next time. I see a lot of people are pro-ZFS, so it should be good.

[–] [email protected] 7 points 9 months ago

That's, by the very definition, not out of the box.

[–] [email protected] 10 points 9 months ago (2 children)

Most filesystems should "just work" these days.

Why are you blaming the filesystem here when you haven't ruled out other issues yet? If you have a failing drive, a new FS won't help. Check out "smartctl" to see if it reports errors on your drives.
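For example (device name `/dev/sda` is hypothetical):

```shell
# Overall health verdict (PASSED/FAILED).
smartctl -H /dev/sda

# Attributes that most often explain sudden slowdowns on HDDs.
smartctl -A /dev/sda | grep -iE 'reallocated|pending|uncorrectable'

# Errors the drive itself has logged.
smartctl -l error /dev/sda
```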

[–] [email protected] -1 points 9 months ago

That I've learnt the hard way it doesn't 😅 I have an Ubuntu server with UniFi Network on it that's now out of inodes 😅 The positive thing: I'm forced to learn a lot about Linux 😂

[–] [email protected] 3 points 9 months ago

They may be using really slow hard drives, or an SSD without DRAM.

Or maybe a shitty network switch?

Maybe the bandwidth is used up by a torrent box?

There are a lot of possible causes.
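A DRAM-less SSD (or one with a small SLC cache) can be caught by writing more data than the cache holds and watching throughput collapse partway through. A hedged sketch, assuming `fio` is installed and `/tank` is the pool's mount point:

```shell
# Sustained 8 GiB sequential write; a DRAM-less or cache-limited SSD
# typically starts fast and then drops sharply once the cache is full.
fio --name=sustained --directory=/tank --size=8G --bs=1M --rw=write \
    --ioengine=libaio --end_fsync=1 --group_reporting
```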
