unRAID


A community for unRAID users to discuss their projects.

26
 
 

I’m new to the Unraid scene, after putting off doing something other than Windows-based serving and sharing for about.. oh, about 14 years. By “new to the scene”, I mean: “Trial expires in 28 days 22 hours 6 minutes” :-)

Anywho, I ran into an issue with a disabled drive. The solution was to rebuild it. I solved it thanks to a post by u/Medical_Shame4079, on Reddit.

That made me think about the whole “losing stuff on Reddit” problem we might run into in the future. While this post isn’t much, maybe it will be helpful to someone else, somewhere down the road.

The issue? A drive had a status of Disabled, with the message “device is disabled, contents emulated”.

The fix:

Stop the array, unassign the disk, start the array in maintenance mode, stop it again, reassign the drive to the same slot, and then start the array so the rebuild kicks off. The idea is to start the array temporarily with the drive “missing” so it changes from “disabled” to “emulated” status, then to stop it and “replace” the drive to get it back to “active” status, rebuilding its contents from parity.
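For anyone who prefers to confirm things from the terminal: I believe unRAID exposes the array state through its mdcmd utility, but I'm going from forum posts here, so treat this as an unverified sketch rather than documented behaviour.

    # Unverified sketch: mdcmd status should list one rdevStatus.N entry per
    # slot (DISK_OK for healthy disks, DISK_DSBL for a disabled one).
    mdcmd status | egrep 'mdState|rdevName|rdevStatus'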

Looking forward to more time with Unraid. It’s been easy to pick up so far.

27
 
 

Hey,

I have an SSD pool with two relatively small SSDs.

Now I've started to notice that one of the SSDs is failing. So I thought, why not use this opportunity to upgrade the pool? This is how I expect it to work:

  1. Buy two new bigger SSDs
  2. Restart server in safe mode
  3. Remove failing SSD
  4. Install one of the new SSDs
  5. Add the new SSD to the pool
  6. Start the Array?
  7. The pool should regenerate???
  8. Start with Step 3 again and replace the second small SSD
  9. Profit ???

Is this how it works, or do I really need to use the mover first to move all the data back to the hard drives and replace the pool all at once?
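For reference, my understanding is that if the pool is btrfs (the unRAID default for multi-device pools), the swap boils down to a device replace under the hood. A rough sketch, where the mount point and device names are all placeholders:

    # Sketch only: /mnt/cache is a placeholder pool mount, /dev/sdX the
    # failing SSD and /dev/sdY the new one.
    btrfs replace start -f /dev/sdX /dev/sdY /mnt/cache
    btrfs replace status /mnt/cache      # watch the rebuild progress
    # Then grow the filesystem onto the bigger device; the devid (2 here)
    # comes from 'btrfs filesystem show /mnt/cache'.
    btrfs filesystem resize 2:max /mnt/cache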

30
 
 

Got some extremely warm weather coming and I'm going to be out of town for a while. Can't trust the in-laws staying here to do anything server-related.

Anyone know of a plugin or script to automatically shut down if the system temp is too high?
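If nothing turnkey exists, I could probably hack something together with the User Scripts plugin. A rough sketch, assuming the CPU/board temperatures show up under /sys/class/thermal (the sysfs paths and the 80C limit are guesses to adjust for your hardware):

    #!/bin/bash
    # Scheduled every few minutes via User Scripts: shut the server down if
    # any thermal zone reports a temperature above LIMIT_C. The sysfs paths
    # and the limit are assumptions; check what your hardware exposes.
    LIMIT_C=80

    for zone in /sys/class/thermal/thermal_zone*/temp; do
        [ -r "$zone" ] || continue
        temp_c=$(( $(cat "$zone") / 1000 ))   # sysfs reports millidegrees C
        if [ "$temp_c" -ge "$LIMIT_C" ]; then
            logger -t temp-watch "${temp_c}C in ${zone%/temp}, shutting down"
            /sbin/shutdown -h now
            exit 0
        fi
    done

As far as I know a normal shutdown on recent unRAID stops the array cleanly first, but I'd test it once before leaving town.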

31
 
 

cross-posted from: https://discuss.tchncs.de/post/464987

If you aren't already using the mover tuning plug-in, now is a good time to have a look at it.

The latest update allows per-share settings override for detailed control over how your caches are used.

I use this plug-in to keep files in cache based on their age, so for example in a media server, you can have the last 14 days of TV shows kept in cache, while still running the mover regularly.

It can also do a full move if the disk is above a certain threshold value, so if your cache is getting full, it can dump all files to the array as per normal.

So you always keep the most important recent files on the cache, with a greatly reduced risk of running into a full cache issue and the problems that causes.

Now, with the latest update, you can tune these settings PER SHARE, rather than only across the whole system.
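(The plugin drives all of this from the GUI, but if you're wondering what the age-based behaviour boils down to, it's conceptually something like the sketch below. The share, pool, and disk names and the 14-day cut-off are placeholders, and the real mover is far more careful about hardlinks, open files, and per-share settings.)

    # Illustration of the idea only: push files older than 14 days from the
    # cache copy of a share onto an array disk.
    cd /mnt/cache/tv || exit 1
    find . -type f -mtime +14 -print0 | while IFS= read -r -d '' f; do
        mkdir -p "/mnt/disk1/tv/$(dirname "$f")"
        mv -n "$f" "/mnt/disk1/tv/$f"
    done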

32
 
 

I’m running the binhex Mullvad container, but now that Mullvad is removing port forwarding I have 2 questions.

Is there a setting that can fix this issue in the container?

If not, I assume I would need to set up a new container using a different VPN provider?

 
 
34
 
 

My favorite new feature of 6.12 is the "Exclusive Shares" concept. Here's a rundown:

Background

Unraid user shares are a FUSE filesystem, which allows data on multiple drives to be presented as a single file system. This idea is at the heart of Unraid's Array, as well as the concept of "Cache Pools" (now known as Named Pools). Any time you see a path that starts with /mnt/user in Unraid, that's a FUSE path (prior to 6.12, that is).

FUSE is great for giving us this transparent way to view our files without having to worry about which physical drive those files reside on. However, this comes at a cost... and that cost is reduced performance for applications running on an SSD Named Pool.

This performance penalty wasn't always noticeable, but it would sometimes rear its ugly head in unexpected ways (examples: Graylog and Duplicati). There was a workaround, assuming your appdata user share was stored entirely on one Named Pool: you could update your docker bind mounts to /mnt/[poolname]/appdata instead of /mnt/user/appdata. This bypassed the FUSE layer.
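For example, a container using that workaround would bind appdata straight from the pool path. Here 'cache' is a placeholder pool name and the container/image names are made up for illustration:

    # Pre-6.12 workaround sketch: mount appdata from the pool path
    # (/mnt/cache/...) rather than the FUSE path (/mnt/user/...).
    docker run -d --name=myapp \
      -v /mnt/cache/appdata/myapp:/config \
      lscr.io/linuxserver/someimage:latest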

Exclusive Shares

With Unraid 6.12, Limetech introduced "Exclusive Shares" as part of the Share Storage Conceptual Change. This gives us a built-in way to bypass FUSE on an entire user share.

In order for a share to be designated an Exclusive Share, the following must be true:

  • "Primary Storage" must be a Named Pool
  • "Secondary Storage" must be set to none
  • Files for that share must exist entirely on the Primary Storage device

Setup

In order to use Exclusive Shares, you first have to enable them. Go to Settings > Global Share Settings and change the Permit Exclusive Shares setting to Yes. You'll have to stop your array in order to make this change.

Next, make sure that your appdata share is stored entirely on the Named Pool. Go to Shares and click Compute in the Size column for your appdata share. This will tell you how much data for this share is saved on each drive. If the only drive reported is your Named Pool, you're all set. If you've got more than one drive, you'll need to disable docker in settings, and then run the mover.
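If you prefer the terminal, the same check is roughly the following, where 'cache' stands in for your pool name:

    # Show how much of appdata lives on each array disk vs. the pool;
    # ideally only the pool line reports any data.
    du -sh /mnt/disk*/appdata /mnt/cache/appdata 2>/dev/null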

Once you're sure that your entire appdata share is saved on your Named Pool, you need to change your appdata share settings. On the Shares tab, click on appdata to bring up the settings. Change the Secondary Storage option to None.

If you did it correctly, after you Apply the changes you will see the Exclusive Access field on the appdata share change from No to Yes.

Finally, if any of your docker container bind mounts use /mnt/[poolname]/appdata/..., you can change those to /mnt/user/appdata/....

I hope this helps anybody who might have been frustrated with appdata FUSE performance in the past! If you have any questions, let me know!

35
 
 

Changes vs. 6.12.1

This is mainly a bug-fix release, which also includes a minor security update. Other highlights:

  • We reverted docker from v23.0.6, introduced during Unraid OS 6.12 development, to v20.10.24, which is the latest patch release of docker used in Unraid OS 6.11. This addresses increased memory usage and other issues discovered with docker.
  • A small but necessary change, which invokes our 'update_services' script whenever a WireGuard tunnel starts or stops, is automatically applied to all 'config/wireguard/*.conf' files when you update via the Update OS page. If you update manually or downgrade, you will need to make a "dummy change" to a setting on the Settings/VPN Manager page and then click Apply.

Bug fixes and improvements

  • email notifications: add line in /etc/php.ini: 'mail.mixed_lf_and_crlf=On' to workaround change in PHP8 CRLF handling
  • emhttpd: Fix regression: emulated ZFS volumes not recognized
  • emhttpd: Fix regression: format fails if diskFsType==auto and defaultFsType specifies encryption
  • emhttpd: Fix regression: mount fails if diskFsType==auto
  • htop: remove predefined /root/.config/htop/htoprc file
  • network: RC services update:
    • NFS - fix service reload
    • RPC - fix service reload
    • NGINX - remove HTTPS port in FQDN redirect when default 443
    • All services - register IPv4 Link local assignment (169.254.xxx.xxx)
    • All services - make lock file programmable
    • WireGuard: delayed service update to avoid race condition
    • rc.library: do not allow duplicates in bind list
  • webgui: Dashboard updates:
    • Re-introduce show/hide tile content
    • Add new icon function to show/hide all content at once
    • Reduce gap between columns
    • description -> model
    • ZFS: fix percentage value to max 100%
    • Use prototype function: This makes it easier for 3rd party developers to automatically hide dynamic content
    • Handle duplicate IP assignments and give warning
    • change header MEMORY to SYSTEM
  • webgui: OS Update: add checkbox to confirm reading release notes before upgrading
  • webgui: diagnostics: include list of duplicate assignments
  • webgui: NFS: for Security/Private increase Rule field from 256 to 512 characters.

Linux kernel

  • version 6.1.36

Base Distro

  • bind: version 9.16.42 (CVE-2023-2911)
  • docker: 20.10.24 (revert from v23.0.6)
36
 
 

Bug fixes

  • emhttpd: remove "unraid" from reserved names list
  • emhttpd: properly handle "ERROR" strings in 'btrfs filesystem show' command output
  • emhttpd: prevent cmdStart if already Started
  • network: Revised service reload functionality: ensures the services are only reloaded once
  • network: rc.library: read IP addresses directly from interfaces instead of file
  • network: NTP: fix listening interfaces
  • network: NTP: exclude WG tunnels and user defined interfaces
  • network: NTP: add interface name in config
  • network: SSH: add interface name in config
  • webgui: fix PHP8 warning in UPS Settings page
  • webgui: Dashboard: show ZFS percentage based on c_max value
  • webgui: Dashboard: suppress coloring of ZFS utilization bar
  • webgui: Dashboard: other misc fixes

Linux kernel

  • version 6.1.34

Base Distro

  • ttyd: version 1.7.3 (fixes issue of invisible underbar characters with certain Firefox versions)

Security updates

  • ca-certificates: version 20230506
  • curl: version 8.1.2 (CVE-2023-28322 CVE-2023-28321 CVE-2023-28320 CVE-2023-28319)
  • git: version 2.35.8 (CVE-2023-25652 CVE-2023-25815 CVE-2023-29007)
  • ntp: version 4.2.8p17 (CVE-2023-26551 CVE-2023-26552 CVE-2023-26553 CVE-2023-26554 CVE-2023-26555)
  • openssl: version 1.1.1u (CVE-2023-2650)
  • openssh: version 9.3p1
  • php: version 8.2.7
  • libX11: version 1.8.6 (CVE-2023-3138)
  • libssh: version 0.10.5 (CVE-2023-1667 CVE-2023-2283)
  • zstd: version 1.5.5
37
 
 

tl;dr: moving from a Dell R420 to a Dell M640 in a Dell VRTX; should I change any of my settings or configuration?

I'm currently running Unraid on a Dell R420 off a cheap USB stick. The R420 has an LSI 9206-16e with 2 Dell MD1000s attached. I'm not using any of the R420 drive bays, as I couldn't get Unraid to detect them after trying to flash the controller. The storage is:

  • 1x 6TB (Parity)
  • 11x 4TB
  • 3x 3TB
  • 14x 2TB
  • 1x 2TB (cache)

I also have several other 2TB and smaller drives left over, some 10K and 15K RPM drives, and a 12TB SAS drive I stumbled upon for cheap.

The Dell VRTX is the SFF version and I have 8 drives for that, but I've heard I'll struggle to get those to show up in Unraid. The M640 has 2x 300GB SSDs that I might be able to use with Unraid easily.

This weekend I plan on moving the LSI card to the VRTX, figuring out how to pass it through to the M640, and getting everything up and running. It's going to take me minutes, or so I've been led to believe, as Unraid should just work!

But I'm wondering how I can better utilize my storage. I'm currently wasting the 12TB as it's not being used. My gut reaction is to take out one of the 2TB drives, replace it with the 12TB, and run it in its own pool, since I can't cover it with parity and I feel like using it for parity would be a waste as well.

Am I missing anything? Should I be doing something completely different? Is using a dual Xeon Gold 6134, 128GB RAM machine a complete waste for Unraid? Everything was well loved before it got to me, but I'm giving it a good retirement home running an *arr stack, an AMP game server for a bunch of teens, and learning about things like NGINX and PiHole for myself (and failing... stupid NGINX 502 gateway error!)

EDIT: Here are the plugins I'm currently using that I've seen mentioned on the other place that we shall not go back to.

  • Community Applications
  • Disk Location
  • Dynamix File Manager
  • Dynamix System Buttons
  • Fix Common Problems
  • Parity Check Tuning
  • Unassigned Devices and Unassigned Device Plus
  • and now unBALANCE, on https://lemmy.world/u/zehty's recommendation
40
 
 

Hi all, I have 6 NVMe drives in my server, which I use in pairs for download cache, appdata, and VMs. Would there be a benefit, or would it perform worse, if I combined all 6 drives into one ZFS pool to get bitrot protection on my VM drives and the increased storage space?
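For reference, the combined layout I'm picturing, written out as a plain zpool command just to show the vdev structure (unRAID 6.12 actually builds pools in the GUI, and the pool/device names here are placeholders):

    # Sketch: one pool striped across three two-disk mirrors, so the drives
    # stay paired for redundancy but space and IOPS are shared.
    zpool create fast \
      mirror /dev/nvme0n1 /dev/nvme1n1 \
      mirror /dev/nvme2n1 /dev/nvme3n1 \
      mirror /dev/nvme4n1 /dev/nvme5n1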

41
 
 

I am not sure how to self-host docker compose environments. Maybe somebody can help out.

I'm a software dev and not afraid of some sysadmin work.

Options I see:

  1. I see a docker-compose community app that lets you start setting things up, but haven't investigated very much.
  2. Spin up a Linux VM, pretend it's not Unraid, and follow the upstream instructions. This is heavier-weight, so I want to investigate the docker approach at least some.
  3. Individual dockers, sorta extracting each item in the docker-compose file and running them one by one (roughly like the sketch below).
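For option 3, my understanding is it mostly means mapping each compose key onto docker run flags by hand. A rough sketch with made-up names, ports, and paths:

    # Hand-translating a compose service, roughly:
    #   services:
    #     whoami:
    #       image: traefik/whoami
    #       ports: ["8081:80"]
    #       volumes: ["/mnt/user/appdata/whoami:/data"]
    # becomes:
    docker run -d \
      --name whoami \
      -p 8081:80 \
      -v /mnt/user/appdata/whoami:/data \
      --restart unless-stopped \
      traefik/whoami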

I already have nginx proxy manager set up with other services, so that should be fine to expose things to the wider internet when I get that far.

43
 
 

So, just to get some content going on Lemmy and start contributing here, I thought I'd write a bit about moving to the 6.12 RC with a ZFS pool and what I've done on my server to try and make use of that newfound ability...

Original configuration (pre 6.12):

  • 17 unRAID array drives in XFS format
  • dual parity
  • 2 NVMEs (cache and appdata are separate), XFS formatted
  • Backed-up daily with rsync to a second unRAID server on my LAN.

New configuration 6.12 (currently RC8)

  • 13 unRAID array drives in XFS format
  • dual parity
  • 4 x 8TB drives in a ZFS raidz1 pool
  • 2 NVMEs (cache and appdata are separate) in ZFS format with compression enabled.
  • Backed-up hourly with ZFS snapshots

Why the change?

  • Going to ZFS for my "important data", which is to say, personal documents, family photos (yay babies!)
  • Enables snapshots to aid in the event of a "soft" data error (a file being accidentally deleted, overwritten, or maliciously damaged by software, bitrot, etc.). Also enables extremely quick replications to my backup server.
  • Faster access to those personal documents with data striped across 4 drives.
  • Keeping main array as unRAID array drives for "easily replaceable data" (mostly media files, linux ISOs, etc.) so I can expand it easily by chucking another drive in my server or up-sizing an older drive easily.

Enhanced backups through ZFS:

  • ZFS has some rather remarkable options for data backups that are enabled by the snapshot capability of the filesystem. Rather than sending individual files across the network and having to laboriously calculate the differences between each file on the dataset (a subdivision of the ZFS pool), you can essentially just send the "difference" between snapshots, which can stream between servers in a very short time (usually only a couple of seconds in my case).
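In practice an incremental replication is a one-liner along these lines (the pool, dataset, snapshot, and host names are placeholders; Sanoid's companion tool syncoid automates exactly this):

    # Send only the delta between the previous snapshot and the newest one
    # to the backup server.
    zfs send -i tank/photos@auto-2023-07-01 tank/photos@auto-2023-07-02 |
        ssh backup-server zfs receive -F backup/photos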

This means I have my system continually backed up on an hourly basis, with snapshots retained every hour, and daily/monthly snapshots kept for half a year.

Plugins in use

The current unRAID RC8 supports ZFS pools; however, GUI support for managing ZFS pools is lacking. I'm using the following plugins and tools to accomplish everything (available through App installs):

  • ZFS Master for Unraid: makes most ZFS operations a GUI interaction rather than a terminal one. I've heard rumblings that unRAID may acquire/in-house this plugin to add the functionality to the GUI. It would be worthwhile.
  • Sanoid: automatically handles ZFS snapshots, as well as rotating snapshots based on the number of required snapshots per month and/or day. Enables sending ZFS snapshots to a backup server and rotating those snapshots as well to ensure continuity of data. Requires a bit of config file editing by hand and setting up a cron script (rough sketch below), but nothing difficult (it's well documented) and it took about 5 minutes to set up successfully.
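For the cron piece, the script is literally just a scheduled call to sanoid; the path below is an assumption, and 'which sanoid' will tell you where yours actually lives:

    #!/bin/bash
    # Run every 15 minutes or so (User Scripts schedule or root cron):
    # sanoid takes any snapshots that are due and prunes expired ones.
    /usr/local/sbin/sanoid --cron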

Backup thoughts

RAID (of any type) is not backup. That said, I have part of the "3-2-1" backup strategy automatically covered here, with my main server backing up the "important stuff" to a separate backup server also running unRAID. That covers having 2 copies of my data on separate devices; however, it does not cover keeping one copy off-site as well.

I do have a removable drive in my backup system (currently in XFS format), mounted through Unassigned Devices, that I sync my ZFS pools to twice a year and then put in a safe deposit box off-site to ensure it's reliably protected. I currently use XFS for this as it's easy to just plug into any system and get at my files. ZFS is still not as well supported on Windows and Mac systems, but I may go there in the future.
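The sync itself is nothing fancy, roughly the following, with the pool and mount names as placeholders (Unassigned Devices mounts the drive under /mnt/disks/ here):

    # Mirror the important ZFS-backed shares onto the removable XFS disk.
    # --delete makes the destination an exact mirror, so double-check paths.
    rsync -aHv --delete /mnt/important/ /mnt/disks/offsite/important/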

44
 
 

Great to see. Hopefully the community finds its way here…

46
 
 

Would love to see these get added to CA.