this post was submitted on 17 Dec 2024
418 points (99.1% liked)

Technology

(page 4) 38 comments
[–] [email protected] 10 points 2 weeks ago (1 children)

I thought I read somewhere that larger drives had a higher chance of failure. A quick look around suggests that's untrue for newer drives.

[–] [email protected] 19 points 2 weeks ago* (last edited 2 weeks ago) (4 children)

One problem is that larger drives take longer to rebuild the RAID array when one drive needs replacing. You're sitting there for days hoping that no other drive fails while the process runs. Current SATA and SAS interfaces are already faster than spinning platters can deliver, so speeding up the interface wouldn't help anything.

There was some debate among storage engineers about whether they even want drives bigger than 20TB, because the extra density may not be worth the risk of data loss during a rebuild. That will probably stay true until SSDs get closer to the price per TB of spinning platters (not necessarily equal; perhaps around double the price).
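
For a rough sense of the window, here's a minimal back-of-the-envelope sketch (the 250MB/s sustained rate is an assumption; real rebuilds run slower under competing I/O):

```python
# Best case: every byte of the replacement drive is rewritten
# at the drive's sustained sequential rate.
def rebuild_hours(capacity_tb, sustained_mb_s=250):
    return capacity_tb * 1e6 / sustained_mb_s / 3600  # TB -> MB, s -> h

for tb in (8, 20, 32):
    print(f"{tb:>2} TB: ~{rebuild_hours(tb):.0f} h minimum")
# 8 TB: ~9 h, 20 TB: ~22 h, 32 TB: ~36 h -- before filesystem
# overhead or live traffic stretches it into days.
```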

[–] [email protected] 6 points 2 weeks ago

Yep. It’s a little nerve-wracking when I replace a RAID drive in our NAS, but I do it before there’s a problem with the drive. I can mount the old one back in, or try another new drive. I’ve only ever had one new drive arrive DOA; here’s hoping they stay few and far between.

[–] [email protected] 65 points 2 weeks ago (3 children)

30/32 = 0.938

That’s less than a single terabyte. I have a microSD card bigger than that!

;)
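
For what it's worth, the real shrinkage is decimal terabytes on the box versus binary tebibytes in the OS:

```python
# Drive makers count in decimal TB; many OS tools report binary TiB.
def tb_to_tib(tb):
    return tb * 10**12 / 2**40

print(f"30 TB = {tb_to_tib(30):.2f} TiB")  # ~27.28 TiB
print(f"32 TB = {tb_to_tib(32):.2f} TiB")  # ~29.10 TiB
```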

[–] [email protected] 7 points 2 weeks ago* (last edited 2 weeks ago) (9 children)

How can someone without programming skills make a cloud server at home for cheap?

(Like connected to WiFi and that’s it)

[–] [email protected] 4 points 2 weeks ago* (last edited 2 weeks ago)

Debian, Virtualmin, and Podman with Cockpit: install these on any cheap used PC you find. After the initial setup, everything else is GUI-managed.

[–] [email protected] 4 points 2 weeks ago

A Raspberry Pi or an old office PC is the usual route. It's not so much programming as Linux sysadmin skills.

Beyond that, you might consider OwnCloud for an app-like experience, or just Samba if all you want is local network files.
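
If Samba is all you need, a minimal smb.conf sketch (the share name and path here are placeholders, not from any specific guide):

```
[global]
   workgroup = WORKGROUP
   server string = Home file server

[share]
   path = /srv/share
   read only = no
   guest ok = yes
```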

[–] [email protected] 10 points 2 weeks ago

Yes. You'll have to learn some new things regardless, but you don't need to know how to program.

What are you hoping to make happen?

[–] [email protected] 4 points 2 weeks ago

Here I am, still rocking 6TB.

[–] [email protected] 38 points 2 weeks ago (2 children)

My first HDD had a capacity of 42MB. Still a short way to go to a factor of 10⁶.
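
A quick sanity check, taking the new 32TB drives as the comparison point:

```python
first_hdd_mb = 42
today_tb = 32
print(f"~{today_tb * 1e6 / first_hdd_mb:,.0f}x so far")  # ~761,905x
# A factor of 10^6 over 42 MB lands at 42 TB -- not far off now.
```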

[–] [email protected] 21 points 2 weeks ago (1 children)

My first HD was a 20MB MFM drive :). Be right back, I need some “Just for Men” for my beard (kidding, I’m proud of it).

[–] [email protected] 17 points 2 weeks ago (3 children)

So was mine, but the controller thought it was 10MB, so I had to load a device driver to access the full size.

It was fine until a friend defragged it and the driver moved out of the first 10MB. Thereafter I had to keep a 360KB 5¼" floppy drive to boot from.

That was in an XT.

[–] [email protected] 9 points 2 weeks ago

It was fine until a friend defragged it and the driver moved out of the first 10MB

Oh noooo 😭

[–] [email protected] 21 points 2 weeks ago* (last edited 2 weeks ago) (5 children)

This is for cold and archival storage, right?

I can't imagine seek times on a disk that large. Or rebuild times… yikes.

[–] [email protected] 8 points 2 weeks ago

For a full 32TB at the max sustained speed (275MB/s), it takes about 32 hours to transfer the full capacity, or 36 if you assume 250MB/s for the whole run. Probably optimistic; CPU overhead could slow a rebuild further. That said, in a RAID5 of 5 disks, that's an aggregate transfer speed of about 1GB/s even without getting close to the max rate. For a small business or home NAS that would be plenty unless you're running faster than 10 Gigabit Ethernet.
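
The arithmetic behind those figures, for anyone who wants to plug in different assumptions:

```python
# Hours to read or write a full drive end-to-end at a sustained rate.
def full_pass_hours(capacity_tb, mb_per_s):
    return capacity_tb * 1e6 / mb_per_s / 3600

print(f"{full_pass_hours(32, 275):.1f} h at 275 MB/s")  # ~32.3 h
print(f"{full_pass_hours(32, 250):.1f} h at 250 MB/s")  # ~35.6 h

# 5-disk RAID5 rebuild reads the 4 surviving disks in parallel:
print(f"~{4 * 250 / 1000:.1f} GB/s aggregate")  # ~1.0 GB/s
```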

[–] [email protected] 15 points 2 weeks ago

Definitely not for either of those; you can get far better density from magnetic tape.

They say they got the increased capacity by increasing storage density, so the head shouldn't have to move much further to read data.

You'll get further putting a cache drive in front of your HDD regardless, so it's somewhat moot.

[–] [email protected] 12 points 2 weeks ago (1 children)

Just one would be a great backup, but I’m not ready to run a server with 30TB drives.

[–] [email protected] 9 points 2 weeks ago (1 children)

I'm here for it. The 8-disk server is normally a great form factor for size, data density, and redundancy with RAID6/raidz2.

This would net around 180TB in that form factor. That would go a long way for a long while.
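
The capacity math behind that figure (raw capacity, before ZFS metadata and slop space):

```python
disks, parity, size_tb = 8, 2, 30  # 8-wide raidz2 of 30 TB drives
print(f"~{(disks - parity) * size_tb} TB usable")  # ~180 TB
```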

[–] [email protected] 7 points 2 weeks ago (5 children)

I dunno if you'd want to run raidz2 with disks this large. The resilver times would be absolutely bazonkers, I think. I have 24TB drives in my server and run mirrored vdevs because the chance of one of those drives failing during a raidz2 resilver is just too high. I can't imagine what it'd be like with 30TB disks.
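
A rough sketch of that risk, treating failures as independent with an assumed ~1.5% annual failure rate (an assumption, and optimistic, since real failures cluster by age and batch):

```python
import math

# Odds that at least one surviving drive dies during the resilver
# window, modelled as Poisson events with a constant annual failure
# rate (AFR). Ignores unrecoverable read errors under load.
def p_failure_during_resilver(surviving_drives, resilver_hours, afr=0.015):
    per_drive = afr * resilver_hours / (365 * 24)
    return 1 - math.exp(-per_drive * surviving_drives)

# 8-wide raidz2, one 30 TB disk resilvering over ~48 h:
print(f"{p_failure_during_resilver(7, 48):.2%}")  # ~0.06%
```

The per-rebuild number looks small, but correlated failures and read errors under sustained full-array load are the real worry, and this simple model ignores both.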

[–] [email protected] 4 points 2 weeks ago

A few years ago I had a 12-disk RAID6 array, and the power distributor (the bit between the redundant PSUs and the rest of the system) went and took 5 drives with it; I lost everything on there. Backups are absolutely essential, but if you can't manage that for some reason, at least use RAID1, where you only lose part of your data when more than 2 drives fail.

[–] [email protected] 92 points 2 weeks ago (2 children)

I can't wait for datacenters to decommission these so I can actually afford an array of them on the second-hand market.

[–] [email protected] 16 points 2 weeks ago (11 children)

Exactly, my NAS is currently made up of decommissioned 18TB Exos drives. Great deal, and I can usually still get them RMA’d the handful of times they fail.

[–] [email protected] 37 points 2 weeks ago (2 children)

Home Petabyte Project, here I come (in like 3-5 years 😅)

[–] [email protected] 56 points 2 weeks ago (2 children)
[–] [email protected] 25 points 2 weeks ago

sonarr goes brrrrrr…

[–] [email protected] 21 points 2 weeks ago (1 children)
[–] [email protected] 10 points 2 weeks ago

...dum tss!

[–] [email protected] 156 points 2 weeks ago (5 children)

It never ceases to amaze me how far we can still take a piece of technology that was invented in the 50s.

That's like developing punch cards to the point where the holes are microscopic and can also store terabytes of data. It's almost Steampunk-y.

[–] [email protected] 55 points 2 weeks ago (5 children)

Solid state is kinda like a microscopic punch card.
