Purchased 5 renewed drives from Amazon. Ten months in, 3 have had to be replaced because of escalating bad sectors, and all three were outside of the refurbish guarantee… one by only a week. Save your money and go with new drives.
RAID is your friend. If you can't afford to lose a drive, you might have a bad time (that applies to new drives too). Manufacturer refurbs are your best bet.
I've been using renewed (refurbished) 8TB drives off eBay - SAS 8TB for $50-60 each. Not a single failure in over a year on the dozen or so drives I'm running right now. I'm running unRAID with a combination of unRAID's native array (for media and "disposable" stuff) in a dual-parity config, and ZFS for important personal stuff, with snapshots replicated to a live backup on a secondary server and backed up off-site a few times a year.
Even if a drive were to die, I have enough spares to just chuck one in and let it resilver without worrying at all. I'm content with this as a homelabber, since I'm not supplying a critical service for a business.
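For anyone curious what that snapshot-and-replicate flow can look like, here's a minimal sketch. The dataset names, backup host, and incremental base snapshot are all hypothetical, and it assumes passwordless SSH to the backup box plus an earlier snapshot already present on both sides:

```python
#!/usr/bin/env python3
"""Minimal ZFS snapshot + replication sketch (hypothetical names throughout).

Snapshot an important dataset locally, then send the incremental stream to a
second server so a live backup stays in sync.
"""
import subprocess
from datetime import datetime, timezone

DATASET = "tank/personal"           # hypothetical local dataset
BACKUP_HOST = "backup-server"       # hypothetical secondary server
BACKUP_DATASET = "backup/personal"  # hypothetical dataset on the backup pool
PREV_SNAP = "tank/personal@last"    # hypothetical incremental base snapshot


def main():
    snap_name = datetime.now(timezone.utc).strftime("auto-%Y%m%d-%H%M%S")
    new_snap = f"{DATASET}@{snap_name}"

    # 1. Take a new snapshot of the important dataset.
    subprocess.run(["zfs", "snapshot", new_snap], check=True)

    # 2. Send the incremental stream to the backup server:
    #    zfs send -i PREV_SNAP NEW_SNAP | ssh host zfs receive -F backup/personal
    send = subprocess.Popen(["zfs", "send", "-i", PREV_SNAP, new_snap],
                            stdout=subprocess.PIPE)
    recv = subprocess.run(["ssh", BACKUP_HOST, "zfs", "receive", "-F", BACKUP_DATASET],
                          stdin=send.stdout)
    send.stdout.close()
    if send.wait() != 0 or recv.returncode != 0:
        raise SystemExit("replication failed")


if __name__ == "__main__":
    main()
```

Run something like that from cron on the primary and the backup stays within one snapshot of live; the -F on the receive side rolls the backup dataset back to the common snapshot if it has drifted.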
I've not heard any out-and-out horror stories, but I've got no first-hand experience.
I'm planning on picking up 3x manufacturer-recertified 18TB drives from SPD when money allows, but for now I'm running 6x ancient (minimum 4 years old) 3TB WD Reds in RAID 6. I keep a close eye on SMART stats, and can pick up a replacement within a day if something starts to look iffy. My plan is to treat the 18TBs the same; hard drives are consumables: they wear out over time, and you have to be ready to replace them when they do.
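For what it's worth, here's a rough sketch of that kind of SMART check, assuming smartmontools 7+ (for JSON output) and SATA drives; the device list and watched attributes are just illustrative examples, not recommendations:

```python
#!/usr/bin/env python3
"""Rough SMART-check sketch using smartctl's JSON output (smartmontools 7+).

Flags drives whose reallocated/pending sector counts are nonzero, i.e. the
kind of "starting to look iffy" signal worth ordering a replacement over.
Needs root (or sudo) to talk to the drives.
"""
import json
import subprocess

DEVICES = ["/dev/sda", "/dev/sdb", "/dev/sdc"]  # adjust to your array
WATCHED = {5: "Reallocated_Sector_Ct",
           197: "Current_Pending_Sector",
           198: "Offline_Uncorrectable"}


def smart_attributes(device):
    # smartctl can return nonzero exit codes on healthy drives, so don't check=True
    out = subprocess.run(["smartctl", "-A", "-j", device],
                         capture_output=True, text=True)
    table = json.loads(out.stdout).get("ata_smart_attributes", {}).get("table", [])
    return {attr["id"]: attr["raw"]["value"] for attr in table}


def main():
    for dev in DEVICES:
        attrs = smart_attributes(dev)
        bad = {WATCHED[i]: attrs[i] for i in WATCHED if attrs.get(i, 0) > 0}
        print(f"{dev}: WARNING {bad}" if bad else f"{dev}: looks healthy")


if __name__ == "__main__":
    main()
```

Reallocated and pending sectors trending upward is the classic sign a drive is on its way out, which matches the "escalating bad sectors" story at the top of the thread.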
I'm running several used ("renewed") enterprise SAS HDDs and enterprise SATA SSDs. They've been solid so far.
The HDDs came with about 30k power-on hours each, which is not bad at all, and the SSDs only had around 100 TB written out of their 6.2 PB endurance rating.
I'm not sure I would buy used standard consumer HDDs; they typically don't last as long and are likely abused a lot more in a desktop PC than in a datacenter server.
As always, have proper backups in place; all drives fail eventually, no matter where you buy them.
Depends on how much you value your data and how much redundancy you have. I bought a 20TB “manufacturer certified” drive from SPD the other day and it tests fine, but I’m not going to put valuable data on it. Maybe if this drive outlives my shucked Easystores I’ll buy more. But for now my main RAID array is new drives only, which I’ve thoroughly tested before installing.
What is a renewed drive? Do they have a datasheet with MTBF defined?
Spinning disks, or consumable flash?
What is the use case? RAID 5? Ceph? JBOD?
What is your human capital cost of monitoring and replacing bad disks?
Let's say you have a data lake with Ceph or something. It costs you $2-5 a month to monitor all your disks for errors and predictive failures, debug slow I/O, etc. The human cost of identifying a bad disk, pulling it, replacing it, then destroying it is something like 15-30 minutes. The cost of destroying a drive is $5-50 (depending on your vendor, onsite destruction, etc.).
The higher predicted failure rate of "used" drives has to be weighed against those fixed costs and human costs. If the drive only lasts 70% as long as a new drive, the math is fairly easy.
If the drive gets progressively slower (e.g. older SSDs), then the actual cost of the used drive becomes more difficult to model (you need a metric for service responsiveness, etc.).
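To make that concrete, here's a back-of-the-envelope sketch of the kind of cost model being described; every figure in it is a placeholder pulled from the ranges above, not real data:

```python
"""Back-of-the-envelope used-vs-new drive cost model (all figures are placeholders).

Folds drive price, expected lifetime, replacement labor, destruction fees,
and monitoring overhead into a single cost per usable TB-year.
"""


def cost_per_tb_year(price, capacity_tb, life_years,
                     replace_labor=20.0,       # ~15-30 min of someone's time, in $
                     destroy_cost=25.0,        # vendor / onsite destruction fee
                     monitor_per_month=3.5):   # $2-5/month monitoring overhead
    # Every drive eventually incurs one replacement + destruction event,
    # plus monitoring for however long it stays in service.
    total = price + replace_labor + destroy_cost + monitor_per_month * 12 * life_years
    return total / (capacity_tb * life_years)


# Hypothetical 18 TB drives: new lasts 5 years, used lasts ~70% as long.
new_drive = cost_per_tb_year(price=300, capacity_tb=18, life_years=5.0)
used_drive = cost_per_tb_year(price=120, capacity_tb=18, life_years=5.0 * 0.7)

print(f"new : ${new_drive:.2f} per TB-year")
print(f"used: ${used_drive:.2f} per TB-year")
```

Swap in your own drive prices, lifetimes, and labor rates; the point is just that the sticker price of the used drive isn't the whole cost.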
-
If it's a hobby project and you're throwing drives into a self-healing system, then take any near-free disks you can get, and just watch your power bill.
-
If you make money from this, or the downside of losing data is severe, then factor the higher failure rate into your cost model.