Hard Drive reliability over the long term

Originally Posted by robertcope
Backblaze has pretty fun statistics to look at: https://www.backblaze.com/b2/hard-drive-test-data.html


The Backblaze articles on hard drives are fantastic, if you are into that sort of thing (which I am!). I have had 3 of the famous Seagate 2 TB drives with the 50% failure rate. Two of them have failed; one was replaced under warranty, but Seagate failed to replace the other. The replacement is still working, but I only use it as a temp work area.

I would strongly suggest looking into Carbonite, iDrive, Mozy, etc. Good article, although it might be a bit dated; some services offer the first year for around $10, which is a steal. Obviously you need a decent internet connection. I use Carbonite for my personal Behemoth, and iDrive for the wife's AMD 'Vette. https://www.pcworld.com/article/3211435/best-online-backup.html

Unless you have a built-in M.2 connection, running an M.2 drive through an adapter generally causes lots of issues, so I would avoid that for now. Also, note the differences among the Samsung QVO, EVO, and Pro SSDs. I have had both Pro and EVO; the reliability of the QVO and related products is not good enough for my use. All of my Samsung Pros and EVOs dating back 3-5 years are still in service with no failures.
 
Just a note.

Every failed external LaCie hard drive I have looked at contained a Seagate. I suspect the drives may have been dropped. Every one had the inner area of the platter ground down by the read/write heads. One was mine; I have torn apart five others from friends with the same problem, all 1 TB drives.

I forgot to mention in my previous post, the externals I use for backup are only attached to the computer when backing up the system. They are WD 1 TB externals and are about three years old. If they are running in another two years, they will be replaced with new units.

Also, a good high-quality UPS is a very good idea, preferably an online unit so switchover is less than a few milliseconds. I have seen the cheap $60 APC bricks fail to even sense a loss of voltage.

Sorry about the previous post being doubled. Got distracted and hit post reply again.
 
Also, in case this was not mentioned, external drives are the lowest quality. When the hard drive manufacturers build and test the drives, the best ones go into retail boxed units, the second best go into OEM white-box builds for Dell/HP/Lenovo, and the worst go into external enclosures. I'm sure there are other methods now, but be assured externals are the absolute worst.
 
Originally Posted by Fordiesel69
Also, in case this was not mentioned, external drives are the lowest quality. When the hard drive manufacturers build and test the drives, the best ones go into retail boxed units, the second best go into OEM white-box builds for Dell/HP/Lenovo, and the worst go into external enclosures. I'm sure there are other methods now, but be assured externals are the absolute worst.


Thanks all.

I'm not at all interested in external drives. The desktop unit is really on my desktop in a gaming case, and the side panel comes off with one big latch. So it's incredibly easy to hook up and disconnect the internal drive.

I think I'll simply go with a spinning WD Gold drive or two. I like the idea that it will keep the data for 5 years without trouble. I probably won't upgrade this computer for a while.
 
Originally Posted by MrMoody
How's this for long term?

[Linked Image]


I think it's getting to the end of its life, but I can't complain. It's been running PVR software 24/7 ... the last reboot was in September, due to a power outage.

Last year I swapped out a few of those same drives from a couple of HMIs that had been running for 10 years straight in a RAID. One had started to fail. We had the exact same setup at a different site, and the tech there didn't get his swapped in time, so we couldn't clone the disk. He knew it was failing because it kept locking up. It was in a server and they were using Windows terminals, so the notification that the drives were dying never showed up. Luckily there was a copy from when it was put into service that was close enough to the current setup to work.

I also changed out an IDE drive from 2004 in a different HMI that had been going 24/7 in a single-drive setup. I had to compress everything to get enough room to install the cloning software. I set it up to make a full backup to a flash drive and just plan to monitor it in case the flash drive dies.

I used cheap PNY SSDs in all these applications. I figured a cheap new drive being backed up was better than old ones on borrowed time.
 
I'm closing in on 90,000 hours on some 2TB Hitachi Deskstar 7K2000's.

They're in a 4-drive RAID-10, so I don't see why I shouldn't just "run to fail".
 
Can't remember where I read it, but the suggestion was not to go beyond 3 TB on spinning HDDs in applications where the drive will spin up and down constantly, like a desktop environment. The premise was that the motor and head mechanisms are exactly the same as in smaller drives, so larger drives with more heads and platters deal with greater loads. Not sure how to verify this, though.

But the above could have a grain of truth to it, since drives that don't spin up and down or park the heads constantly seem to last a very long time, like in surveillance applications.
 
Originally Posted by pitzel
I'm closing in on 90,000 hours on some 2TB Hitachi Deskstar 7K2000's.

They're in a 4-drive RAID-10, so I don't see why I shouldn't just "run to fail".

I have also had excellent luck with HGST drives. I have not had one fail on me yet, even with huge hours on them.
WD Blacks are just as good.
Stick with 2TB or less as the larger sizes appear to be less reliable.
 
Thanks all!

I'm still looking at purchasing 2 drives. But I just can't seem to determine which ones would be best for my needs.

I used the CrystalDiskInfo program, and it shows my failing drive at 44,234 hours. However, that's just not possible, as it's been COMPLETELY unplugged the entire time. That's time since first power-up, not run time. I promise, it's been unplugged.
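
If you want to cross-check what CrystalDiskInfo reports, smartmontools reads the same counter. A rough sketch, assuming smartctl (7.0 or newer, for JSON output) is installed and the drive shows up as /dev/sda; adjust the device path for your system and run it with admin/root privileges:

```python
# Rough sketch: read the SMART power-on-hours counter with smartmontools.
# Assumes smartctl 7.0+ is installed and the drive of interest is /dev/sda.
import json
import subprocess

def power_on_hours(device="/dev/sda"):
    # "-j" asks smartctl for JSON output; "-A" dumps the SMART attributes.
    result = subprocess.run(
        ["smartctl", "-j", "-A", device],
        capture_output=True, text=True, check=False,
    )
    data = json.loads(result.stdout)
    # smartctl summarizes ATA attribute 9 (Power_On_Hours) under this key.
    return data.get("power_on_time", {}).get("hours")

if __name__ == "__main__":
    print(f"Power-on hours: {power_on_hours()}")
```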
 
The Seagate drives of the era all my dead LaCie externals come from were godawful.
The LaCie enclosures themselves are quite good:
nice shock mounts and solid construction.
Even the power bricks had good caps.
I get a few every six months from a client I service.
Their e-waste is my easily repaired and resold merchandise,
as in: reload the malware-killed laptops and replace the drive in the LaCie.
Originally Posted by 03cvpi
Just a note.

Every failed external LaCie hard drive I have looked at contained a Seagate. I suspect the drives may have been dropped. Every one had the inner area of the platter ground down by the read/write heads. One was mine; I have torn apart five others from friends with the same problem, all 1 TB drives.

I forgot to mention in my previous post, the externals I use for backup are only attached to the computer when backing up the system. They are WD 1 TB externals and are about three years old. If they are running in another two years, they will be replaced with new units.

Also, a good high-quality UPS is a very good idea, preferably an online unit so switchover is less than a few milliseconds. I have seen the cheap $60 APC bricks fail to even sense a loss of voltage.

Sorry about the previous post being doubled. Got distracted and hit post reply again.
 
Originally Posted by wag123
Originally Posted by pitzel
I'm closing in on 90,000 hours on some 2TB Hitachi Deskstar 7K2000's.

They're in a 4-drive RAID-10, so I don't see why I shouldn't just "run to fail".

I have also had excellent luck with HGST drives. I have not had one fail on me yet, even with huge hours on them.
WD Blacks are just as good.
Stick with 2TB or less as the larger sizes appear to be less reliable.


I had a few early-life failures, and a refurb they sent me to replace one of them also died (but they replaced it with a brand-new drive). The fleet has been solid otherwise. Let's hope it stays that way for another few years.
 
Originally Posted by Cujet
I'm not at all interested in external drives. The desktop unit is really on my desktop in a gaming case, and the side panel comes off with one big latch. So it's incredibly easy to hook up and disconnect the internal drive.

I think I'll simply go with a spinning WD Gold drive or two. I like the idea that it will keep the data for 5 years without trouble. I probably won't upgrade this computer for a while.

I've got a question about using a WD Gold or Red NAS HDD: the following quote claims that using a NAS-type HDD (WD Gold or WD Red) in a single-drive system is a bad idea. Can someone confirm this, please?

"Using a NAS drive in a non-NAS (more specifically a non-RAID) environment is a really bad idea. One of the key differences is that NAS drives by default will have TLER (Time-Limited Error Recovery) enabled.
This is important because in a RAID environment you don't want the whole array pausing while one drive struggles to read one faulty sector. You want the drive to immediately give up and the array will recreate the damaged data from the other copy (RAID1) or parity (RAID5). In a single drive desktop environment you absolutely want the drive to keep trying to get that data back because there is no redundancy. So if you use a WD Red as a stand alone drive in a desktop and it has any issue then it is just going to give up and your data will be lost."


Source: https://forums.tomshardware.com/threads/wd-red-nas-drive-for-normal-pc-use.3100440/
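
If anyone wants to check whether TLER/ERC is actually enabled on a particular drive before deciding, smartmontools can report the drive's SCT Error Recovery Control timers. A rough sketch, assuming smartctl is installed and the drive in question is /dev/sda (adjust the device path, and run it with root privileges):

```python
# Rough sketch: report the SCT Error Recovery Control (ERC, WD's "TLER")
# timers with smartmontools. Assumes smartctl is installed and the drive
# of interest is /dev/sda; adjust the device path for your system.
import subprocess

def erc_status(device="/dev/sda"):
    # "smartctl -l scterc <dev>" prints the current read/write ERC timers
    # (in tenths of a second), or notes that the feature is disabled or
    # unsupported on that drive.
    result = subprocess.run(
        ["smartctl", "-l", "scterc", device],
        capture_output=True, text=True, check=False,
    )
    return result.stdout

if __name__ == "__main__":
    print(erc_status())
```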
 
@doitmyself - yeah, a Red paired with an enclosure that can make use of TLER is the only place they should be used.

That's why they made the label red, so as to give you pause when using it.

Then, the kiddies picked up on it as if it was something "special". Endless explaining.
 
Hmmmm, now I am confused. I looked at the WD Ultrastar DC HA210 manual and it says this: "Though TLER is designed for RAID environments, it is fully compatible with and will not be detrimental when used in non-RAID environments."

Page 21: https://documents.westerndigital.co...es/product-manual-ultrastar-dc-ha210.pdf

Can't find this detailed documentation for WD Gold or Red. Other internet aficionados claim that the variable-speed systems in these drives (IntelliSeek, IntelliPower) might not be optimal for booting a single-drive PC, etc.

My hypothesis is that doing the "human thing" of classifying HDDs as good, better, best regarding reliability might mess you up if you use them in the wrong application, i.e., as a single PC drive.

Just learning and doing the OCD thing here. Fascinating.....sometimes.
 
Here's a TLER response from the Western Digital forums:

Quote
Drives with TLER are specifically designed for redundant RAID arrays. TLER is a drive function, where if the drive has difficulty accessing a portion of data, it will give up quickly and report an unreadable condition to the host controller. This timing is usually around 6-7 seconds maximum. The reasoning for this is that in a redundant array, all the data can be accessed or reconstructed from parity using other drives. The host controller then uses the remap function on the hard drive to mark those sectors bad, and writes the reconstructed data to spare sectors. This whole process happens seamlessly and without interruption or degradation of the RAID array. If it happens so many times that it runs out of spare sectors, the drive will be dropped from the array and marked as bad.

If you use a drive configured with TLER in a non-redundant configuration, such as a single drive or in a RAID0, the drive still acts the same - it quickly gives up reading the data. This will usually cause CRC or other errors to display in the operating system, and the data will not be able to be read. Naturally, this results in a loss of that data.

In contrast, drives designed for use in desktop systems as single drives don't have this timeout. Without TLER, the drive will literally try forever to get that data (and more often than not will eventually succeed). The operating system will appear to be extremely slow or frozen at this time, as it is waiting for the drive to become responsive again. If a drive without TLER is used in a redundant RAID, it will essentially cause the RAID to be degraded immediately upon hitting a single unreadable sector, since the drive will appear to have become unresponsive to the host controller.

The short answer of it is…
To avoid unnecessary down time and headaches, don't use a drive without TLER in a redundant RAID array.
It's OK to use a drive with TLER in a non-redundant configuration, just be aware it will be more difficult to recover data from sectors that develop issues.


I'd like to note about the bold sentence: if a drive has failed to that point, time to ditch it and get a new one.
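
For what it's worth, on many drives the ERC timer is adjustable from the host, so a drive isn't necessarily locked into one behavior. A rough sketch using smartmontools; the device paths and timer values below are only examples, and on most drives the setting is volatile and resets at power cycle, so it's usually reapplied from a boot script:

```python
# Rough sketch: adjust SCT Error Recovery Control (ERC/TLER) with smartmontools.
# Device paths and timer values are examples only; the setting is typically
# volatile and reverts after a power cycle.
import subprocess

def set_erc(device, read_ds, write_ds):
    # Timers are in tenths of a second. 70,70 gives the classic 7-second
    # RAID-friendly "give up and report" behavior; 0,0 disables ERC so the
    # drive keeps retrying, which suits a single desktop drive.
    subprocess.run(
        ["smartctl", "-l", f"scterc,{read_ds},{write_ds}", device],
        check=True,
    )

if __name__ == "__main__":
    set_erc("/dev/sda", 70, 70)    # example: drive used as a RAID member
    # set_erc("/dev/sdb", 0, 0)    # example: standalone desktop drive
```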
 
Thank you Pew and Spakard.

Wishing everyone a pleasant day tomorrow. I'm salivating looking at the Dearborn ham awaiting the oven tomorrow!
 
Originally Posted by Pew
Here's a TLER response from the Western Digital forums:

Quote
Drives with TLER are specifically designed for redundant RAID arrays. TLER is a drive function, where if the drive has difficulty accessing a portion of data, it will give up quickly and report an unreadable condition to the host controller. This timing is usually around 6-7 seconds maximum. The reasoning for this is that in a redundant array, all the data can be accessed or reconstructed from parity using other drives. The host controller then uses the remap function on the hard drive to mark those sectors bad, and writes the reconstructed data to spare sectors. This whole process happens seamlessly and without interruption or degradation of the RAID array. If it happens so many times that it runs out of spare sectors, the drive will be dropped from the array and marked as bad.

If you use a drive configured with TLER in a non-redundant configuration, such as a single drive or in a RAID0, the drive still acts the same - it quickly gives up reading the data. This will usually cause CRC or other errors to display in the operating system, and the data will not be able to be read. Naturally, this results in a loss of that data.

In contrast, drives designed for use in desktop systems as single drives don't have this timeout. Without TLER, the drive will literally try forever to get that data (and more often than not will eventually succeed). The operating system will appear to be extremely slow or frozen at this time, as it is waiting for the drive to become responsive again. If a drive without TLER is used in a redundant RAID, it will essentially cause the RAID to be degraded immediately upon hitting a single unreadable sector, since the drive will appear to have become unresponsive to the host controller.

The short answer of it is…
To avoid unnecessary down time and headaches, don't use a drive without TLER in a redundant RAID array.
It's OK to use a drive with TLER in a non-redundant configuration, just be aware it will be more difficult to recover data from sectors that develop issues.


I'd like to note about the bold sentence: if a drive has failed to that point, time to ditch it and get a new one.


^^ THIS. I'd rather any drive give up and report errors right away than keep trying to get the data. If it reports errors immediately, you know right then to get your restore ready and get another drive. If it keeps trying, you have a false sense of security that the drive may be OK or may live a little while longer before it dies. And that would be the worst possible outcome.
 