newer hard drives less reliable?

Status
Not open for further replies.
Here is the deal: I have (or had) a number of old PCs with hard drives 10 years old or more that worked fine. The only exceptions were Maxtor (2 failures out of 2) and the IBM "Deathstar" (1 failure out of 1). Lately, though, I've had a number of failures in desktop and laptop drives using the newer perpendicular recording technology (Seagate, Samsung, etc.). Worse, when I search Newegg reviews and count the reported failures, they seem very frequent (10-20% or more) across all drive types, except maybe the WD Scorpio Black series, where it's under 10%. It almost sounds like drives are programmed to fail after 1-2 years. Interestingly, warranty coverage has recently been cut, in some cases to just one year: http://www.tomshardware.com/news/seagate-western-digital-HDD-warranty-Thailand,14322.html Is perpendicular recording less reliable? Are the drives made cheaply? This is especially annoying now that drives are expensive because of the shortage.
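For what it's worth, here is a sketch of the kind of review tally described above. The review list is invented sample data, not real Newegg reviews, and the keyword list is just a guess at what failure reports look like:

```python
# Rough sketch of tallying failure reports from review text.
# The reviews below are made-up examples, not real data.
def observed_failure_rate(reviews):
    """Fraction of reviews that mention a dead or failed drive."""
    keywords = ("dead", "doa", "failed", "clicking")
    failures = sum(1 for text in reviews
                   if any(k in text.lower() for k in keywords))
    return failures / len(reviews)

sample = [
    "Works great after six months",
    "DOA, had to RMA",
    "Fast and quiet",
    "Failed after 14 months",
    "No problems so far",
]
rate = observed_failure_rate(sample)
print(f"{rate:.0%} of sampled reviews report a failure")  # 40% here
```

Keep in mind a tally like this overstates the true rate, since people with dead drives are far more likely to post a review than happy owners.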
 
This is why I only buy drives with good warranties (WD Black series) or SSDs, which have no moving parts. Back in the day I had the best luck with Seagate, followed by WD. Maxtor drives were pure garbage; they failed more often than they worked when I was buying them.
 
I still use some of the older 200-250GB Seagate and Hitachi drives, and they remain highly reliable after several years of hard work. I researched SSDs a couple of months ago; a pattern of failures emerged for all the MLC models (but not the expensive SLC ones), so I decided not to pursue it. I think you are right: the WD Black series is the only safe bet these days, and it carries a premium price.
 
In the old days, hard drive companies bought components from multiple suppliers. When one supplier had a bad design, it would get disqualified, which limited the damage. Later the industry merged vertically; now each company owns its suppliers, and when they make a mistake, they push forward anyway. Head fly height has also been reduced to gain sensitivity for larger capacities, but all that temperature-controlled fly height really did was remove the safety margin available for shocks. As a result, heads crash far more often and drives have become less reliable. There is a reason higher-end storage still uses much smaller capacities: those customers cannot afford downtime.
 
HDDs have always had failures over the 25+ years we've been using them; head crashes are nothing new. The old Colorado drives took all night to back up 50 or 100 MB for a reason, even then. I did notice a jump in failure rates across the board when perpendicular recording drives first came out a few years back, and not "soft-landing" failures, but catastrophic total data loss with no warning. My understanding is that the industry has since mastered the difficulties with the technology, but with more production coming out of China we still see more bad outcomes than we'd like. That was reasonably tolerable (not that any data-related failure is welcome) while replacement drive costs were plummeting, with good TB units in the $50-60 range; no biggie in a typical 4- or 5-drive array. With the recent price spike, though, that makes for an unhappy strategy on occasion. Drives with longer warranties and enterprise-class drives fail too. Some WD REs were freaking time bombs, and hearing their clicks of death still brings back bad memories. Best practice for anything holding data: maintain an array, a local backup for quick recovery, and an offsite backup strategy for disasters. SSDs fail too. Save the SSD for the boot drive to impress the guests, and keep the data on a good old hardware RAID stack with a decent controller.
 
I hope not. I just bought a new WD 2TB Green yesterday for my desktop. I had a full 500GB, a half-full 640GB, and two SSDs for other stuff, and I replaced the 500GB Samsung. Hopefully it's reliable; I have 23,500 hours on the 640GB Blue.
 
I have two 160GB Seagates with over 65,000 hours on them and no signs of any issues. These have gotten the most use of any drives I've owned, although some old WD Caviar drives (under 20 GB) also ran for 6-7 years back in the day without trouble. Your best bet is to buy drives from a line with a long production history (like the current WD Caviar line) and avoid the latest, largest drives.
 
I ignore any Newegg reviews where ownership is less than a couple of months. New drive failures are common, more so when shipping is involved. The last time I bought hard drives, it was five Samsung F3Rs for my RAID array. I split them between three orders to get separate packages. The two that came in the first box both failed within a month: one within a week, the other after 3-4 weeks. Now, I wonder why that would be. My guess is somebody dropped that box. What are the chances that the two in the first box just happened to be factory defects?
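That "what are the chances" question can actually be worked out. Assuming 2 of the 5 drives were going to be factory defects regardless, here is how often both would land in the one specific box that held 2 drives:

```python
# Probability that both of 2 random defects among 5 drives end up
# in the specific box that held exactly 2 of them.
from math import comb

ways_both_in_box = comb(2, 2)   # both defects are the box's 2 drives
total_ways = comb(5, 2)         # any 2 of the 5 drives could be defective
p = ways_both_in_box / total_ways
print(f"{p:.0%}")               # 10%
```

A 1-in-10 chance under the random-defect assumption, so shipping damage is indeed the more plausible explanation.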
 
Originally Posted By: Colt45ws
I ignore any newegg reviews where ownership is less than a couple months. New drive failures are common, more so where shipping is involved. Last I bought hard drives, it was 5 Samsung F3Rs for my RAID array. I split them between three orders to get separate packages. The two that came in the first box failed within a month. One within a week and the other after 3-4 weeks. Now, I wonder why that would be. Somebody dropped that box is my guess. What are the chances that the two in the first box just happened to be factory defects?
I've heard that a lot of drives have weird firmware issues in RAID that cause them to be flagged as failed; that seemed fairly common with the WD20EARS/EARX drives I purchased. It seems kind of strange to me.
 
That's TLER, or the lack of it. TLER (Time-Limited Error Recovery) is just a fancy acronym meaning the drive will stop trying to recover from an error after so many seconds (typically 7) and kick it back to the RAID controller as a read failure. Standard desktop drives will keep retrying the read essentially forever, and while a drive is working on that, it will not respond to commands. After 8 seconds the RAID controller kicks it out of the array for not responding, whether or not the drive was actually bad. That is why I bought F3Rs: they have TLER.
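The interaction described above can be sketched as a toy model. The numbers (a ~7 s TLER limit and ~8 s of controller patience) come from the post itself; this is an illustration of the timing logic, not real firmware behavior:

```python
# Toy model of why a non-TLER drive gets dropped from an array.
def drive_responds_in_time(recovery_seconds, tler_limit=None,
                           controller_timeout=8):
    """True if the drive answers the controller before being dropped.

    A TLER drive gives up (reports a read error) at tler_limit seconds;
    a desktop drive keeps retrying for recovery_seconds, however long.
    """
    if tler_limit is None:
        busy_time = recovery_seconds            # desktop: retry forever
    else:
        busy_time = min(recovery_seconds, tler_limit)
    return busy_time < controller_timeout

# A bad sector needing 30 s of retries:
print(drive_responds_in_time(30))                # desktop drive: dropped
print(drive_responds_in_time(30, tler_limit=7))  # TLER drive: stays in
```

On drives that support SCT Error Recovery Control, the same 7-second behavior can often be enabled from software with smartmontools (`smartctl -l scterc,70,70 /dev/sdX`), though not every desktop drive honors it.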
 
Originally Posted By: Colt45ws
New drive failures are common, more so where shipping is involved.
I didn't think of this; shipping could be a major variable here, I guess. Some newer drives record acceleration values in their SMART data, though I'm not sure that works while the drive is powered off. That would account for the early-death problems. My recent failures were a year or two into ownership, well beyond infant mortality.
 
Originally Posted By: friendly_jacek
Originally Posted By: Colt45ws
New drive failures are common, more so where shipping is involved.
I didn't think of this; shipping could be a major variable here, I guess. Some newer drives record acceleration values in their SMART data, though I'm not sure that works while the drive is powered off. That would account for the early-death problems. My recent failures were a year or two into ownership, well beyond infant mortality.
Most newer drives park their heads on a ramp when power is off and only unload the heads onto the media once spin-up is complete. It is being dropped while powered on (as with a laptop) that really crashes the head badly; some drives will pre-emptively park the head when they detect free fall. All manufactured goods have infant mortality: many die when first used, the survivors then stabilize, and eventually they die of old age.
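The shock and head-parking events mentioned above do show up as SMART counters (G-Sense errors, load/unload cycles), which smartctl prints as a table. Here is a sketch that pulls the raw values out of that table format; the numbers below are an invented sample, not output from a real drive:

```python
# Parse shock and load/unload counters from smartctl-style output.
# sample_output is invented example text, not a real drive's data.
sample_output = """\
191 G-Sense_Error_Rate      0x0032 100 100 000 Old_age Always - 12
192 Power-Off_Retract_Count 0x0032 100 100 000 Old_age Always - 85
193 Load_Cycle_Count        0x0032 098 098 000 Old_age Always - 7342
"""

def smart_counters(text):
    """Map SMART attribute names to their raw values."""
    counters = {}
    for line in text.splitlines():
        fields = line.split()
        if len(fields) >= 10 and fields[0].isdigit():
            counters[fields[1]] = int(fields[-1])
    return counters

c = smart_counters(sample_output)
print(c["G-Sense_Error_Rate"], c["Load_Cycle_Count"])  # 12 7342
```

A nonzero G-Sense count on a nearly new drive would be one hint that it took a hit in transit, though as noted, nothing is logged while the drive is unpowered.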
 
The best drives I ever used are the 200GB Seagate 7200.7 and the 250GB Hitachi 7K250. Two of them are at 40,000 hours, and I have a few more with lower hours. The only problems are that they are slow by today's standards and noisy (the Hitachi). BTW, I don't run drives 24/7; I shut the PCs down at night, and Windows spins the drives down after an hour of idle. It's sad that neither Seagate nor Hitachi is reliable now.
 
I found this Google research paper, "Failure Trends in a Large Disk Drive Population." From the conclusion:
Quote:
One of our key findings has been the lack of a consistent pattern of higher failure rates for higher temperature drives or for those drives at higher utilization levels. Such correlations have been repeatedly highlighted by previous studies, but we are unable to confirm them by observing our population. Although our data do not allow us to conclude that there is no such correlation, it provides strong evidence to suggest that other effects may be more prominent in affecting disk drive reliability in the context of a professionally managed data center deployment.
 
Originally Posted By: friendly_jacek
The best drives I ever used are the 200GB Seagate 7200.7 and the 250GB Hitachi 7K250. Two of them are at 40,000 hours, and I have a few more with lower hours. The only problems are that they are slow by today's standards and noisy (the Hitachi). BTW, I don't run drives 24/7; I shut the PCs down at night, and Windows spins the drives down after an hour of idle. It's sad that neither Seagate nor Hitachi is reliable now.
- I have a 250GB Seagate ST250410AS with 36,276 hours on it
- I have a 500GB WD WD5000AAKS with 43,279 hours on it
- I have a 74GB WD WD740GD (Raptor) with 53,291 hours on it

I'm sure I have some even higher-hour drives around here somewhere :)
 
It simply stands to reason that as platters spin faster and data is packed more densely, you will get more failures. Consider denser data storage: if a head crash occurs, the odds of damaging a large number of sectors are higher. Faster movement CAN mean more wear, and likewise with higher heat, components are more likely to fail. On the other hand, things get better as the technology matures. I realize my sample space at home is not scientific, but drives seem quieter and longer-lived in my personal machines. I also change a boatload of drives in data centers every week, and five years seems to be the magic age. I just replaced three 73 GB 10K RPM FC-AL drives in three different storage arrays for a customer. All of them went into service in 2007 according to the serial numbers on the arrays and drives, and the date codes on both point to these being the drives they shipped with from manufacturing.
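The density argument above is easy to put numbers on: the same physical scratch destroys proportionally more data on a denser platter. The densities here are illustrative round figures, not specs for any particular drive:

```python
# Back-of-envelope: sectors destroyed by a fixed-size scratch at two
# areal densities. Density figures are illustrative, not real specs.
def sectors_lost(scratch_area_mm2, density_gbit_per_mm2,
                 sector_bytes=512):
    bits_lost = scratch_area_mm2 * density_gbit_per_mm2 * 1e9
    return bits_lost / (8 * sector_bytes)

old = sectors_lost(0.01, 0.1)   # older, lower-density platter
new = sectors_lost(0.01, 1.0)   # hypothetical 10x denser platter
print(f"{old:.0f} vs {new:.0f} sectors for the same scratch")
```

The relationship is linear, so a 10x denser platter loses 10x the sectors to the identical physical defect.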
 
My current longest-running drive is a WD6400AAKS with 30,581 hours. The AAKS series of WD Blue drives seems to be extremely reliable. The 1TB and 2TB Green drives I've had since January and March have also been humming away perfectly since their initial replacements, after I turned off the 8-second head-park timer that racks up a ridiculous load/unload count.
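The arithmetic behind disabling that timer is worth seeing. Drives are typically rated for around 300,000 load/unload cycles (an assumed spec-sheet figure, not one from the post), and an 8-second idle timer under light, bursty access can park and wake the heads very frequently:

```python
# How fast an aggressive park timer can burn through the rated
# load/unload cycle count. 300,000 cycles is an assumed typical
# rating; one cycle per 30 s is a worst-case access pattern.
rated_cycles = 300_000
cycles_per_hour = 3600 / 30       # park + wake every 30 seconds
hours_to_exhaust = rated_cycles / cycles_per_hour
print(f"{hours_to_exhaust:.0f} hours (~{hours_to_exhaust / 24:.0f} days)")
```

That worst case exhausts the rating in about 2,500 power-on hours, which is why tools like wdidle3 (DOS) and idle3-tools (Linux) are commonly used to lengthen or disable the timer on WD Green drives.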
 