newer hard drives less reliable?

Even a cursory glance reveals one major problem with the study: they treat the mere act of replacing a drive, because it is said to be failing or failed, as evidence of drive failure.

I've seen enough SysAdmins who don't understand the difference between a hardware and software error to question this practice. Many an administrator has tried to sweep a mistake under the rug by claiming "hardware failure." (And to be fair to sysadmins, many a field engineer has tried to blame his mistakes on the SA as well!)

A drive returned to the vendor as NTF (no trouble found) certainly deserves some consideration. I simply don't believe treating it as equal to a verified failure is the most accurate way of addressing the issue.

Originally Posted By: NJC
I found this Google research paper - Failure Trends in a Large Disk Drive Population

From the Conclusion:

Quote:
One of our key findings has been the lack of a consistent pattern of higher failure rates for higher temperature drives or for those drives at higher utilization levels. Such correlations have been repeatedly highlighted by previous studies, but we are unable to confirm them by observing our population. Although our data do not allow us to conclude that there is no such correlation, it provides strong evidence to suggest that other effects may be more prominent in affecting disk drive reliability in the context of a professionally managed data center deployment.
 
The old Quantum Fireballs - slow as hades - seem to last a long time. As did the WD400BB/JB drives found in many of the corporate machines at work. If I recall, at least one of them had close to 40K hrs. I can't check any longer, as I no longer have admin access on my workstation.

And the WD 320GB Blue and 640GB/1TB Black drives I have are excellent. The 320GB is past 3 years old, but I don't have the power-on hours.
 
Originally Posted By: friendly_jacek
I still use some of the older 200-250GB Seagate and Hitachi drives and they are highly reliable after several years of hard work.

I researched SSDs a couple of months ago; a pattern of failures emerged for all the MLC models (but not for the expensive SLC ones), so I decided not to pursue it.

I think you are right. The WD Black series is the only safe bet these days, and it carries its premium price.


I've had a 500 GB Hitachi drive in my laptop since May 2009, shortly after they were introduced to the market. It had a 3-year warranty, but I've never had a problem with it.

Pity Hitachi isn't producing disk drives any longer, as WD bought that division about a year ago.

BTW, one way to ensure quality is to buy the drive with the longest warranty.
 
Originally Posted By: javacontour
Even a cursory glance reveals one major problem with the study: they treat the mere act of replacing a drive, because it is said to be failing or failed, as evidence of drive failure.

I've seen enough SysAdmins who don't understand the difference between a hardware and software error to question this practice.

Indeed - from the report:
Quote:
Definition of Failure. Narrowly defining what constitutes a failure is a difficult task in such a large operation. Manufacturers and end-users often see different statistics when computing failures since they use different definitions for it. While drive manufacturers often quote yearly failure rates below 2% [2], user studies have seen rates as high as 6% [9]. Elerath and Shah [7] report between 15-60% of drives considered to have failed at the user site are found to have no defect by the manufacturers upon returning the unit. Hughes et al. [11] observe between 20-30% “no problem found” cases after analyzing failed drives from their study of 3477 disks.

Quote:
Therefore, the most accurate definition we can present of a failure event for our study is: a drive is considered to have failed if it was replaced as part of a repairs procedure.

And also from the Conclusion:
Quote:
Our results confirm the findings of previous smaller population studies that suggest that some of the SMART parameters are well-correlated with higher failure probabilities. We find, for example, that after their first scan error, drives are 39 times more likely to fail within 60 days than drives with no such errors. First errors in reallocations, offline reallocations, and probational counts are also strongly correlated to higher failure probabilities.
 
Back in the day I worked at CompUSA repairing computers, and we would get random pallets where the whole pallet would be bad.

WD 10GBs, mostly.
 
Outside of mobile and server use, how are you guys getting hard drives to fail?

seriously...
like, dude....
how?
 
They fail. Sometimes it's a manufacturing defect, other times they just wear out.

Back in mid-2008 I bought 4 Seagate 7200.11 500GB HDDs and installed them in a RAID 5 on a Highpoint 3510 controller.
Maybe 3 months later I bought another to get my scrounged P4 machine going.
Not too long after that, one of the drives in the RAID failed.

That was the end of the honeymoon. It's funny: the drive that got me to buy Seagates, a 7200.8 250GB, is still running. It has over 50k hours now with no issues. By comparison, none of the .11s are in service.
Err, sorry, I have one stuffed between my tower and the file cabinet it's sitting on, to keep the cabinet from resonating at around 80 Hz when I have music going at war volume.
That's about all they are good for.
They have a 5-year warranty which isn't up until January, but I have long since given up on them. At one point I had them failing as fast as I could fill out RMAs and send them back. When I got the warranty replacements, they would already have reallocated sectors. At that point, I decided the money spent on shipping was better spent on a new set of drives.
I don't think any of them ever went over 10k hours. I think the oldest one I still have is just under 9k. The other 4 are at 3-8k, and half of those are bad.
So that is why I now have 5 Samsung F3R 1TB drives: 4 in the RAID and one as a cold spare.
 
Originally Posted By: yonyon
Outside of mobile and server use, how are you guys getting hard drives to fail?

I've only had a few fail outright, and others with SMART errors. The click of death is a terrible sound when you're trying to access data. As per the study, there doesn't seem to be any predictive failure indicator except the correlation with SMART errors.
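
For what it's worth, here's a rough sketch of how one might keep an eye on those counters. It's not from the paper; it just assumes smartmontools (smartctl) is installed and that the drive reports the usual attribute names (Reallocated_Sector_Ct, Current_Pending_Sector, etc.) - adjust for your hardware.

Code:
#!/usr/bin/env python3
# Rough sketch: flag a drive whose SMART counters are climbing, since the
# study found scan errors and reallocations correlate strongly with failure.
# Assumes smartmontools is installed and the drive reports these attribute
# names; device path and thresholds are examples only.
import re
import subprocess

WATCH = {
    "Reallocated_Sector_Ct",
    "Reallocated_Event_Count",
    "Current_Pending_Sector",
    "Offline_Uncorrectable",
}

def smart_warnings(device="/dev/sda"):
    out = subprocess.run(
        ["smartctl", "-A", device],
        capture_output=True, text=True, check=False,
    ).stdout
    warnings = []
    for line in out.splitlines():
        fields = line.split()
        # Attribute rows start with a numeric ID, then the attribute name;
        # the raw value is the last column of the table.
        if len(fields) >= 10 and fields[0].isdigit() and fields[1] in WATCH:
            m = re.match(r"\d+", fields[9])
            if m and int(m.group()) > 0:
                warnings.append((fields[1], int(m.group())))
    return warnings

if __name__ == "__main__":
    for name, raw in smart_warnings():
        print(f"WARNING: {name} = {raw} - time to think about a replacement")

Anything non-zero in those raw counters is usually when I start shopping for a replacement.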
 
Originally Posted By: yonyon
Outside of mobile and server use, how are you guys getting hard drives to fail?

seriously...
like, dude....
how?


Seriously, dude, you never saw a mechanical part fail?
 
Since the thread was revived after a year, an update:
I sold the RMA replacement drive from Samsung (they sent me a 2TB desktop drive instead of a 640GB laptop drive, idiots), and purchased a WD Scorpio Black 500GB with a 5-year warranty. That's the best drive I could find. Works like a charm.
 
Originally Posted By: friendly_jacek
Originally Posted By: yonyon
Outside of mobile and server use, how are you guys getting hard drives to fail?

seriously...
like, dude....
how?


Seriously, dude, you never saw a mechanical part fail?


Naw, dude, hard drives are made by aliens with alien technology and do not fail for any reason whatsoever. Like, really, man, didn't you know this?
 
And speaking of large data centers, here are pics of some of Google's.


http://www.firstpost.com/tech/images-look-into-the-amazing-world-of-googles-data-centres-494649.html
 
PC or Laptop? Please be clear on that.

I suspect laptop drives fail more due to the abuse inflicted on them, i.e. moving from place to place while the laptop is still running. There is something called shock and vibration!!!
 
Originally Posted By: yonyon
Outside of mobile and server use, how are you guys getting hard drives to fail?


It's easier than you think: Simply put a bunch of really important stuff on it and make sure you DON'T back it up! It should be dead within the week.
 
Originally Posted By: Rand
And we would get random pallets where the whole pallet would be bad


The whole pallet was probably mishandled, especially if they were bulk drives rather than retail packages with lots of foam.

Originally Posted By: yonyon
Outside of mobile and server use, how are you guys getting hard drives to fail?

seriously...
like, dude....
how?


You drop it, overheat it, static-shock it, nuke it with a bad power supply...

or they are just designed wrong, like the Seagate 7200.11 and the IBM/Hitachi Deathstar.

I was at Maxtor before the Seagate merger, and the most common design failures are pushing the limits of a component (usually the read/write head) too far, flying the head too low (i.e. below 10 nm), making the platters too flexible (too thin in order to fit 5 platters, or using aluminum instead of glass without making it thicker), etc.

This usually happens when your designated component supplier, in-house or outside, falls behind and you end up having to fight a price war with a competitor at a handicap. Sort of like automakers cutting corners to stay competitive when their competitors have a different union contract or no union contract.
 
Originally Posted By: yonyon
Outside of mobile and server use, how are you guys getting hard drives to fail?

seriously...
like, dude....
how?


Maxtor drives.
They don't even work.
Put one in your machine, get it up and running, and just when you're happy, it fails. They are one of the worst products ever on the market. The failure rate at my old company with Maxtor drives had to be close to 100% after 1-2 years in service. They were just that bad. I've had drives fail at home too: a WD Raptor 74GB failed, and of course the one Maxtor I had (the only one I ever had at home), but never a Seagate or a Scorpio series (Black).
 
Maxtor drives. I'll show my age. After about 5-6 years in Networking/IT I was working on Sun workstations with 105 MEGAbyte Maxtor drives. They would lock up. We could get them going again by dropping the workstation 1" on the table.

Then we would call in to get a replacement drive under our service contract. I think the workstations were about two or three years old when the drives would start to flake out.

Based on that anecdotal evidence, I'd say they are more reliable today. It's just that there are so many more drives spinning out there that it seems like we are replacing more drives.
 
Originally Posted By: javacontour
Maxtor drives. I'll show my age. After about 5-6 years in Networking/IT I was working on Sun workstations with 105 MEGAbyte Maxtor drives. They would lock up. We could get them going again by dropping the workstation 1" on the table.

Then we would call in to get a replacement drive under our service contract. I think the workstations were about two or three years old when the drives would start to flake out.

Based on that anecdotal evidence, I'd say they are more reliable today. It's just that there are so many more drives spinning out there that it seems like we are replacing more drives.

That's hilarious. Those were probably the old 5.25" drives; I've only ever seen one of those.
 