SSD Use During YouTube or Streaming

Yeah, I think the ESD emphasis stems from the earlier years of computing, when components weren't so durable. When I first started working on computers, at about 8 years old, I had no idea about it, of course. By the time I was 10 (1990) I had read about how static could damage semiconductor components and how you were supposed to wear an anti-static wrist strap, or at least keep yourself grounded. Of course, SIMMs and DIMMs weren't a thing then; even when my dad bought a TI 486SX/25 laptop, it took RAM with pins that you had to push into the board.

That concern and cautionary principle has carried forward to today, where the precautions are still advised despite the increased durability and resiliency built into modern components.
Reading this got me thinking: I wish I could show you what I was working on in the mid-'70s. I don't think they will let me post it, though...

No, not girls...guns...
You just brought me back to my youth...thanks...weird, huh???
Nice write-up.
 

I started thinking about it, and one of the big issues with ESD isn't necessarily static destroying a powered-down piece of electronics, but something called latch-up in powered semiconductors. The CMOS process actually creates a parasitic structure that can behave like a positive-feedback amplifier. All it takes is a shock that biases this feedback loop to the point where the chip will just overheat and destroy itself. That can come from cosmic radiation (a problem for electronics going into space or even high-altitude aircraft) or an ESD event. A friend of mine got a patent on a method to detect latch-up and shut down before the electronics destroys itself. However, his device didn't use traditional semiconductor packaging, and it had direct contact with human skin.
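If you want the one-line version of why the feedback runs away: in the standard parasitic-thyristor (PNPN) picture of CMOS latch-up, the cross-coupled parasitic NPN and PNP bipolars sustain the latched state once their loop gain reaches unity. This is just the textbook criterion, nothing to do with my friend's patent:

\[
\beta_{\mathrm{npn}} \cdot \beta_{\mathrm{pnp}} \ge 1
\]

Once a transient (an ESD zap, a cosmic-ray strike) injects enough current to turn one of the parasitic bipolars on, and the product of their current gains is at least one, the loop holds itself on and keeps drawing current until power is cut or the chip cooks itself.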

 

OK, now that's pretty cool!
 
Not exactly cool. More like really, really hot. Seriously - those devices wouldn't just destroy themselves; there was a strong likelihood of burn injuries if there wasn't something to stop it.

[Attached image: EDSlatchup.jpg]


Sorry - I added stuff after my initial post.
Well, you piqued my interest, and I found this paper from Texas Instruments on the subject:
 
Home users typically don't write much. Program (writing 0s to the flash) / erase (resetting cells back to all 1s) cycles are what wear out an SSD or any other flash drive or card. Unless you are so low on memory that the system has to swap constantly between RAM and the SSD, you won't write much at all browsing the web or watching videos.

This is why the new Macs with Apple's own chips can wear out their SSDs fast: when they're short on RAM, they swap constantly between RAM and SSD to make up for it.
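Just to put rough numbers on the wear argument, here's a back-of-the-envelope sketch. Every figure in it (capacity, rated P/E cycles, write amplification, daily swap volume) is an assumed example, not a measurement from any real Mac or drive:

```python
# Rough SSD endurance estimate from P/E cycles and daily writes.
# All inputs are illustrative assumptions, not measured values.

capacity_tb = 0.5          # 512 GB-class drive (assumed)
pe_cycles = 600            # rated program/erase cycles for consumer TLC (assumed)
write_amplification = 2.0  # internal NAND writes per host write (assumed)

daily_host_writes_gb = 100  # heavy-swapping scenario (assumed)

# Host data the NAND can absorb before reaching its rated wear
rated_host_writes_tb = capacity_tb * pe_cycles / write_amplification

days = rated_host_writes_tb * 1000 / daily_host_writes_gb
print(f"~{rated_host_writes_tb:.0f} TB of host writes, about {days / 365:.1f} years at this rate")
```

The takeaway from the arithmetic is that daily write volume dominates: a machine that just browses and streams writes a few GB a day and never gets near the rated wear, while constant heavy swapping can chew through it in a handful of years.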
 
All this talk about SSD lifecycles...has anyone had one fail?
I wore one out after about 6 years. It was a heavily used drive for code builds, though not as heavily used as a swap partition. The drive was a 50 GB OCZ Vertex 2, so it wasn't the best either, but none of my other SSDs has worn out before the four-year mark.

If they did, it wasn't flash NAND wear related. They usually fail from something else, like the controller or another component, but as long as it's a good brand like Samsung, Crucial, or Western Digital, who have in-house controllers and NAND, I'd sleep well at night.

If it's a brand that buys varying grades of components from varying vendors in varying combinations, no thanks.
Pure Storage said the #1 reason SSDs fail prematurely is firmware bugs.

There's a bug in some older HPE servers that wears out a specialty board's storage memory. After a lot of them failed, they released a patch that changed the wear-leveling algorithm. In the last two years they've gotten pissy about applying all recommended updates before opening a support case.

I had a SATA SSD stop dead after it hit a certain number of run hours; it was a known issue where the drive would just stop at that count. A patch brought it back to life.

For the most part, the people who made sucky controllers went out of business. The hobby enthusiasts were very vocal; the enterprise was very slow to adopt, so it benefited by staying out of the market for a while.
Enterprise wear patterns and performance requirements are very different from consumer (including workstation) ones. For one, enterprise drives typically have many parallel IOs in flight, and customers want predictable performance rather than a drive that slows down too much. Enterprise drives also don't write data to the NAND immediately: they may hold it in onboard RAM, acknowledge to the server that the write is done, then push it to the NAND when there's time, or immediately if power is lost (that's why they carry capacitors holding roughly 20 ms of energy to flush everything on power loss; those are pretty expensive).

Consumer OEM drives are much cheaper. They obviously don't have the supercap, and the recent ones don't even have DRAM for the address lookup table (about 1 GB of DRAM per 8 TB of NAND, which can get expensive fast), and they are not designed to be written continuously or to handle a lot of IOs per second. Most desktops or workstations have 1-16 workloads going at a time, not hundreds of threads or customers hitting the drive simultaneously the way a server does.
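For a sense of where that DRAM figure comes from, here's a tiny sketch of the usual estimate. The 4-byte entry size and the two mapping granularities are my own illustrative assumptions; real controllers vary, which is how the ratio can land anywhere from about 1 GB per TB down to 1 GB per 8 TB:

```python
# Rough size of an SSD's logical-to-physical (FTL) mapping table: the table a
# DRAM-less drive keeps mostly in NAND or host memory instead of onboard DRAM.
# Entry size and mapping granularities below are illustrative assumptions.

def mapping_table_bytes(nand_bytes: int, granularity: int, entry_bytes: int = 4) -> int:
    """Bytes of table needed to map every `granularity`-sized chunk of the NAND."""
    return (nand_bytes // granularity) * entry_bytes

TB = 10**12
for granularity in (4 * 1024, 32 * 1024):
    table = mapping_table_bytes(8 * TB, granularity)
    print(f"8 TB NAND, {granularity // 1024} KiB mapping: ~{table / 10**9:.1f} GB of table")
```

Coarser mapping chunks shrink the table (and the DRAM bill), roughly speaking at the cost of more internal write amplification on small random writes, which is part of what separates the cheap DRAM-less parts from the enterprise ones.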

Companies that make sucky controllers are still around, but most drive makers would rather buy a mature design and slap a label on it, so they go for mid-size, good-enough controllers from a reputable company. Sometimes you can save money by pairing a good controller with cheap NAND, because good controllers can handle it better (error-correction algorithms, an internal RAID/XOR engine, etc.). So most reputable brands use reasonable stuff like Phison or Marvell, or they are NAND companies building their own controllers, since they have the economy of scale to do it. I don't think anyone would pick JMicron as a first choice unless they're selling into some cheap embedded project for fun, and these days those use eMMC instead because it's even cheaper.

This post made me chuckle. I remember years ago, when I was building my first computer, people at my work showed me a stick of RAM and told me the precautions. We were holding the RAM like baby Jesus fresh out of the womb, with anti-static wrist straps and whatnot. Then I went to a computer show and bought some RAM from the purveyors there. They were not so careful. LOL. They threw the sticks around like candy bars.
Most chips, once soldered onto a PCB, have significantly better ESD tolerance thanks to the amount of metal that can "hold" the discharge. I wouldn't say they can't be ESD-damaged anymore, but they are much better off than loose parts. For home users, the chance of ESD damage before the RAM or other product is obsolete is small, but for an OEM even a tiny 0.1% increase in failures would still mean a huge profit loss, and for engineering companies and R&D departments the wasted investigation time and delays can be massive (say you don't know whether your design is good and you burn 20 engineering days on it, only to find out someone zapped it without wearing protection). For a home user, what's a 0.1% chance of losing another $50? Five cents of expected loss? Nothing to worry about.
 
I'm not really that well versed in PCBs, but I do remember electromigration being discussed in a digital design class. We were doing layout projects, and the discussion was about how wide traces had to be to reduce the chances of electromigration. Doubling the width would often reduce it to almost nothing, but that of course carried an area penalty.
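The rule of thumb behind that comes from Black's equation for electromigration lifetime; this is just the standard textbook form as I remember it, not anything specific to that class:

\[
\mathrm{MTTF} = \frac{A}{J^{\,n}} \exp\!\left(\frac{E_a}{kT}\right), \qquad J = \frac{I}{w\,t}
\]

Here J is the current density through a trace of width w and thickness t, E_a is an activation energy, T is temperature, and n is typically around 2. Doubling the width halves J, and with n ≈ 2 that roughly quadruples the expected lifetime, which lines up with the "almost nothing" effect, at the cost of the area penalty.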

But PCBs can fail in really interesting ways. The most obvious would be poor-quality solder joints breaking off, but this paper talks about ionic migration, electromigration, stress migration, and thermal migration.


I guess they call some forms of ionic migration "dendrites" since they look like the dendrites in neurons. They increase the resistance of the trace before it goes open, but the worst case is that they short to the next trace.

[Attached figure from the paper]
Interesting - with the adoption of Pb-free solder to meet EU/Chinese RoHS regulations (more of a silver/tin/bismuth alloy with a higher melting point), chip packaging moving to BGA and other micro-soldered formats, and new silicon recipes being doped with rare-earth elements and exotic metals, are phase changes and the relative nobility of metals causing issues for PCB design?

Lead-free solder has been a bane in electronics - Apple’s had many issues with GPUs, as did Sony and Microsoft with the PS3/4 and Xbox 360.
 

To be fair, most of the GPU issues haven't actually been an Apple issue per se, but rather (IIRC) issues with the attachment of the GPU die to its substrate.

The notable ones are the GeForce 8600M GT in the MacBook Pro 3,1/4,1 and the Radeon 6xxx series in the early/late 2011 MBPs (MacBook Pro 8,2 and 8,3).

Nvidia acknowledged the 8600M problem and issued a revised chip that permanently fixed the issue. Apple did repair a lot of boards with the revised chip, and there are people now who do it aftermarket. Once installed, these are permanent fixes. AMD never issued a fix for the GPUs in the 2011s, and they will all eventually fail. I have a 2011 17" (the last 17" model, and the only quad-core 17" made) with a dead one now. The discrete GPU can be permanently disabled so the computer runs only on the integrated GPU, although this has its own issues (no external video). The workaround is a bit involved: it requires both electrically disabling the chip and flashing the firmware so the computer doesn't look for it.

With that said, Apple doesn't help any of this by running their computers on the bleeding edge of the thermal envelope. I had an MBP 4,1 I used for a while, and I managed to stave off issues by using a fan-control program to aggressively ramp the fans and intentionally keep temperatures lower than Apple normally would (the default is to only start ramping the fans above ~90ºC to keep the computer quiet; I set them so the fans were at full blast at 90ºC and started ramping up from idle at 70ºC - yes, it was loud, but it was also cool).
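For anyone curious what that custom curve actually looked like, here's a rough sketch of the mapping in Python. It's only illustrative pseudologic: real fan-control utilities set RPM targets through their own interfaces, and the 70/90ºC breakpoints and RPM range are just the numbers from my setup:

```python
# Illustrative fan curve: idle below 70 C, linear ramp to full speed at 90 C.
# Temperature breakpoints and RPM limits are assumptions for the example.

IDLE_RPM = 2000
MAX_RPM = 6000
RAMP_START_C = 70.0
RAMP_END_C = 90.0

def target_fan_rpm(temp_c: float) -> int:
    """Map a CPU/GPU temperature to a fan-speed target."""
    if temp_c <= RAMP_START_C:
        return IDLE_RPM
    if temp_c >= RAMP_END_C:
        return MAX_RPM
    # Linear interpolation between idle and max across the ramp window
    fraction = (temp_c - RAMP_START_C) / (RAMP_END_C - RAMP_START_C)
    return int(IDLE_RPM + fraction * (MAX_RPM - IDLE_RPM))

for temp in (55, 72, 80, 88, 95):
    print(f"{temp} C -> {target_fan_rpm(temp)} RPM")
```

Linear is crude, but the whole point was just to start moving air long before the ~90ºC point where the default behavior finally wakes up.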

But yes, at the end of the day, lead-free solder is behind a lot of this.

BTW, on temperatures, it's unreal that the M1 Pro I'm using is sitting at 33ºC without the fan running at all...
 
Sounds a lot like Nvidia Optimus or AMD's Dual Graphics - the graphics driver chooses which GPU to use depending on which DirectX/OpenGL or GPU-compute calls the program makes. Nvidia pushed Optimus with Dell/HP/Lenovo business laptops more than with consumer/gaming ones (which pair the Intel/AMD iGPU with a GeForce or Radeon dGPU). The ThinkPads we got with Optimus worked OK, if a little cranky, and yes - you had to be docked to use external displays, since the Intel iGPU was hard-wired to the internal display.

Apple just has a sick obsession with form over function: no expandable memory or storage in the newer Macs. I'm an Apple person for my personal needs, but they are fan-phobes and thermal-management phobes. Samsung proved with the Galaxy S10 and beyond that you can have heat-pipe/vapor-chamber cooling in a phone as thin as an iPhone, even if it might get uncomfortably warm. The iPhone uses a stacked PCB with no way to cool the SoC, hence the throttling.

The PC makers are starting to take the Apple approach - the Lenovo ThinkPad X1 series, the Dell XPS series, the Surface line, HP's new ZBook tablets, and Razer's Blade have all been pushing thin designs and probably pushing the thermals a bit. Nvidia is encouraging OEMs to use Max-Q designs for their gaming machines. There's a compromise, though - the RAM or SSD is soldered on.

The M1 SoCs are beasts - I was hoping the new MacBook Pro 14/16 wouldn't be as expensive as they are. I would be fine with an M1 Air (I don't do any video/photo work or UI/UX design in Adobe CC, just general use and some light coding), but the passive cooling is a turn-off. I'll soldier on with my late-2013 rMBP until the next Mac event, in case the Air gets some of the Pro's ports, better cooling, and an M2, or Apple releases a bridge model between the Air and the new Pros with the M1 Pro.
 