Haven't really kept up with it to see if anything has changed, but back when lead-free solder was starting to become normal/required in some countries, pretty serious issues started popping up with premature failure. Been through that once. I think I mentioned it here.
Some circuit boards can get iffy when they have marginal solder connections. Another possibility would be electromigration, where the electric current literally causes the metal in the traces to move. This is more of an issue in the traces inside an integrated circuit, but some printed circuit boards can suffer from it too.
Electromigration Analysis for PCB and IC Design: electromigration analysis can help you prevent a common cause of failure in ICs and PCBs (resources.pcb.cadence.com).
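The standard back-of-envelope model for electromigration lifetime is Black's equation, MTTF = A · J^(−n) · exp(Ea / kT). A quick sketch shows why current density matters so much; the values of A, n, and Ea here are illustrative placeholders, not real process data:

```python
import math

# Black's equation for electromigration mean time to failure (MTTF):
#   MTTF = A * J^(-n) * exp(Ea / (k*T))
# A, n, and Ea are illustrative placeholder values, not real process data.
K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def mttf_black(j_current_density, temp_k, a=1.0, n=2.0, ea_ev=0.7):
    """Relative MTTF from Black's equation (arbitrary units)."""
    return a * j_current_density ** (-n) * math.exp(ea_ev / (K_BOLTZMANN_EV * temp_k))

# With the common exponent n = 2, doubling the current density at a fixed
# temperature cuts the expected lifetime by a factor of four:
base = mttf_black(1.0e6, 350.0)
hot = mttf_black(2.0e6, 350.0)
print(round(base / hot, 2))  # 4.0
```

This is why a marginally undersized trace or via can fail years early even though it "works" on day one.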
The local repair guy who fixed my big, heavy older TV said heating + cooling, aka off + on, cracks the soldered circuits. He quickly touched up a few and it was fixed. Only $50 for the house call. Living in a smaller town has bennies, like my fair-priced indie vehicle repair guy!!
Typically the components on the board (capacitors) go bad, cold solder joints fail due to vibration and shock, or overcurrent/overvoltage fries some wiring and insulation. Recently, overheating has been killing some of them too (they didn't used to be that power hungry), but many have thermal shutdown built in.
Flash memory has a finite number of write cycles, so it can wear out over time.
Then there are design problems that make them not last as long as initially thought.
Not necessarily. Electrolytics can dry out and pop off like that. Happens all the time. Usually they just self-heat and bulge, but occasionally they'll pop with enough force to blow themselves apart. I lived through repairing piles of 'bad' boards in the late '90s and early 2000s, when lots of board makers switched over to sub-par Chinese caps that needed replacing with better units. Replace the caps with some good United Chemi-Con or Nippon Chemi-Con and send them on down the way.
Those aren't necessarily the board or the board connections going bad, though.
I had a PC at work that stopped functioning. Wouldn't boot or anything. I took off the cover and saw that the shell of an electrolytic capacitor had shot across the case, with the remains of the body still in place. But if it blew, there was probably a more severe problem. Once I had an old VCR that clearly blew a fuse. I found an exact replacement glass fuse at Radio Shack and put it right in, and it promptly blew again when I turned the VCR on. So once a fuse blows, there's probably something that needs fixing beyond just replacing the fuse.
At least with flash memory, most systems will keep track of how much wear there is. They also apply wear leveling (not sure if anyone really wants to get that deep into the weeds) so that more frequently written locations don't get excessive wear. Also, performance will eventually go down once the drive controller needs to go into error correction. I'm not that well versed on it, but my understanding is that brand-new flash with few cycles probably won't need much error correction, yet no drive would work long term without it, because the cells always start failing.
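The wear-leveling idea is simple enough to sketch in a few lines: hide the physical layout behind a logical-to-physical map and always direct the next write to the least-worn free block. This toy class is an illustration of the concept, not how any real flash translation layer (FTL) is implemented:

```python
# Toy static wear-leveling sketch: writes always go to the physical block
# with the fewest erase cycles; a logical->physical map hides the shuffling.
# Illustrative only -- real FTLs are vastly more complex.

class ToyFTL:
    def __init__(self, num_blocks):
        self.erase_counts = [0] * num_blocks
        self.logical_to_physical = {}
        self.free = set(range(num_blocks))

    def write(self, logical_block):
        # Return the previously mapped physical block to the free pool.
        old = self.logical_to_physical.pop(logical_block, None)
        if old is not None:
            self.free.add(old)
        # Pick the least-worn free block so wear spreads evenly.
        target = min(self.free, key=lambda b: self.erase_counts[b])
        self.free.remove(target)
        self.erase_counts[target] += 1
        self.logical_to_physical[logical_block] = target

ftl = ToyFTL(num_blocks=4)
for _ in range(100):      # hammer a single "hot" logical block
    ftl.write(0)
print(ftl.erase_counts)   # wear is spread evenly instead of burning one block
```

Without the remapping, all 100 writes would land on one physical block; with it, each of the four blocks ends up with 25 erases.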
I have heard that Intel/Micron's 3D XPoint memory is supposed to be the next evolution of non-volatile memory. Supposedly faster than flash, although slower than DRAM. But they claim it's like RAM in that it doesn't specifically wear out.
There was an analysis of 3D XPoint and other MRAM-class devices; the point is they are not worth the money. It costs about 10x as much as Samsung's SLC SSDs but is only about 2x as fast. The density is also not that high, so you end up either wasting a DDR slot on it (reducing the amount of RAM you can use) or putting it on PCIe and losing capacity you could have used for NVMe, network bandwidth, or GPU bandwidth.
The scaling couldn't catch up. It "might" be able to if Intel/Micron spent the R&D on it, but that may not have a good return. For the moment, the biggest benefit of this fast storage, the low-hanging fruit, is removing mechanical latency, which is why they are focusing on making it cheaper and cheaper instead of faster and faster. For the "faster and faster" applications, they are focusing on the software side (SPDK/DPDK): removing the OS layers involved in accessing the drive (interrupts, device driver, context switches) and going back to accessing it all inside the application without going through the OS. That alone doubles the throughput. The next thing I heard of is Zoned Namespaces (moving the logical-to-physical mapping into the application instead of inside the SSD), which would remove the SSD controller's bottleneck on IOPS (how many accesses can be done in parallel).
Everything is really analog at the fundamental level, whether NAND, NOR, MRAM, or 3D XPoint. The real trick is how much resource you spend on fine-tuning it, and how small you can make each bit at what cost.
Most likely some power electronics engineer. To some degree it helps to know the analog fundamentals, but a lot of digital circuitry creates bistable nodes.
DRAM is fun. I wonder whose idea it was that you could just let a 1 state decay and then periodically refresh it. I did a few exercises in dynamic logic in grad school, but I don't remember ever using it in ASIC/SoC design.
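The decay-and-refresh idea fits in a toy model: the cell capacitor leaks charge roughly exponentially, and the refresh just has to come around before the stored 1 drops below the sense amplifier's threshold. The time constant and threshold below are made-up illustrative numbers; the 64 ms interval is the ballpark refresh period DRAM parts have long used:

```python
import math

# Toy DRAM cell: the capacitor leaks charge exponentially, and a periodic
# refresh rewrites it to full before it falls below the sense threshold.
# TAU_MS and THRESHOLD are made-up illustrative numbers.
TAU_MS = 200.0        # leakage time constant (illustrative)
THRESHOLD = 0.5       # sense amp reads "1" above this fraction of full charge
REFRESH_MS = 64.0     # typical DRAM refresh interval

def charge_after(ms, v0=1.0):
    """Remaining charge fraction ms milliseconds after a full write."""
    return v0 * math.exp(-ms / TAU_MS)

# Right before each refresh the cell is at its weakest; check it still reads 1.
print(charge_after(REFRESH_MS) > THRESHOLD)      # True: refreshed in time
print(charge_after(5 * REFRESH_MS) > THRESHOLD)  # False: without refresh, the 1 decays away
```

The whole design bet is that the refresh overhead is cheaper than the extra transistors a static cell would need, which is why DRAM wins on density.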
I'll just say the most memorable question I ever got in a job interview was about my favorite logic gate. I guess no wrong answers, but I just said "inverter". I like the simplicity.