SSD vs SAS???

I have an HP workstation with 4 x 15,000 RPM SAS drives. Boot-up time is not important. Would I really notice the speed difference with an SSD?
 
There are also other great benefits, like not having a computer full of annoying 15K SAS drives, and roughly 30-60 W lower power consumption depending on which 15K SAS drives we're talking about specifically.
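
As a very rough sketch of what that power difference can add up to over a year (the wattages and electricity price below are assumptions for illustration, not measurements of any particular drives):

```python
# Back-of-the-envelope power savings from replacing 15K SAS drives with one SSD.
# All figures are illustrative assumptions, not measurements.

SAS_DRIVES = 4
WATTS_PER_SAS = 10          # assumed ~10 W per 15K SAS drive under load
WATTS_SSD = 3               # assumed ~3 W for a single SATA SSD
HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.15        # assumed electricity price, USD

saved_watts = SAS_DRIVES * WATTS_PER_SAS - WATTS_SSD
saved_kwh = saved_watts * HOURS_PER_YEAR / 1000
print(f"~{saved_watts} W saved, ~{saved_kwh:.0f} kWh/year, "
      f"~${saved_kwh * PRICE_PER_KWH:.0f}/year")
```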
 
SAS is an interface, basically a serial version of SCSI in the same way SATA is the serial version of IDE, and there are SAS SSDs as well as SAS hard drives. SAS is gradually declining in popularity, and the future is NVMe SSDs. Removing the mechanical latency by going from a mechanical drive to an SSD is a huge boost in performance; it is totally worth it.

SAS HDDs do still have a good use case if you need write endurance but cannot afford enterprise-grade SSDs.
 
Recently purchased a NUC with an NVMe primary drive and a SATA SSD secondary; IIRC the SATA SSD's transfer rate is about 1/6 that of the NVMe drive.

To the OP: yes, an SSD would make the machine faster (and less noisy).
 
15K SAS (or even its predecessor, UW SCSI) is about the best you can do with spinners, both for the raw read/write speed of the drives themselves and for an interface that can handle that speed.

SCSI-based protocols also have inherently higher data integrity, by orders of magnitude, than ATA-based protocols. That's a huge benefit for data centers writing terabytes of data every day, but your average home user won't write enough data in their lifetime for "flipped bits" at roughly 1 out of every 10^6 written versus 1 out of every 10^12 (I think those are roughly the numbers for the current generation of SATA vs. SAS) to actually matter.
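
If you want to play with that math yourself, here's a rough sketch; the error rates and write volumes below are just placeholder parameters, since the real figures vary by drive generation and are usually quoted per bits read:

```python
# Rough model: expected unrecoverable bit errors for a given volume of data.
# The rates here are illustrative assumptions (UBER is commonly quoted per bits
# read, somewhere in the 1e-14 to 1e-17 range); plug in whatever figures apply.

def expected_errors(terabytes: float, errors_per_bit: float) -> float:
    bits = terabytes * 1e12 * 8   # TB -> bits
    return bits * errors_per_bit

HOME_USER_TB_PER_YEAR = 5        # assumed
DATA_CENTER_TB_PER_DAY = 10      # assumed

cases = [
    ("home user, 1 year, consumer-class rate", HOME_USER_TB_PER_YEAR, 1e-14),
    ("home user, 1 year, enterprise-class rate", HOME_USER_TB_PER_YEAR, 1e-16),
    ("data center, 1 year, consumer-class rate", DATA_CENTER_TB_PER_DAY * 365, 1e-14),
    ("data center, 1 year, enterprise-class rate", DATA_CENTER_TB_PER_DAY * 365, 1e-16),
]
for label, tb, rate in cases:
    print(f"{label}: ~{expected_errors(tb, rate):.3g} expected errors")
```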

If you're so inclined, SAS SSDs do exist. They're expensive, but when I looked they're not as crazy as I expected. Amazon has 1.6 TB Dell PowerEdge SAS SSDs for $509, compared with $250 for a 2 TB Samsung 870 EVO. That's comparing a high-end consumer SSD to a full-on enterprise-grade SAS drive. A 1.9 TB Intel enterprise SSD is $400, so that difference isn't drastic either.

BTW, I actually don't mind 15K drives and I really can't hear them running. 10K drives can be a different story, and a lot of them whine in such a way that I can't stand to be in the same room with them. OTOH, I threw together a computer last year where I wanted a fast drive and was having trouble getting a suitable SSD, so I threw a WD VelociRaptor in it. These are 10K consumer-class drives; they were targeted, IIRC, at gamers and other folks who wanted fast reads and writes back in the days before SSDs were really affordable or practical. I came into a bunch of them, and the ones I have are a 2.5" form factor mounted in a big heatsink (they're not laptop drives: too thick, too power-hungry, and too hot). It was actually a surprisingly peppy drive.
 
SAS SSD development has pretty much halted since enterprise NVMe SSDs came along. The main reasons are that NVMe performance is much better, latency is much lower, and with the right development kit (SPDK/DPDK) you can get near-0% CPU utilization handling drive traffic, versus a lot of device driver and OS overhead to handle SAS. Most future SAS drive development is joint work between competitors aimed at upgrading old server arrays rather than at new servers or cloud customers. SAS will go away in a few years for SSDs but may stay around for enterprise mechanical drives.

The future is to move more of that overhead out of the OS and into the server's application, skipping the OS layers in between. One benchmark has shown that if you move all the storage-related "drivers" out of the OS kernel and into the application, NVMe throughput can almost double due to the latency reduction.
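
Here's a toy model of why trimming per-I/O software overhead matters so much at NVMe latencies; all of the latency numbers are assumed for illustration, not taken from that benchmark or from any real stack:

```python
# Toy model of per-I/O latency: device time + software (driver/kernel) overhead.
# Numbers are illustrative assumptions, not measurements of any real system.

DEVICE_LATENCY_US = 10.0     # assumed NVMe flash read latency
KERNEL_STACK_US = 8.0        # assumed per-I/O cost of a traditional kernel block stack
USERSPACE_STACK_US = 1.0     # assumed per-I/O cost of a polled userspace driver (SPDK-style)

def iops(per_io_us: float) -> float:
    # single queue, one outstanding I/O at a time
    return 1_000_000 / per_io_us

kernel_iops = iops(DEVICE_LATENCY_US + KERNEL_STACK_US)
userspace_iops = iops(DEVICE_LATENCY_US + USERSPACE_STACK_US)
print(f"kernel path:    ~{kernel_iops:,.0f} IOPS per queue")
print(f"userspace path: ~{userspace_iops:,.0f} IOPS per queue "
      f"({userspace_iops / kernel_iops:.2f}x)")
# With a ~2 ms 15K SAS seek, the same software overhead would be lost in the noise,
# which is why kernel bypass only started to pay off with flash and NVMe.
```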
 

I was scared to look at the prices of SAS vs SATA SSDs for our server upgrades but they ended up being about the same price for slightly less storage.

 

I love my PCIe NVMe drives. I also have older AHCI PCIe drives in use (the boot drives in my Mac Pro).

With that said, do you know how the data integrity of NVMe over PCIe compares to SCSI-based protocols?
 
Well, if it's good enough for enterprise SSDs to replace SAS, it should be at least as good as SCSI-based protocols. PCIe itself has CRC, and enterprise-grade drives have end-to-end data protection (they can detect if something in the middle has been corrupted).
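
For a feel of what a CRC buys you, here's a small sketch using Python's generic zlib CRC-32 purely as a stand-in; it is not the actual CRC or end-to-end protection format that PCIe or NVMe use:

```python
import zlib

# Illustration of CRC-based corruption detection. zlib.crc32 is a generic CRC-32,
# used here only as a stand-in; PCIe link-level CRCs and NVMe end-to-end
# protection use their own polynomials and on-the-wire formats.

payload = bytearray(b"512 bytes of user data..." * 20)
stored_crc = zlib.crc32(payload)        # checksum computed when the data was written

payload[100] ^= 0x01                    # simulate a single flipped bit "in the middle"

if zlib.crc32(payload) != stored_crc:
    print("corruption detected: CRC mismatch")
else:
    print("data passed the check")
```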
 
Enterprise SSDs can be purchased in "read mostly," "mixed," or "write intensive" configurations.

I had to purchase 19 TB of write-intensive SSDs for collecting event information from 15,000 PCs; I'm thinking they were 800 GB each and, IIRC, well above $2K USD apiece. This was a few years ago.

I checked the remaining life and they were still above 92%.
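
For anyone curious how that kind of remaining-life figure relates to the endurance rating, here's the rough math; the DWPD rating and warranty period below are assumptions for illustration, not the specs of the actual drives I bought:

```python
# Rough endurance math for a write-intensive enterprise SSD.
# The DWPD rating and warranty period are assumptions; check the drive's datasheet.

DRIVE_CAPACITY_TB = 0.8       # 800 GB drives, as above
DWPD = 3                      # assumed "write intensive" rating: drive writes per day
WARRANTY_YEARS = 5            # assumed rating period

tbw = DRIVE_CAPACITY_TB * DWPD * 365 * WARRANTY_YEARS   # rated terabytes written
print(f"rated endurance per drive: ~{tbw:.0f} TBW")

# If SMART reports 92% life remaining, roughly 8% of that rating has been consumed.
consumed_tb = tbw * 0.08
print(f"~{consumed_tb:.0f} TB written so far on that drive")
```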
 
Yup, they are built differently depending on what you want, and priced accordingly as well.

The general rule of thumb is that you can double your endurance by reserving an extra 7% of capacity as over-provisioning; double performance by using lower-density NAND at roughly 20% higher cost (with reduced max capacity); or slow down erase, program, and read operations to make each block last longer. Basically you can play with those knobs and get the result you want. If all else fails, you can always go from TLC back to MLC or SLC and take 1/3 or 2/3 less capacity per dollar.
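
Here's that rule of thumb written out as a toy model, taking the numbers above at face value rather than as vendor specifications:

```python
# Toy model of the trade-offs described above, taken at face value:
#  - each extra ~7% of capacity held back as over-provisioning roughly doubles endurance
#  - dropping from TLC (3 bits/cell) to MLC (2) or SLC (1) costs usable capacity
# These are rough rules of thumb, not datasheet guarantees.

def endurance_multiplier(extra_overprovision_pct: float) -> float:
    return 2 ** (extra_overprovision_pct / 7.0)

for op in (0, 7, 14, 28):
    print(f"+{op}% over-provisioning -> ~{endurance_multiplier(op):.1f}x endurance")

RAW_TLC_CAPACITY_TB = 3.84    # assumed usable TLC capacity from a given NAND die budget
for name, bits_per_cell in (("TLC", 3), ("MLC", 2), ("SLC", 1)):
    usable = RAW_TLC_CAPACITY_TB * bits_per_cell / 3
    print(f"{name}: ~{usable:.2f} TB from the same NAND dies")
```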
 