Intel announces Optane DC Persistent Memory DIMMs

https://www.techspot.com/news/79483-intel-announces-optane-dc-persistent-memory-dimms.html


Quote
Why it matters: Intel's long-awaited Optane DC Persistent Memory DIMMs have arrived, and they could greatly transform the way servers and data centers handle data sets. The new DIMMs will see Intel attempting to bridge the price and performance gap between DRAM and NAND, and the new class of memory opens up several unique use cases.
During Intel's Data-Centric Innovation Day, the company announced the highly anticipated Optane DC Persistent Memory DIMMs. While the new Cascade Lake Xeons may be the star of the show, Optane is a key product in propelling the company toward its data-centric future, for which it has coined a new slogan: "Moving, Storing and Processing Data."

The new Optane DIMMs fit under the "store" designation as Intel is looking to leverage them to store more data affordably, while also potentially disrupting the memory market. Intel's Optane DIMMs will use 3D XPoint memory, a type of non-volatile memory that is something of an amalgamation of NAND and DRAM.

A key aspect is that 3D XPoint retains data after power loss, which means it can be addressed as both memory and storage and poises it for many new use cases.

The new Optane DIMMs will populate a standard DDR4 slot but offer much denser capacities: 128GB, 256GB, and 512GB modules will be offered. That's a significant increase over the current maximum of 128GB for a DDR4 module. Intel is positioning the DIMMs to bridge the price and performance gap between DRAM and NAND; pricing details aren't currently known, but the DIMMs are expected to be priced much lower than current DDR4 DRAM.

The DIMMs will also come with an SSD-like controller as well as a proprietary memory controller designed by Intel. Optane Memory DIMMs can be used alongside traditional DRAM, although they will be managed much differently due to latency, bandwidth and protocol concerns. Hence the need for Cascade Lake's reworked memory controller.
Intel's new class of memory should allow servers to deploy much larger amounts of memory -- into the terabytes -- much closer to the CPU, in addition to traditional RAM. In theory, Intel's Optane DIMMs should greatly reduce the number of trips to storage systems. Because the memory is non-volatile, data loss from memory (should a server need to be rebooted or go down) also becomes almost a non-issue.
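To make the "addressed as both memory and storage" idea concrete, here is a minimal sketch of the programming model, assuming Python and using an ordinary memory-mapped file as a stand-in for a DAX-mapped persistent memory region (the file path is hypothetical, and real persistent memory also needs cache flushes and fences, which `flush()` only approximates):

```python
import mmap
import os
import struct

# Sketch: persistent memory is typically exposed to applications as a
# memory-mapped file (on Linux, via a DAX-aware filesystem). Here an
# ordinary file stands in for an Optane DIMM region; flush() plays the
# role of the cache-flush needed to make stores durable.
PATH = "pmem_demo.bin"   # hypothetical path, not a real device node

with open(PATH, "wb") as f:
    f.truncate(4096)     # reserve one page of "persistent" space

with open(PATH, "r+b") as f:
    pm = mmap.mmap(f.fileno(), 4096)
    pm[0:8] = struct.pack("<Q", 42)   # store a value with a plain memory write
    pm.flush()                        # ensure it reaches stable media
    pm.close()

# After a "reboot", the data is still there: reopen and read it back.
with open(PATH, "rb") as f:
    value = struct.unpack("<Q", f.read(8))[0]
print(value)  # 42
os.remove(PATH)
```

The point is that the application stores data with ordinary memory writes, yet the data survives a restart without a round trip through a block-storage stack.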
 
Wyr, not much, as the wording suggests it's all intended for the server/datacenter market, not end-user PCs. Which means Amazon will probably be buying a ton of them for AWS, or looking to buy Intel altogether if this is as big as it looks to be on the surface.
 
On the surface, that sounds like an awfully large bite for Amazon to swallow?

As for the previous question, thank you. :)

I usually buy a big-box computer, several generations past the price curve. So, as you said, this will not affect me much, or at all.
 
I can see this used in products like our Oracle Exadata servers, where we use PCIe flash cards in the storage cells. Applications like data warehousing, with large data sets, can keep more of the data set in storage that is faster than disk. It could also be used for OLTP or similar workloads where speed is crucial.

As I used to teach years ago, the difference between spinning disk and RAM is like the difference between waiting a minute and waiting 4-6 months for something to happen. Since humans don't typically grasp the difference between nanoseconds and milliseconds, one can scale the example to use values humans understand: nanoseconds become minutes and milliseconds become months.

The more you can put into RAM or other technologies that are similar, the less time you spend waiting for data to come from or go to the spinning disk.
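The scaling in the analogy above can be checked with a little arithmetic. This sketch assumes a ~100 ns DRAM access mapped to one minute of human time; the disk and flash latencies are ballpark figures, not measurements:

```python
# Scale the analogy: let one DRAM access (~100 ns) equal one minute of
# human time. Then any real latency maps to human time by the same factor.
DRAM_NS = 100                    # assumed DRAM access latency
SCALE = 60 / (DRAM_NS * 1e-9)    # real seconds -> "human" seconds

def human_wait(latency_s):
    """Return the scaled wait in days for a real latency in seconds."""
    return latency_s * SCALE / 86_400  # 86,400 seconds per day

disk_seek = 20e-3    # ~20 ms for a full seek plus rotation on spinning disk
nvme_read = 100e-6   # ~100 us NVMe flash read, for comparison

print(f"disk:  {human_wait(disk_seek) / 30:.1f} months")  # roughly 4-5 months
print(f"flash: {human_wait(nvme_read):.1f} days")         # under a day
```

With those assumed numbers, a 20 ms disk access scales to about four and a half months against DRAM's one minute, which is where the "4-6 months" figure comes from.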

I can also see it used in NAS appliances, the way battery-backed memory on a RAID controller is used: the NAS head can tell the client the write is confirmed, while it takes its time actually writing to the disk back end until it's a good time to do so.

If the DIMMs (or other technology like them) can keep data secure through a power outage -- and I'd probably want some sort of RAID technology so a single point of failure doesn't risk data loss -- more can be cached for both reading and writing, for whatever purposes the architect of the system might desire.
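A toy sketch of that acknowledge-then-flush pattern, in Python, with all names hypothetical (a plain dict stands in for the slow disk back end, and an in-memory OrderedDict stands in for the battery-backed or persistent write buffer):

```python
from collections import OrderedDict

class WriteBackCache:
    """Toy sketch of acknowledge-then-flush caching: writes land in a
    fast durable buffer (the role an NVDIMM or battery-backed cache
    plays) and are acknowledged immediately; the slow backend is
    updated later, at a convenient time."""

    def __init__(self, backend):
        self.backend = backend       # slow store, e.g. spinning disk
        self.dirty = OrderedDict()   # pending writes, in arrival order

    def write(self, key, value):
        self.dirty[key] = value      # durable in the fast tier...
        return "ack"                 # ...so we can confirm right away

    def read(self, key):
        # Serve from the dirty buffer first so reads see unflushed writes.
        return self.dirty.get(key, self.backend.get(key))

    def flush(self):
        # "A good time to do so": drain pending writes to the backend.
        while self.dirty:
            key, value = self.dirty.popitem(last=False)
            self.backend[key] = value

disk = {}
cache = WriteBackCache(disk)
assert cache.write("blk0", b"data") == "ack"   # client sees confirmation
assert disk == {}                              # nothing on disk yet
cache.flush()
assert disk["blk0"] == b"data"                 # now it is persisted
```

In a real appliance the dirty buffer would live in NVDIMM or battery-backed RAM, so the acknowledgment stays safe even if power is lost before the flush runs.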

This provides those trying to tune systems for maximum smoke more options for better performance.
 