Test Procedures

Our usual SSD test procedure was not designed to handle multi-device tiered storage, so some changes had to be made for this review; as a result, much of the data presented here is not directly comparable to our previous reviews. The major changes are:

  • All test configurations were running the latest OS patches and CPU microcode updates for the Spectre and Meltdown vulnerabilities. Regular SSD reviews with post-patch test results will begin later this month.
  • Our synthetic benchmarks are usually run under Linux, but Intel's caching software is Windows-only so the usual fio scripts were adapted to run on Windows. The settings for data transfer sizes and test duration are unchanged, but the difference in storage APIs between operating systems means that the results shown here are lower across the board, especially for the low queue depth random I/O that is the greatest strength of Optane SSDs.
  • We only have equipment to measure the power consumption of one drive at a time. Rather than move that equipment out of the primary SSD testbed and use it to measure either the cache drive or the hard drive, we kept it busy testing drives for future reviews. The SYSmark 2014 SE test results include the usual whole-system energy usage measurements.
  • Optane SSDs and hard drives are not any slower when full than when empty, because they do not have the complicated wear leveling and block erase mechanisms that flash-based SSDs require, nor any equivalent to SLC write caches. The AnandTech Storage Bench (ATSB) trace-based tests in this review omit the usual full-drive test runs. Instead, caching configurations were tested by running each test three times in a row to check for effects of warming up the cache.
  • Our AnandTech Storage Bench "The Destroyer" test takes about 12 hours to run on a good SATA SSD and about 7 hours on the best PCIe SSDs. On a mechanical hard drive, it takes more like 24 hours. Results for The Destroyer will probably not be ready this week. In the meantime, the ATSB Heavy test is sufficiently large to illustrate how SSD caching performs for workloads that do not fit into the cache.
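The three-runs-in-a-row caching methodology above can be sketched with a toy model. Everything below (the class, the latency figures, the promote-on-first-touch policy) is an illustrative assumption, not Intel's actual Optane Memory caching logic; it only shows why the first pass is slow and later passes are fast.

```python
class ToyCacheTier:
    """Illustrative block cache: first access misses to the slow tier, repeats hit."""

    def __init__(self, hdd_ms=12.0, cache_ms=0.05):
        self.hdd_ms = hdd_ms        # assumed hard drive access latency
        self.cache_ms = cache_ms    # assumed cache device access latency
        self.resident = set()       # blocks currently held in the cache

    def read(self, block):
        if block in self.resident:
            return self.cache_ms    # warm hit served from the cache device
        self.resident.add(block)    # promote the block on first touch
        return self.hdd_ms          # cold miss served from the hard drive


cache = ToyCacheTier()
workload = list(range(100))
# Three identical passes, mirroring the three-run test methodology:
passes = [sum(cache.read(b) for b in workload) for _ in range(3)]
# The first pass pays hard drive latency; passes two and three run warm.
```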

Benchmark Summary

This review analyzes the performance of Optane Memory caching both for boot drives and secondary drives. The Optane Memory modules are also tested as standalone SSDs. The benchmarks in this review fall into three categories:

Application benchmarks: SYSmark 2014 SE

SYSmark directly measures how long applications take to respond to simulated user input. The scores are normalized against a reference system, but otherwise reflect the accumulated time between user input and the result showing up on screen: the shorter that total, the higher the score. SYSmark measures whole-system performance and energy usage with a broad variety of non-gaming applications. The tests are not particularly storage-intensive, and differences in CPU and RAM can have a much greater impact on scores than storage upgrades.
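As a rough illustration of that normalization, a score could be derived from accumulated response time like this. The 1000-point calibration constant and the exact formula are assumptions for the sake of example, not BAPCo's published scoring method:

```python
REFERENCE_SCORE = 1000.0  # hypothetical score assigned to the reference system


def sysmark_style_score(reference_seconds: float, measured_seconds: float) -> float:
    """Toy normalization: halving the accumulated response time doubles the score."""
    return REFERENCE_SCORE * reference_seconds / measured_seconds


# A system matching the reference scores 1000; one twice as fast scores 2000.
```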

AnandTech Storage Bench: The Destroyer, Heavy, Light

These three tests are recorded traces of real-world I/O that are replayed onto the storage device under test. This allows the same storage workload to be reproduced consistently and almost completely independently of changes in CPU, RAM or GPU, because none of the computational workload of the original applications is reproduced. The ATSB Light test is similar in scope to SYSmark, while the ATSB Heavy and The Destroyer tests represent much more computer usage with a broader range of applications. As a concession to practicality, these traces are replayed with long disk idle times cut short, so that The Destroyer doesn't take a full week to run.
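A trace replayer of the kind described above might look like the following sketch. The record format and the 25 ms idle cap are invented for illustration; the actual ATSB tooling and its trace format are not public:

```python
import time

IDLE_CAP_S = 0.025  # assumed cap on recorded idle gaps between I/Os


def replay(trace, issue_io, idle_cap=IDLE_CAP_S):
    """Replay (offset, size, idle_gap_seconds) records against issue_io,
    cutting long recorded idle periods short so the run finishes sooner."""
    for offset, size, gap in trace:
        time.sleep(min(gap, idle_cap))  # truncate multi-second think time
        issue_io(offset, size)


def compressed_idle(trace, idle_cap=IDLE_CAP_S):
    """Total idle time after truncation, for estimating replay duration."""
    return sum(min(gap, idle_cap) for _, _, gap in trace)
```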

Synthetic Benchmarks: Flexible IO Tester (FIO)

FIO is used to produce and measure artificial storage workloads according to our custom scripts. Poor choice of data sizes, access patterns and test duration can produce results that are either unrealistically flattering to SSDs or are unfairly difficult. Our FIO-based tests are designed specifically for modern consumer SSDs, with an emphasis on queue depths and transfer sizes that are most relevant to client computing workloads. Test durations and preconditioning workloads have been chosen to avoid unrealistically triggering thermal throttling on M.2 SSDs or overflowing SLC write caches.
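For reference, a job file in the spirit of those tests might look like this. The parameter values and the target device path are illustrative placeholders, not our actual test scripts:

```ini
; illustrative fio job: QD1 4kB random reads, time-based
[qd1-randread]
ioengine=windowsaio      ; libaio when running under Linux
direct=1                 ; bypass the OS page cache
rw=randread
bs=4k
iodepth=1                ; the low queue depth where Optane shines
time_based=1
runtime=60
filename=\\.\PhysicalDrive1   ; example raw device path on Windows
```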

Comments

  • CheapSushi - Wednesday, May 16, 2018 - link

    Maybe you didn't notice this but NVMe NAND M.2 drives tend to be x4, meaning 4 PCIe lanes. These are x2, meaning 2 PCIe names. These are also slightly gimped controller-wise, which is why enterprise doesn't use them. There's also an AIC/HHHL version of the Optane drives, even for enterprise. And regardless, Optane still has a huge amount of benefits compared to a NAND drive. It doesn't slow down the fuller it gets unlike NAND drives, the endurance is MUCH higher than even MLC NAND, the latency is better overall, etc. The vast majority of what is being done on a PC, even file swapping and caching, is low queue depth, not high. So it just depends on your workload and what you want to accomplish. Have you ever looked at how a completely full NVMe SSD slows down? What about when the DRAM buffer gets full? No issue with Optane.

    Personally, if I care about having large bulk storage, I'll be using Optane for cache. If I'm going for just ONE drive for my ENTIRE system, sure, I'll go with a large NVMe NAND drive and spend the $1K or more for it.
  • Spunjji - Wednesday, May 16, 2018 - link

    Your response doesn't cover the flaws discussed in the post you're responding to, save to astroturf them by defending Intel's artificial product segmentation. It's bizarre!
  • CheapSushi - Wednesday, May 16, 2018 - link

    Meant to write, "meaning 2 PCIe lanes" and it falls in line with actual real world MB/s rather than theoretical max bandwidth on two lanes.
  • hanselltc - Thursday, May 17, 2018 - link

    Maybe you should take a look at the Optane NVMe drives.
  • haukionkannel - Friday, May 18, 2018 - link

    8 TB SSDs are still too expensive. Now the picture that you are editing from your 16 TB picture library automatically loads faster, because it will be in the cache instead of on the really slow HDD... this is an excellent product!
  • frenchy_2001 - Friday, May 18, 2018 - link

    Actually, that would depend on your flow.
    To load data into the cache, you need to access it several times.
    If you process your images by loading them one by one, editing then saving, this will not help.
    Then again, why would you need help for that, as this is all sequential access and HDDs are reasonably good at it.
  • escksu - Wednesday, May 23, 2018 - link

    No it doesn't. Like any cache out there, the data has to be inside the cache before it can speed things up. The whole idea is that you will probably be using the same data again, so by storing it in the cache, you get it faster.

    But cache is not magic. Your initial loading of your photo will be just as slow, because it reads from your slow HDD. After that, it will come from the cache, so things speed up. There are algorithms such as read-ahead to predict what you may need, so the drive reads more than you asked for. But don't count on it to work all the time.
  • escksu - Wednesday, May 23, 2018 - link

    Another thing is no one in their right mind will buy a single 8TB SSD and store everything inside. You need redundancy in case the SSD fails. That's why people run RAID 5.
  • Death666Angel - Wednesday, May 16, 2018 - link

    I personally don't understand the appeal of such a small cache. The speed improvements only kick in after the first use of the data and while Optane can be quite a bit faster than SSD, the 99th percentile numbers don't look great, which isn't a problem for data access but can be for program access (hiccups). Also, 64GB cache vs >100GB (which I think you meant to write instead of "less than", since <100GB does not mean anything [can be 1kb for all we know]) of data does not look like it will show great improvements to your workflow on a regular basis.
    If you are working off a 6TB HDD and need great speed, why not employ a RAID? RAID 5 with 4 2TB HDDs should be able to give you in excess of 300MB/s read/write speeds. Or RAID 10 if you want a simpler system, speed should still be over 300MB/s. Seems like a more equal solution than this 64GB cache that kicks in after one or two runs and even then is a bit uneven.
  • escksu - Wednesday, May 23, 2018 - link

    Btw, this cache is not everything. It's still very, very slow compared to RAM. If someone needs to work with 100GB of photos, they ought to invest in at least 128GB of RAM.
