Test Procedures

Our usual SSD test procedure was not designed to handle multi-device tiered storage, so some changes had to be made for this review. As a result, much of the data presented here is not directly comparable to that in our previous reviews. The major changes are:

  • All test configurations were running the latest OS patches and CPU microcode updates for the Spectre and Meltdown vulnerabilities. Regular SSD reviews with post-patch test results will begin later this month.
  • Our synthetic benchmarks are usually run under Linux, but Intel's caching software is Windows-only, so the usual fio scripts were adapted to run on Windows (see the sketch after this list). The settings for data transfer sizes and test duration are unchanged, but the difference in storage APIs between the two operating systems means that the results shown here are lower across the board, especially for the low-queue-depth random I/O that is the greatest strength of Optane SSDs.
  • We only have equipment to measure the power consumption of one drive at a time. Rather than move that equipment out of the primary SSD testbed to measure either the cache drive or the hard drive, we kept it busy testing drives for future reviews, so this review does not include our usual standalone drive power measurements. The SYSmark 2014 SE results do include the usual whole-system energy usage measurements.
  • Optane SSDs and hard drives are not any slower when full than when empty, because they do not have the complicated wear leveling and block erase mechanisms that flash-based SSDs require, nor any equivalent to SLC write caches. The AnandTech Storage Bench (ATSB) trace-based tests in this review omit the usual full-drive test runs. Instead, caching configurations were tested by running each test three times in a row to check for effects of warming up the cache.
  • Our AnandTech Storage Bench "The Destroyer" test takes about 12 hours to run on a good SATA SSD and about 7 hours on the best PCIe SSDs. On a mechanical hard drive, it takes closer to 24 hours. Results for The Destroyer will probably not be ready this week. In the meantime, the ATSB Heavy test is large enough to illustrate how SSD caching performs for workloads that do not fit into the cache.
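A large part of adapting fio jobs between these platforms comes down to swapping the asynchronous I/O engine: fio uses libaio under Linux and windowsaio under Windows. The Python sketch below is illustrative only; the run_fio_job helper and the specific parameter values are placeholders, not the actual job definitions used for this review.

    import platform
    import subprocess

    def run_fio_job(filename, runtime_sec=60):
        """Run one QD1 4kB random-read fio job, selecting the async
        I/O engine that matches the host OS."""
        # libaio is Linux-only; the Windows port of fio provides windowsaio.
        engine = "windowsaio" if platform.system() == "Windows" else "libaio"
        cmd = [
            "fio",
            "--name=qd1_randread",
            "--filename=" + filename,
            "--ioengine=" + engine,
            "--rw=randread",   # low queue depth random reads: Optane's strength
            "--bs=4k",
            "--iodepth=1",
            "--direct=1",      # bypass the OS page cache
            "--time_based",
            "--runtime=" + str(runtime_sec),
            "--output-format=json",
        ]
        return subprocess.run(cmd, capture_output=True, text=True, check=True)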

Benchmark Summary

This review analyzes the performance of Optane Memory caching both for boot drives and secondary drives. The Optane Memory modules are also tested as standalone SSDs. The benchmarks in this review fall into three categories:

Application Benchmarks: SYSmark 2014 SE

SYSmark directly measures how long applications take to respond to simulated user input. The scores are normalized against a reference system and scale inversely with the accumulated time between user input and the result showing up on screen, so faster systems score higher. SYSmark measures whole-system performance and energy usage with a broad variety of non-gaming applications. The tests are not particularly storage-intensive, and differences in CPU and RAM can have a much greater impact on scores than storage upgrades.
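As a rough illustration of how such normalization typically works (assuming the common convention of calibrating the reference system to a score of 1000, which this review does not spell out):

    score ≈ 1000 × (T_reference / T_system)

where T is the accumulated response time, so a system that accumulates half the reference's response time scores roughly 2000.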

AnandTech Storage Bench: The Destroyer, Heavy, Light

These three tests are recorded traces of real-world I/O that are replayed onto the storage device under test. This allows the same storage workload to be reproduced consistently and almost completely independently of changes in CPU, RAM, or GPU, because none of the computational work of the original applications is reproduced. The ATSB Light test is similar in scope to SYSmark, while the ATSB Heavy and The Destroyer tests represent much longer stretches of computer usage with a broader range of applications. As a concession to practicality, these traces are replayed with long disk idle times cut short, so that The Destroyer doesn't take a full week to run.
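The replay tooling itself is not public, but the idle-truncation idea is simple to sketch. In the hypothetical Python below, the 25 ms cap and the trace format are assumptions for illustration; the review does not state the actual cutoff.

    import time

    IDLE_CAP_SEC = 0.025  # assumed cap on recorded idle gaps; the review
                          # does not state the actual cutoff

    def replay(trace):
        """Replay a recorded trace of (idle_gap_sec, issue_io) pairs.

        Capping each idle gap preserves the order and queue behavior of
        the recorded I/O while cutting wall-clock time from days to hours.
        """
        for idle_gap_sec, issue_io in trace:
            time.sleep(min(idle_gap_sec, IDLE_CAP_SEC))  # shortened think time
            issue_io()  # callable that issues one recorded I/O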

Synthetic Benchmarks: Flexible IO Tester (FIO)

FIO is used to produce and measure artificial storage workloads according to our custom scripts. A poor choice of data sizes, access patterns, or test duration can produce results that are either unrealistically flattering to SSDs or unfairly difficult. Our FIO-based tests are designed specifically for modern consumer SSDs, with an emphasis on the queue depths and transfer sizes most relevant to client computing workloads. Test durations and preconditioning workloads have been chosen to avoid unrealistically triggering thermal throttling on M.2 SSDs or overflowing SLC write caches.
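As a sketch of what "relevant to client computing workloads" means in practice, the ranges below are illustrative placeholders rather than the scripts' actual values:

    # Illustrative sweep ranges, not the scripts' actual values.
    QUEUE_DEPTHS = [1, 2, 4, 8, 16, 32]        # client I/O mostly sits at QD1-QD4
    BLOCK_SIZES = ["4k", "8k", "64k", "128k"]  # small random through large sequential
    RUNTIME_SEC = 60  # long enough for stable numbers, short enough to avoid
                      # unrealistic thermal throttling or SLC cache overflow

    def job_args(block_size, queue_depth, pattern="randread"):
        """Build the fio arguments for one test point in the sweep."""
        return [
            "--rw=" + pattern,
            "--bs=" + block_size,
            "--iodepth=" + str(queue_depth),
            "--runtime=" + str(RUNTIME_SEC),
            "--time_based",
            "--direct=1",
        ]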

Comments

  • Keljian - Tuesday, May 29, 2018

    Optane's forte is latency and small-block access at low queue depth, e.g. anything that uses anything resembling a database. Provided you have some precaching, this is a big deal for general apps.
  • Spunjji - Wednesday, May 16, 2018

    The scenario you just described would see zero measurable benefit from Optane. The way I see it, there are two obvious scenarios. In the first, you're regularly working on the same photos, in which case keeping them on an SSD until the project is "done" and then archiving them to HDD is not especially difficult. In the second, you regularly dip back into your older images, in which case an Optane cache will never learn the pattern and won't speed anything up.

    You're much better off having any catalogues stored on SSD alongside the most recent images, then bumping those images across to HDD storage when the project is finished. This is how I manage my own workflow and it's not at all difficult to handle.
  • niva - Wednesday, May 16, 2018

    I also fail to see the benefit of Optane as a cache. The tech is cool, but I want drives big enough to just install my OS on rather than do this caching thing.

    For the scenario where massive amounts of data must be stored and SSDs are not practical, why aren't users turning to HDD RAID arrays? With things like photos and movies, HDD sequential access speeds are perfectly adequate.
  • SkipPpe - Friday, May 18, 2018

    Actually, caching is really quite nice. I used to use a 32GB SSD cache back when SSDs were expensive. Many people do not want to run RAID or ZFS arrays. They have one large drive (say a 4TB HDD) and they want it to boot fast. Optane does this well. Even guys with a 25GB boot drive would benefit from caching their large HDD. For example, I have a 256GB Samsung 850 boot drive with a 2TB Hitachi HDD. A 64GB Optane would be perfect for my large drive (basically storage and my Steam library). I already have other SSDs for often-played games, but Optane makes a lot of sense for someone who wants one large drive (say the new helium-filled massive HDDs) and some responsiveness for frequently used files.
  • RagnarAntonisen - Sunday, May 20, 2018

    It's interesting technology but it doesn't make any sense.

    E.g. it would be good for an old PC with a large mechanical hard drive, where an SSD of the same size would cost more than adding an Optane cache. The problem is that Optane only works on a modern motherboard, and that modern motherboard needs a new CPU. So you're not going to put one into one of those old PCs with a mechanical drive.

    And actually, for an old PC with a mechanical SATA drive you'd be better off replacing the drive with a hybrid one. Intel could have built an adapter that accepts your old SATA drive and fronts it with an Optane cache, but they didn't.

    Making Optane an M.2 module that requires a motherboard with a recent BIOS that knows how to do caching means they lose most of their audience.

    And as I said above, it's all a bit of a shame. Optane as a storage technology shows a lot of promise. The problem is that Intel can't make drives with a high enough capacity, and is instead marketing it as a cache for older machines, the very ones that can't support Optane.
  • Keljian - Tuesday, May 29, 2018

    "Making Optane an M.2 module that requires a motherboard with a recent Bios that knows how to do caching means they lose most of their audience." -- nope, just needs to have a bios that knows nvme and software(eg the optane cache software from intel, the storemi software from AMD, or PrimoCache et al) to drive it. Reply
  • escksu - Wednesday, May 23, 2018

    Simple. Get 4 x 2TB SSDs and run them in RAID 5. Problem solved.
  • dullard - Tuesday, May 15, 2018

    Flunk, the reason to get these drives is that an Optane cache + standard hard drive is FASTER and LARGER CAPACITY than the 512GB SSD. If you don't care about larger or faster, then go ahead with just an SSD.
  • bananaforscale - Tuesday, May 15, 2018

    You totally miss the point. An SSD is cheaper and irrelevantly slower, and you can use it for caching.
  • wumpus - Wednesday, May 16, 2018

    You can? You used to be able to use a 64GB cache on Intel boards, and you can use a 512GB cache on just-released AMD (470) boards [unfortunately, that bit of the review still has [words] under the StoreMI section].

    If you can pull it off, a 512GB caching SATA drive makes all kinds of sense for anything you might want to do with this. As near as I can tell, Optane's only advantage is that they provide the caching software without you having to hit Windows and motherboard requirements. Which makes the whole "Optane is so fast" advantage a bit of a joke.

    Wake me up when Optane has the endurance to be used with a DDR4 interface (presumably with caching HBM2/Intel system DRAM). This doesn't give any advantage (besides providing the software license).
