NAND Support: Everything

The SF-2000 controllers are NAND manufacturer agnostic. Both ONFI 2 and toggle interfaces are supported. Let’s talk about what this means.

Legacy NAND is written in a very straightforward manner: a write enable (WE) signal is sent to the NAND, and once WE goes high, data is latched into the NAND.

Both ONFI 2 and Toggle NAND add another signal to the interface: DQS, a data strobe. The Write Enable signal is still present, but it's now only used for latching commands and addresses; DQS is used for data transfers. Instead of transferring data only when the strobe is high, ONFI 2 and Toggle NAND support transferring data on both the rising and falling edges of the DQS signal. This should sound a lot like DDR to you, because it is.
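
The arithmetic behind the doubling can be sketched as follows. The clock and bus-width figures here are illustrative round numbers chosen to land on the rates discussed below, not vendor specifications:

```python
# Peak interface rate for an 8-bit NAND bus.
# All figures are illustrative, not datasheet values.

def transfer_rate_mb_s(io_clock_mhz: float, bus_width_bits: int,
                       transfers_per_cycle: int) -> float:
    """MB/s moved across the bus at a given I/O clock."""
    return io_clock_mhz * (bus_width_bits / 8) * transfers_per_cycle

# Legacy (single data rate): one transfer per WE cycle
sdr = transfer_rate_mb_s(40, 8, 1)   # -> 40.0 MB/s
# ONFI 2 / Toggle (double data rate): a transfer on both DQS edges
ddr = transfer_rate_mb_s(83, 8, 2)   # -> 166.0 MB/s
```

Same 8-bit bus; simply latching on both strobe edges doubles the bytes moved per clock.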

The benefit is tremendous. Whereas the current interface to NAND is limited to 40MB/s per device, ONFI 2 and Toggle increase that to 166MB/s per device.

Micron indicates that a dual plane NAND array can be read from at up to 330MB/s and written to at 33MB/s. By implementing an ONFI 2 and Toggle compliant interface, SandForce immediately gets a huge boost in potential performance. Now it’s just a matter of managing it all.
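
As a rough sanity check on Micron's figures, here is a back-of-envelope sketch; the page size and program time are assumed round numbers, not datasheet values:

```python
# Why reads scale with the interface while writes are gated by program time.
# PAGE_BYTES and T_PROG_S are assumed round figures, not Micron specs.

INTERFACE_MB_S = 166      # per-device ONFI 2 / Toggle rate
PLANES = 2
PAGE_BYTES = 4096         # assumed page size
T_PROG_S = 250e-6         # assumed dual-plane program time

read_mb_s = PLANES * INTERFACE_MB_S                 # 332, near the quoted 330
write_mb_s = PLANES * PAGE_BYTES / T_PROG_S / 1e6   # ~32.8, near the quoted 33
```

The asymmetry is the point: read bandwidth tracks the faster bus, while write bandwidth is dominated by how long the cells take to program, so a faster interface alone barely moves it.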

The controller accommodates the faster NAND interface by supporting more active NAND die concurrently: the SF-2000 controllers can activate twice as many die in parallel compared to the SF-1200/1500. This should result in some pretty hefty performance gains, as you'll soon see. The controller is also physically faster and has more internal memory and larger buffers to sustain the higher expected bandwidths.
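
To see why doubling the active die count matters, consider a hedged sketch; the die counts below are hypothetical, chosen only to illustrate the scaling:

```python
# Aggregate NAND-side bandwidth scales with active die, assuming
# no channel contention. Die counts here are hypothetical.

def aggregate_mb_s(active_die: int, per_die_mb_s: float) -> float:
    return active_die * per_die_mb_s

old_gen = aggregate_mb_s(8, 40)     # legacy interface: 320 MB/s ceiling
new_gen = aggregate_mb_s(16, 166)   # twice the die on ONFI 2/Toggle: 2656 MB/s
```

In practice the host interface and controller pipeline cap real throughput well below the raw aggregate; the takeaway is only that the NAND-side ceiling stops being the bottleneck.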

Initial designs will be based on 34nm eMLC from Micron and 32nm eMLC from Toshiba. The controller does support 25nm NAND as well, so we’ll see a transition there when the yields are high enough. Note that SandForce, like Intel, will be using enterprise grade MLC instead of SLC for the server market. The demand pretty much requires it and luckily, with a good enough controller and NAND, eMLC looks like it may be able to handle a server workload.

84 Comments

  • jwilliams4200 - Thursday, October 7, 2010 - link

    It is the Sandforce marketing department that is impressive. They have a lot of people drinking their Kool-aid. But Sandforce's actual technology does not live up to their hype.
  • therealnickdanger - Thursday, October 7, 2010 - link

    It doesn't?

    http://www.anandtech.com/Bench/SSD
  • jwilliams4200 - Thursday, October 7, 2010 - link

    Note that the Sandforce drives got beat by the C300 and the X25-E on the benchmark you cited. Neither of those SSDs claims a write speed as high as 275 MB/s as Sandforce does.

    Also check out these benchmarks of copying real data files:

    http://www.behardware.com/articles/794-11/ssd-2010...

    The Sandforce drives do not even achieve 50% of their claimed write speed when faced with copying realistic data files. With real files, their write speeds are about 130 MB/s on a fresh SSD, and drop to about 83 MB/s on a well-used SSD.

    This from a company that claims 275 MB/s write speeds. Sandforce is good at hype, not so much at delivering what they claim.
  • therealnickdanger - Friday, October 8, 2010 - link

    Seriously, how often do you spend the majority of your time copying that many files to other drives?

    Those examples are pretty selective and also, it's hardly fair to pit SLC against MLC. Special use scenarios are all fine and good, but for your typical user, the current SF MLC drives beat Intel MLC in typical multi-tasking real-world scenarios (AT's benchmark, Vantage).

    According to AT's reviews of SF-based drives, they all bounce back to original speeds after TRIM... with "real" files. Intel degrades over time as well and then is restored after TRIM. It's the nature of the beast.

    The evidence points strongly to SF beating out Intel overall by a substantial margin in real-world and synthetic tests, with Intel only winning in a handful of non-typical scenarios. I think you're just seeing what you want to see.
  • jwilliams4200 - Friday, October 8, 2010 - link

    Copying files is a basic benchmark which gives an indication of how all other reads and writes will go. If a drive performs at less than half its claimed specification when copying files, you can be sure that it will perform similarly poorly on other tests.

    Yes, Anand's tests missed the Sandforce problem of performance degradation that cannot be recovered through TRIM, I'm not sure what your point is. Surely no one thinks Anand is perfect. The problem is real, and has been observed by bit-tech and by computerbase. I have also spoken with several people who have seen the problem themselves.

    And the evidence is that Intel matches or beats Sandforce on most real world tests, when you are looking at a well-used drive. Sandforce's used performance degradation is really bad when you are writing data that its controller cannot compress.
  • 'nar - Sunday, October 10, 2010 - link

    Famously simple answer:

    "You're holding it wrong."

    Copying files is not necessarily representative of normal workloads; you need a course in deductive reasoning. You cannot assume that large, contiguous, compressed files copied one at a time are at all representative of small, uncompressed, random files accessed concurrently.
  • Breit - Saturday, October 9, 2010 - link

    This seems to be a bit unfair to SF. Since their controllers (or let's say SSDs with their controllers) can achieve a fairly high IOPS count, you should at least bench the aggregate bandwidth they achieve with multiple file transfers at once...

    Whether this is a realistic workload or not depends entirely on your needs, of course, but you should also choose hard drives and especially SSDs depending on your application and what delivers the best performance for you. Maybe SF SSDs aren't the best SSDs for your average workload if speedy large single-file data transfer is your main goal. :)
  • 'nar - Sunday, October 10, 2010 - link

    Anand has covered this already. Compression reduces write amplification, thus improves performance in most workloads, and extends Flash life by writing to NAND less.

    "SandForce’s controller gets around the inherent problems with writing to NAND by simply writing less" - from this article.

    Then here is the test with truly random data:
    http://www.anandtech.com/show/3681/oczs-vertex-2-s...

    No drive is perfect. Most large files, such as what you linked with 6.8 GB files, are compressed already. Highly compressed files like movies do not benefit from SF compression, but they also don't need to. How fast do you watch a movie? All of my movies are on hard drives.

    This is not Kool-Aid, this is a choice. Use what is most appropriate for your workloads. Don't trash-talk the drive or mislead others due to one type of synthetic benchmark, or one supposed "real world scenario" that really is not what most people would use them for anyway.

    Just accept that this drive has less performance with compressed, encrypted, or truly random files. I have, and I have moved on. I have purchased three sf drives while being fully aware of that fact, two OCZ LE's and a G.Skill Phoenix Pro. I do not use compressed data on them anyway, just windows and applications, all are compressible. Well, mostly compressible.
  • vol7ron - Thursday, October 7, 2010 - link

    I imagine the added compression generates more heat for these components.

    Do you think that it will deteriorate the drive quicker?

    I'm not up to speed on the cooling inside an SSD, but I'm curious what happens to performance when a few cells in the proc begin to go.
