Lately, SSD manufacturers have been paying more and more attention to the retail mSATA SSD market. For a long time the retail mSATA market was controlled by only a few players: Crucial, ADATA and Mushkin were the only ones with widely available models. Intel also had a few models available at retail, but those were all rather small and outdated (SATA 3Gbps and aimed mainly at caching with Intel Smart Response Technology). In the OEM market you can find mSATA SSDs from major brands such as Samsung and Toshiba, but unfortunately many manufacturers have decided not to push their mSATA SSDs into the retail market.

Like I've said before, the market for retail mSATA SSDs isn't exactly alluring, but on the other hand, the market can't grow if the products available are not competitive. With only a few manufacturers playing in the field, it was clear that there wasn't enough competition, especially when compared to the 2.5" SATA market. A short while ago, Intel brought the SSD 525, and with it some much-needed presence from a big SSD manufacturer, to the mSATA retail market. Now we have another player, Plextor, joining the chorus.

Plextor showcased its M5M mSATA SSD at CES, but the actual release took place in mid-February. Architecturally, the M5M is similar to Plextor's M5 Pro Xtreme: both use Marvell's 88SS9187 controller, 19nm Toshiba NAND and Plextor's custom firmware. The only substantial difference is four NAND packages instead of 8/16, which is due to mSATA's space constraints.

                    M5M (256GB)   M5 Pro Xtreme (256GB)
Sequential Read     540MB/s       540MB/s
Sequential Write    430MB/s       460MB/s
4KB Random Read     80K IOPS      100K IOPS
4KB Random Write    76K IOPS      86K IOPS

Performance-wise, the M5M is slightly behind the M5 Pro Xtreme, but given the limited number of NAND channels, the performance is very good for an mSATA SSD. Below are the complete specs for each capacity of the M5M:

Plextor M5M mSATA Specifications
Capacity            64GB        128GB       256GB
Controller          Marvell 88SS9187
NAND                Toshiba 19nm Toggle-Mode MLC
Cache (DDR3)        128MB       256MB       512MB
Sequential Read     540MB/s     540MB/s     540MB/s
Sequential Write    160MB/s     320MB/s     430MB/s
4KB Random Read     73K IOPS    80K IOPS    79K IOPS
4KB Random Write    42K IOPS    76K IOPS    77K IOPS
Warranty            3 years

The M5M tops out at 256GB because that's the maximum capacity you can currently achieve with four NAND packages and 8GB dies (4 packages x 8 dies x 8GB = 256GB). It's possible that we'll see a 512GB model later once 16GB-per-die NAND is more widely available.
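The capacity arithmetic above can be sketched in a few lines; the variable names are mine, for illustration only:

```python
# Capacity ceiling of the M5M: four NAND packages (an mSATA space
# constraint), eight dies per package, 8GB per 19nm Toshiba MLC die.
packages = 4
dies_per_package = 8
gb_per_die = 8

total_gb = packages * dies_per_package * gb_per_die
print(total_gb)  # 256

# With 16GB dies, the same four-package layout would double capacity:
print(packages * dies_per_package * 16)  # 512
```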

Similar to Plextor's other SSDs, the M5M uses DRAM from Nanya and NAND from Toshiba. A single 512MB DDR3-1333 chip acts as a cache, paired with four 64GB (8x 8GB die) MLC NAND packages. The small chip you see is an 85MHz 8Mb serial NOR flash chip from Macronix, which houses the drive's firmware. This isn't anything new, as Plextor has always used NOR flash to store the firmware; only the package is different, to meet mSATA dimension requirements.

Removing the sticker reveals the heart of the M5M: the Marvell 88SS9187.

I discovered a weird bug during the testing of the M5M. Every once in a while, the drive would drop to SATA 3Gbps speeds (~220MB/s in Iometer) after a secure erase, and the performance wouldn't recover until another secure erase command was issued. I couldn't find any logic behind the bug, as the slowdowns were totally random: sometimes the drive went through a dozen cycles (secure erase, test, repeat) without issue, while on other occasions the problem occurred after nearly every secure erase. At first I thought it was my mSATA to SATA 6Gbps adapter, so I asked Plextor for a new adapter and sample to make sure we were not dealing with defective hardware. However, the bug persisted. I've noticed similar behavior in the M5 Pro Xtreme (though not in the original M5 Pro), which is why I'm guessing the bug is firmware related (a hardware issue would be much harder to fix).

To date, Plextor has not been able to reproduce the bug, although I'm still working with their engineers to replicate our testing methodology as closely as possible. I don't think the bug will be a huge issue for most buyers, as there's rarely a need to secure erase the drive, but it's still something to keep in mind when looking at the M5M.
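As a rough illustration, the secure erase, test, repeat cycle described above amounts to flagging runs whose throughput looks like a SATA 3Gbps link rather than 6Gbps. This is a hypothetical sketch, not our actual test harness; the function name and threshold are mine:

```python
# Flag a test run whose sequential read throughput suggests the drive
# has fallen back to SATA 3Gbps speeds (~220MB/s was the observed
# symptom) instead of running at full SATA 6Gbps speed (~540MB/s).
def link_dropped(seq_read_mbps, threshold_mbps=300):
    """True if throughput is consistent with a 3Gbps fallback."""
    return seq_read_mbps < threshold_mbps

print(link_dropped(540))  # False: healthy SATA 6Gbps figure
print(link_dropped(220))  # True: the ~220MB/s symptom after secure erase
```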

Test System

CPU                 Intel Core i5-2500K running at 3.3GHz (Turbo and EIST enabled)
Motherboard         ASRock Z68 Pro3
Chipset             Intel Z68
Chipset Drivers     Intel + Intel RST 10.2
Memory              G.Skill RipjawsX DDR3-1600 2 x 4GB (9-9-9-24)
Video Card          XFX AMD Radeon HD 6850 XXX
                    (800MHz core clock; 4.2GHz GDDR5 effective)
Video Drivers       AMD Catalyst 10.1
Desktop Resolution  1920 x 1080
OS                  Windows 7 x64


Random & Sequential Performance


Comments

  • JPForums - Thursday, April 18, 2013 - link

    Sorry, I wasn't trying to bait you. The posts just came off as a little hostile. Probably a result of my morning meetings.

    If I'm understanding you correctly, your biggest issue is with the method of consistency testing. I read in another of your posts that this method is similar to the tests that several large enterprises use. You seem to be familiar with these methods. Is there an alternate (better) method in use that Anandtech could be using? Alternatively, do you have a superior method in mind that isn't currently in use? I'm guessing (for starters) you'd be happier with a method that measures individual operation latencies (I would too), but I'm unaware of any tools that could accomplish this.
  • JellyRoll - Thursday, April 18, 2013 - link

    The consistency testing and all trace based testing used by this site are tested without partitions or filesystems, and no TRIM functionality. This has been disclosed by the staff in the comment sections of previous reviews.
    If you are testing consumer hardware, the first order of the day is to use methods that accurately reflect real workloads. Removing the most crucial component required for performance consistency (TRIM), then testing 'consistency' anyway, is ridiculous. Would you test a vehicle without fuel?
  • Kristian Vättö - Thursday, April 18, 2013 - link

    TRIM does not affect performance consistency of a continuous write workload. TRIM will only tell the controller which LBAs are no longer in use - the actual LBAs still need to be erased before new data can be written. When you're constantly writing to the drive, it doesn't have time to erase the blocks as fast as new write requests come in, which causes the performance to sink.

    If you know methods that "accurately reflect real workloads" then please share them. Pointing out flaws is easy but unhelpful unless you can provide a method that's better.
  • JellyRoll - Thursday, April 18, 2013 - link

    Pasted from the Wiki:
    "The TRIM command is designed to enable the operating system to notify the SSD which pages no longer contain valid data due to erases either by the user or operating system itself. During a delete operation, the OS will both mark the sectors as free for new data and send a TRIM command to the SSD to be marked as no longer valid. After that the SSD knows not to relocate data from the affected LBAs during garbage collection."

    During a pure write workload there is no need for the SSD's internal garbage collection functions to read-write-modify in order to write new data. That is the purpose of TRIM. Without TRIM writes require read-write-modify activity, with TRIM they do not. Very easy to see how it boosts performance.
  • Kristian Vättö - Thursday, April 18, 2013 - link

    You still have to erase the blocks, which is the time-consuming part. Again, there's no time for normal idle garbage collection to kick in. Yes, the drive will know what LBAs are no longer in use, but it still has to erase the blocks containing those LBAs. If you let the drive idle, then it will have time to reorganize the data so that there'll be enough empty blocks to maintain good performance, but that is not the case in a continuous write workload.
  • JellyRoll - Thursday, April 18, 2013 - link

    It is removing the 'write' from the read-write-modify cycle. Writing a page smaller than the block requires the SSD to relocate the other data in the block first, adding work for the SSD. Remember, they erase at block level. If it isn't aware that the rest of the block is also invalid (the point of TRIM) it must first move the other data.
  • Kristian Vättö - Thursday, April 18, 2013 - link

    It's a read-modify-write cycle (read the block to cache, modify the data, write the modified data), so the write operation is still there; otherwise the drive wouldn't complete the write request in the first place. You also seem to be assuming that the rest of the pages in the block are invalid, which is unlikely to be the case unless we're dealing with an empty drive. Hence it's exactly the same cycle with TRIM, as you still have to read at least some of the data and then rewrite it. You may have to read/write less data as some of it will be invalid, but remember that garbage collection (with TRIM off) will also mark pages as invalid on its own. That's the reason why performance will stay high even if TRIM is not supported (e.g. OS X), assuming that the garbage collection is effective (there's at least 7% OP, so there are always invalid pages).
  • JellyRoll - Thursday, April 18, 2013 - link

    I am not assuming the data is still valid, the SSD does. It has to move the data if it considers it valid. TRIM removes the need to move this 'other' data, thus speeding the drive.
  • Kristian Vättö - Monday, April 22, 2013 - link

    Here are some tests I did with Plextor M5 Pro Xtreme

    RAW (no partition): https://dl.dropboxusercontent.com/u/128928769/Cons...
    NTFS (default cluster size): https://dl.dropboxusercontent.com/u/128928769/Cons...

    As you can see, there's no major difference. In fact, there's a bigger slowdown with NTFS versus raw drive.
  • JPForums - Thursday, April 18, 2013 - link

    1) I was not aware that another website created this method of characterizing performance, but I'll give you the benefit of the doubt. Nonetheless, the statement that Anand introduced it to the standard test suite here at Anandtech in the Intel SSD DC S3700 review is a true statement. Given the context of the original statement, this is more likely the intended interpretation. Out of curiosity, which site did create the method?

    2) I'm not sure whether the test measures individual operation latencies, as IOPS is basically the inverse of an average of those latencies over time. It is kind of like the difference between FPS and frame latencies. That said, the representation on the graphs is more the inverse of a one-second sliding window average. Saying as much is kind of a mouthful, though. How would you phrase it?
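The read-modify-write mechanics debated in the thread above can be captured in a toy model. This is an illustration of the general principle, not Plextor's actual firmware; the page counts and function are hypothetical:

```python
# Toy model of updating one page in a NAND block. Erases happen at
# block level, so the controller must relocate the block's remaining
# *valid* pages before erasing. TRIM reduces the relocation work by
# marking stale pages invalid, but the erase itself still happens.
PAGES_PER_BLOCK = 8

def rewrite_cost(other_valid_pages):
    """Return (page writes, block erases) needed to update one page in
    a block that contains `other_valid_pages` other valid pages."""
    relocations = other_valid_pages  # copy surviving data elsewhere
    new_write = 1                    # write the updated page
    erases = 1                       # block erase (the slow step)
    return relocations + new_write, erases

# Without TRIM, the controller must treat the other 7 pages as valid:
print(rewrite_cost(7))  # (8, 1)

# With TRIM, suppose 5 of those pages are known stale:
print(rewrite_cost(2))  # (3, 1)
```

Note that TRIM cuts the relocation writes (supporting one side of the argument) while the erase count is unchanged (supporting the other).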
