
29 December 2011

OCZ Octane 128GB SSD Review

An SSD State of the Union Update

I'm very pleased with the level of acceptance of SSDs today. When the X25-M first hit the market, and even for the year that followed, any positive SSD recommendation was met with a discussion of how many VelociRaptors you could RAID together for the same price as one SSD. Now we have an entire category of notebooks that come standard with some degree of solid state storage. End users are much more accepting of SSDs in general, aided by the fact that prices have finally dropped very close to the magical $1/GB mark (a boundary I expect us to cross by the end of 2012).

Today's SSDs are not only more prevalent than those we were reviewing just a couple of years ago, they're also a lot better. While the first SSDs still had difficulties competing with mechanical storage for sequential transfers, modern SSDs are several times faster even in the most HDD-friendly workloads. Technologies such as TRIM mean the performance degradation issues of the very first drives are far less likely to be encountered under a normal client workload.

Although it helped shape the client SSD business, Intel's drives are no longer the only option for performance and reliability. There are good, reliable SSDs from companies like Micron and Samsung that can easily hang with their Intel competitors. And if you're willing to live on the bleeding edge for the promise of the absolute best performance, there's always SandForce.
The client SSD space may not be mature, but I'm happy with the current state of things. With the exception of the ultra low price points, if you want an SSD, there's a solution on the market for you today.


Going forward, I'm not expecting a ton of change in the near term. Intel's Cherryville SSD, the long awaited SandForce based drive from Intel, has been delayed. I suspect the delay has to do with Intel working through bugs in SandForce's firmware, but Intel's efforts should hopefully make the platform more robust (although it remains to be seen whether any of those fixes are ported back into the general SF codebase).

In the first half of next year we'll see the first 20nm IMFT NAND shipping in client SSDs. The smaller transistor geometry will eventually pave the way for cheaper drives, although I wouldn't expect an immediate drop in prices. At the very least we'll see firmware updates enabling 20nm NAND support, although we may see the introduction of some new controllers as well. We're going to be pretty limited in terms of performance gains until ONFI 3.0 based controllers/NAND show up in early 2013. I expect the next 12 months to be more about driving enterprise SSDs and bringing down the cost of consumer drives.

The Topic At Hand: OCZ's Octane

Now for the reason we're all here today. Earlier this year OCZ acquired Indilinx, one of the first SSD controller makers to really make a splash in the enthusiast community. Ever since OCZ entered the SSD business it wanted to guarantee its independence by securing exclusive rights to a controller. OCZ initially did so by buying up all available inventory, first of Indilinx controllers, then of SandForce controllers. That strategy would only work for a (relatively) short period of time as the controller vendors sought to expand their market by selling chips to OCZ's competitors. A few slip-ups on the roadmap and Indilinx was ripe for acquisition. OCZ stepped up to the plate and sealed the deal. Several months later, OCZ debuted its first drive based on an unreleased, exclusive Indilinx design: Octane.

Although Octane didn't set any performance records, it was competitive. Performance was definitely current gen, giving OCZ an in-house alternative to SandForce. There was just one issue: OCZ only sent out 512GB Octane review samples. SSDs get a good amount of their performance by executing reads/writes in parallel across multiple NAND devices. Higher capacities have more devices to read/write in parallel, and thus generally deliver the best performance. The greatest sales volume is of the lower capacity models - they're cheaper to own and NAND prices are falling quickly enough that investing in a 512GB drive rarely makes financial sense.
OCZ finally sent out a 128GB Octane, which I promptly put through our standard test suite.


The drive still uses sixteen IMFT synchronous NAND devices; in this case each package features a single 64Gb (8GB) 25nm MLC NAND die. You may remember from our original review of the 512GB Octane that the Indilinx Everest controller has eight channels, but it can pipeline read/write requests to multiple devices per channel.

Gone from the Octane's PCB are the TI muxes that we found on the 512GB version. With the only difference between these drives being their capacity, it's likely that the muxes were used to switch between NAND die/packages. The 512GB version has 4x as many NAND die as the 128GB version, and it's possible that the Everest controller is only capable of directly communicating with 16 or 32 die on its own. An external mux per channel would allow OCZ to scale capacities much further.
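To put rough numbers behind that theory, here's a quick back-of-the-envelope die count for the two capacities we've reviewed. The per-channel figures follow directly from the package and die counts above; the direct-addressing limit is our speculation based on the missing muxes, not a published Everest specification.

```python
# Back-of-the-envelope die math for the two Octane capacities reviewed.
# The 16/32-die direct-addressing limit is speculation based on the
# presence/absence of the TI muxes, not a published Everest spec.

DIE_CAPACITY_GB = 8   # one 64Gb (8GB) 25nm IMFT MLC die
CHANNELS = 8          # Everest is an 8-channel controller

for drive_capacity_gb in (128, 512):
    total_die = drive_capacity_gb // DIE_CAPACITY_GB
    die_per_channel = total_die // CHANNELS
    print(f"{drive_capacity_gb}GB drive: {total_die} die total, "
          f"{die_per_channel} die per channel")

# Output:
#   128GB drive: 16 die total, 2 die per channel
#   512GB drive: 64 die total, 8 die per channel
#
# If Everest can only talk to 16 or 32 die directly (2-4 per channel),
# an external mux per channel is one way to reach the 64-die config -
# and fewer die also means less parallelism, which is why the 128GB
# drive gives up performance in the write tests that follow.
```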
Other than the absent muxes and a change in PCB color, the 128GB Octane is no different than the original 512GB drive we reviewed. The drive does ship with a newer firmware revision:

The updated firmware doesn't do much for performance, although it does apparently fix a number of bugs that existed in the previous version. I haven't seen any mass reports of significant issues with the Octane, although it is still pretty new. Let's hope the trend continues.

The Test

CPU: Intel Core i7 2600K running at 3.4GHz (Turbo & EIST Disabled) - for AT SB 2011, AS SSD & ATTO
Motherboard: Intel DH67BL
Chipset: Intel H67
Chipset Drivers: Intel 9.1.1.1015 + Intel RST 10.2
Memory: Corsair Vengeance DDR3-1333 2 x 2GB (7-7-7-20)
Video Card: eVGA GeForce GTX 285
Video Drivers: NVIDIA ForceWare 190.38 64-bit
Desktop Resolution: 1920 x 1200
OS: Windows 7 x64

Random Read/Write Speed

The four corners of SSD performance are as follows: random read, random write, sequential read and sequential write speed. Random accesses are generally small in size while sequential accesses tend to be larger, hence the four Iometer tests we use in all of our reviews.

Our first test writes 4KB in a completely random pattern over an 8GB space of the drive to simulate the sort of random access that you'd see on an OS drive (even this is more stressful than a normal desktop user would see). I perform three concurrent IOs and run the test for 3 minutes. The results reported are in average MB/s over the entire time. We use both standard pseudo-randomly generated data for each write and fully random data to show you both the maximum and minimum performance offered by SandForce based drives in these tests. The average performance of SF drives will likely fall somewhere in between the two values for each drive you see in the graphs. For an understanding of why this matters, read our original SandForce article.
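If you'd like to approximate this workload without Iometer, the sketch below reproduces the access pattern on a Unix-like system: 4KB, 4K-aligned writes scattered across an 8GB span for three minutes. It writes to an ordinary file through the filesystem and page cache rather than to the raw device, and it keeps only one write in flight instead of Iometer's three, so treat it purely as an illustration of the pattern rather than a calibrated benchmark; the file name is made up.

```python
import os
import random
import time

PATH = "scratch.bin"      # hypothetical scratch file standing in for the drive
SPAN = 8 * 1024**3        # 8GB test span, as in the Iometer test
BLOCK = 4096              # 4KB, 4K-aligned writes
DURATION = 180            # 3 minutes

# Allocate the test span up front so every offset is writable.
with open(PATH, "wb") as f:
    f.truncate(SPAN)

buf = os.urandom(BLOCK)   # fully random (incompressible) data; a repeating
                          # pattern would model the compressible case instead
fd = os.open(PATH, os.O_RDWR)

written = 0
start = time.time()
while time.time() - start < DURATION:
    # Pick a random 4K-aligned offset anywhere in the 8GB span.
    offset = random.randrange(SPAN // BLOCK) * BLOCK
    os.pwrite(fd, buf, offset)
    written += BLOCK

os.fsync(fd)
os.close(fd)
elapsed = time.time() - start
print(f"average: {written / elapsed / 1e6:.1f} MB/s (page-cache inflated)")
```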

Desktop Iometer - 4KB Random Read (4K Aligned)

Random read performance remains untouched with the move to 128GB, although random write performance is cut in half compared to the 512GB version:

Desktop Iometer - 4KB Random Write (4K Aligned) - 8GB LBA Space

Many of you have asked for random write performance at higher queue depths. What I have below is our 4KB random write test performed at a queue depth of 32 instead of 3. While the vast majority of desktop usage models experience queue depths of 0 - 5, higher depths are possible in heavy I/O (and multi-user) workloads:

Desktop Iometer - 4KB Random Write (8GB LBA Space QD=32)
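Queue depth here is simply how many of those writes are allowed to be outstanding at once. A crude way to mimic QD=32 with the sketch above is to issue writes from 32 worker threads, each keeping one write in flight; again, this illustrates the concept rather than replaces Iometer.

```python
import os
import random
import time
from concurrent.futures import ThreadPoolExecutor

PATH, SPAN, BLOCK = "scratch.bin", 8 * 1024**3, 4096
DURATION, QD = 180, 32    # 3 minutes at a queue depth of 32

with open(PATH, "wb") as f:
    f.truncate(SPAN)

fd = os.open(PATH, os.O_RDWR)
buf = os.urandom(BLOCK)
deadline = time.time() + DURATION

def worker() -> int:
    # Each thread keeps one write in flight, so QD threads ~= queue depth QD.
    written = 0
    while time.time() < deadline:
        os.pwrite(fd, buf, random.randrange(SPAN // BLOCK) * BLOCK)
        written += BLOCK
    return written

with ThreadPoolExecutor(max_workers=QD) as pool:
    futures = [pool.submit(worker) for _ in range(QD)]
    total = sum(f.result() for f in futures)

os.close(fd)
print(f"average: {total / DURATION / 1e6:.1f} MB/s (page-cache inflated)")
```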

 

Sequential Read/Write Speed

To measure sequential performance I ran a one-minute, 128KB sequential test over the entire span of the drive at a queue depth of 1. The results reported are in average MB/s over the entire test length.
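The same caveats apply to the quick sequential sketch below: it runs against a file-backed 8GB span rather than the raw drive, so it only mirrors the shape of the workload - 128KB blocks, a linearly advancing offset and a queue depth of 1 for one minute.

```python
import os
import time

PATH = "scratch.bin"       # same hypothetical scratch file as before
SPAN = 8 * 1024**3
BLOCK = 128 * 1024         # 128KB sequential writes
DURATION = 60              # 1 minute

fd = os.open(PATH, os.O_RDWR | os.O_CREAT)
os.ftruncate(fd, SPAN)
buf = os.urandom(BLOCK)

offset = written = 0
start = time.time()
while time.time() - start < DURATION:   # queue depth of 1: one write at a time
    os.pwrite(fd, buf, offset)
    offset = (offset + BLOCK) % SPAN    # advance linearly, wrap at the span end
    written += BLOCK

os.fsync(fd)
os.close(fd)
print(f"average: {written / (time.time() - start) / 1e6:.1f} MB/s (page-cache inflated)")
```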
Desktop Iometer - 128KB Sequential Read (4K Aligned)

Sequential read performance is once again untouched compared to the larger capacity Octane, while sequential write performance is seriously impacted:

Desktop Iometer - 128KB Sequential Write (4K Aligned)

AS-SSD Incompressible Sequential Performance

The AS-SSD sequential benchmark uses incompressible data for all of its transfers. The result is a pretty big reduction in sequential write speed on SandForce based controllers.
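The effect is easy to demonstrate: a SandForce controller only commits the compressed form of the host's data to NAND, so fully random data is its worst case. A quick zlib check shows just how far apart the two data types are; Everest doesn't lean on compression the way SandForce does, which is why incompressible data shouldn't hurt the Octane the way it hurts SF drives.

```python
import os
import zlib

BLOCK = 128 * 1024
compressible = b"\x00" * BLOCK       # trivially compressible pattern
incompressible = os.urandom(BLOCK)   # fully random data, as AS-SSD uses

for name, data in (("compressible", compressible),
                   ("incompressible", incompressible)):
    ratio = len(zlib.compress(data)) / len(data)
    print(f"{name}: compresses to {ratio:.0%} of its original size")

# Typical output:
#   compressible: compresses to 0% of its original size
#   incompressible: compresses to 100% of its original size
#
# A SandForce drive only has to write the compressed size to NAND, so its
# write speed falls off with incompressible data; a controller that writes
# the data as-is behaves the same way regardless of content.
```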

Incompressible Sequential Read Performance - AS-SSD

Incompressible Sequential Write Performance - AS-SSD 


Performance Over Time & TRIM

In our initial Octane review I mentioned that the drive exhibited very high write amplification under a workload composed of heavy random writes. That mostly impacts server workloads, but you could also see issues on a system without TRIM enabled. We turn to our standard TRIM test to show how bad things could get.
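As a quick refresher, write amplification is just the ratio of what the controller physically writes to NAND versus what the host asked it to write; the figures below are hypothetical and only illustrate the math.

```python
# Write amplification = data written to NAND / data written by the host.
# These numbers are hypothetical, purely to illustrate the ratio.
host_writes_gb = 100    # what the OS asked the drive to write
nand_writes_gb = 350    # what the controller actually wrote to flash
write_amplification = nand_writes_gb / host_writes_gb
print(f"write amplification: {write_amplification:.1f}x")
# -> 3.5x; the higher this ratio, the faster the NAND wears out and the more
#    background work the controller has to do under sustained random writes.
```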

The easiest way to verify that real-time garbage collection is working is to fill the drive with data and then write sequentially across the drive. All LBAs will have data in them and any additional writes will force the controller to allocate from the drive's pool of spare area. This path shouldn't have any bottlenecks in it; the process should be seamless. As we've already seen from our Iometer numbers, sequential write performance at low queue depths is around 160MB/s. A quick HD Tach pass of a completely full drive gives us the same result:


The Octane works as expected here, but now what happens if we subject the drive to a ton of 4KB random writes? Unfortunately this is where the Octane falls short. Our standard test involves a 20 minute, 4KB random write across all LBAs at a queue depth of 32. Look at the drive's performance after our torture test:


Average write speed is now less than a tenth of what it was when new. The good news is that any reasonable client workload won't put the drive in this state. The bad news is that OCZ is going to have its work cut out for it when it moves Everest into the enterprise space. With the drive in this state we can test the garbage collection path of the firmware. A quick format in Windows 7 TRIMs all user addressable LBAs, which should fully restore performance if TRIM is working:


Indeed it does. In reality, client workloads won't generate anywhere near this amount of random data and TRIM should help keep everything else in check. I would still like to see lower write amplification (it makes me sleep better at night) but I suspect we won't see that until we meet Everest's true successor.

Power Consumption

With a more reasonably sized drive in house, we're able to find out just how power efficient the Octane really is. At idle the drive still uses more power than the competition, but under load it's actually quite good - about on par with SandForce's SF-2281 (or better, depending on the workload).

Drive Power Consumption - Idle

Drive Power Consumption - Sequential Write

Drive Power Consumption - Random Write

Final Words

I'm still trying to get my hands on smaller capacities of other new SSDs to add them to our growing database of SSD performance data. For now it looks like the Octane is a good solution for typical desktop users, even at its 128GB capacity. In lighter, typical desktop workloads, performance is once again competitive with SandForce drives. It's in write-heavy workloads that the reduced number of available NAND die penalizes the Octane. The distinction is as simple as that: if you're running write-heavy workloads, the higher capacity Octanes remain competitive where the 128GB falls off. For most desktop/notebook users however, the 128GB drive should be among the best.

I still wouldn't recommend the Octane for Mac OS X use without TRIM. SandForce is still best suited for TRIM-less environments, although I've been quite pleased with the Samsung SSD 830 under OS X for the past few months as well.

My recommendation continues to be wait and see. The Octane has only been publicly available for a month now; it'll be several more before we get a good idea of how well these drives are holding up in the myriad system configurations and usage models out there. So far, so good though.

Source: http://www.anandtech.com