Having just taken over the company, Ryan was eager to know what he had to change to fix OCZ's reputation. I gave him a long list of issues to address. Most of my suggestions were obvious: go above and beyond the call of duty in taking care of his customers and our readers. He agreed to do everything on the list, with one exception. I told him that if he really wanted to succeed, he needed to abandon the OCZ name and start fresh. He told me he didn't believe that was necessary. We agreed to disagree.
I remember leaving that meeting thinking that Ryan didn't stand a chance. Memory companies were a dime a dozen. Differentiation was bordering on impossible. Having to overcome a bad reputation on top of that didn't make things any easier.
Ryan is headstrong. He'll sow a bunch of seed in the hopes of seeing just one blade of grass grow. I consider myself an optimist, but he's a different breed of one. He's had his share of failures over the years. Remember the OCZ brain mouse? The foray into notebook PCs? No one ever succeeded without trying.
Since then, OCZ has abandoned memory altogether. It now focuses on two product lines: power supplies and SSDs, with the latter making up the bulk of its revenue. And earlier this year, OCZ bought one of the first high-performance SSD controller manufacturers - Indilinx.
OCZ's strategy there didn't make sense to me. I knew Ryan wanted to buy SandForce, but SF was too expensive. I asked Ryan why he'd bother with Indilinx if what he really wanted was SandForce. He told me that the best way to drive the price down on SF was to buy Indilinx. It didn't add up until now.
Ryan took a big risk on Indilinx. They had a promising controller in 2009 and he bought up the bulk of what they could make in exchange for exclusivity rights. OCZ made Indilinx, and Indilinx made OCZ. As Indilinx began courting more vendors, OCZ went after SandForce. As soon as a first generation controller was ready, OCZ began shifting its volume from Indilinx to SandForce. More partners stepped up to fill the gap left by OCZ, but by then no one wanted Indilinx - they wanted SandForce based drives.
Simultaneously (perhaps as a result?), Indilinx's execution suffered, and the stumble proved irrecoverable. The value of Indilinx fell, and Ryan got the company for cheap.
I can only assume the strategy was to rinse and repeat. I had heard rumors of OCZ working on its own controller for the past two years. The Indilinx acquisition sped things up considerably. If the Indilinx solution was good enough, OCZ would shift its volume away from SandForce to its own controller. Starve SandForce and swoop back in later to buy them at a more reasonable price. Competition makes for competitive prices on both sides of the fence, it seems.
Things of course didn't work out that way. OCZ took a while to get its own controller design done and remained very dependent on SandForce. At the same time, SandForce diversified its portfolio. Since the announcement of the Indilinx acquisition, SandForce brought on a number of new partners to sell its drives. Even Kingston signed up. Finally, LSI agreed to purchase SandForce at a price well within the range SF was originally looking to sell for.
The situation didn't play out exactly how Ryan had hoped, I'm sure. But the result actually isn't all that bad. LSI has no intention of stopping its supply of SandForce controllers to OCZ (or other partners), and all of the work OCZ put into its own controller has finally paid off. It's hard to believe that I'm writing about the company I once advised to abandon its brand completely. And I'm not just writing about the company, I'm writing about its first in-house SSD controller. This is the Indilinx Everest:
There's not much we can tell from looking at the silkscreen on the IC, but this is the first all-new SSD controller from Indilinx since 2009. Its predecessor, Jetstream, never made it to market.
Everest is Indilinx's first 6Gbps controller and its delivery vehicle is the OCZ Octane SSD. You'll see both 6Gbps and 3Gbps versions of the drive, although what's launching today is the 6Gbps part.
The controller features eight NAND channels, with the ability to interleave multiple requests per channel. The capacities and price breakdown are below:
OCZ Octane Lineup | 1TB | 512GB | 256GB | 128GB
NAND Type | 25nm Intel Sync MLC | 25nm Intel Sync MLC | 25nm Intel Sync MLC | 25nm Intel Sync MLC
NAND | 1TB | 512GB | 256GB | 128GB
User Capacity | 953GiB | 476GiB | 238GiB | 119GiB
Random Read Performance | Up to 45K IOPS | Up to 37K IOPS | Up to 37K IOPS | Up to 37K IOPS
Random Write Performance | Up to 19.5K IOPS | Up to 16K IOPS | Up to 12K IOPS | Up to 7.7K IOPS
Sequential Read Performance | Up to 560 MB/s | Up to 535 MB/s | Up to 535 MB/s | Up to 535 MB/s
Sequential Write Performance | Up to 400 MB/s | Up to 400 MB/s | Up to 270 MB/s | Up to 170 MB/s
MSRP | TBD | $879.99 | $369.99 | $199.99
The 6Gbps drive uses Intel 25nm 2-bit-per-cell MLC synchronous NAND, similar to what you'd find in a Vertex 3. OCZ sent us a 512GB version with sixteen NAND packages and four 8GB die per package. We typically don't see any interleaving benefits beyond two die per package, so I'd expect similar performance between the 512GB drive and the 256GB version (despite the significant difference in specs). Spare area is pretty standard at around 7% of the drive's total NAND.
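As a quick sanity check on those numbers, the gap between the advertised capacity and the user capacity in the table above falls out of the GB-to-GiB conversion plus spare area. Here's a minimal sketch of the arithmetic, assuming the 8GB die are binary-sized (8GiB), as NAND die typically are - my own math, not OCZ's figures:

```python
# Back-of-the-envelope spare area check for the 512GB Octane.
# Assumes binary-sized (8GiB) die; these are my assumptions, not OCZ's figures.

raw_nand_gib = 16 * 4 * 8              # 16 packages x 4 die x 8GiB die = 512 GiB of NAND
advertised_bytes = 512e9               # "512GB" is decimal gigabytes
user_gib = advertised_bytes / 2**30    # what the OS reports

spare_fraction = 1 - user_gib / raw_nand_gib
print(f"User capacity: {user_gib:.1f} GiB")   # ~476.8 GiB - matches the table
print(f"Spare area: {spare_fraction:.1%}")    # ~6.9%, i.e. "around 7%"
```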
The Octane PCB is interesting to look at. While OCZ has a history of building its own PCBs, this is the first time we've had an SSD where both the PCB and the controller are made by OCZ. The controller side of the Octane PCB is home to eight TI muxes. OCZ wouldn't tell me their purpose, but I suspect it has to do with enabling interleaving across all of the available NAND packages. With only eight channels directly connected to the controller, accessing more than eight packages will inevitably require some pipelining/interleaving. In typical SSDs, I assume the muxes (switches) used to juggle multiple NAND die or packages are internal to the controller. My guess is that OCZ moved them external with Everest, although I'm not entirely sure why. It's also possible that this is somehow related to OCZ's ability to deliver a 1TB version of the drive.
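To make the pipelining idea concrete, here's a toy model of how sixteen packages could share eight channels through 2:1 muxes. The even/odd pairing is entirely my own guess at the topology, not OCZ's documented design:

```python
# Toy model: 16 NAND packages shared across 8 channels via 2:1 muxes.
# Purely illustrative - the real Everest arbitration logic is unknown.

NUM_CHANNELS = 8

def route(package):
    """Return (channel, mux input), assuming package N and package N+8
    share channel N through a 2:1 mux."""
    return package % NUM_CHANNELS, package // NUM_CHANNELS

# Two packages on the same channel can't transfer data simultaneously, but
# while one is busy with an internal program operation, the mux can hand
# the bus to its sibling - that's the interleaving win.
for pkg in (0, 8, 1, 9):
    channel, mux_input = route(pkg)
    print(f"package {pkg} -> channel {channel}, mux input {mux_input}")
```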
All Octane drives will have a 512MB DRAM cache split into two 256MB chips. OCZ's experience in buying DRAM in bulk from its days as a memory vendor likely comes in handy with securing such a large amount of memory per drive. The amount of cache in use will depend on the capacity of the drive. Larger drives have more LBAs to map to NAND pages, and thus require larger page mapping tables.
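The 512MB figure lines up neatly with a flat page-level mapping table. A rough sizing sketch, assuming 4KiB mapping granularity and 4-byte entries - both of which are my assumptions, not disclosed details of Everest's FTL:

```python
# Rough sizing of a flat logical-to-physical page map for the 512GB Octane.
# 4KiB granularity and 4-byte entries are assumptions, not Everest specifics.

user_bytes = 512e9                    # 512GB drive
page_size = 4096                      # assumed mapping granularity
entry_size = 4                        # assumed bytes per table entry

entries = user_bytes / page_size      # one entry per logical page
table_mib = entries * entry_size / 2**20
print(f"~{table_mib:.0f} MiB for the map alone")  # ~477 MiB - close to the full 512MB
```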
OCZ is clearly storing user data in the Octane's on-board DRAM (hence the large size). The jury is still out on whether or not this is a good idea. Intel prides itself on not storing any user data in DRAM (only in on-chip caches), while SandForce's technology negates the need for external DRAM at all. On the other hand, the Marvell based solutions (e.g. Crucial m4) and Samsung's own controller both keep user data in on-board DRAM. Switching between architectures requires a lot of firmware work, and as long as performance can be maintained I see no reason to choose one over the other. There's always the risk of power related data loss, but that's more of a concern for enterprise customers.
OCZ sent along this block diagram of the Everest controller which indicates there's an AES encryption engine on-chip:
The Octane comes with OCZ's usual toolbox for secure erasing/updating firmware. Both of those processes are very simple thanks to the utility. Unfortunately OCZ is still unable to get the toolbox working if you have Intel's RST driver installed, which significantly diminishes the usability of the software.
The Octane proved flawless in the short period of time I've had with the drive. That's not saying much beyond the fact that there are no obvious firmware issues. The Octane will ship with firmware revision 1315, which is the same revision I tested.
The Test
CPU | Intel Core i7 2600K running at 3.4GHz (Turbo & EIST Disabled) - for AT SB 2011, AS SSD & ATTO
Motherboard | Intel DH67BL Motherboard
Chipset | Intel H67
Chipset Drivers | Intel 9.1.1.1015 + Intel RST 10.2
Memory | Corsair Vengeance DDR3-1333 2 x 2GB (7-7-7-20)
Video Card | eVGA GeForce GTX 285
Video Drivers | NVIDIA ForceWare 190.38 64-bit
Desktop Resolution | 1920 x 1200
OS | Windows 7 x64
Random Read/Write Speed
The four corners of SSD performance are as follows: random read, random write, sequential read and sequential write speed. Random accesses are generally small in size, while sequential accesses tend to be larger and thus we have the four Iometer tests we use in all of our reviews.

Our first test writes 4KB in a completely random pattern over an 8GB space of the drive to simulate the sort of random access that you'd see on an OS drive (even this is more stressful than a normal desktop user would see). I perform three concurrent IOs and run the test for 3 minutes. The results reported are in average MB/s over the entire time. We use both standard pseudo randomly generated data for each write as well as fully random data to show you both the maximum and minimum performance offered by SandForce based drives in these tests. The average performance of SF drives will likely be somewhere in between the two values for each drive you see in the graphs. For an understanding of why this matters, read our original SandForce article.
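For readers who want to approximate the access pattern at home, here's a rough sketch of the 4KB random-write portion. This is my simplification, not our actual Iometer configuration: it runs at an effective queue depth of 1 (the real test uses three concurrent IOs) and writes through a file rather than the raw device, so the OS page cache will absorb some of the load:

```python
import os, random, time

SPAN = 8 * 2**30            # 8GB test span, as in the test
IO_SIZE = 4096              # 4KB transfers
DURATION = 180              # 3 minutes

# Preallocate the span on the drive under test. A real run would open the
# raw device with O_DIRECT to bypass the page cache.
fd = os.open("testfile.bin", os.O_RDWR | os.O_CREAT)
os.ftruncate(fd, SPAN)

buf = os.urandom(IO_SIZE)   # fully random (incompressible) data
written, start = 0, time.time()
while time.time() - start < DURATION:
    offset = random.randrange(SPAN // IO_SIZE) * IO_SIZE   # 4KB-aligned
    os.pwrite(fd, buf, offset)
    written += IO_SIZE

print(f"avg {written / (time.time() - start) / 2**20:.1f} MB/s")
os.close(fd)
```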
Sequential Read/Write Speed
To measure sequential performance I ran a 1 minute long 128KB sequential test over the entire span of the drive at a queue depth of 1. The results reported are in average MB/s over the entire test length.

AS-SSD Incompressible Sequential Performance
The AS-SSD sequential benchmark uses incompressible data for all of its transfers. The result is a pretty big reduction in sequential write speed on SandForce based controllers.

AnandTech Storage Bench 2011
Last year we introduced our AnandTech Storage Bench, a suite of benchmarks that took traces of real OS/application usage and played them back in a repeatable manner. I assembled the traces myself out of frustration with the majority of what we have today in terms of SSD benchmarks.

Although the AnandTech Storage Bench tests did a good job of characterizing SSD performance, they weren't stressful enough. All of the tests performed less than 10GB of reads/writes and typically involved only 4GB of writes specifically. That's not even enough to exceed the spare area on most SSDs. Most canned SSD benchmarks don't even come close to writing a single gigabyte of data, but that doesn't mean that simply writing 4GB is acceptable.
Originally I kept the benchmarks short enough that they wouldn't be a burden to run (~30 minutes) but long enough that they were representative of what a power user might do with their system.
Not too long ago I tweeted that I had created what I referred to as the Mother of All SSD Benchmarks (MOASB). Rather than only writing 4GB of data to the drive, this benchmark writes 106.32GB. It's the load you'd put on a drive after nearly two weeks of constant usage. And it takes a *long* time to run.
1) The MOASB, officially called AnandTech Storage Bench 2011 - Heavy Workload, mainly focuses on the times when your I/O activity is the highest. There is a lot of downloading and application installing that happens during the course of this test. My thinking was that it's during application installs, file copies, downloading and multitasking with all of this that you can really notice performance differences between drives.
2) I tried to cover as many bases as possible with the software I incorporated into this test. There's a lot of photo editing in Photoshop, HTML editing in Dreamweaver, web browsing, game playing/level loading (Starcraft II & WoW are both a part of the test) as well as general use stuff (application installing, virus scanning). I included a large amount of email downloading, document creation and editing as well. To top it all off I even use Visual Studio 2008 to build Chromium during the test.
The test has 2,168,893 read operations and 1,783,447 write operations. The IO breakdown is as follows:
AnandTech Storage Bench 2011 - Heavy Workload IO Breakdown
IO Size | % of Total
4KB | 28%
16KB | 10%
32KB | 10%
64KB | 4%
Many of you have asked for a better way to really characterize performance. Simply looking at IOPS doesn't really say much. As a result I'm going to be presenting Storage Bench 2011 data in a slightly different way. We'll have performance represented as Average MB/s, with higher numbers being better. At the same time I'll be reporting how long the SSD was busy while running this test. These disk busy graphs will show you exactly how much time was shaved off by using a faster drive vs. a slower one during the course of this test. Finally, I will also break out performance into reads, writes and combined. The reason I do this is to help balance out the fact that this test is unusually write intensive, which can often hide the benefits of a drive with good read performance.
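To be explicit about what those two metrics mean, here's how they'd be computed from a raw trace. The trace format below is invented for illustration; our actual playback tool records far more detail:

```python
# Deriving "average MB/s" and "disk busy time" from an I/O trace.
# The trace tuples here are made up for illustration.

trace = [
    # (is_write, size_in_bytes, service_time_in_seconds)
    (False, 65536, 0.0004),
    (True,  4096,  0.0001),
    (True,  4096,  0.0001),
    # ...millions more operations in the real trace
]

def summarize(ops):
    total_bytes = sum(size for _, size, _ in ops)
    busy_time = sum(t for _, _, t in ops)   # time the drive spent servicing IOs
    return total_bytes / busy_time / 2**20, busy_time

for label, ops in (("combined", trace),
                   ("reads",    [op for op in trace if not op[0]]),
                   ("writes",   [op for op in trace if op[0]])):
    mbps, busy = summarize(ops)
    print(f"{label}: {mbps:.1f} MB/s average, {busy * 1000:.1f} ms busy")
```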
There's also a new light workload for 2011. This is a far more reasonable benchmark, typical of everyday use: lots of web browsing, photo editing (but with a greater focus on photo consumption), video playback, as well as some application installs and gaming. This test isn't nearly as write intensive as the MOASB, but it's still multiple times more write intensive than what we were running last year.
As always I don't believe that these two benchmarks alone are enough to characterize the performance of a drive, but hopefully along with the rest of our tests they will help provide a better idea.
The testbed for Storage Bench 2011 has changed as well. We're now using a Sandy Bridge platform with full 6Gbps support for these tests.
AnandTech Storage Bench 2011 - Heavy Workload
We'll start out by looking at average data rate throughout our new heavy workload test:

AnandTech Storage Bench 2011 - Light Workload
Our new light workload actually has more write operations than read operations. The split is as follows: 372,630 reads and 459,709 writes. The relatively close read/write ratio better mimics a typical light workload (although even lighter workloads would be far more read centric).

The I/O breakdown is similar to the heavy workload at small IOs, however you'll notice that there are far fewer large IO transfers:
AnandTech Storage Bench 2011 - Light Workload IO Breakdown
IO Size | % of Total
4KB | 27%
16KB | 8%
32KB | 6%
64KB | 5%
Performance Over Time & TRIM
Testing TRIM functionality is important because it gives us insight into the drive's garbage collection algorithms. OCZ insists the Octane has idle time garbage collection, a remnant of the original Indilinx drives; however, in my testing I could not get the idle GC to do anything once I put the drive into a highly fragmented state.

Let's start at the beginning though. The easiest way to ensure real time garbage collection is working is to fill the drive with data and then write sequentially across the drive. All LBAs will have data in them and any additional writes will force the controller to allocate from the drive's pool of spare area. This path shouldn't have any bottlenecks in it; the process should be seamless. As we've already seen from our Iometer numbers, sequential write performance at low queue depths is around 280MB/s. A quick HD Tach pass of a completely full drive gives us the same result:

If you have TRIM enabled on a desktop platform with a client (read: non-server) workload, none of this should matter to you. TRIM works and there doesn't appear to be any weird lag or bottlenecks in the GC path. If you don't have TRIM enabled (read: OS X) with a client workload, this could warrant a pass. The only reason I'm hesitant to recommend the Octane for use with a TRIM-less OS X installation is that I'm not entirely sure the drive will recover from this ultra low performance state without TRIM. Sequential writing alone may not be enough to adequately restore the Octane's performance. Normally idle GC would be enough, but it seems as if once things get slow enough, the drive's idle GC can't do much. I suspect all of this is stuff that OCZ can tweak via firmware, but I need more time with the drive to really be certain.
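For the curious, the basic shape of this check is easy to reproduce on Linux. This is a rough sketch, not our actual Windows/HD Tach procedure; the mount point is hypothetical and fstrim requires root:

```python
import os, subprocess, time

MOUNT = "/mnt/octane"       # hypothetical mount point for the drive under test
CHUNK = 128 * 1024          # 128KB sequential writes

def seq_write_mbps(path, total=2 * 2**30):
    """Time a sequential write pass and return MB/s."""
    buf = os.urandom(CHUNK)
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_SYNC)
    start = time.time()
    for _ in range(total // CHUNK):
        os.write(fd, buf)
    os.close(fd)
    return total / (time.time() - start) / 2**20

before = seq_write_mbps(f"{MOUNT}/pass1.bin")        # drive in a fragmented state
subprocess.run(["fstrim", "-v", MOUNT], check=True)  # TRIM all free space
after = seq_write_mbps(f"{MOUNT}/pass2.bin")         # should recover toward ~280MB/s
print(f"before TRIM: {before:.0f} MB/s, after TRIM: {after:.0f} MB/s")
```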
Finally, if you're deploying a server with lots of random writes, the Octane isn't for you. OCZ will eventually release an Everest based drive for the enterprise, but the Octane is not that drive.
Power Consumption
As we saw in our Samsung SSD 830 review, power consumption of these 512GB drives can be quite high. The Octane is no exception. At idle the Octane uses a little less power than the 830, but still more than any of the other smaller drives we have:

Final Words
The Indilinx Everest is a surprisingly competent controller. When OCZ first mentioned its work on the controller to me I wrote it off as yet another low performing alternative that wasn't worth consideration. Based on its performance in our Storage Bench tests, I'd say the OCZ Octane is easily able to hold its own against SandForce based drives. The obvious benefit is that you get solid performance regardless of the type of data you're moving around - everything from text to compressed movies moves at the same rate. The flip side is what you give up: SandForce drives tend to have very good average write amplification (0.60 - 0.70) thanks to their real time compression/dedupe, and the result is relatively consistent performance over time, something more traditional SSDs can't offer nearly as well. With TRIM enabled this should be a moot point, but it's still an advantage no one else can duplicate without SandForce's technology.

Write amplification is a concern, although I suspect it'll only be a problem for enterprise workloads. The bigger issue is that addressing these limitations will likely require a significant redesign of the Octane's firmware architecture. OCZ did let me know that an even faster Octane H drive is due out in the not-too-distant future. It's possible the Octane H may address my concerns here. I'll find out in due time.
It's clear that the Octane is a powerful competitor; what matters now is its reliability. In the past OCZ was at the mercy of third party controller makers to fix bugs in their firmware, but now, with Indilinx in house, I wonder how things will change. I believe OCZ needs a good 12 months of an Intel or Samsung-like track record to really build confidence in its products. The brand definitely took a hit with all of the SandForce BSOD issues (and the wild goose chase of interim "solutions" to the problem). OCZ has the opportunity to start fresh with Octane, and there can be no finger pointing this time. The controller, firmware and drive are all produced in house. I don't expect the drive to be perfect in every system, but it had better be very close to it.
The good news is that if OCZ is able to deliver reliable and compatible firmware, the Octane is worth owning. It performs at the top of its class, and it's priced more aggressively than OCZ's SandForce based drives. My standard recommendation for any new SSD still applies: wait and see. Let others (myself included, the Octane will be going into a work machine starting today) be the beta testers. If the waters look safe, only then should you jump in.
Source: AnandTech