Anyone who knows me knows I like to ask “why?” Maybe I never outgrew the 2-year-old phase. But I also like to ask “why not?” Every now and then you need to rethink everything you know top to bottom because something might have changed.

I’ve been talking to a lot of enterprise datacenter architects and managers lately. They’re interested in using flash in their servers and storage, but they can’t get over all the “problems.”

The conversation goes something like this: Flash is interesting, but it’s crazy expensive in $/bit. The prices have to come way down – after all, it’s just a commodity part. And I have these $4k servers – why would I put an $8k PCIe card in them? That makes no sense. And the stuff wears out, which is an operational risk for me – disks last forever. Maybe flash isn’t ready for prime time yet.

These arguments are reasonable if you think about flash as a disk replacement, and don’t think through all the follow-on implications.

In contrast, I’ve also been spending a lot of time with the biggest datacenters in the world – you know, the ones we all know by brand name. They have at least 200k servers, and anywhere from 1.5 million to 7 million disks. They notice CapEx and OpEx a lot. Multiply anything by numbers that big and it’s noticeable. (My simple example: add one LED to each server in a 200k-server fleet and you’ve added roughly 26K watts of power draw plus about $10K in LED cost.) They are very scientific about cost. More specifically, they measure work/$ very carefully. Anything that increases work or reduces $ is very interesting – doing both at once is the holy grail. Already one of those datacenters is completely diskless. Others are part way there, or have the ambition of being there. You might think they’re crazy – how can they spend so much on flash when disks are so much cheaper, and these guys offer their services for free?
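To make that multiplication concrete, here’s a back-of-the-envelope sketch in Python. The per-LED power draw and unit cost are my own assumed figures, chosen to land near the numbers above, not measured values:

```python
# Back-of-the-envelope math for the "one LED per server" example.
# The per-LED figures are assumptions for illustration, not measured values.
servers = 200_000
led_watts = 0.13          # assumed draw of one indicator LED plus its driver
led_cost_dollars = 0.05   # assumed unit cost of one LED

total_watts = servers * led_watts          # 26,000 W of added power draw
total_cost = servers * led_cost_dollars    # $10,000 of added parts cost

print(f"Added power draw: {total_watts:,.0f} W")   # ~26 kW
print(f"Added parts cost: ${total_cost:,.0f}")     # ~$10K
```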

When the large datacenters – I call them hyperscale datacenters – measure cost, they’re looking at purchase cost (including metal racks and enclosures, and shipping), service cost (both parts and human expense), operational disruption overhead and the complexity of managing it, the opportunity cost of running old, less efficient systems instead of new ones, and of course facilities expenses – buildings, power, cooling, people… They try to optimize the mix of all of these.

Let’s look at the arguments against using flash one by one.

Flash is just a commodity part
This is a very big fallacy. Flash is not a commodity part, and flash is not all the same. The parts you see in cheap consumer devices deserve their price. In the chip industry, it’s common to have manufacturing fallout; 3%-10% is reasonable. What’s more, devices come out at different performance levels – just look at the different performance grades of the same x86 design. In the flash business, 100% of the devices are sold, used, and find their way into products. Those cheap consumer products usually get the 3%-10% that would be scrap in other industries. (I was once told – with a smile – “those are the parts we sweep off the floor”…)

Each generation of flash (about 18 months between them) and each manufacturer (there are 5, depending on how you count) has very different characteristics. There are wild differences in erase time, write time, read time, bandwidth, capacity, endurance, and cost. No one supplier is best at all of these, and leadership moves around. More importantly, in a flash system, how you trade these things off has a huge effect on write latency (the #1 factor in work done), latency outliers (consistent operation), endurance or lifespan, power consumption, and solution cost. All flash products are not equal – not by a long shot. Even hyperscale datacenters have different types of solutions for different needs.

It’s also important to know that operating and storage temperature, the inter-arrival time of writes, and “over-provisioning” (the amount of capacity hidden for background use and garbage collection) have profound impacts on lifespan and performance.
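Here’s a rough sketch of how those variables feed into lifespan. Every input is a made-up but plausible example value – rated P/E cycles, host write rate, and a write amplification factor that more over-provisioning generally reduces:

```python
# Rough SSD lifespan estimate, showing why over-provisioning and write
# patterns matter so much. All inputs are assumed example values.
capacity_tb = 1.0              # usable capacity
pe_cycles = 3_000              # assumed rated program/erase cycles for the NAND
write_amplification = 2.0      # assumed; tends to drop as over-provisioning increases
host_writes_tb_per_day = 1.0   # assumed sustained host write rate

total_endurance_tb = capacity_tb * pe_cycles                 # total NAND writes available
nand_writes_per_day = host_writes_tb_per_day * write_amplification
lifespan_years = total_endurance_tb / nand_writes_per_day / 365

print(f"Estimated lifespan: {lifespan_years:.1f} years")     # ~4.1 years with these inputs
```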

$8k PCIe card in a $4k server – really?
I am always stunned by this. No one thinks twice about spending more on virtualization licenses than on hardware, or say $50k for a database license to run on a $4k server. It’s all about what work you need to accomplish, and what’s the best way to accomplish it. It’s no joke that in database applications it’s pretty easy to get 4x the work from a server with a flash solution inserted. You probably won’t do worse than 4x, and can do as well as 10x. On a purely hardware basis, that makes sense – I can have 1 server @ $4k + $8k flash vs. 4 servers @ $4k. I just saved $4k CapEx. More importantly, I saved the service contracts, power, cooling and admin of 3 servers. If I include virtualization or database licenses, I saved another $150k plus the annual service contracts on those licenses. That’s easy math. If I worry about users supported rather than work done, I can support as many as 100x users. The math becomes overwhelming. An $8k PCIe card in a $4k server? You bet, when I think of work/$.
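That arithmetic is worth spelling out. The hardware prices below come straight from the paragraph above; the database license figure is the $50k example, so treat the whole thing as an illustrative sketch rather than a quote:

```python
# Consolidation math from the paragraph above: 1 flash server doing the
# work of 4 plain servers. Prices echo the post; this is an illustration.
server_cost = 4_000
flash_card_cost = 8_000
consolidation = 4            # the post's conservative 4x work-per-server gain

legacy_capex = consolidation * server_cost    # 4 servers = $16k
flash_capex = server_cost + flash_card_cost   # 1 server + card = $12k
print(f"CapEx saved: ${legacy_capex - flash_capex:,}")   # $4,000

# Follow-on savings: per-server licenses (the $50k example), plus service,
# power, and cooling for the 3 servers you no longer need.
license_per_server = 50_000
print(f"License spend avoided: ${(consolidation - 1) * license_per_server:,}")  # $150,000
```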

The stuff wears out & disks last forever
It’s true that car tires wear out, and depending on how hard you use them that might happen faster or slower. But tires are one of the most important parts in a car’s performance – acceleration, stopping, handling – you couldn’t do any of that without them. The only time you really have catastrophic failure with tires is when you wear them way past any reasonable point – until they are bald and should have been replaced. Flash is like that – you get lots of warning as it’s wearing out, and lots of opportunity to plan operationally and replace the flash without disruption. You might need to replace it after 4 or 5 years, but you can plan and do it gracefully. Disks can last “forever,” but they also fail randomly and often.

Reliability statistics across millions of hard drives show somewhere around 2.5% failing annually – and that’s for first-quality drives. Those are unpredicted, catastrophic failures. Depending on your storage system, each one means a rebuild or re-replication of TBytes of data, a subsequent degradation in performance (which can completely mess up the load balancing of a cluster of 20 to 200 other nodes too), potentially network traffic overhead, and a physical service event that needs to be handled manually and fairly quickly. And really – how often do admins want to take the risk of physically replacing a drive while a system is running? Just one mistake by your tech and it’s all over… Operationally, flash is way better: less disruptive, predictable, lower cost, and the follow-on implications are much simpler.
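At the fleet sizes mentioned earlier, that failure rate becomes a relentless drumbeat. A quick sketch – the 2.5% annual failure rate is from the statistics above, and the fleet sizes echo the hyperscale numbers earlier in the post:

```python
# What a 2.5% annual failure rate means at hyperscale fleet sizes.
afr = 0.025  # annual failure rate from the drive statistics above

for fleet in (1_500_000, 7_000_000):
    failures_per_year = fleet * afr
    failures_per_day = failures_per_year / 365
    print(f"{fleet:,} drives -> ~{failures_per_year:,.0f} failures/yr "
          f"(~{failures_per_day:,.0f}/day), each a rebuild plus a service event")
```

That works out to roughly 100 to 500 unplanned drive replacements every single day, each with rebuild traffic and a human in the aisle.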

Crazy expensive $/bit
OK – so this argument doesn’t seem so relevant anymore. Even so, in most cases you can’t use much of the disk capacity you have. It will be stranded, because you need to keep spare space as databases and the like grow – running out of space for a database is catastrophic. And if you are driving a system hard, you often don’t have the bandwidth left to actually access that extra capacity. It’s common to use only half of the available capacity of drives.
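That stranded capacity changes the effective $/bit comparison. Here’s a hedged sketch – the raw prices are invented round numbers purely for illustration, and the utilization figures are assumptions (the 50% disk figure matches the observation above):

```python
# Effective cost per *usable* TB once stranded capacity is counted.
# Prices are assumed round numbers for illustration, not market quotes.
disk_price_per_tb = 50      # assumed raw $/TB for disk
flash_price_per_tb = 500    # assumed raw $/TB for flash
disk_utilization = 0.5      # post: common to use only half of disk capacity
flash_utilization = 0.9     # assumption: flash can run much closer to full

effective_disk = disk_price_per_tb / disk_utilization     # $100 per usable TB
effective_flash = flash_price_per_tb / flash_utilization  # ~$556 per usable TB
print(f"Disk:  ${effective_disk:.0f}/usable TB")
print(f"Flash: ${effective_flash:.0f}/usable TB")
# The raw 10x gap shrinks once utilization is counted, and that's before
# any of the work/$ gains are factored in.
```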

Caching solutions change the equation as well. You can spend money on flash for its performance characteristics, and shift disk drive spend to fewer, higher-capacity, slower, more power-efficient drives for bulk capacity. Often, for the same or similar overall storage spend, you can have the same capacity at 4x the system performance. And the space, power, and cooling needed for that system are dramatically reduced.
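A sketch of that rebalancing, with every figure assumed for illustration: a baseline array of fast 15K-RPM drives versus fewer slow, high-capacity drives fronted by a flash cache, at similar total spend:

```python
# Illustrative flash-cache rebalancing; all prices, wattages, and
# capacities are assumptions, not product specs.
fast_drives, fast_cost, fast_watts, fast_tb = 48, 400, 11, 0.6   # 15K-RPM tier
slow_drives, slow_cost, slow_watts, slow_tb = 8, 250, 6, 4.0     # big, slow tier
flash_cache_cost, flash_cache_watts = 16_000, 50                 # assumed cache card

baseline = dict(cost=fast_drives * fast_cost,
                watts=fast_drives * fast_watts,
                tb=fast_drives * fast_tb)
tiered = dict(cost=slow_drives * slow_cost + flash_cache_cost,
              watts=slow_drives * slow_watts + flash_cache_watts,
              tb=slow_drives * slow_tb)

for name, cfg in (("Baseline", baseline), ("Tiered", tiered)):
    print(f"{name}: ${cfg['cost']:,}, {cfg['watts']} W, {cfg['tb']:.0f} TB usable")
# Similar spend (~$19k vs ~$18k here), similar capacity, but a fraction of
# the spindles, power, and cooling - with the flash tier serving hot data.
```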

Even so, flash is not going to replace large-capacity storage for a long, long time, if ever. Whatever the case, $/bit is simply not the right metric for evaluating flash. Yes, flash is more expensive per bit. It’s just that in most operational contexts it more than makes up for that with other savings and work/$ improvements.

So I would argue (and I’m backed up by the biggest hyperscale datacenters in the world) that flash is ready for prime time adoption. Work/$ is the correct metric, but you need to measure from the application down to the storage bits to get that metric. It’s not correct to think about flash as “just a disk replacement” – it changes the entire balance of a solution stack: application performance, responsiveness, and cumulative work; server utilization; power consumption and cooling; maintenance and service; and predictable operational stability. It’s not just a small win; it’s a big win. It’s not a fit yet for large pools of archival storage – though even there, a lot of energy is going into trying to make it work. So no – enterprise will not go diskless for quite a while, but it is understandable why hyperscale datacenters want to go diskless. It’s simple math.

Every now and then you need to rethink everything you know top to bottom because something might have changed.



Merging the working cultures of two different companies can be a very complex task. In my past experience with these situations I have not typically found the result to be highly positive for the employees of the incoming company.

As one example, the acquiring company may tell the employees they will be able to keep their startup environment and mentality, but within one or two quarters nearly all of those attributes are eliminated and the employees just become another cog in the bigger machine. After that, the drive and creativity that were fostered by that startup mentality often disappear.

Year two and still happily married
In the first week of 2012, LSI completed the acquisition of SandForce, further expanding its growing coverage of flash memory technology IP. SandForce had grown significantly since its emergence from stealth mode in 2009 to become a leading provider of flash controllers for enterprise, cloud and client solid state storage solutions.

The SandForce team was kept whole and became the Flash Components Division of LSI. The team even continues to reside in the same building it occupied the year before, which reinforces the internal feeling of the original startup environment. Most of the original SandForce culture was kept intact, and that made the transition into the larger LSI Corporation much easier for most people. The changes that were made were spread out over a longer period of time, so they were easier for the team to digest, with minimal disruption to their daily flow.

On the business side, the acquisition has delivered significant upside. Over the last 14 months, both companies have invested significant time and resources to leverage the SandForce® flash controller technology across the company. LSI had already designed a high-end enterprise solution using SandForce technology in its PCIe-based Nytro™ product line. With both companies now under one umbrella, the engineering teams are free to develop advanced capabilities between the products, enabling deeper integration that will result in greater customer benefits.

Greater efficiency … for the sake of customers
As a single larger company, LSI is now able to redistribute engineering and support resources as needed to better align with the rapid expansion of flash memory storage solutions for its customers. It is also much easier to ensure a high level of interoperability between related products and solutions. Customers already purchasing LSI products can use the sales and support teams already in place to access and incorporate a larger set of LSI solutions.

Enterprise storage manufacturers have millions if not billions of dollars of revenue, and their reputations, at stake when they select new and emerging technologies like flash memory to provide storage for their customers. There is always a level of concern when these companies work with smaller startup organizations. The high-technology industry is full of companies that had great technology but, for one reason or another, could not sustain the business and fell into oblivion. The acquisition of SandForce by LSI adds the support and confidence of a multi-billion dollar company, helping assuage any possible concerns of those enterprise manufacturers.

Personally, as one of the early SandForce employees, I have found the acquisition to be very beneficial to everyone involved – employees, customers, and end users. I look forward to the further advancements we will make as we continue to drive flash memory into the storage industry.

 
