I want to warn you, there is some thick background information here first. But don't worry. I'll get to the meat of the topic, and that's this: ultimately, I think that PCIe® cards will evolve into more external, rack-level, pooled flash solutions, without sacrificing the great attributes they have today. This is just my opinion, but other leaders in flash are going down this path too…
I've been working on enterprise flash storage since 2007, mulling over how to make it work. Endurance, capacity, cost and performance have all been concerns to grapple with. Of course, the flash itself keeps changing as the process nodes change: 60nm, 50nm, 35nm, 24nm, 20nm… and single-level cell (SLC) to multi-level cell (MLC) to triple-level cell (TLC), plus all the variants of these "trimmed" for specific use cases. Specified endurance has gone from 1 million program/erase (P/E) cycles to 3,000, and in some cases 500.
It's worth pointing out that almost all the "magic" that has been developed around flash was already scoped out in 2007. It just takes a while for a whole new industry to mature. Individual die capacity has increased, meaning fewer die are needed for a solution – and that means less parallel bandwidth for data transfer… And the "requirement" for state-of-the-art single-operation write latency has fallen well below the write latency of the flash itself. (What the ?? Yea – I'll talk about that in some other blog. But flash has ~1,500µs write latency, where state-of-the-art flash cards are ~50µs.) When I describe the state of the technology it sounds pretty pessimistic. I'm not. We've overcome a lot.
We built our first PCIe card solution at LSI in 2009. It wasn't perfect, but it was better than anything else out there in many ways. We've learned a lot in the years since – both from making them, and from dealing with customers and users of both our own solutions and our competitors'. We're lucky to be an important player in storage, so in general the big OEMs, large enterprises and the hyperscale datacenters all want to talk with us – not just about what we have or can sell, but about what we could have and what we could do. They're generous enough to share what works and what doesn't, what the values of solutions are and what the pitfalls are too. Honestly? It's the hyperscale datacenters in the lead, both practically and in vision.
If you haven't nodded off to sleep yet, that's a long-winded way of saying: things have changed fast, and, boy, we've learned a lot in just a few years.
The most important thing we've learned…
Most importantly, we've learned it's latency that matters. No one is pushing the IOPS limits of flash, and no one is pushing the bandwidth limits of flash. But they sure are pushing the latency limits.
PCIe cards are great, but…
We've gotten lots of feedback, and one of the biggest things we've learned is: PCIe flash cards are awesome. They radically change the performance profiles of most applications, especially databases, allowing servers to run efficiently and multiplying the actual work done by a server 4x to 10x (and in a few extreme cases 100x). So the feedback we get from large users is "PCIe cards are fantastic. We're so thankful they came along. But…" There's always a "but," right??
It tends to be a pretty long list of frustrations, and they differ depending on the type of datacenter using them. We're not the only ones hearing it. To be clear, none of these are stopping people from deploying PCIe flash… the attraction is just too compelling. But the problems are real, and they have real implications, and the market is asking for real solutions.
Of course, everyone wants these fixed without affecting single-operation latency, or increasing cost, etc. That's what we're here for though – right? Solve the impossible?
A quick summary is in order, and it's not looking good. For a given solution, flash is getting less reliable, there is less bandwidth available at capacity because there are fewer die, we're driving latency way below the actual write latency of flash, and we're not satisfied with the best solutions we have, for all the reasons above.
If you think these through enough, you start to consider one basic path. It also turns out we're not the only ones realizing this. Where will PCIe flash solutions evolve over the next 2, 3, 4 years? The basic goals are:
One easy answer would be: that's a flash SAN or NAS. But that's not the answer. Not many customers want a flash SAN or NAS – not for their new infrastructure. More importantly, all the data is at the wrong end of the straw. The poor server is left sucking hard. Remember – this is flash, and people use flash for latency. Today these SAN-type flash devices have 4x-10x worse latency than PCIe cards. Ouch. You have to suck the data through a relatively low-bandwidth interconnect, after passing through both the storage and network stacks. And there is interaction between the I/O threads of various servers and applications – you have to wait in line for that resource. It's true there is a lot of startup energy in this space. It seems to make sense if you're a startup, because SAN/NAS is what people use today, and there's lots of money spent in that market today. However, it's not what the market is asking for.
Another easy answer is NVMe SSDs. Right? Everyone wants them – right? Well, OEMs at least. Front-bay PCIe SSDs (HDD form factor or NVMe – lots of names) that crowd out your disk drive bays. But they don't fix the problems. The extra mechanicals and form factor are more expensive, and just make replacing the cards every 5 years a few minutes faster. Wow. With NVMe SSDs, you can fit fewer HDDs – not good. They also provide uniformly bad cooling, and hard-limit power to 9W or 25W per device. And to protect the storage in these devices, you need enough of them that you can RAID or otherwise protect the data. Once you have enough for protection, they give you awesome capacity, IOPS and bandwidth – too much, in fact. But that's not what applications need; they need low latency for the working set of data.
What do I think the PCIe replacement solutions in the near future will look like? You need to pool the flash across servers (to optimize bandwidth and resource usage, and allocate appropriate capacity). You need to protect against failures/errors and limit the span of failure, commit writes at very low latency (lower than native flash) and maintain low-latency, bottleneck-free physical links to each server… To me that implies:
That means the performance looks exactly as if each server had multiple PCIe cards, but the capacity and bandwidth resources are shared, and systems can remain resilient. So ultimately, I think that PCIe cards will evolve into more external, rack-level, pooled flash solutions, without sacrificing the great attributes they have today. This is just my opinion, but as I say – other leaders in flash are going down this path too…
What's your opinion?
Tags: DAS, datacenter, direct attached storage, enterprise IT, flash, hard disk drive, HDD, hyperscale, latency, NAS, network attached storage, NVMe, PCIe, SAN, solid state drive, SSD, storage area network
Big data and Hadoop are all about exploiting new value and opportunities with data. In financial trading, business and some areas of science, it's all about being fastest or first to take advantage of the data. The bigger the data sets, the smarter the analytics. The next competitive edge with big data comes when you layer in flash acceleration. The challenge is scaling performance in Hadoop clusters.
The most cost-effective option emerging for breaking through disk I/O bottlenecks to scale performance is to use high-performance read/write flash cache acceleration cards for caching. This is essentially a way to get more work for less cost by bringing data closer to the processing. The LSI® Nytro™ product has been shown in testing to improve the time it takes to complete Hadoop software framework jobs by up to 33%.
Combining flash cache acceleration cards with Hadoop software is a big opportunity for end users and suppliers. LSI estimates that less than 10% of Hadoop software installations today incorporate flash acceleration[1]. This will grow rapidly as companies see the increased productivity and ROI of using flash to accelerate their systems. And use of Hadoop software is also growing fast: IDC predicts a CAGR of as much as 60% by 2016[2]. Drivers include IT security, e-commerce, fraud detection and mobile data user management. Gartner predicts that Hadoop software will be in two-thirds of advanced analytics products by 2015[3]. There are many thousands of Hadoop software clusters already deployed.
Where flash makes the most immediate sense is with those who have smaller clusters doing lots of in-place batch processing. Hadoop is purpose-built for analyzing a variety of data, whether structured, semi-structured or unstructured, without the need to define a schema or otherwise anticipate results in advance. Hadoop enables scaling that allows an unprecedented volume of data to be analyzed quickly and cost-effectively on clusters of commodity servers. Speed gains are about data proximity. This is why flash cache acceleration typically delivers the highest performance gains when the card is placed directly in the server on the PCI Express® (PCIe) bus.
PCIe flash cache cards are now available with multiple terabytes of NAND flash storage, which substantially increases the hit rate. We offer a solution with both onboard flash modules and Serial-Attached SCSI (SAS) interfaces to create high-performance direct-attached storage (DAS) configurations consisting of solid state and hard disk drive storage. This couples the low latency performance benefits of flash with the capacity and cost per gigabyte advantages of HDDs.
To keep the processor close to the data, Hadoop uses servers with DAS. And to get the data even closer to the processor, the servers are usually equipped with significant amounts of random access memory (RAM). As an additional benefit, smart implementation of Hadoop and flash components can reduce the overall server footprint required. Scaling is simplified, with some solutions supporting up to 128 devices sharing a very high-bandwidth interface. Most commodity servers provide eight or fewer SATA ports for disks, limiting expandability.
Hadoop is great, but flash-accelerated Hadoop is best. It's an effective way, as you work to extract full value from big data, to secure a competitive edge.
Anyone who knows me knows I like to ask "why?" Maybe I never outgrew the 2-year-old phase. But I also like to ask "why not?" Every now and then you need to rethink everything you know, top to bottom, because something might have changed.
I've been talking to a lot of enterprise datacenter architects and managers lately. They're interested in using flash in their servers and storage, but they can't get over all the "problems."
The conversation goes something like this: Flash is interesting, but it's crazy expensive in $/bit. The prices have to come way down – after all, it's just a commodity part. And I have these $4k servers – why would I put an $8k PCIe card in them? That makes no sense. And the stuff wears out, which is an operational risk for me – disks last forever. Maybe flash isn't ready for prime time yet.
These arguments are reasonable if you think about flash as a disk replacement and don't think through all the follow-on implications.
In contrast, I've also been spending a lot of time with the biggest datacenters in the world – you know, the ones we all know by brand name. They have at least 200k servers each, and anywhere from 1.5 million to 7 million disks. They notice CapEx and OpEx a lot; multiply anything by numbers that big and it's noticeable. (My simple example: add one LED to each of 200k servers and you've added roughly 26,000 watts of power draw plus about $10k in LED cost.) They are very scientific about cost. More specifically, they measure work/$ very carefully. Anything that increases work or reduces $ is very interesting – doing both at once is the holy grail. Already one of those datacenters is completely diskless. Others are part way there, or have the ambition of getting there. You might think they're crazy – how can they spend so much on flash when disks are so much cheaper, and these guys offer their services for free?
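The LED example is just fleet arithmetic. A quick sketch, where the per-LED wattage and unit cost are my assumptions, chosen to match the rough totals above:

```python
# Back-of-envelope hyperscale math: a trivial per-server addition,
# multiplied across the fleet. The per-LED figures are assumptions
# picked to reproduce the approximate totals quoted in the text.
servers = 200_000
led_power_w = 0.13     # assumed draw per indicator LED, in watts
led_unit_cost = 0.05   # assumed cost per LED, in dollars

extra_power_w = servers * led_power_w       # total extra power draw
extra_part_cost = servers * led_unit_cost   # total part cost

print(f"Extra power: {extra_power_w:,.0f} W")   # ~26,000 W
print(f"Part cost:   ${extra_part_cost:,.0f}")  # ~$10,000
```

The point isn't the LED; it's that at 200k servers, any per-unit cost, however tiny, becomes a line item worth optimizing.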
When the large datacenters – I call them hyperscale datacenters – measure cost, they're looking at purchase cost (including metal racks and enclosures, and shipping), service cost (both parts and human expense), operational disruption overhead and the complexity of managing it, the opportunity cost of new systems vs. old systems that are less efficient, and of course facilities expenses – buildings, power, cooling, people… They try to optimize the mix of these.
Let's look at the arguments against using flash one by one.
Flash is just a commodity part
This is a very big fallacy. Flash is not a commodity part, and flash is not all the same. The parts you see in cheap consumer devices deserve their price. In the chip industry, it's common to have manufacturing fallout; 3%-10% is reasonable. What's more, the devices come out at different performance levels – just look at the performance versions of the same x86 design. In the flash business, 100% of the devices are sold, used, and find their way into products. Those cheap consumer products usually get the 3%-10% that would be scrap in other industries. (I was once told – with a smile – "those are the parts we sweep off the floor"…)
Each generation of flash (about 18 months between them) and each manufacturer (there are 5, depending how you count) have very different characteristics. There are wild differences in erase time, write time, read time, bandwidth, capacity, endurance and cost. No one supplier is best at all of these, and leadership moves around. More importantly, in a flash system, how you trade these things off has a huge effect on write latency (the biggest factor in work done), latency outliers (consistent operation), endurance or lifespan, power consumption, and solution cost. All flash products are not equal – not by a long shot. Even hyperscale datacenters have different types of solutions for different needs.
It's also important to know that the temperature of operation and storage, the inter-arrival time of writes, and "over-provisioning" (the amount of capacity hidden for background use and garbage collection) have profound impacts on lifespan and performance.
$8k PCIe card in a $4k server – really?
I am always stunned by this. No one thinks twice about spending more on virtualization licenses than on hardware, or say $50k for a database license to run on a $4k server. It's all about what work you need to accomplish, and what's the best way to accomplish it. It's no joke that in database applications it's pretty easy to get 4x the work from a server with a flash solution inserted. You probably won't get worse than 4x, and you might get as much as 10x. On a purely hardware basis, that makes sense: I can have 1 server @ $4k + $8k flash vs. 4 servers @ $4k. I just saved $4k CapEx. More importantly, I saved the service contracts, power, cooling and admin of 3 servers. If I include virtualization or database licenses, I saved another $150k plus annual service contracts on those licenses. That's easy math. If I worry about users supported rather than work done, I can support as many as 100x the users. The math becomes overwhelming. An $8k PCIe card in a $4k server? You bet – when I think of work/$.
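The consolidation math above fits in a few lines. The prices and the conservative 4x multiplier come from the text; everything else in this sketch is illustrative:

```python
# Hedged sketch of the work/$ argument: one server plus a PCIe flash
# card doing the work of four plain servers. 4x is the low end the
# text cites; license and OpEx savings come on top of this CapEx delta.
server_cost = 4_000       # $4k server, per the text
flash_card_cost = 8_000   # $8k PCIe flash card, per the text
speedup = 4               # conservative work multiplier from the text

plain_capex = speedup * server_cost           # 4 servers: $16,000
flash_capex = server_cost + flash_card_cost   # 1 server + card: $12,000
capex_saved = plain_capex - flash_capex

print(f"CapEx saved: ${capex_saved:,}")  # $4,000, before licenses/power
```

At 10x the multiplier, or once per-server license costs enter the comparison, the gap widens dramatically, which is the author's point.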
The stuff wears out & disks last forever
It's true that car tires wear out, and depending on how hard you use them that might happen faster or slower. But tires are among the most important parts in a car's performance – acceleration, stopping, handling – you couldn't do any of that without them. The only time you really have catastrophic failure with tires is when you wear them way past any reasonable point – until they are bald and should have been replaced. Flash is like that: you get lots of warning as it's wearing out, and lots of opportunity to plan operationally and replace the flash without disruption. You might need to replace it after 4 or 5 years, but you can plan for it and do it gracefully. Disks can last "forever," but they also fail randomly and often.
Reliability statistics across millions of hard drives show that somewhere around 2.5% fail annually – and that's for first-quality drives. Those are unpredicted, catastrophic failures, and depending on your storage system that means you need to go into rebuild or replication of terabytes of data, with a subsequent degradation in performance (which can completely mess up the load balancing of a cluster of 20 to 200 other nodes too), potential network traffic overhead, and a physical service event that needs to be handled manually and fairly quickly. And really – how often do admins want to take the risk of physically replacing a drive while a system is running? Just one mistake by your tech and it's all over… Operationally, flash is way better: less disruptive, predictable, lower cost, and the follow-on implications are much simpler.
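To put that failure rate in concrete terms, here is a quick sketch using the ~2.5% annualized rate and the fleet sizes mentioned earlier (1.5 million to 7 million disks):

```python
# What a ~2.5% annualized failure rate (AFR) means at hyperscale.
# The AFR and fleet sizes are the figures the text cites.
afr = 0.025
for fleet in (1_500_000, 7_000_000):
    per_year = fleet * afr
    per_day = per_year / 365
    print(f"{fleet:,} disks -> {per_year:,.0f} failures/year "
          f"(~{per_day:.0f}/day)")
```

That works out to hundreds of unplanned rebuild-and-replace events every single day at the high end, each one a manual service call plus a performance hit.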
Crazy expensive $/bit
OK – so this argument doesn't seem so relevant anymore. Even so, in most cases you can't use much of the disk capacity you have. It will be stranded, because you need to keep spare space as databases and the like grow. If you run out of space for databases, the result is catastrophic. And if you are driving a system hard, you often don't have the bandwidth left to actually access that extra capacity. It's common to use only half the available capacity of drives.
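Stranded capacity changes the effective price of disk, too. A minimal sketch, where the raw $/GB figure is an assumed illustrative number (only the 50% utilization comes from the text):

```python
# Effective cost per *usable* GB when capacity is stranded.
# raw_cost_per_gb is an assumed illustrative HDD price, not from the text.
raw_cost_per_gb = 0.05   # assumed sticker price, $/GB
utilization = 0.5        # text: often only half the capacity gets used

effective_cost_per_gb = raw_cost_per_gb / utilization
print(f"${effective_cost_per_gb:.2f} per usable GB")  # 2x sticker price
```

So the naive $/bit comparison already flatters disk by a factor of two before performance enters the picture.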
Caching solutions change the equation as well. You can spend money on flash for the performance characteristics, and shift disk drive spend to fewer, higher capacity, slower, more power efficient drives for bulk capacity. Often for the same or similar overall storage spend you can have the same capacity at 4x the system performance. And the space and power consumed and cooling needed for that system is dramatically reduced.
Even so, flash is not going to replace large-capacity storage for a long, long time, if ever. Whatever the case, $/bit is simply not the right metric for evaluating flash. Yes, flash is more expensive per bit. But in most operational contexts, it more than makes up for that through other savings and work/$ improvements.
So I would argue (and I'm backed up by the biggest hyperscale datacenters in the world) that flash is ready for prime-time adoption. Work/$ is the correct metric, but you need to measure from the application down to the storage bits to get that metric. It's not correct to think about flash as "just a disk replacement" – it changes the entire balance of a solution stack, from application performance, responsiveness and cumulative work, to server utilization, to power consumption and cooling, to maintenance, service and predictable operational stability. It's not just a small win; it's a big win. It's not a fit yet for large pools of archival storage – though a lot of energy is going into trying to make even that work. So no, enterprise will not go diskless for quite a while, but it is understandable why hyperscale datacenters want to go diskless. It's simple math.
Every now and then you need to rethink everything you know top to bottom because something might have changed.