Software-defined datacenters (SDDC) and software-defined storage (SDS) are big movements in the industry right now. Just read the trade press or attend any conference and you’ll see that – it’s a big deal. We’re seeing for-pay vendors providing solutions, as well as strong ecosystems evolving around open source solutions. It’s not surprising why – enterprises need to deploy large-scale compute clusters, and that takes either deep expertise that’s very rare, or orchestration tools that have not existed in the past. It’s the “necessity being the mother of invention” thing…

So datacenters are being forced to deploy large-scale clusters to handle the scale of compute needed, and the amount of data that is being captured, analyzed and stored. As an industry, then, we’re being forced to simplify applications as well as the management and deployment of these large-scale clusters. That’s great for datacenters. It’s even better that we’re figuring out how to provide those expanded resources and manage them for less money, and with fewer people to manage them. (Well, it’s probably good for everyone but the sys admins…)

These new technologies are the key enabler. This blog, the second in my three-part series (based on interesting questions I was asked by CEO & CIO, a Chinese business magazine), examines how SDDC and SDS are helping enterprises get more out of their datacenter gear. You can read part 1 here.

CEO & CIO: What are your views on software-defined storage? What’s the development roadmap of LSI in achieving software-defined storage?

We see SDS as one of a number of vital changes underway in the datacenter. SDS promises to span some or all of file, object, key-value and block in order to pool resources and to simplify the infrastructure required in a datacenter, as well as to smooth the migration to object or key-value storage over time. Great examples of these SDS solutions are Ceph, Swift, Cinder, Gluster and VSAN/VVOLs. The model brings great benefits in datacenter management, resource pooling and allocation, and usability. The main problem is performance – and by that I do not mean extreme performance. I mean poor performance that damages TCO, reduces efficiency of infrastructure and increases costs – much worse than you would get otherwise. These solutions work, but compromise resource efficiency. Many require flash integrated in the system simply to maintain existing performance. However, this is a permanent change in how storage is used and deployed, and it’s a good change.

While block is what underlies most storage and will continue to for some time, the system and application level view is changing. We view SDS as having great synergy with LSI’s architectural direction – shared DAS infrastructure and the ability to add “above the block” capabilities like quality of service (QoS) and direct key/value hardware, bringing improved performance and resource efficiency. Together, SDS + LSI innovation = resource pooling and allocation (including flash and cool/cold storage), management and virtual machine (VM) agility, performance and resource efficiency.

As a result, there has been tremendous interest from SDS vendors to work with us, to demonstrate prototype systems, and to make solutions better. We are working with many SDS partners to provide complete solutions. This is not a one-size-fits-all world, so there will be several solutions. Those solutions are not ready yet, but they’re coming, and will probably displace the older file and block storage systems we know and love.

CEO & CIO: Industry giants such as Intel have outlined their visions for software-defined datacenters. Chinese Internet giants have also put forward similar plans. What views does LSI have on software-defined datacenter?

If you view the AIS keynote, you’ll see we believe this is a critical part of the future datacenter. But just one critical part. Interestingly, we had Intel present as well during AIS.

SDDC creates a critical control plane for the datacenter. It is the software abstraction model that enables resource pooling – pooling of compute, storage and network, with memory in the near future. It enables the automation and allocation of tasks and resources in the datacenter. The leading models are VMware® SDDC and OpenStack® software, but there are others that are important too. They’re just a little less public right now. Anyway – it’s way too early to predict which will be dominant. Just like SDS, SDDC exchanges simplified control and abstraction for performance and efficiency. As a result, it’s not a very useful concept, at least not at hyperscale levels, without hardware that really, truly supports and enables it. As the datacenter has changed from a compute-centric model to a dataflow model, the storage and network and, soon, memory become very important. They dictate the useful work that can be extracted from the datacenter.

I believe we are, as an industry, at the start of the hardware transition to support these. We are building hardware solutions for storage and network that are being designed into products today. We are working very closely with three of the largest datacenters in the US, and two in China to build not just the SDDC, but the pooled hardware infrastructure that is needed to make it work.

It’s critical to understand that SDDC solutions really work, but often the performance and efficiency are – well – terrible. That’s been the evolution in computer science and computer architecture since the beginning. You raise the abstraction level, which simplifies development and support, but that either costs performance or requires hardware architected to support those abstractions.

As a result, it’s really difficult to talk about SDDCs without a rack-scale architecture to support them. So we are working closely with the key SDDC software solutions/vendors, even the ones I didn’t list, to integrate and optimize the solutions to make the SDDC actually work. We have been working very closely with VMware and the OpenStack community, and we are changing the way the software plane interacts with the pooled resources. Again, there has been so much interest in our shared DAS, incorporating flash in the same architecture and management, and our Axxia® SDN control plane processor for networks.

I talk about rack-scale architectures to support SDS in the second half of this keynote and in my blog “China: A lot of talk about resource pooling, a better name for disaggregation.”

Summary: So I believe SDS is a big movement, it’s a good thing, and it’s here to stay. But… the performance is poor today. Very poor. That’s where we come in, with hardware that enables SDS and not only makes performance acceptable, but helps make it excellent, and improves efficiency and cost too. And SDDC is also a massive movement that will define the future datacenter. But it is intertwined with the rack-level concepts of pooling or disaggregation to make it really compelling. Again – that’s where we come in.

These were good questions that were interesting to answer. I hope they’re interesting to you too. I’ll post some more soon about how the Chinese Internet giants differ from other customers, and about forward-looking technologies.




Turn on your smart phone and it works like a charm. But explosive global adoption of smart phones with feature-rich applications is stressing mobile networks like never before. For mobile network providers, the challenge couldn’t be more acute: Find new ways to deliver more mobile bandwidth even as the average revenue per user remains flat.

In this AIS interview, LSI’s Jeff Connell, director of mobile networking product marketing, talks about how network providers are turning to heterogeneous networks (HetNets) to reduce the cost of deploying, scaling and managing mobile networks. One way network providers are streamlining deployments is by using equipment built with smart silicon like LSI Axxia. The highly integrated ASIC helps customers cut the cost and power of new network equipment designs.

Reducing network latency
Speed is the currency of smart phone communications. Users want their information without delays. Here, Jon Devlin, director of networking ecosystem at LSI, discusses the importance of reducing network latency for applications including mobile video conferencing and voice over IP.

 



Open Compute and OpenStack are changing the datacenter world that we know and love. I thought they were having an impact. Changing our OEM and ODM products, changing what we expect from our vendors, changing the interoperability of managing infrastructure from different vendors. Changing our ability to deploy and manage grid and scale-out infrastructure. And changing how quickly and at what high level we can be innovating. I was wrong. It’s happening much more quickly than I thought.

On November 20-21 we hosted LSI AIS 2013. As I mentioned in a previous post, I was lucky enough to moderate a panel about Open Compute and OpenStack – “the perfect storm.” Truthfully? It felt more like sitting with two friends talking about our industry over beer. I hope to pick up that conversation again someday.

The panelists were awesome: Cole Crawford of Open Compute and Chris Kemp of OpenStack. These guys are not only influential. They have been involved from the very start of these two initiatives, and are in many ways key drivers of both movements. These are impressive, passionate guys who really are changing the world. There aren’t too many of us who can claim that. It was an engaging hour that I learned quite a bit from, and I think the audience did too. I wanted to share from my notes what I took away from that panel. I think you’ll be interested.

 

Goals and Vision: two open source initiatives
There were a few motivations behind Open Compute, and the goal was to improve on these:

  • There have been no standards or formats for interchange in hardware design.
  • IT infrastructure has roots going back to railway switching standards (19” rack).
  • IT infrastructure has consisted of very closed systems with limited interoperability.
  • Datacenters have been wasting tremendous amounts of energy and resources on cooling and power distribution.

The goal then, for the first time, is to work backwards from workload and create open source hardware and infrastructure that is openly available and designed from the start for large scale-out deployments. The idea is to drive high efficiency in cost, materials use and energy consumption. More work/$.

One surprising thing that came up – LSI is in every current contribution in Open Compute.

OpenStack layers services that describe abstractions of compute, networking and storage. LSI products tend to sit at that lowest level of abstraction, where there is now a wave of innovation. OpenStack had similar fragmentation issues to deal with, and its goals are something like:

  • Bring software resource components together for pooled compute, storage and network resources.
  • Present them as resources for application deployment.
  • Create a virtual reference implementation, where the details can vary.
  • Allow integrating new infrastructure under that abstraction.
  • Simplify deployment of clusters at scale.
  • Almost like a kernel for the scale-out cluster.

There is a certain amount of compatibility with Amazon’s cloud services. Chris’s point was that Amazon is incredibly innovative and a lot of enterprises should use it, but OpenStack enables both service providers and private clouds to compete with Amazon, and it allows unique innovation to evolve on top of it.

OpenStack and Open Compute are not products. They are “standards” or platform architectures, with companies using those standards to innovate on top of them. The idea is for one company to innovate on another’s improvements – everybody building on each other’s work. A huge brain trust. The goal is to create a competitive ecosystem, enable a rapid pace of innovation, and deliver large-scale, inexpensive infrastructure that can be run by a small team and managed like a single server to solve massive-scale problems.

Here’s their thought. Hardware is a supply chain management game + services. Open Compute is an opportunity for anyone to supply that infrastructure. And today, OEMs are killer at that. But maybe ODMs can be too. Open Compute allows innovation on top of the basic interoperable platforms. OpenStack enables a framework for innovation on top as well: security, reliability, storage, network, performance. It becomes the enabler for innovation, and it provides an “easy” way for startups to plug into a large, vibrant ecosystem. And for customers – someone said it’s “exa data without exadollar”…

As a result, the argument is this should be good for OEMs and ISVs, and help create a more innovative ecosystem and should also enable more infrastructure capacity to create new and better services. I’m not convinced that will happen yet, but it’s a laudable goal, and frankly that promise is part of what is appealing to LSI.

Open Compute and OpenStack are peanut butter and jelly
Ok – if you’re outside of the US, that may not mean much to you. But if you’ve lived in the US, you know that means they fit perfectly, and make something much greater together than their humble selves.

Graham Weston, Chairman of the Rackspace Board, was the one who called these two “peanut butter and jelly.”

Cole and Chris both felt the initiatives are co-enabling, and probably co-travelers too. Sure, they can and will deploy independently, but OpenStack enables the management of large-scale clusters, which really is not easy. Open Compute enables lower-cost, large-scale manageable clusters to be deployed. Together? Large-scale clusters that can be installed and deployed more affordably and easily, without hiring a cadre of rare experts.

Personally? I still think they are both a bit short of being ready for “prime time” – or broad deployment, but Cole and Chris gave me really valid arguments to show me I’m wrong. I guess we’ll see.

US or global vision?
I asked if these are US-centric or global visions. There were no qualms – these are global visions. This is just the 3rd anniversary of OpenStack, but even so, there are OpenStack organizations in more than 100 countries, 750 active contributors, and large-scale deployments in datacenters that you probably use every day – especially in China and the US. Users include companies like PayPal, Yahoo, Rackspace, Baidu, Sina Weibo, Alibaba and JD, as well as government agencies and HPC clusters like CERN, NASA, and China Defense.

Open Compute is even younger – about 2 years old. (I remember – I was invited to the launch). Even so, most of Facebook’s infrastructure runs on Open Compute. Two Wall Street banks have deployed large clusters, with more coming, and Riot Games, which uses Open Compute infrastructure, drives 3% of the global network traffic with League of Legends. (A complete aside – one of my favorite bands to workout with did a lot of that game’s music, and the live music at the League of Legends competition a few months ago: http://www.youtube.com/watch?v=mWU4QvC09uM – not for everyone, but I like it.)

Both Cole and Chris emailed me more data after the fact on who is using these initiatives. I have to say – they are right. It really has taken off globally, especially OpenStack in the fast-paced Chinese market this year.

Book: 4th Paradigm – A tribute to computer science researcher Jim Gray
Cole and Chris mentioned a book during the panel discussion. A book I had frankly never heard of. It’s called the 4th Paradigm. It is a series of papers dedicated to researcher Jim Gray, who was a quiet but towering figure that I believe I met once at Microsoft Research. The book was put together by Gordon Bell, someone who I have met, and have profound respect for. And there are mentions of people, places, and things that have been woven through my (long) career. I think I would sum up its thesis in a quote from Jim Gray near the start of the book:

“We have to do better producing tools to support the whole research cycle – from data capture and data curation to data analysis and data visualization.”

This is stunningly similar to the very useful big data framework we have been using recently at LSI: “capture, hold, analyze”… I guess we should have added visualize, but that doesn’t have too much to do with LSI’s business.

As an aside, I would recommend this book for the background and inspiration in why we as an industry are trying to solve many of these computer science problems, and how transformational the impact might be. I mean really transformational in the world around us, what we know, what we can do, and how quickly we can do it – which is tightly related to our CEO’s keynote and the vision video at AIS.

Demos at AIS: peanut butter and jelly – and bread?
Ok – I’m struggling for an analogy. We had an awesome demo at AIS that Chris and Cole pointed out during the panel. It was originally built using Nebula’s TOR appliance, Open Compute hardware, and LSI’s storage magic to make it complete. The three pieces coming together. Tasty. The Open Compute hardware was swapped out at the last minute (for safety – those boxes were meant for the datacenter, not the showcase in a hotel with tipsy techies), and the replacement was generously supplied by Supermicro.

I don’t think the proto was close to any one of our visions, but even as it stood, it inspired a lot of people, and would make a great product. A short rack of servers with pooled storage in the rack, OpenStack orchestrating the point-and-click spawning and tear-down of dynamically sized LUNs of different characteristics under the Cinder presentation layer, and deployment of tasks or VMs on them.
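For readers who want a concrete picture of that Cinder-driven lifecycle, here is a minimal sketch using the standard OpenStack command-line client wrapped in Python. The volume type and the server and volume names are placeholders I made up, not details from the demo itself.

```python
# Minimal sketch of a Cinder-style volume lifecycle, driven through the
# standard "openstack" CLI. Assumes OpenStack credentials are already
# sourced into the environment; "fast-tier", "demo-vm-01" and "demo-lun-01"
# are hypothetical names, not ones from the AIS demo.
import subprocess

def openstack(*args):
    """Run one OpenStack CLI command and return its text output."""
    return subprocess.run(["openstack", *args], check=True,
                          capture_output=True, text=True).stdout

# Spawn a dynamically sized LUN with chosen characteristics...
openstack("volume", "create", "--size", "100", "--type", "fast-tier", "demo-lun-01")

# ...attach it to a VM so work can be deployed against it...
openstack("server", "add", "volume", "demo-vm-01", "demo-lun-01")

# ...and tear it down again when the workload is finished.
openstack("server", "remove", "volume", "demo-vm-01", "demo-lun-01")
openstack("volume", "delete", "demo-lun-01")
```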

We’re working on completing our joint vision. I think the industry will be very impressed when they see it. Chris thinks people will be stunned, and the industry will be changed.

Catalyzing the market – the future may be closer than we think
Ultimately, this is all about economics. We’re in the middle of an unprecedented bifurcation in IT use. On one hand we’re running existing apps on new, dense enterprise hardware using VMs to layer many applications on few servers. On the other, we’re investing in applications to run at scale across inexpensive clusters of commodity hardware. This has spawned a split in IT vendor business units, product lines and offerings, and sometimes even IT infrastructure management in the datacenter.

New applications and services need more infrastructure, and are getting more expensive to power, cool, purchase and run. And there is pressure to transform the datacenter from a cost center into a profit center. As these innovations start, more companies will need scale infrastructure, arguably Open Compute, and then will need an OpenStack framework to deploy it quickly.

What does this mean? With a combination of big data and mobile device services driving economic value, we may be at the point where these clusters start to become mainstream. As an industry we’re already seeing a slight decline in traditional IT equipment sales and rapid growth in scale-out infrastructure sales. If that continues, then OpenStack and Open Compute are a natural fit. The deployment rate uptick in life sciences, oil and gas, and financials this year – really anywhere there is large-scale Hadoop, big data or analytics – may be the start of that growth curve. But both Chris and Cole felt it would probably take 5 years to truly take off.

Time to Wrap Up
I asked Chris and Cole for audience takeaways. Theirs were pretty simple, though possibly controversial in an industry like ours.

Hardware vendors should think about their products – how they interface, what abstractions they present, and how they fit into the ecosystem. These new ecosystems should allow them to plug in easily. For example, storage under Cinder can be quickly and easily morphed – that’s what we did with our demo.

We should be designing new software to run on distributed scale-out systems in clouds. Chris went on to say their code name was “Maestro” because it orchestrates like in a symphony, bringing things together in a beautiful way. He said “make instruments for the artists out there.” The brain trust. Look for their brushstrokes.

Innovate in the open, and leverage the open initiatives that are available to accelerate innovation and efficiency.

On your next IT purchase, try an RFP with an Open Compute vendor. Cole said you might be surprised. Worst case, you may get a better deal from your existing vendor.

So, Open Compute and OpenStack are changing the datacenter world that we know and love. I thought these were having a quick impact, changing our OEM and ODM products, changing what we expect from our vendors, changing the interoperability of managing infrastructure from different vendors, changing our ability to deploy and manage grid and scale-out infrastructure, and changing how quickly and at what high level we can be innovating. I was wrong. It’s happening much more quickly than even I thought.

 



Last week at LSI’s annual Accelerating Innovation Summit (AIS) the company took the wraps off a vision that should lead its technical direction for the next few years.

The LSI keynote featured a video of three situations as they might evolve in the future:

  • A man falls from a bicycle in a foreign country and needs medical attention
  • A bullet train stops before hitting a tree that fell across its tracks
  • A hacker is prevented from accessing secure information using identity theft

I’ll focus on just one of these to show how LSI expects the future to develop.  In the bicycle accident scenario, a businessman falls to the ground while riding a bicycle in a foreign country.  Security cameras that have been upgraded to understand what they see notify an emergency services agency, which sends an ambulance to the scene.  The paramedic performs a retinal scan on the victim, using it to retrieve his medical records, including his DNA sequence, from the web.

The businessman’s wearable body monitoring system also communicates with the paramedic’s instruments to share his vital signs.  All of this information is used by cloud-based computers to determine a course of action which, in the video, requires an injection that has been custom-tuned to the victim’s current situation, his medical history, and his genetic makeup.

That’s a pretty tall order, and it will require several advances in the state of the art, but LSI is using this and other scenarios to work with its clients and translate this vision into the products of the future.

What are the key requirements to make this happen? LSI CEO Abhi Talwalkar told the audience that we need to create a society that is supported by preventive, predictive and assisted analytics to move in a direction where the general welfare is assisted by all that the Internet and advanced computing have to offer.  Since data is growing at an exponential rate, he argued that this will require the instant retrieval of interlinked data objects at scale. Everything that is key to solving the task must be immediately available, and must be quickly analyzed to provide a solution to the problem at hand. The key will be the ability to process interlinked pieces of data that have not been previously structured to handle any particular situation.

To achieve this we will need larger-scale computing resources than are currently available, all closely interconnected, that all operate at very high speeds.  LSI hopes to tap into these needs through its strengths in networking and communications chips for the communications, its HDD and server and storage connectivity array chips and boards for large-scale data, and its flash controller memory and PCIe SSD expertise for high performance.

LSI brought to AIS several of the customers and partners it is working with to develop these technologies. Speakers from Intel, Microsoft, IBM, Toshiba, Ericsson and others showed how they are working with LSI’s various technologies to improve the performance of their own systems.  On the exhibition floor, booths from LSI and many of its clients demonstrated new technologies that performed everything from high-speed stock market analysis to fast flash management.

It’s pretty exciting to see a company that has a clear vision of its future and is committed to moving its entire ecosystem ahead to make that happen and help companies manage their business more effectively during what LSI calls the “Datacentric Era.” LSI has certainly put a lot of effort into creating a vision and determining where its talents can be brought to bear to improve our lives in the future.



The problem with multicore processors isn’t that they have a lot of cores. I hope my IC designer colleagues don’t jump me when I say that having more than one core on a chip is a simple matter of cut and paste. The tricky part is getting all those cores to work together – a coordinated, efficient effort is key. After all, if it were enough for the cores to work independently, we would just use multiple single-core processors. To be sure, the devil is in the details of connecting cores and managing how they share resources.

A key value of a multicore processor is using the processing muscle of additional cores – all working on a problem at the same time – to accelerate system performance. Basically, two heads are better than one. And 16 are even better. That is if they don’t get in each other’s way. When multiple cores are working on one job, they need to deftly hand off information to each other and to other on-chip resources like memory and I/O. Managing and streamlining the movement of all that information to minimize delays can require complex traffic management. If one core or another resource becomes a bottleneck, the entire performance benefit of multiple cores can be lost.

The challenge of cache coherence
Another complexity of coordinating multiple cores is cache coherence – the process of ensuring the consistency of data stored in each processor’s cache memory. Processors store frequently accessed information in this small, fast memory so they don’t have to access it again and again from slower storage such as main memory or disks. For example, if a core is running an application for ordering products online, it might load the inventory record for a particular product from disk into cache, modify it, and then write it back to disk when the transaction is complete.

The rub arises when more than one core caches the same data. If two cores were running the online ordering application, they might both cache the same inventory record. Both cores might then execute a transaction to sell the last unit of that product and not detect that the product is sold out. In a system with coherent cache, when one core makes any changes to cached data, all other cores storing the same data are notified that their cache is outdated, prompting an update for consistency.  Tracking all cached data and making sure it is coherent is a formidable effort requiring highly sophisticated cache management.
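A toy software analogy makes the overselling scenario above concrete. The sketch below simulates two "cached" copies of an inventory record in Python; it illustrates the stale-data hazard, not how hardware coherence protocols are actually implemented.

```python
# Toy analogy of the stale-cache hazard: two "cores" each hold a private
# cached copy of the same inventory record.
inventory = 1                  # one unit left in shared memory

core0_cache = inventory        # core 0 loads the record into its cache
core1_cache = inventory        # core 1 caches the same record

sales = 0
if core0_cache > 0:            # core 0 sells the last unit...
    inventory = core0_cache - 1
    sales += 1
if core1_cache > 0:            # ...core 1 still sees 1 in stock (stale copy)
    inventory = core1_cache - 1
    sales += 1

print(f"{sales} units sold, but only 1 existed")   # oversold: 2 units "sold"

# With coherent caches, core 0's write would invalidate core 1's copy,
# forcing core 1 to re-read the inventory (now 0) before completing its sale.
```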

A third challenge in getting multicore design right is choosing the number and type of cores. Networking system workloads consist of varying tasks. Some are large complex tasks that require powerful general-purpose cores running complex programs. Others are very simple, quick tasks that are executed millions of times a second and are best handled by specialized compute engines. And of course there are tasks that fall between these extremes. Getting the right number and mix of compute engines requires detailed understanding of the applications the multicore processor will be used in. Too many cores and the processor consumes too much power. Too few of one type of core and the others sit idle wasting cost and, again, power.

Striking the right balance of interconnect, cache coherence and cores
The problem with multicore processors is getting the right combination of interconnect, cache coherence and number and type of cores. LSI’s latest solution to the multicore challenge for enterprise networking is the Axxia® 4500 family of processors. For general-purpose processing, the Axxia 4500 features up to 4 ARM® Cortex™-A15 cores that deliver high performance and power efficiency in a standard Linux programming environment. For special-purpose packet processing, the new chips offer up to 50Gb/s packet processing and acceleration engines for security encryption, deep packet inspection, traffic management and other networking functions. Connecting all these compute resources is the ARM CoreLink™ CCN-504 interconnect with integrated cache coherence and quality-of-service technologies for efficient on-chip communications.



Pushing your enterprise cluster solution to deliver the highest performance at the lowest cost is key in architecting scale-out datacenters. Administrators must expand their storage to keep pace with their compute power as capacity and processing demands grow.


Diagram of DataBolt technology buffering 6Gb/s SAS media while maintaining a 12Gb/s SAS link.

Beyond price and capacity, storage resources must also deliver enough bandwidth to support these growing demands. Without enough I/O bandwidth, connected servers and users can bottleneck, requiring sophisticated storage tuning to maintain reasonable performance. By using direct-attached storage (DAS) server architectures, IT administrators can reduce the complexities and performance latencies associated with storage area networks (SANs). Now, with LSI 12Gb/s SAS or MegaRAID® technology, or both, connected to 12Gb/s SAS expander-based storage enclosures, administrators can leverage DataBolt™ technology to clear I/O bandwidth bottlenecks. The result: better overall resource utilization, while preserving legacy drive investments. Typically, a slower end device would step down the entire 12Gb/s SAS storage subsystem to 6Gb/s SAS speeds. How does DataBolt technology overcome this? Well, without diving too deep into the nuts and bolts, intelligence in the expander buffers data and then transfers it out to the drives at 6Gb/s speeds in order to match the bandwidth between faster hosts and slower SAS or SATA devices.
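A quick back-of-the-envelope calculation shows why that matters. The 4-lane wide port and the use of raw line rates (ignoring protocol overhead) are simplifying assumptions, but they capture the effect:

```python
# Back-of-the-envelope look at the host-side bandwidth effect of buffering
# in the expander. Lane count and raw line rates are illustrative assumptions.
lanes = 4                       # SAS wide port between controller and expander
legacy_rate = 6                 # Gb/s negotiated by each 6Gb/s SAS/SATA drive
full_rate = 12                  # Gb/s the controller-to-expander link can run

stepped_down = lanes * legacy_rate   # whole subsystem dropping to 6Gb/s
with_databolt = lanes * full_rate    # host-side link held at 12Gb/s

print(f"host-side wide port, stepped down to 6Gb/s: {stepped_down} Gb/s")   # 24
print(f"host-side wide port, held at 12Gb/s:        {with_databolt} Gb/s")  # 48
# Each drive still runs at 6Gb/s behind the expander, but enough drives in
# aggregate can keep the faster host-side lanes busy.
```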

The DataBolt-enabled Hadoop server’s bandwidth is optimized with 12Gb/s SAS.

So for this demonstration at AIS, we are showcasing two Hadoop Distributed File System (HDFS) servers. Each server houses the newly shipping MegaRAID 9361-8i 12Gb/s SAS RAID controller connected to a drive enclosure featuring a 12Gb/s SAS expander and 32 6Gb/s SAS hard drives. One has a DataBolt-enabled configuration, while the other has DataBolt disabled.

For the benchmarks, we ran DFSIO, which simulates MapReduce workloads and is typically used to detect network performance bottlenecks and to tune hardware configurations as well as overall I/O performance.

The primary goal of the DFSIO benchmarks is to saturate storage arrays with random read workloads in order to ensure maximum performance of a cluster configuration. Our tests resulted in MapReduce Jobs completing faster in 12Gb/s mode, and overall throughput increased by 25%.
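For those who want to try something similar, this is roughly how a TestDFSIO run is launched. The jar path, file count and file size below are assumptions for illustration, not the exact parameters we used in the demo.

```python
# Rough sketch of a typical TestDFSIO run, kicked off from Python.
# The jar path, file count and file size are assumptions, not the demo's
# actual configuration.
import subprocess

JOBCLIENT_TESTS_JAR = "/usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-jobclient-tests.jar"

def dfsio(mode, nr_files=64, file_size_mb=1000):
    """Run TestDFSIO in -write or -read mode across the cluster."""
    subprocess.run([
        "hadoop", "jar", JOBCLIENT_TESTS_JAR, "TestDFSIO",
        mode, "-nrFiles", str(nr_files), "-fileSize", str(file_size_mb),
    ], check=True)

dfsio("-write")   # lay down the test files first
dfsio("-read")    # then measure read throughput across the cluster
```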

DataBolt optimization of DFSIO MapReduce tests (MB/s) per cluster slot maps.



The lifeblood of any online retailer is the speed of its IT infrastructure. Shoppers aren’t infinitely patient. Sluggish infrastructure performance can make shoppers wait precious seconds longer than they can stand, sending them fleeing to other sites for a faster purchase. Our federal government’s halting rollout of the Health Insurance Marketplace website is a glaring example of what can happen when IT infrastructure isn’t solid. A few bad user experiences that go viral can be damaging enough. Tens of thousands can be crippling.  

In hyperscale datacenters, any number of problems including network issues, insufficient scaling and inconsistent management can undermine end users’ experience. But one that hits home for me is the impact of slow storage on the performance of databases, where the data sits. With the database at the heart of all those online transactions, retailers can ill afford to have their tier of database servers operating at anything less than peak performance.

Slow storage undermines database performance
Typically, Web 2.0 and e-commerce companies run relational databases (RDBs) on these massive server-centric infrastructures. (Take a look at my blog last week to get a feel for the size of these hyperscale datacenter infrastructures.) If you are running that many servers to support millions of users, you are likely using some kind of open-source RDB such as MySQL or other variations. Keep in mind that Oracle 11gR2 likely retails around $30K per core but MySQL is free. But the performance of both, and of most other relational databases, suffers immensely when transactions are retrieving data from storage (or disk). You can only throw so much RAM and CPU power at the performance problem … sooner rather than later you have to deal with slow storage.

Almost everyone in industry – Web 2.0, cloud, hyperscale and other providers of massive database infrastructures – is lining up to solve this problem the best way they can. How? By deploying flash as the sole storage for database servers and applications. But is low-latency flash enough? For sheer performance it beats rotational disk hands down. But … even flash storage has its limitations, most notably when you are trying to drive ultra-low latencies for write IOs. Most IO accesses by RDBs, which do the transactional processing, are a mix of reads and writes to the storage – specifically, about 70% reads and 30% writes. These are also typically low q-depth accesses (less than 4). It is those writes that can really slow things down.

PCIe flash reduces write latencies
The good news is that the right PCIe flash technology in the mix can solve the slowdowns. Some interesting PCIe flash technologies designed to tackle this latency problem are on display at AIS this week. DRAM and in particular NVDRAM are being deployed as a tier in front of flash to really tackle those nasty write latencies.

Among other demos, we’re showing how a Nytro™ 6000 series PCIe flash card helps solve the MySQL database performance issues. The typical response time for a small data read (this is what the database will see for a database IO) from an HDD is 5ms. Flash-based devices such as the Nytro WarpDrive® card can complete the same read in less than 50μs on average during testing – an improvement of roughly two orders of magnitude in response time. This translates to getting many more transactions out of the same infrastructure – but with less space (flash is denser) and a lot less power (flash consumes far less power than HDDs).
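The arithmetic behind that claim is straightforward. Using the 5ms and ~50μs figures above, and assuming strictly serial IOs at queue depth 1 (a deliberate simplification), a rough sketch:

```python
# Rough service-time math for a small database read: 5 ms on an HDD versus
# ~50 us on PCIe flash, treated as strictly serial IOs at queue depth 1.
hdd_read_s = 5e-3       # 5 ms per small read on a hard drive
flash_read_s = 50e-6    # ~50 us per small read on PCIe flash

print(f"reads/s per thread on HDD:   {1/hdd_read_s:,.0f}")             # ~200
print(f"reads/s per thread on flash: {1/flash_read_s:,.0f}")           # ~20,000
print(f"response-time improvement:   ~{hdd_read_s/flash_read_s:.0f}x") # ~100x
```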

We’re also showing the Nytro 7000 series PCIe flash cards. They reach even lower write latencies than the 6000 series at very low q-depths.  The 7000 series cards also provide DRAM buffering while maintaining data integrity even in the event of a power loss.

For online retailers and other businesses, higher database speeds mean more than just faster transactions. They can help keep those cash registers ringing.



High Availability (HA) systems traditionally have been confined to large datacenters because of their high cost and the difficulty of scaling down clustered servers and shared storage arrays to support smaller environments such as Small Office Home Office (SOHO) and Remote Office/Branch Office (ROBO).

Microsoft and LSI are changing that.

As part of Windows Server® 2012, Microsoft and LSI collaborated on the development of the innovation called Cluster in a Box (CiB). With CiB, HA systems are now available for SOHO and ROBO applications. At AIS, we’re demonstrating our Syncro® CS High Availability controller in a clustered server system.

Our demo shows how Syncro CS, with its easy-to-deploy yet powerful automatic failover, helps protect and provide cost-effective continuous access and availability to your valuable data. The solution now supports both Linux® and Windows® OS environments.

Last June, we launched Syncro CS solutions with a demo at the Microsoft® TechEd Conference.  The demo featured a Syncro CS discrete server cluster using two servers, two Syncro CS controllers and a JBOD.  Each server was loaded with Windows Server 2012 in a cluster.  Syncro CS controllers enabled the shared storage.  The entire system was interconnected with a “backbone communications” system provided through a SAS interface.  Each server was running Windows Server 2012 Hyper-V with a virtual machine (VM) housing a Counterstrike 1.6 server.  Basically, the Counterstrike server was built into a Syncro CS High Availability server cluster.  Clients accessed the game with Microsoft Surface tablets.

When one of the servers was turned off, the automatic failover engaged and the tablet users were never aware of the “failure.”

Our AIS demo uses RHEL 6.4 Linux as the native OS running the KVM hypervisor with a Windows Server 2012 VM.  Microsoft Surface Tablets access the Counterstrike server housed in the Windows VM.

The demonstration highlights the option for administrators to use Linux as the native operating system for each server while running Windows applications in HA architectures.

At AIS, we’re also excited about our panel discussion “Delivering a Paradigm of High Availability” featuring several industry experts and thought leaders.  On the panel are Michael Steineke, VP Information Technology, Edgenet; Trenton Baker, VP Business Development, DataOn; Gene Lee, CEO, EchoStreams; John Loveall, Principal Program Manager, Microsoft Windows Server, Microsoft Corporation; Greg Kleiman, Director Strategy, Storage Business Unit, Red Hat; and Rick Reisner, Product Line Director, Datacenter Solutions Group, LSI.

The panel will discuss market needs for HA storage, offer their perspectives on product deployment, and discuss potential future HA use cases and product developments.



Try using a sledgehammer to pack 15 pounds of potatoes into a bag with a 5-pound capacity, and what do you end up with? Too much messy and disgusting material crammed into a vessel too small for the job and a lot of sloppy overspill.

Unless you have the right sledgehammer. What does this have to do with computer storage? Plenty. And it all starts with a new data-reshaping capability of LSI® DuraWrite™ technology. Keep the numbers in mind: 15 pounds of data in a 5-pound bag. Or three units of data in a space designed for one. It’s key. More on that later.

What does DuraWrite technology do again?
My blog Write Amplification (Part 2) talks about how DuraWrite data reduction technology can make more space for over provisioning. That translates into faster data transfers, longer flash memory endurance and lower power draw. What it has NOT done is increase the space available for data storage.

Until now.

DuraWrite Virtual Capacity is like the Dr. Who TARDIS
Excuse my Dr. Who phone booth reference, but for those who know the TARDIS, it provides a great analogy. For those unfamiliar with the TARDIS, it is a fictional time machine that looks like a British police call box. It is very small by external appearances, but inside it is vast in its carrying capacity, taking occupants on an odyssey through time.

DuraWrite Virtual Capacity (DVC) is a new feature of our SandForce® SSD controllers, and it’s a bit like the TARDIS. While there is no time travel involved, it does provide a lot more than can be seen. DVC takes advantage of data entropy (randomness) as data is written to the SSD. Some people like to think of it as data compression. Whatever you call it, the end result is the same—less data is written to the flash than what is written from the host. DuraWrite technology alone will increase the over provisioning of an SSD, but DVC increases the user data storage available in an SSD (not the over provisioning).

How much more space can it add?
The efficiency of DVC is inversely related to the entropy. High-entropy data like JPEG, encrypted and similar compressed files do little to increase data capacity. In contrast, files like Microsoft Office Outlook® PST, Oracle® databases, EXE, and DLL (operating system) files have much lower entropy and can increase usable storage space on the order of two to three times for the same physical flash memory.  Yes, I said two to three times. Better still, that translates into a two to three times reduction in the net cost of the flash storage. Again, no typo: two to three times more affordable. Since most enterprise deployments of flash memory are limited by the cost per GB of the flash, this kind of advance has the potential to further accelerate flash memory deployments in the enterprise.
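You can get a feel for the entropy effect with a toy experiment. Here zlib stands in for DuraWrite’s proprietary data reduction – the exact ratios won’t match, but the contrast between low- and high-entropy data is the point:

```python
# Toy illustration of why entropy decides the gain. zlib is only a stand-in
# for DuraWrite's data reduction; the contrast is what matters, not the ratios.
import os
import zlib

low_entropy = b"INSERT INTO orders VALUES (42, 'widget', 'pending');\n" * 2000
high_entropy = os.urandom(len(low_entropy))   # behaves like JPEG/encrypted data

for name, data in [("low-entropy (database-like)", low_entropy),
                   ("high-entropy (random/compressed)", high_entropy)]:
    reduced = zlib.compress(data)
    print(f"{name}: {len(data)} bytes -> {len(reduced)} bytes "
          f"({len(data)/len(reduced):.1f}x)")
# Low-entropy data shrinks by several times -- capacity DVC can expose;
# high-entropy data barely shrinks at all.
```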

Why hasn’t this been done in the past?
Think TARDIS again. Step into the booth, and take a joy ride through space and time. It happens on-the-fly. Simple … but, only in a fictional world. With on-the-fly data reduction and compression, the process is filled with complexities. The biggest problem is most operating systems don’t understand that the maximum capacity of a primary storage device (hard disk or solid state drive) can increase or decrease over time. However, open source operating systems can address the issue with customized drivers.

The other problem is any storage device that includes data reduction or compression must use a variable mapping table to track the location of the data on the device once data is reduced. Hard disk drives (HDDs) do not require any kind of mapping table because the operating system can write new data over old data. However, the lack of a mapping table prevents HDDs from supporting an on-the-fly data reduction and compression system.

All solid state drives (SSDs) using NAND flash feature  a basic mapping table, typically called the flash translation layer (FTL). This mapping table is required because NAND flash memory pages cannot be rewritten directly, but must first be erased in larger blocks. The SSD controller needs to relocate valid data while the old data gets erased. This process, called garbage collection, uses the mapping table. However, the data reduction and compression system requires a mapping table that is variable in size. Most SSDs lack that capability, but not those using a SandForce controller, making SSDs with SandForce controllers perfectly suited to the job.
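To make the “variable mapping table” idea concrete, here is a toy sketch in Python. It illustrates the concept only – a real FTL inside a SandForce controller is far more involved:

```python
# Toy sketch of a variable-size mapping table: each logical page maps to a
# physical location plus a *variable* stored length, because the reduced
# data occupies a different number of bytes per page. Concept only -- not
# how a SandForce FTL is actually built.
import zlib

PAGE_SIZE = 4096
flash = bytearray()          # pretend this append-only byte stream is NAND
mapping = {}                 # logical page number -> (offset, stored length)

def write_page(lpn, data):
    """Reduce the page, append it to flash, and record where it landed."""
    reduced = zlib.compress(data.ljust(PAGE_SIZE, b"\0"))
    mapping[lpn] = (len(flash), len(reduced))   # variable stored length
    flash.extend(reduced)

def read_page(lpn):
    offset, length = mapping[lpn]
    return zlib.decompress(bytes(flash[offset:offset + length]))

write_page(0, b"low entropy log record " * 100)
write_page(1, b"another mostly repetitive page " * 80)
print(mapping)   # each page occupies a different number of stored bytes
print(len(flash), "flash bytes used for", len(mapping) * PAGE_SIZE, "logical bytes")
```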

What use cases can be applied to DVC?
DVC can be used to increase usable data storage space or provide more cache capacity flexibility by two to three times. To create more usable data storage space, the operating system must be altered with new primary storage device drivers for it to understand the drive’s maximum capacity, which can fluctuate over time based on how much the data is reduced or compressed.

To support greater cache capacity flexibility, a host controller would manage the flash memory directly as a cache. The controller would isolate the flash memory capacity from the host so the operating system does not even see it. The dynamic cache capacity would increase cache performance at a lower price per GB depending upon the entropy of the data. The LSI Nytro product line and some SandForce Driven® program SSDs already support both of these use cases.

When will this appear in my personal computer?
While DVC is already being deployed and evaluated in enterprise datacenters around the world, the use in personal computers will take a bit longer due to the need to have the operating system changed with new storage device drivers that understand the fluctuating maximum capacity.

When these operating system changes come together, you will not need that sledgehammer to pack more data into your TARDIS (SSD). Now that’s a space odyssey to write home about.



Many of you may have heard of a poem written by Robert Fulghum 25 years ago called “All I Really Need to Know I Learned in Kindergarten.” In it he provides such pearls of wisdom as “Play fair,” “Clean up your own mess,” “Don’t take things that aren’t yours” and “Flush.” By now you’re wondering what any of this has to do with storage technology. Well, the #1 item on the kindergarten knowledge list is “Share Everything.” And from my perspective that includes DAS (direct-attached storage).

Sharable DAS has been a primary topic of discussion at this year’s annual LSI Accelerating Innovation Summit (AIS). During one keynote session I proposed a continuum of data sharing, spanning from traditional server-based DAS to traditional external NAS and SAN with multiple points in between – including external DAS, simple pooled storage, advanced pooled storage, shared storage and HA (high-availability) shared storage. Each step along the continuum adds incremental features and value, giving datacenter architects the latitude to choose – and pay for – only the level of sharing absolutely required, and no more. This level of choice is being very warmly received by the market as storage requirements vary widely among Web-cloud, private cloud, traditional enterprise, and SMB configurations and applications.

Sharable DAS pools storage for operational benefits and efficiencies
Sharable DAS, with its inherent storage resource pooling, offers a number of operational benefits and efficiencies when applied at the rack level:

  • Standardized storage architectures, leveraging economies of scale of today’s high-volume DAS solutions, and minimizing storage qualifications
  • Simplified volume, boot and unified storage management by extending today’s widely deployed storage management tools
  • Reduced number of compute and storage SKUs within a datacenter, minimizing training and maintenance costs
  • Simplified life cycle management by de-coupling the upgrade cycles of compute (typically 18-24 months) and storage (typically 3-5 years)

LSI rolls out proof-of-concept Rack Scale architecture using sharable DAS
In addition to just talking about sharable DAS at AIS, we also rolled out a proof-of-concept Rack Scale architecture employing sharable DAS.  In it we configured 20 servers with 12Gb/s SAS RAID controllers, a prototype 40-port 12Gb/s SAS switch (that’s 160 12Gb/s SAS lanes) and 10 JBODs with 12Gb/s SAS for a total of 200 disk drives – all in a single rack. The drives were configured as a single storage resource pool with our media sharing (ability to spread volumes across multiple disk drives and aggregate disk drive bandwidth) and distributed RAID (ability to disperse data protection across multiple disk drives) features. This configuration pools the server storage into a single resource, delivering substantial, tangible performance and availability improvements, when compared to 20 stand-alone servers. In particular, the configuration:

  • Enables active servers to claim unused bandwidth and IOPs
  • Enhances server performance when a disk drive fails, providing consistent high performance to applications by distributing the impact of a single drive failure across all the drives in the pool
  • Accelerates time to redundancy (TTR), greatly minimizing the window of vulnerability for subsequent disk drive failures (a rough sketch of the math follows this list)
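Rough numbers illustrate the TTR point. The drive capacity, rebuild rate and number of participating drives below are assumptions for illustration, and the linear scaling is idealized:

```python
# Back-of-the-envelope sketch of the time-to-redundancy benefit. The drive
# size, sustained rebuild rate and participant count are assumptions.
drive_tb = 4                    # capacity of the failed drive
rebuild_rate_mb_s = 100         # sustained rate a single drive can absorb

# Classic RAID: one hot spare absorbs the whole rebuild by itself.
spare_limited_hours = (drive_tb * 1e6) / rebuild_rate_mb_s / 3600

# Distributed RAID: redundancy is restored by writing small slices across
# many of the pooled drives in parallel (say 100 of the 200 participate).
participants = 100
distributed_hours = spare_limited_hours / participants   # idealized scaling

print(f"single-spare rebuild: ~{spare_limited_hours:.1f} hours")   # ~11 hours
print(f"distributed rebuild:  ~{distributed_hours:.2f} hours")     # minutes
```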

I’m sure you’ll agree with me that Rack Scale architecture with sharable DAS is clearly a major step forward in providing a wide range of storage solutions under a single architecture. This in turn provides a multitude of operational efficiencies and performance benefits, giving datacenter architects wide latitude to employ what is needed – and only what is needed.

Now that we’ve tackled the #1 item on the kindergarten learning list, maybe I’ll set my sights on another item, like “Take a nap every afternoon.”

 

 
