High Availability (HA) systems have traditionally been confined to large datacenters because of their high cost and the difficulty of scaling down clustered servers and shared storage arrays to support smaller environments such as Small Office/Home Office (SOHO) and Remote Office/Branch Office (ROBO).
Microsoft and LSI are changing that.
As part of Windows Server® 2012, Microsoft and LSI collaborated on an innovation called Cluster in a Box (CiB). With CiB, HA systems are now available for SOHO and ROBO applications.
Many of you may have heard of a poem written by Robert Fulghum 25 years ago called “All I Really Need to Know I Learned in Kindergarten.” In it he offers such pearls of wisdom as “Play fair,” “Clean up your own mess,” “Don’t take things that aren’t yours” and “Flush.” By now you’re wondering what any of this has to do with storage technology. Well, the #1 item on the kindergarten knowledge list is “Share Everything.” And from my perspective that includes DAS (direct-attached storage).
Scaling compute power and storage in space-constrained datacenters is one of the top IT challenges of our time. With datacenters worldwide pressed to maximize both within the same floor space, the central challenge is increasing density.
At IBM we continue to design products that help businesses meet their most pressing IT requirements, whether that’s optimizing data analytics and data management, supporting the fastest-growing workloads such as social media and cloud delivery or, of course, increasing compute and storage density. Our technology partners are a crucial part of our work, and this week at AIS we are teaming with LSI to showcase our new high-density NeXtScale computing platform and x3650 M4 HD server.
With the much anticipated launch of 12Gb/s SAS MegaRAID and 12Gb/s SAS expanders featuring DataBolt™ SAS bandwidth-aggregation technologies, LSI is taking the bold step of moving beyond traditional IO performance benchmarks like IOMeter to benchmarks that simulate real-world workloads.
In order to fully illustrate the actual benefit many IT administrators can realize using 12Gb/s SAS MegaRAID products on their new server platforms, LSI is demonstrating application benchmarks on top of actual enterprise applications at AIS.
For our end-to-end 12Gb/s SAS MegaRAID demonstration, we chose Benchmark Factory® for Databases running on a MySQL database.
The staggering growth of smart phones, tablets and other mobile devices is sending a massive flood of data through today’s mobile networks. Compared to just a few years ago, we are all producing and consuming far more videos, photos, multimedia and other digital content, and spending more time in immersive and interactive applications such as video and other games – all from handheld devices.
Think of mobile, and you think remote – using a handheld when you’re out and about. But according to the Cisco® VNI Mobile Forecast 2013, while 75% of all videos today are viewed on mobile devices, by 2017 46% of mobile video will be consumed indoors (at home, at the office, at the mall and elsewhere).
You might be surprised to find out how big the infrastructure for cloud and Web 2.0 is. It is mind-blowing. Microsoft has acknowledged packing more than 1 million servers into its datacenters, and by some accounts that is fewer than Google’s massive server count but a bit more than Amazon’s.
Facebook’s server count is said to have skyrocketed from 30,000 in 2012 to 180,000 just this past August, serving more than 900 million users. And the social media giant is even putting its considerable weight behind the Open Compute effort to make servers fit better in a rack and draw less power.
Back in the 1990s, a new paradigm was forced into space exploration. NASA faced big cost cuts. But grand ambitions for missions to Mars were still on its mind. The problem was it couldn’t dream and spend big. So the NASA mantra became “faster, better, cheaper.” The idea was that the agency could slash costs while still carrying out a wide variety of programs and space missions. This led to some radical rethinks, and some fantastically successful programs that had very outside-the-box solutions.
You may have noticed I’m interested in Open Compute. What you may not know is I’m also really interested in OpenStack. You’re either wondering what the heck I’m talking about or nodding your head. I think these two movements are co-dependent. Sure they can and will exist independently, but I think the success of each is tied to the other. In other words, I think they are two sides of the same coin.
Why is this on my mind? Well – I’m the lucky guy who gets to moderate a panel at LSI’s AIS conference with the COO of Open Compute and the founder of OpenStack.
The United Nations finding that mobile broadband subscriptions are surging in developing countries, reported by The New York Times on Sept. 26, is no surprise. Equally unsurprising, the growing number of users, the density of users and the increasing bandwidth needs of applications are likely continuing to strain existing wireless networks and per-user bandwidths, not only in developing countries but worldwide.
But rising pressure on bandwidth, coupled with increasingly data-intensive applications, isn’t the whole story. Minimizing end-to-end latency – from user to network base station and back again – is crucial in enabling banking, e-commerce, enterprise and other important business applications.
Optimizing the work per dollar spent is a high priority in datacenters around the world. But there aren’t many ways to accomplish that. I’d argue that integrating flash into the storage system drives the best – sometimes most profound – improvement in the cost of getting work done.
Yeah, I know work/$ is a US-centric metric, but replace the $ with your favorite currency. The principle remains the same.
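To make the work/$ idea concrete, here is a minimal sketch of how the metric compares two storage configurations. All figures are made-up placeholders for illustration only, not measured or vendor-published results; “work” here is simplified to IOPS, though throughput or transactions per second would do just as well.

```python
# Hypothetical illustration of the work/$ metric: work delivered
# (here, IOPS) per unit of currency spent. The numbers below are
# invented placeholders, not benchmark data.

def work_per_dollar(iops: float, cost: float) -> float:
    """Return IOPS delivered per dollar (or your favorite currency) spent."""
    return iops / cost

# Placeholder configurations (illustrative numbers only).
hdd_array  = work_per_dollar(iops=2_000,   cost=5_000)   # HDD-only array
flash_tier = work_per_dollar(iops=200_000, cost=25_000)  # flash-integrated system

print(f"HDD array:  {hdd_array:.1f} IOPS/$")   # 0.4 IOPS/$
print(f"Flash tier: {flash_tier:.1f} IOPS/$")  # 8.0 IOPS/$
print(f"Improvement: {flash_tier / hdd_array:.0f}x")
```

With these placeholder numbers the flash configuration costs 5x more but delivers 100x the IOPS, so its work/$ is 20x better – which is the kind of comparison the metric is meant to surface, whatever the real figures turn out to be.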
I had the chance to talk with one of the execs who’s responsible for Google’s infrastructure last week.