
With the much anticipated launch of 12Gb/s SAS MegaRAID and 12Gb/s SAS expanders featuring DataBolt™ SAS bandwidth-aggregation technologies, LSI is taking the bold step of moving beyond traditional IO performance benchmarks like IOMeter to benchmarks that simulate real-world workloads.

To fully illustrate the benefit many IT administrators can realize using 12Gb/s SAS MegaRAID products on their new server platforms, LSI is demonstrating benchmarks running on actual enterprise applications at AIS.

For our end-to-end 12Gb/s SAS MegaRAID demonstration, we chose Benchmark Factory® for Databases running on a MySQL database. Benchmark Factory, a database performance testing tool that lets you conduct database workload replay, industry-standard benchmark testing and scalability testing, uses real database application workloads such as TPC-C, TPC-E and TPC-H. We chose the TPC-H benchmark, a decision-support benchmark, because of its large streaming query profile. TPC-H measures the performance of decision support systems – which examine large volumes of data to simplify the analysis of information for business decisions – making it an excellent benchmark to showcase 12Gb/s SAS MegaRAID technology and its ability to maximize the bandwidth of PCIe 3.0 platforms compared to 6Gb/s SAS.
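To give a feel for the workload, here is a minimal sketch of the kind of streaming, scan-heavy query TPC-H issues, run against MySQL from Python and timed. This is illustrative only: the connection details are placeholders, and it assumes a TPC-H-style lineitem table has already been loaded – it is not the Benchmark Factory setup itself.

```python
# Minimal sketch: time a TPC-H-style "pricing summary" query against MySQL.
# Assumes a TPC-H-like schema (lineitem table) has already been loaded;
# host/user/database names below are placeholders, not the demo configuration.
import time
import mysql.connector  # pip install mysql-connector-python

QUERY = """
SELECT l_returnflag,
       l_linestatus,
       SUM(l_quantity)                          AS sum_qty,
       SUM(l_extendedprice)                     AS sum_base_price,
       SUM(l_extendedprice * (1 - l_discount))  AS sum_disc_price,
       AVG(l_quantity)                          AS avg_qty,
       COUNT(*)                                 AS count_order
FROM   lineitem
WHERE  l_shipdate <= DATE_SUB('1998-12-01', INTERVAL 90 DAY)
GROUP  BY l_returnflag, l_linestatus
ORDER  BY l_returnflag, l_linestatus
"""

conn = mysql.connector.connect(host="db-host", user="bench",
                               password="secret", database="tpch")
cur = conn.cursor()

start = time.time()
cur.execute(QUERY)          # large sequential scan -> bandwidth-bound
rows = cur.fetchall()
elapsed = time.time() - start

print(f"{len(rows)} result rows in {elapsed:.2f} s")
cur.close()
conn.close()
```

A query like this scans most of the lineitem table sequentially, which is exactly the streaming profile that exposes the difference between a 6Gb/s and a 12Gb/s SAS back end.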

LSI MegaRAID SAS 9361-8i storage performance on display using Spotlight® on MySQL, which is monitoring the data traffic across Intel’s new R2216GZ4GC server based on the new Intel® Xeon® processor E5-2600 v2.

The demo uses the latest Intel R2216GZ4GC servers based on the new Intel® Xeon® processor E5-2600 v2 product family to illustrate how 12Gb/s SAS MegaRAID solutions are needed to take advantage of the bandwidth capabilities of PCIe® 3.0 bus technology.
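A quick back-of-the-envelope calculation shows why the faster interface matters. The per-lane line rates and encoding overheads below are the published SAS (8b/10b) and PCIe 3.0 (128b/130b at 8 GT/s) figures; the eight-lane controller configuration is simply an illustrative assumption.

```python
# Back-of-the-envelope: can 8 SAS lanes saturate a PCIe 3.0 x8 slot?
# Line rates and encodings are the published figures; the 8-lane/8-port
# controller configuration is an illustrative assumption.

def sas_bandwidth_gbps(line_rate_gbps, lanes):
    """Usable SAS bandwidth in GB/s (8b/10b encoding: 10 line bits per data byte)."""
    return line_rate_gbps * lanes / 10.0

def pcie3_bandwidth_gbps(lanes):
    """Usable PCIe 3.0 bandwidth in GB/s (8 GT/s, 128b/130b encoding)."""
    return 8.0 * (128.0 / 130.0) / 8.0 * lanes   # ~0.985 GB/s per lane

sas6  = sas_bandwidth_gbps(6.0, 8)    # 8 x 6Gb/s SAS  -> 4.8 GB/s
sas12 = sas_bandwidth_gbps(12.0, 8)   # 8 x 12Gb/s SAS -> 9.6 GB/s
pcie  = pcie3_bandwidth_gbps(8)       # PCIe 3.0 x8    -> ~7.9 GB/s

print(f"6Gb/s SAS x8 : {sas6:.1f} GB/s")
print(f"12Gb/s SAS x8: {sas12:.1f} GB/s")
print(f"PCIe 3.0 x8  : {pcie:.1f} GB/s")
# 6Gb/s SAS leaves roughly 3 GB/s of the PCIe 3.0 slot idle;
# 12Gb/s SAS can fill it.
```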

When the benchmarks are run side-by-side on the two platforms, you can quickly see how much faster data transfers execute, and how much more efficiently the Intel servers handle data traffic. We used Quest Software’s Spotlight® on MySQL tool to monitor and measure data traffic from the storage devices to the clients running the database queries. More importantly, the test also illustrates how many more user queries the 12Gb/s SAS-based system can handle before complete resource saturation – 60 percent more than 6Gb/s SAS in this demonstration.

What does this mean to IT administrators? Clearly, resource utilization is much higher, improving the total cost of ownership (TCO) of their database servers. Or, conversely, 12Gb/s SAS can reduce cost per IO, since fewer drives can be used in the server to deliver the same performance as the previous 6Gb/s SAS generation of storage infrastructure.
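To make the TCO argument concrete, here is a rough sketch of the arithmetic. The 60 percent figure is the one from the demonstration above; every price in the example is a made-up placeholder, not real LSI or Intel pricing.

```python
# Illustrative only: how a 60% higher saturation point (the figure from the
# demo above) translates into cost per unit of work. All dollar amounts are
# made-up placeholders.

def cost_per_query(server_cost, controller_cost, drive_cost, drives,
                   queries_at_saturation):
    total_cost = server_cost + controller_cost + drive_cost * drives
    return total_cost / queries_at_saturation

baseline_queries = 1000                      # hypothetical 6Gb/s saturation point
gen6  = cost_per_query(5000, 500, 250, 16, baseline_queries)
gen12 = cost_per_query(5000, 500, 250, 16, baseline_queries * 1.6)  # +60% from demo

print(f"6Gb/s SAS : ${gen6:.2f} per concurrent query at saturation")
print(f"12Gb/s SAS: ${gen12:.2f} per concurrent query at saturation")
# Same server and drive count, roughly 38% lower cost per query -- or,
# equivalently, fewer drives to hit the original performance target.
```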



The staggering growth of smart phones, tablets and other mobile devices is sending a massive flood of data through today’s mobile networks. Compared to just a few years ago, we are all producing and consuming far more videos, photos, multimedia and other digital content, and spending more time in immersive and interactive applications such as video and other games – all from handheld devices.

Think of mobile, and you think remote – using a handheld when you’re out and about. But according to the Cisco® VNI Mobile Forecast 2013, while 75% of all videos today are viewed on mobile devices, by 2017 46% of mobile video will be consumed indoors (at home, at the office, at the mall and elsewhere). With the widespread implementation of IEEE® 802.11 WiFi on mobile devices, much of that indoor video traffic will be routed through fixed broadband pipes.

 

Unlike residential indoor solutions, enterprise and public area access infrastructures – for outdoor connections – are much more diverse and complicated. For example, the current access layer architectures include Layer 2/3 wiring closet switches and WiFi access points, as shown below. Mobile service providers are currently seeking architectures that enable them to take advantage of both indoor enterprise and public area access infrastructure. These architectures must integrate seamlessly with existing mobile infrastructures and require no investment in additional access equipment by service providers in order for them to provide a consistent, quality experience for end users indoors and outdoors. For their part, mobile service providers must:

  1. Offer indoor and outdoor access solutions
  2. Put in place agreements and collaborate with traditional fixed broadband service providers to offload mobile traffic seamlessly
  3. Continue to drive carrier-grade WiFi specifications across traditional indoor access infrastructures (like WiFi access points) while meeting the disparate reliability and management requirements of different carriers.

The following figure shows the three possible paths that mobile service providers wanting to offer indoor enterprise and public area access can take. Approach 1 is ideal for enterprises trying to improve coverage in particular areas of a corporate campus. Approaches 2 and 3 not only provide uniform coverage across the campus but also support differentiating capabilities such as the allocation of application- and mobility-centric radio spectrum across WiFi and cellular frequencies. A key factor to consider when evaluating these approaches is the extent to which equipment ownership is split between the enterprise and the mobile service provider. Approaches 2 and 3 increase capital expenditures for the operator because of the radio heads and small cells that need to be deployed across the enterprise or public campus. At AIS, LSI is demonstrating approach 2.

The last, but certainly not least important, consideration between approaches 2 and 3 is whether these indoor/outdoor small cells employ self-organizing network (SON) techniques. For service providers, the small cells ideally would be self-organizing, with the macro cells serving any additional management functions. The advantage of approach 3 is that it offloads more of the macro cell traffic and makes the various campus small cells self-organizing, significantly reducing operational costs for the service provider.



Back in the 1990s, a new paradigm was forced into space exploration. NASA faced big cost cuts. But grand ambitions for missions to Mars were still on its mind. The problem was it couldn’t dream and spend big. So the NASA mantra became “faster, better, cheaper.” The idea was that the agency could slash costs while still carrying out a wide variety of programs and space missions. This led to some radical rethinks, and some fantastically successful programs that had very outside-the-box solutions. (Bouncing Mars landers anyone?)

That probably sounds familiar to any IT admin. And that spirit is alive at LSI’s AIS – The Accelerating Innovation Summit, which is our annual congress of customers and industry pros, coming up Nov. 20-21 in San Jose. Like the people at Mission Control, they all want to make big things happen… without spending too much.

Take technology and line of business professionals. They need to speed up critical business applications. A lot. Or IT staff for enterprise and mobile networks, who must deliver more work to support the ever-growing number of users, devices and virtualized machines that depend on them. Or consider mega datacenter and cloud service providers, whose customers demand the highest levels of service, yet get that service for free. Or datacenter architects and managers, who need servers, storage and networks to run at ever-greater efficiency even as they grow capability exponentially.

(LSI has been working on many solutions to these problems, some of which I spoke about in this blog.)

It’s all about moving data faster, better, and cheaper. If NASA could do it, we can too. In that vein, here’s a look at some of the topics you can expect AIS to address around doing more work for fewer dollars:

  • Emerging solid state technologies – Flash is dramatically enhancing datacenter efficiency and enabling new use cases. Could emerging solid state technologies such as Phase Change Memory (PCM) and Spin-Torque Transfer (STT) RAM radically change the way we use storage and memory?
  • Hyperscale deployments – Traditional SAN and NAS lack the scalability and economics needed for today’s hyperscale deployments. As businesses begin to emulate hyperscale deployments, they need to scale and manage datacenter infrastructure more effectively. Will software increasingly be used to both manage storage and provide storage services on commodity hardware?
  • Sub-20nm flash – The emergence of sub-20nm flash promises new cost savings for the storage industry. But with reduced data reliability, slower overall access times and much lower intrinsic endurance, is it ready for the datacenter?
  • Triple-Level Cell flash – The move to Multi-Level Cell (MLC) flash helped double the capacity per square millimeter of silicon, and Triple-Level Cell (TLC) promises even higher storage density. But TLC comes at a cost: its working life is much shorter than MLC’s. So what role, if any, will TLC play in the datacenter? Remember – it wasn’t long ago that no one believed MLC could be used in the enterprise.
  • Flash for virtual desktop – Virtual desktop technology has seen significant growth in today’s datacenters. However, storage demands on highly utilized VDI servers can cause unacceptable response times. Can flash help virtual desktop environments achieve the best overall performance to improve end-user productivity while lowering total solution cost?
  • Flash caching – Oracle and storage vendors have started enhancing their products to take advantage of flash caching. How can database administrators implement caching technology running on Oracle® Linux with Oracle Unbreakable Enterprise Kernel, utilizing Oracle Database Smart Flash Cache? (A small configuration sketch follows this list.)
  • Software Defined Networks (SDN) – SDNs promise to make networks more flexible, easier to manage, and programmable. How and why are businesses using SDNs today?  
  • Big data analytics – Gathering, interpreting and correlating multiple data streams as they are created can enhance real-time decision making for industries like financial trading, national security, consumer marketing, and network security. How can specialized silicon greatly reduce the compute power necessary, and make the “real-time” part of real-time analytics possible?
  • Sharable DAS – Datacenters of all sizes are struggling to provide high performance and 24/7 uptime, while reducing TCO. How can DAS-based storage sharing and scaling help meet the growing need for reduced cost and greater ease of use, performance, agility and uptime?
  • 12Gb/s SAS – Applications such as Web 2.0/cloud infrastructure, transaction processing and business intelligence are driving the need for higher-performance storage. How can 12Gb/s SAS meet today’s high-performance challenges for IOPS and bandwidth while providing enterprise-class features, technology maturity and investment protection, even with existing storage devices?
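As a small taste of the flash caching topic above, here is a minimal sketch of how a DBA might stage Oracle Database Smart Flash Cache on Oracle Linux. DB_FLASH_CACHE_FILE and DB_FLASH_CACHE_SIZE are the documented initialization parameters; the device path, cache size and connection details are placeholders, and the script assumes the cx_Oracle driver with SYSDBA access.

```python
# Minimal sketch: stage Oracle Database Smart Flash Cache parameters from Python.
# DB_FLASH_CACHE_FILE / DB_FLASH_CACHE_SIZE are the documented init parameters;
# the device path, size and connection details are placeholders. Both changes
# are written to the SPFILE, so the instance must be restarted to take effect.
import cx_Oracle

conn = cx_Oracle.connect("sys", "password", "dbhost/orcl",
                         mode=cx_Oracle.SYSDBA)
cur = conn.cursor()

# Point the flash cache at a flash block device (or a file on a flash volume).
cur.execute("ALTER SYSTEM SET db_flash_cache_file = '/dev/sdb1' SCOPE=SPFILE")
cur.execute("ALTER SYSTEM SET db_flash_cache_size = 64G SCOPE=SPFILE")

print("Smart Flash Cache parameters staged; restart the instance to activate.")
cur.close()
conn.close()
```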

And, I think you’ll find some astounding products, demos, proof of concepts and future solutions in the showcase too – not just from LSI but from partners and fellow travelers in this industry. Hey – that’s my favorite part. I can’t wait to see people’s reactions.

Since it rethought how to do business in 2002, NASA has embarked on nearly 60 Mars missions. Faster, better, cheaper. It can work here in IT too.



You may have noticed I’m interested in Open Compute. What you may not know is I’m also really interested in OpenStack. You’re either wondering what the heck I’m talking about or nodding your head. I think these two movements are co-dependent. Sure they can and will exist independently, but I think the success of each is tied to the other.  In other words, I think they are two sides of the same coin.

Why is this on my mind? Well – I’m the lucky guy who gets to moderate a panel at LSI’s AIS conference, with the COO of Open Compute, and the founder of OpenStack. More on that later. First, I guess I should describe my view of the two. The people running these open-source efforts probably have a different view. We’ll find that out during the panel.

I view Open Compute as the very first viable open-source hardware initiative that general business will be able to use. It’s not just about saving money for rack-scale deployments. It’s about having interoperable, multi-source systems that have known, customer-malleable – even completely customized and unique – characteristics, including management. It also promises to reduce OpEx.

Ready for Prime Time?
But the truth is Open Compute is not ready for prime time yet. Facebook developed almost all the designs for its own use and gifted them to Open Compute, and they are mostly one or two generations old. And somewhere between 2,000 and 10,000 Open Compute servers have shipped. That’s all. But, it’s a start.

More importantly though, it’s still just hardware. There is still a need to deploy and manage the hardware, as well as distribute tasks and load balance a cluster of Open Compute infrastructure. That’s a very specialized capability, and there really aren’t that many people who can do that. And the hardware is so bare bones – with specialized enclosures, cooling, etc. – that it’s pretty hard to deploy in small amounts. You really want to deploy at scale – thousands. If you’re deploying a few servers, Open Compute probably isn’t for you for quite some time.

I view OpenStack in a similar way. It’s also not ready for prime time. OpenStack is an orchestration layer for the datacenter. You hear about the “software defined datacenter.” Well, this is it – at least one version. It pools the resources (compute, object and block storage, network, and memory at some time in the future), presents them, allows them to be managed in a semi-automatic way, and automates deployment of tasks on the scaled infrastructure. Sure there are some large-scale deployments. But it’s still pretty tough to deploy at large scale. That’s because it needs to be tuned and tailored to specific hardware. In fact, the biggest datacenters in the world mostly use their own orchestration layer.  So that means today OpenStack is really better at smaller deployments, like 50, 100 or 200 server nodes.
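To make “orchestration layer” a little more concrete: once OpenStack has been tuned for the hardware underneath it, asking for compute capacity is a handful of API calls. This is a minimal sketch using the Python openstacksdk client; the cloud profile, image, flavor and network names are placeholders, not any particular deployment.

```python
# Minimal sketch of what "orchestration" means in practice: ask OpenStack for
# a server and let it place the workload. Uses the openstacksdk client library;
# the cloud profile, image, flavor and network names are placeholders.
import openstack

conn = openstack.connect(cloud="my-cloud")   # credentials come from clouds.yaml

image = conn.compute.find_image("ubuntu-server")
flavor = conn.compute.find_flavor("m1.medium")
network = conn.network.find_network("private")

server = conn.compute.create_server(
    name="worker-01",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(server.name, server.status)
```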

The synergy – 2 sides of the same coin
You’ll probably start to see the synergy. Open Compute needs management and deployment. OpenStack prefers known homogenous hardware or else it’s not so easy to deploy. So there is a natural synergy between the two. It’s interesting too that some individuals are working on both… Ultimately, the two Open initiatives will meet in the big, but not-too-big (many hundreds to small thousands of servers) deployments in the next few years.

Ecosystem
And then of course there is the complexity of the interaction of for-profit companies and open-source designs and distributions. Companies are trying to add to the open standards. Sometimes to the betterment of standards, but sometimes in irrelevant ways. Several OEMs are jumping in to mature and support OpenStack. And many ODMs are working to make Open Compute more mature. And some companies are trying to accelerate the maturity and adoption of the technologies in pre-configured solutions or appliances. What’s even more interesting are the large customers – guys like Wall Street banks – that are working to make them both useful for deployment at scale. These won’t be the only way scaled systems are deployed, but they’re going to become very common platforms for scale-out or grid infrastructure for utility computing.

Here is how I charted the ecosystem last spring. There’s not a lot of direct interaction between the two, and I know there are a lot of players missing. Frankly, it’s getting crazy complex. There has been an explosion of players, and I’ve run out of space, so I’ve just not gotten around to updating it. (If anyone engaged in these ecosystems wants to update it and send me a copy – I’d be much obliged! Maybe you guys at Nebula? ;-))

An AIS keynote panel – What?
Which brings me back to that keynote panel at AIS. Every year LSI has a conference that’s by invitation only (sorry). It’s become a pretty big deal. We have some very high-profile keynotes from industry leaders. There is a fantastic tech showcase of LSI products, partner and ecosystem companies’ products, and a good mix of proof of concepts, prototypes and what-if products. And there are a lot of breakout sessions on industry topics, trends and solutions. Last year I personally escorted an IBM fellow, Google VPs, Facebook architects, bank VPs, Amazon execs, flash company execs, several CTOs, some industry analysts, database and transactional company execs…

It’s a great place to meet and interact with peers if you’re involved in the datacenter, network or cellular infrastructure businesses.  One of the keynotes is actually a panel of 2. The COO of Open Compute, Cole Crawford, and the co-founder of OpenStack, Chris Kemp (who is also the founder and CSO of Nebula). Both of them are very smart, experienced and articulate, and deeply involved in these movements. It should be a really engaging, interesting keynote panel, and I’m lucky enough to have a front-row seat. I’ll be the moderator, and I’m already working on questions. If there is something specific you would like asked, let me know, and I’ll try to accommodate you.

You can see more here.

Yeah – I’m very interested in Open Compute and OpenStack. I think these two movements are co-dependent. And I think they are already changing our industry – even before they are ready for real large-scale deployment. Sure they can and will exist independently, but I think the success of each is tied to the other. The people running these open-source efforts might have a different view. Luckily, we’ll get to find out what they think next month… And I’m lucky enough to have a front-row seat.

 
