Scaling compute power and storage in space-constrained datacenters is one of the top IT challenges of our time. With datacenters worldwide pressed to maximize both within the same floor space, the central task is increasing density.

At IBM we continue to design products that help businesses meet their most pressing IT requirements, whether that’s optimizing data analytics and data management, supporting the fastest-growing workloads such as social media and cloud delivery, or, of course, increasing compute and storage density. Our technology partners are a crucial part of our work, and this week at AIS we are teaming with LSI to showcase our new high-density NeXtScale computing platform and x3650 M4 HD server. Both leverage LSI® SAS RAID controllers for data protection, and the x3650 M4 HD server features an integrated leading-edge LSI 12Gb/s SAS RAID controller.

IBM NeXtScale System

NeXtScale System – ideal for HPC, cloud service providers and Web 2.0
The NeXtScale System®, an economical addition to the IBM System® family, maximizes usable compute density by packing up to 84 x86-based systems and 2,016 processing cores into a standard 19-inch rack to enable seamless integration into existing infrastructures. The family also enables organizations of all sizes and budgets to start small and scale rapidly for growth. The NeXtScale System is an ideal high-density solution for high-performance computing (HPC), cloud service providers and Web 2.0.

IBM System x3650 M4 HD

The System x3650 M4 HD, IBM’s newest high-density storage server, is designed for data-intensive analytics or business-critical workloads. The 2U rack server supports up to 62% more drive bays than the System x3650 M4 platform, providing connections for up to 26 2.5-inch HDDs or SSDs. The server is powered by the Intel Xeon processor E5-2600 family and features up to 6 PCIe 3.0 slots and an onboard LSI 12Gb/s SAS RAID controller. This combination gives a big boost to data applications and cloud deployments by increasing the processing power, performance and data protection that are the lifeblood of these environments.
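
As a quick sanity check on that drive-bay claim, the arithmetic below assumes the base x3650 M4 tops out at 16 2.5-inch bays (our assumption for illustration, not a figure from the announcement):

```python
# Quick sanity check on the "62% more drive bays" claim (illustrative only).
# Assumption: the base System x3650 M4 supports up to 16 x 2.5-inch drive bays.
base_bays = 16   # assumed x3650 M4 maximum
hd_bays = 26     # x3650 M4 HD maximum cited above
increase = (hd_bays - base_bays) / base_bays
print(f"Drive-bay increase: {increase:.1%}")  # 62.5%
```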

IBM dense storage solutions to help drive data management, cloud computing and big data strategies
Cloud computing and big data will continue to have a tremendous impact on the IT infrastructure and create data management challenges for businesses. At IBM, we think holistically about the needs of our customers and believe that our new line of dense storage solutions will help them design, develop and execute on their data management, cloud computing and big data strategies.



With the much-anticipated launch of 12Gb/s SAS MegaRAID and 12Gb/s SAS expanders featuring DataBolt™ SAS bandwidth-aggregation technologies, LSI is taking the bold step of moving beyond traditional IO performance benchmarks like IOMeter to benchmarks that simulate real-world workloads.

To illustrate the real-world benefit many IT administrators can realize from 12Gb/s SAS MegaRAID products on their new server platforms, LSI is demonstrating benchmarks running on actual enterprise applications at AIS.

For our end-to-end 12Gb/s SAS MegaRAID demonstration, we chose Benchmark Factory® for Databases running on a MySQL database. Benchmark Factory, a database performance testing tool that supports database workload replay, industry-standard benchmark testing and scalability testing, uses real database application workloads such as TPC-C, TPC-E and TPC-H. We chose TPC-H, a decision-support benchmark, because of its large streaming query profile. TPC-H measures the performance of decision support systems – which examine large volumes of data to simplify the analysis of information for business decisions – making it an excellent benchmark to showcase 12Gb/s SAS MegaRAID technology and its ability to maximize the bandwidth of PCIe 3.0 platforms compared to 6Gb/s SAS.
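
Benchmark Factory drives the full workload in the demo, but the flavor of a TPC-H streaming query is easy to show directly. The sketch below times one TPC-H-style pricing-summary query (modeled on Q1) against MySQL using mysql-connector-python; the connection details and the populated lineitem table are assumptions for illustration, not part of the AIS demo setup.

```python
# Illustrative sketch only: times one TPC-H-style streaming query (modeled on Q1)
# against MySQL. Assumes a populated `lineitem` table and local credentials;
# the actual demo uses Benchmark Factory to drive the full TPC-H workload.
import time
import mysql.connector  # pip install mysql-connector-python

QUERY = """
SELECT l_returnflag, l_linestatus,
       SUM(l_quantity)                         AS sum_qty,
       SUM(l_extendedprice * (1 - l_discount)) AS sum_disc_price,
       AVG(l_extendedprice)                    AS avg_price,
       COUNT(*)                                AS count_order
FROM lineitem
WHERE l_shipdate <= DATE '1998-12-01' - INTERVAL 90 DAY
GROUP BY l_returnflag, l_linestatus
ORDER BY l_returnflag, l_linestatus
"""

conn = mysql.connector.connect(host="localhost", user="bench",
                               password="bench", database="tpch")  # hypothetical
cur = conn.cursor()
start = time.time()
cur.execute(QUERY)
rows = cur.fetchall()
print(f"{len(rows)} result rows in {time.time() - start:.1f} s")
cur.close()
conn.close()
```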

LSI MegaRAID SAS 9361-8i storage performance on display using Spotlight® on MySQL, which monitors the data traffic across Intel’s new R2216GZ4GC server based on the Intel® Xeon® processor E5-2600 v2.

The demo uses the latest Intel R2216GZ4GC servers based on the new Intel® Xeon® processor E5-2600 v2 product family to illustrate why 12Gb/s SAS MegaRAID solutions are needed to take full advantage of the bandwidth capabilities of PCIe® 3.0 bus technology.
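
The back-of-the-envelope math behind that claim is sketched below, using nominal line rates and encoding overheads rather than measured numbers: eight lanes of 6Gb/s SAS cannot fill a PCIe 3.0 x8 slot, while eight lanes of 12Gb/s SAS can saturate it.

```python
# Back-of-the-envelope bandwidth comparison (nominal rates, illustrative only).
# 6Gb/s and 12Gb/s SAS use 8b/10b encoding; PCIe 3.0 runs 8 GT/s with 128b/130b.
def sas_lane_mbps(gbits_per_s):
    return gbits_per_s * 1000 / 10      # 8b/10b: 10 bits on the wire per byte

PCIE3_LANE_MBPS = 8000 * 128 / 130 / 8  # ~985 MB/s per PCIe 3.0 lane
LANES = 8                               # typical 8-port RAID controller, x8 slot

pcie_x8 = PCIE3_LANE_MBPS * LANES       # ~7,900 MB/s slot ceiling
sas_6g  = sas_lane_mbps(6)  * LANES     # ~4,800 MB/s aggregate
sas_12g = sas_lane_mbps(12) * LANES     # ~9,600 MB/s aggregate

print(f"PCIe 3.0 x8   : {pcie_x8:,.0f} MB/s")
print(f"8 x 6Gb/s SAS : {sas_6g:,.0f} MB/s (leaves the slot underfed)")
print(f"8 x 12Gb/s SAS: {sas_12g:,.0f} MB/s (enough to saturate the slot)")
```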

When the benchmarks are run side by side on the two configurations, you can quickly see how much faster data transfers complete and how much more efficiently the 12Gb/s SAS-equipped Intel server handles data traffic. We used Quest Software’s Spotlight® on MySQL tool to monitor and measure data traffic from the storage devices to the clients running the database queries. More importantly, the test also shows how many more user queries the 12Gb/s SAS-based system can handle before complete resource saturation – 60 percent more than 6Gb/s SAS in this demonstration.

What does this mean to IT administrators? Clearly, resource utilization is much higher, improving the total cost of ownership (TCO) of their database servers. Conversely, 12Gb/s SAS can reduce cost per IO, since fewer drives are needed to deliver the same performance as the previous 6Gb/s SAS generation of storage infrastructure.



The world according to DAS

You might be surprised to find out how big the infrastructure for cloud and Web 2.0 is. It is mind-blowing. Microsoft has acknowledged packing more than 1 million servers into its datacenters, and by some accounts that is fewer than Google’s massive server count but a bit more than Amazon’s.

Facebook’s server count is said to have skyrocketed from 30,000 in 2012 to 180,000 just this past August, serving more than 900 million users. And the social media giant is even putting its considerable weight behind the Open Compute effort to make servers fit better in a rack and draw less power. The list of mega infrastructures also includes Tencent, Baidu and Alibaba, and the roster goes on and on.

Even more jaw-dropping is that nearly 99.9% of these hyperscale infrastructures are built with servers featuring direct-attached storage. That’s right – they do the computing and store the data. In other words, no special, dedicated storage gear. Yes, your Facebook photos, your SkyDrive personal cloud and the content you use for entertainment, on-demand video and gaming are all stored inside the server.

Direct-attached storage reigns supreme
Everything in these infrastructures – compute and storage – is built out of x86-based servers with storage inside. What’s more, the growth of direct-attached storage is many times greater than that of any other storage deployment model in IT. Rising deployments of cloud, or cloud-like, architectures are behind much of this expansion.

The prevalence of direct-attached storage is not unique to hyperscale deployments. Large IT organizations are looking to reap the rewards of building similar on-premises infrastructures. The benefits are impressive: build one kind of infrastructure (server racks), host anything you want (any of your properties), and scale easily when you need to. TCO is much lower than for infrastructures relying on network storage or SANs.

With direct-attached storage you no longer need dedicated appliances for your database tier, your email tier, your analytics tier, your EDA tier. All of that can be hosted on scalable, shared-nothing infrastructure. And just as with hyperscale, the storage is all in the server. No SAN storage required.

Open Compute, OpenStack and software-defined storage drive DAS growth
Open Compute is part of the picture. A recent Open Compute show I attended was mostly sponsored by hyperscale customers and suppliers, and many big-bank IT folks attended. But Open Compute isn’t the only initiative driving growing deployments of direct-attached storage; so are software-defined storage and OpenStack. Big application vendors such as Oracle, Microsoft, VMware and SAP are also on board, providing solutions that support server-based storage/compute platforms that are easy and cost-effective to deploy, maintain and scale, and need no external storage (or SAN, including all-flash arrays).

So if you are a network-storage or SAN manufacturer, you have to be doing some serious thinking (many have already) about how you’re going to catch and ride this huge wave of growth.



Most people fully understand that electronics are useless without power, but what happens when devices lose power in the middle of operating? The answer depends on a number of variables, including the type of electronic device in question.

For solid state drives (SSDs) the answer depends on factors such as whether an uninterruptible power supply (UPS) is connected, which controller or flash processor is used, the design of the SSD’s power circuit, and the type of memory. If an SSD is in the middle of a write operation to the flash memory when power is disconnected, many bad things can happen if the right safety measures are not in place. Many users do not think about the operations an SSD performs on its own, such as background garbage collection, that could be under way when the power fails. Without the correct protection, in most cases data will be corrupted.

According to Nielsen, 108.4 million viewers were tuned into the 2013 Super Bowl in New Orleans, only to be shocked when the power went down for 34 minutes in the middle of the game. If power can be lost during such an incredibly high-profile event, it can happen just about anywhere.

Inside the New Orleans Superdome stadium operations and broadcast server rooms
Enterprise computing environments typically have multiple servers with data connections and lots of storage. Over the past few years, a growing percentage of that storage has been kept on SSDs for the very active or “hot” data. This greatly improves data access time and reduces the overall latency of the system. Enterprise servers are often connected to UPS systems that supply the connected devices with temporary power during a power failure.

Usually this is enough power to support uninterrupted system operations until power is restored, or at least until technicians can complete their current work and shut down safely. However, there are many cases where UPS systems are not deployed or fail to operate properly themselves. In those cases, the server experiences a power failure as abrupt as if someone had yanked the power cord from the wall socket.

The LSI® SandForce® flash controllers are at the heart of many popular SSDs sold today. The flash controller connects the host computer with the flash memory to store user data in fast non-volatile memory. The SandForce flash controllers are specifically engineered to operate in different environments, and the SF-2500/2600 FSPs are designed to provide the high level of data integrity required for enterprise applications. For power failure protection, they include a combination of firmware (FW) and hardware circuitry that monitors the power coming into the SSD. In the event of a power failure or even a brown-out, the flash controller is alerted to the situation, and hold-up capacitors in the SSD provide the power and time the controller needs to complete pending writes to the flash memory. This same circuit is also designed to protect against lower-page corruption with multi-level cell (MLC) memory.
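
The real implementation is firmware plus dedicated analog circuitry, but the control flow described above can be sketched at a high level. The event loop below is purely illustrative; the thresholds, function names and hold-up budget are our own placeholders, not LSI’s design:

```python
# Purely illustrative sketch of a power-fail protection flow for an SSD
# controller. All names, thresholds and timings are placeholders; the real
# SandForce design is implemented in firmware and dedicated hardware.
import time

BROWNOUT_VOLTS   = 4.5    # placeholder threshold below which we assume trouble
HOLDUP_BUDGET_MS = 20.0   # placeholder energy budget from hold-up capacitors

def read_supply_voltage():
    """Stand-in for the hardware voltage monitor."""
    return 5.0

def flush_pending_writes(deadline_ms):
    """Stand-in for committing in-flight host and garbage-collection writes
    (including partially programmed MLC lower pages) before energy runs out."""
    print(f"flushing pending writes, {deadline_ms:.0f} ms of hold-up remaining")

def power_fail_monitor():
    while True:
        if read_supply_voltage() < BROWNOUT_VOLTS:
            start = time.monotonic()
            # Stop accepting new host commands and spend the capacitor energy
            # finishing whatever is already in flight.
            flush_pending_writes(HOLDUP_BUDGET_MS)
            elapsed_ms = (time.monotonic() - start) * 1000
            assert elapsed_ms < HOLDUP_BUDGET_MS, "flush exceeded hold-up budget"
            break
        time.sleep(0.001)  # poll the supply rail
```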

Watch out for SSD solutions that provide backup capacitors but lack the support circuitry and special firmware needed to ensure the data is fully committed to the flash memory before the power runs out. Even when such circuits are present, only true enterprise SSDs that are meticulously designed and tested to withstand power failures are up to the task of storing and protecting highly critical data.

In the control room and down on the field
The usage patterns of non-enterprise systems like notebooks and ultrabooks call for a different power failure support mechanism. When you have a notebook or ultrabook, you have a built-in mini-UPS. A power outage at the wall socket has no impact on the system until the battery gets low. At that point the operating system tells the computer to shut down, which gives the SSD ample warning to shut down safely and ensure the integrity of the data. But what if the operating system locks up and does not warn the SSD, or the system is an AC-powered desktop without a battery?

The LSI SF-2100/2200 FSPs are purpose-built for these client environments and operate with the assumption that power could disappear at any point in time. They use special FW techniques so that even without a battery present, as is the case with desktop systems, they greatly limit the potential for data loss.

The naked facts
It should be clear that the answer to the original question depends heavily on the flash controller at the heart of the SSD. Without the critical features discussed above, which are designed into the LSI SandForce flash controllers, it is entirely possible to lose data during a power failure. The LSI SandForce flash controllers are engineered to withstand power failures like the one that hit New Orleans at the Super Bowl, but don’t expect them to fix wardrobe malfunctions.
