Pushing your enterprise cluster solution to deliver the highest performance at the lowest cost is key in architecting scale-out datacenters. Administrators must expand their storage to keep pace with their compute power as capacity and processing demands grow.
Beyond price and capacity, storage resources must also deliver enough bandwidth to support these growing demands. Without enough I/O bandwidth, connected servers and users can bottleneck, requiring sophisticated storage tuning to maintain reasonable performance. By using direct attached storage (DAS) server architectures, IT administrators can reduce the complexities and performance latencies associated with storage area networks (SANs).

Now, with LSI 12Gb/s SAS or MegaRAID® technology, or both, connected to 12Gb/s SAS expander-based storage enclosures, administrators can leverage DataBolt™ technology to clear I/O bandwidth bottlenecks. The result: better overall resource utilization, while preserving legacy drive investments. Typically, a slower end device would step down the entire 12Gb/s SAS storage subsystem to 6Gb/s SAS speeds. How does DataBolt technology overcome this? Without diving too deep into the nuts and bolts: intelligence in the expander buffers data and then transfers it out to the drives at 6Gb/s speeds, matching the bandwidth between faster hosts and slower SAS or SATA devices.
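To make the bandwidth-matching idea concrete, here is a minimal sketch (not LSI code) of the arithmetic: without buffering, one 6Gb/s end device steps the host-facing link down to 6Gb/s, while with DataBolt-style aggregation the host link can stay at 12Gb/s as long as enough 6Gb/s devices are streaming concurrently to fill it. The function names and device counts are illustrative assumptions.

# Illustrative model only: compares host-side bandwidth with and without
# expander buffering (DataBolt-style aggregation). Values are nominal
# link rates, not measured results.

def host_bandwidth_without_buffering(host_gbps, device_gbps):
    # Legacy behavior: the host-facing link negotiates down to the
    # slowest device speed in the topology.
    return min(host_gbps, device_gbps)

def host_bandwidth_with_buffering(host_gbps, device_gbps, active_devices):
    # Buffered behavior: the expander services several 6Gb/s devices
    # concurrently, so the host link can stay at its full rate as long
    # as the active devices can collectively supply the data.
    return min(host_gbps, device_gbps * active_devices)

if __name__ == "__main__":
    HOST_LINK_GBPS = 12.0   # 12Gb/s SAS host/expander link
    DEVICE_GBPS = 6.0       # 6Gb/s SAS or SATA end device

    print(host_bandwidth_without_buffering(HOST_LINK_GBPS, DEVICE_GBPS))  # 6.0
    print(host_bandwidth_with_buffering(HOST_LINK_GBPS, DEVICE_GBPS, 4))  # 12.0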
So for this demonstration at AIS, we are showcasing two Hadoop Distributed File System (HDFS) servers. Each server houses the newly shipping MegaRAID 9361-8i 12Gb/s SAS RAID controller connected to a drive enclosure featuring a 12Gb/s SAS expander and 32 6Gb/s SAS hard drives. One server has DataBolt enabled, while the other has it disabled.
For the benchmarks, we ran DFSIO, which simulates MapReduce workloads and is typically used to detect network performance bottlenecks, tune hardware configurations and measure overall I/O performance.
The primary goal of the DFSIO benchmarks is to saturate the storage arrays with random read workloads in order to ensure maximum performance of a cluster configuration. In our tests, MapReduce jobs completed faster in 12Gb/s mode, and overall throughput increased by 25%.
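For readers who want to try something similar, a DFSIO read pass is typically launched as in the sketch below. The jar path, file count and file size are placeholders that vary by Hadoop distribution; this is not the exact configuration used in the AIS demo.

# Sketch of driving a TestDFSIO read pass from Python; the jar location
# and sizes are placeholders and differ across Hadoop releases.
import subprocess

JOBCLIENT_TESTS_JAR = "/usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-jobclient-tests.jar"

def run_dfsio_read(nr_files=32, file_size_mb=1024):
    cmd = [
        "hadoop", "jar", JOBCLIENT_TESTS_JAR, "TestDFSIO",
        "-read",
        "-nrFiles", str(nr_files),
        "-fileSize", str(file_size_mb),
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    run_dfsio_read()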
With the much-anticipated launch of 12Gb/s SAS MegaRAID and 12Gb/s SAS expanders featuring DataBolt™ SAS bandwidth-aggregation technology, LSI is taking the bold step of moving beyond traditional I/O performance benchmarks like IOMeter to benchmarks that simulate real-world workloads.
To fully illustrate the benefit many IT administrators can realize by using 12Gb/s SAS MegaRAID products on their new server platforms, LSI is demonstrating benchmarks running on actual enterprise applications at AIS.
For our end-to-end 12Gb/s SAS MegaRAID demonstration, we chose Benchmark Factory® for Databases running on a MySQL database. Benchmark Factory, a database performance testing tool that supports database workload replay, industry-standard benchmark testing and scalability testing, uses real database application workloads such as TPC-C, TPC-E and TPC-H. We chose the TPC-H benchmark, a decision-support benchmark, because of its large streaming query profile. TPC-H measures the performance of decision support systems, which examine large volumes of data to simplify the analysis of information for business decisions, making it an excellent benchmark to showcase 12Gb/s SAS MegaRAID technology and its ability to maximize the bandwidth of PCIe® 3.0 platforms compared to 6Gb/s SAS.
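To give a flavor of the workload profile, the sketch below times a TPC-H Q1-style streaming aggregate against a MySQL lineitem table. It is purely illustrative: the actual demo drives the standard TPC-H workload through Benchmark Factory, and the connection settings here are placeholders.

# Illustrative only: times a TPC-H Q1-style streaming aggregate against a
# MySQL "lineitem" table. Connection details are placeholders; the real
# demo uses Benchmark Factory, not this script.
import time
import mysql.connector  # pip install mysql-connector-python

TPCH_Q1_STYLE = """
    SELECT l_returnflag, l_linestatus,
           SUM(l_quantity)                         AS sum_qty,
           SUM(l_extendedprice)                    AS sum_base_price,
           SUM(l_extendedprice * (1 - l_discount)) AS sum_disc_price,
           AVG(l_quantity)                         AS avg_qty,
           COUNT(*)                                AS count_order
    FROM lineitem
    WHERE l_shipdate <= DATE_SUB('1998-12-01', INTERVAL 90 DAY)
    GROUP BY l_returnflag, l_linestatus
    ORDER BY l_returnflag, l_linestatus
"""

conn = mysql.connector.connect(host="dbhost", user="bench", password="secret", database="tpch")
cur = conn.cursor()
start = time.time()
cur.execute(TPCH_Q1_STYLE)
rows = cur.fetchall()
print(f"{len(rows)} groups returned in {time.time() - start:.1f}s")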
The demo uses the latest Intel R2216GZ4GC servers based on the new Intel® Xeon® processor E5-2600 v2 product family to illustrate how 12Gb/s SAS MegaRAID solutions are needed to take advantage of the bandwidth capabilities of PCIe® 3.0 bus technology.
When the benchmarks are run side by side on the two platforms, you can quickly see how much faster data transfers execute and how much more efficiently the Intel servers handle data traffic. We used Quest Software's Spotlight® on MySQL tool to monitor and measure data traffic from the end storage devices to the clients running the database queries. More importantly, the test also shows how many more user queries the 12Gb/s SAS-based system can handle before complete resource saturation: 60 percent more than 6Gb/s SAS in this demonstration.
What does this mean to IT administrators? Clearly, resource utilization is much higher, improving total cost of ownership (TCO) for their database servers. Or, conversely, 12Gb/s SAS can reduce cost per I/O, since fewer drives can be used with the server to deliver the same performance as the previous 6Gb/s SAS generation of storage infrastructure.
I remember in the mid-1990s the question of how many minutes away from a diversion airport a two-engine passenger jet should be allowed to fly in the event of an engine failure. Staying in the air long enough is one of those high-availability functions that really matters. The Boeing 777 was the first aircraft to enter service with a 180-minute extended operations (ETOPS) certification, which meant that longer over-water and remote-terrain routes were immediately possible.
The question was "can a two-engine passenger aircraft be as safe as a four-engine aircraft for long-haul flights?" The short answer is yes. Reducing the points of failure from four engines to two, while meeting strict maintenance requirements and maintaining redundant systems, reduces the probability of a failure. The 777 and many other aircraft have proven to be safe for these longer flights. Recently, the 777 received FAA approval for a 330-minute ETOPS rating, which allows airlines to offer routes that are longer, straighter and more economical.
What does this have to do with a datacenter? It turns out that some hyperscale datacenters house hundreds of thousands of servers, each with its own boot drive. Each of these boot drives is a potential point of failure, which can drive up acquisition and operating costs and the odds of a breakdown. Datacenter managers need to control CapEx, so for the sheer volume of server boot drives they commonly use the lowest-cost 2.5-inch notebook SATA hard drives. The problem is that these commodity hard drives tend to fail more often. This is not a huge issue with only a few servers, but in a datacenter with 200,000 servers, LSI has found through internal research that, on average, 40 to 200 drives fail per week (2.5-inch hard drive, roughly 2.5- to 4-year lifespan, which equates to a conservative 5% failure rate per year).
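That weekly failure figure follows directly from the fleet size and an assumed annual failure rate (AFR). The short sketch below reproduces the arithmetic, using a 1% to 5% AFR range as an assumption to bracket the 40-to-200 number.

# Back-of-the-envelope check of the "40 to 200 failed boot drives per week"
# figure for a 200,000-server fleet. The AFR range is an assumption chosen
# to bracket the numbers quoted in the text.
SERVERS = 200_000
WEEKS_PER_YEAR = 52

for annual_failure_rate in (0.01, 0.05):  # 1% and 5% AFR
    failures_per_week = SERVERS * annual_failure_rate / WEEKS_PER_YEAR
    print(f"AFR {annual_failure_rate:.0%}: ~{failures_per_week:.0f} drive failures/week")
# AFR 1%: ~38 drive failures/week
# AFR 5%: ~192 drive failures/week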
Traditionally, a hyperscale datacenter has a sea of racks filled with servers. LSI approximates that, in the majority of large datacenters, at least 60% of the servers (web servers, database servers, etc.) use a boot drive requiring no more than 40GB of storage capacity, since it performs only boot-up and journaling or logging. For higher reliability, the key is to consolidate these low-capacity drives, virtually speaking. With our Syncro™ MX-B Rack Boot Appliance, we can consolidate the boot drives for 24 or 48 of these servers into a single mirrored array (using LSI MegaRAID® technology), which makes 40GB of virtual disk space available to each server.
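The capacity math behind that consolidation is simple, as the sketch below shows: 24 or 48 servers at 40GB each fit on a single mirrored pair of commodity high-capacity drives. The 2TB usable mirror size is an illustrative assumption, not a Syncro MX-B specification.

# Capacity check for consolidating per-server boot volumes onto one
# mirrored (RAID 1) array. The drive size is an illustrative assumption.
BOOT_VOLUME_GB = 40
MIRRORED_CAPACITY_GB = 2000   # assumed usable capacity of the mirrored pair

for servers in (24, 48):
    required_gb = servers * BOOT_VOLUME_GB
    print(f"{servers} servers need {required_gb}GB of virtual boot space "
          f"({required_gb / MIRRORED_CAPACITY_GB:.0%} of a {MIRRORED_CAPACITY_GB}GB mirror)")
# 24 servers need 960GB of virtual boot space (48% of a 2000GB mirror)
# 48 servers need 1920GB of virtual boot space (96% of a 2000GB mirror)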
Combining all these boot drives into fewer, larger drives that are mirrored helps reduce total cost of ownership (TCO) and improves reliability, availability and serviceability. If a rack boot appliance drive fails, an alert is sent to the IT operator. The operator then simply replaces the failed drive, and the appliance automatically copies the disk image from the working drive. The upshot is that operations are simplified, OpEx is reduced, and there is usually no downtime.
Syncro MX-B not only improves reliability by reducing failure points; it also significantly reduces power requirements (up to 40% less in the 24-port version, up to 60% less in the 48-port version), a good thing for the corporate utility bill and climate change. This, in turn, reduces cooling requirements and helps make hardware upgrades less costly. With the boot drives disaggregated from the servers, there's no need to simultaneously upgrade the drives, which typically are still functional during server hardware upgrades.
In the case of both commercial aircraft and servers, less really can be more (or at least better) in some situations. Eliminating excess can make the whole system simpler and more efficient.
To learn more, please visit the LSI® Shared Storage Solutions web page: http://www.lsi.com/solutions/Pages/SharedStorage.aspx
One of the big challenges I see so many IT managers struggling with is how to deal with the nearly exponential growth of data that has to be stored, accessed and protected, while IT budgets are flat or growing at rates well below the nonstop increases in storage volumes.
I've found that it doesn't seem to matter whether it is a departmental or small business datacenter, or a hyperscale datacenter with many thousands of servers: the data growth continues to outpace the budgets.
At LSI we call this disparity between the IT budget and the growth in needs the "data deluge gap."
Of course, smaller datacenters have different issues than hyperscale datacenters. However, no matter the datacenter size, concerns generally center on TCO, which of course includes both CapEx and OpEx for the storage systems.
It's a good feeling to know that we are tackling these datacenter growth and operations issues head-on for many different environments, large and small.
LSI has developed and is starting to provide a new shared DAS (sDAS) architecture that supports the sharing of storage across multiple servers. We call it the LSI® Syncro™ architecture, and it really is the next step in the evolution of DAS. Our Syncro solutions deliver increased uptime, help reduce overall costs, increase agility, and are designed for ease of deployment. The fact that the Syncro architecture is built on our proven MegaRAID® technology means that our customers can trust that it will work in all types of environments.
The Syncro architecture is a very exciting new capability that addresses storage and data protection needs for numerous datacenter environments. Our first product, Syncro MX-B, is targeted at hyperscale datacenter environments, including Web 2.0 and cloud. I will be blogging about that offering in the near future. We will soon be announcing details on our Syncro CS product line, previously known as High Availability DAS, for small and medium businesses, and I will blog about what it can mean for our customers and users.
Both of these initial versions of the Syncro architecture are very exciting, and I really like to watch how datacenter managers react when they learn about these game-changing capabilities.
We say that "with the LSI Syncro architecture you take DAS out of the box and make it sharable and scalable. The LSI Syncro architecture helps make your storage Simple. Smart. On." Our tagline for Syncro is "The Smarter Way to ON.™" It really is.
To learn more, please visit the LSI Shared Storage Solutions web page: http://www.lsi.com/solutions/Pages/SharedStorage.aspx