The lifeblood of any online retailer is the speed of its IT infrastructure. Shoppers aren't infinitely patient. Sluggish infrastructure performance can make shoppers wait precious seconds longer than they can stand, sending them fleeing to other sites for a faster purchase. Our federal government's halting rollout of the Health Insurance Marketplace website is a glaring example of what can happen when IT infrastructure isn't solid. A few bad user experiences that go viral can be damaging enough. Tens of thousands can be crippling.
In hyperscale datacenters, any number of problems including network issues, insufficient scaling and inconsistent management can undermine end users' experience. But one that hits home for me is the impact of slow storage on the performance of databases, where the data sits. With the database at the heart of all those online transactions, retailers can ill afford to have their tier of database servers operating at anything less than peak performance.
Slow storage undermines database performance
Typically, Web 2.0 and e-commerce companies run relational databases (RDBs) on these massive server-centric infrastructures. (Take a look at my blog last week to get a feel for the size of these hyperscale datacenter infrastructures.) If you are running that many servers to support millions of users, you are likely using an open-source RDB such as MySQL or one of its variants. Keep in mind that Oracle 11gR2 likely retails around $30K per core, while MySQL is free. But the performance of both, and of most other relational databases, suffers immensely when transactions are retrieving data from storage (or disk). You can only throw so much RAM and CPU power at the performance problem … sooner rather than later you have to deal with slow storage.
Almost everyone in the industry – Web 2.0, cloud, hyperscale and other providers of massive database infrastructures – is lining up to solve this problem the best way they can. How? By deploying flash as the sole storage for database servers and applications. But is low-latency flash enough? For sheer performance it beats rotational disk hands down. But even flash storage has its limitations, most notably when you are trying to drive ultra-low latencies for write IOs. Most IO accesses by RDBs, which do the transactional processing, are a mix of reads and writes to storage – typically about 70% reads to 30% writes, at low queue depths (less than 4). It is those writes that can really slow things down.
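To see why a minority of writes can dominate, a back-of-the-envelope weighted average helps. A minimal sketch (the per-IO latencies below are hypothetical round numbers for illustration, not measurements from any particular device):

```python
def avg_io_latency_us(read_us, write_us, read_fraction=0.7):
    """Weighted average IO latency for a mixed read/write workload."""
    return read_fraction * read_us + (1 - read_fraction) * write_us

# Hypothetical flash device: fast reads, much slower (program-bound) writes.
read_us, write_us = 80, 800
print(round(avg_io_latency_us(read_us, write_us)))  # → 296
```

Even though writes are only 30% of the mix in this sketch, they account for over 80% of the average latency (240 of ~296 µs) – which is why shaving write latency pays off disproportionately.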
PCIe flash reduces write latencies
The good news is that the right PCIe flash technology in the mix can solve the slowdowns. Some interesting PCIe flash technologies designed to tackle this latency problem are on display at AIS this week. DRAM, and in particular NVDRAM, is being deployed as a tier in front of flash to really tackle those nasty write latencies.
Among other demos, we're showing how a Nytro™ 6000 series PCIe flash card helps solve MySQL database performance issues. The typical response time for a small data read (what the database sees for a database IO) from an HDD is 5ms. Flash-based devices such as the Nytro WarpDrive® card can complete the same read in less than 50µs on average during testing – roughly a two-orders-of-magnitude improvement in response time. That improvement translates into much higher transaction throughput from the same infrastructure – but in less space (flash is denser) and with a lot less power (flash consumes far less power than HDDs).
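As a quick sanity check on those two figures (5 ms vs. 50 µs), the per-IO arithmetic works out like this:

```python
hdd_s = 5e-3     # ~5 ms per small HDD read, as quoted above
flash_s = 50e-6  # ~50 µs per read from a PCIe flash card

speedup = hdd_s / flash_s         # per-IO improvement
hdd_serial_iops = 1 / hdd_s       # serial IOs/sec at queue depth 1
flash_serial_iops = 1 / flash_s

print(round(speedup), round(hdd_serial_iops), round(flash_serial_iops))
# → 100 200 20000
```

A single serial IO stream goes from roughly 200 to roughly 20,000 IOs per second – about 100x – before queue depth, controller parallelism or caching are even considered.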
We're also showing the Nytro 7000 series PCIe flash cards. They reach even lower write latencies than the 6000 series at very low queue depths. The 7000 series cards also provide DRAM buffering while maintaining data integrity even in the event of a power loss.
For online retailers and other businesses, higher database speeds mean more than just faster transactions. They can help keep those cash registers ringing.
Tags: AIS, database, DRAM, e-commerce, flash, flash memory, hard disk drive, HDD, hyperscale datacenter, latency, MySQL, NVDRAM, Nytro 6000, Nytro 7000, Nytro WarpDrive, Oracle, PCIe flash, relational database, storage latency, web 2.0, write latency
Scaling compute power and storage in space-constrained datacenters is one of the top IT challenges of our time. With datacenters worldwide pressed to maximize both within the same floor space, the central challenge is increasing density.
At IBM we continue to design products that help businesses meet their most pressing IT requirements, whether that's optimizing data analytics and data management, supporting the fastest-growing workloads such as social media and cloud delivery or, of course, increasing compute and storage density. Our technology partners are a crucial part of our work, and this week at AIS we are teaming with LSI to showcase our new high-density NeXtScale computing platform and x3650 M4 HD server. Both leverage LSI® SAS RAID controllers for data protection, and the x3650 M4 HD server features an integrated leading-edge LSI 12Gb/s SAS RAID controller.
NeXtScale System – ideal for HPC, cloud service providers and Web 2.0
The NeXtScale System®, an economical addition to the IBM System® family, maximizes usable compute density by packing up to 84 x86-based systems and 2,016 processing cores into a standard 19-inch rack to enable seamless integration into existing infrastructures. The family also enables organizations of all sizes and budgets to start small and scale rapidly for growth. The NeXtScale System is an ideal high-density solution for high-performance computing (HPC), cloud service providers and Web 2.0.
The System x3650 M4 HD, IBMâ€™s newest high-density storage server, is designed for data-intensive analytics or business-critical workloads. The 2U rack server supports up to 62% more drive bays than the System x3650 M4 platform, providing connections for up to 26 2.5-inch HDDs or SSDs. The server is powered by the Intel Xeon processor E5-2600 family and features up to 6 PCIe 3.0 slots and an onboard LSI 12Gb/s SAS RAID controller. This combination gives a big boost to data applications and cloud deployments by increasing the processing power, performance and data protection that are the lifeblood of these environments.
IBM dense storage solutions to help drive data management, cloud computing and big data strategies
Cloud computing and big data will continue to have a tremendous impact on the IT infrastructure and create data management challenges for businesses. At IBM, we think holistically about the needs of our customers and believe that our new line of dense storage solutions will help them design, develop and execute on their data management, cloud computing and big data strategies.
Tags: 12Gb/s SAS RAID controller, AIS, cloud service providers, dense computing, enterprise, enterprise IT, high performance computing, HPC, IBM, Intel Xeon processor, NeXtScale computing platform, PCIe 3.0, web 2.0, x3650 M4 HD server
Back in the 1990s, a new paradigm was forced into space exploration. NASA faced big cost cuts. But grand ambitions for missions to Mars were still on its mind. The problem was it couldn't dream and spend big. So the NASA mantra became "faster, better, cheaper." The idea was that the agency could slash costs while still carrying out a wide variety of programs and space missions. This led to some radical rethinks, and some fantastically successful programs that had very outside-the-box solutions. (Bouncing Mars landers anyone?)
That probably sounds familiar to any IT admin. And that spirit is alive at LSI's AIS – The Accelerating Innovation Summit, which is our annual congress of customers and industry pros, coming up Nov. 20-21 in San Jose. Like the people at Mission Control, they all want to make big things happen… without spending too much.
Take technology and line of business professionals. They need to speed up critical business applications. A lot. Or IT staff for enterprise and mobile networks, who must deliver more work to support the ever-growing number of users, devices and virtualized machines that depend on them. Or consider mega datacenter and cloud service providers, whose customers demand the highest levels of service, yet get that service for free. Or datacenter architects and managers, who need servers, storage and networks to run at ever-greater efficiency even as they grow capability exponentially.
(LSI has been working on many solutions to these problems, some of which I spoke about in this blog.)
It's all about moving data faster, better, and cheaper. If NASA could do it, we can too. In that vein, here's a look at some of the topics you can expect AIS to address around doing more work for fewer dollars:
And, I think you'll find some astounding products, demos, proof of concepts and future solutions in the showcase too – not just from LSI but from partners and fellow travelers in this industry. Hey – that's my favorite part. I can't wait to see people's reactions.
Since it rethought how to do business in 2002, NASA has embarked on nearly 60 Mars missions. Faster, better, cheaper. It can work here in IT too.
Tags: 12Gb/s SAS, AIS, big data analytics, cloud infrastructure, cloud services, datacenter, flash, flash memory, hyperscale datacenters, NAS, NASA, SAN, SDN, shareable DAS, software-defined networks, sub-20nm flash, triple-level cell flash, VDI, web 2.0
I've just been to China. Again. It's only been a few months since I was last there.
I was lucky enough to attend the 5th China Cloud Computing Conference at the China National Convention Center in Beijing. You probably have not heard of it, but it's an impressive conference. It's "the one" for the cloud computing industry. It was a unique view for me – more of an inside-out view of the industry. Everyone who's anyone in China's cloud industry was there. Our CEO, Abhi Talwalkar, had been invited to keynote the conference, so I tagged along.
First, the air was really hazy, but I don't think the locals considered it that bad. The US consulate iPhone app said the particulates were in the very unhealthy range. Imagine looking across the street. Sure, you can see the building there, but the next one? Not so much. Look up. Can you see past the 10th floor? No, not really. The building disappears into the smog. That's what it was like at the China National Convention Center, which is part of the same Olympics complex as the famous Birdcage stadium: http://www.cnccchina.com/en/Venues/Traffic.aspx
I had a fantastic chance to catch up with a university friend, who has been living in Beijing since the '90s, and is now a venture capitalist. It's amazing how almost 30 years can disappear and you pick up where you left off. He sure knows how to live. I was picked up in his private limo, whisked off to a very well-known restaurant across the city, where we had a private room and private waitress. We even had some exotic, special dishes that needed to be ordered at least a day in advance. Wow. But we broke Chinese tradition and had imported beer in honor of our Canadian education.
Sizing up China’s cloud infrastructure
The most unusual meeting I attended was an invitation-only session – the Sino-American roundtable on cloud computing. There were just about 40 people in a room – half from the US, half from China. Mostly what I learned is that the cloud infrastructure in China is fragmented, and probably sub-scale. And it's like that for a reason. It was difficult to understand at first, but I think I've made sense of it.
I started asking friends and consultants why, and got some interesting answers. Essentially, different regional governments are trying to capture the cloud "industry" in their locality, so they promote activity, and they promote the creation of new tools and infrastructure for it. Why reuse something that's open source and works if you don't have to, and you can create high-tech jobs instead? (That's sarcasm, by the way.) Many technologists I spoke with felt this will hold them back, and that they are probably 3-5 years behind the US. As well, each government-run industry specifies the datacenter and infrastructure needed to be a supplier or ecosystem partner with it, and each is different. The national train system has a different cloud infrastructure from the agriculture department, and from the shipping authority, etc. … and if you do business with them – that is, if you are part of their ecosystem of vendors – then you use their infrastructure. It all spells fragmentation and sub-scale. In contrast, the Web 2.0 / social media companies seem to be doing just fine.
Baidu was also showing off its open rack. Itâ€™s an embodiment of the Scorpio V1 standard, which was jointly developed with Tencent, Alibaba and China Telecom. It views this as a first experiment, and is looking forward to V2, which will be a much more mature system.
I was also lucky to have personal meetings with general managers, chief architects and effective CTOs of the biggest cloud companies in China. What did I learn? They are all at an inflection point. Many of their key technologists have experience at American Web 2.0 companies, so they're able to evolve quickly, leveraging that industry knowledge. They're all working to build or grow their own datacenters, their own infrastructure. And they're aggressively expanding products, not just users, so they're getting a compound growth rate.
Here's a little of what I learned. In general, there is a trend to simplify infrastructure, harmonize divergent platforms, and deploy more infrastructure while spending less on each unit. (In general, they don't make as much per user as American companies, but they have more users.) As a result, they are more cost-focused than US companies. And they are starting to put more emphasis on operational simplicity. As one GM described it to me: "Yes, techs are inexpensive in China for maintenance, but more often than not they make mistakes that impact operations." So we (LSI) will be focusing more on simplifying management and maintenance for them.
Baidu's biggest Hadoop cluster is 20K nodes. I believe that's as big as Yahoo's – and Yahoo is the originator of Hadoop. Baidu has a unique use profile for flash – it's not like the hyperscale datacenters in the US. But Baidu is starting to consume a lot. Like most other hyperscale datacenters, it is working on storage erasure coding across servers, racks and datacenters, and it is trying to make a unified namespace across everything. One of its main interests is architecture at the datacenter level: harmonizing the various platforms and looking for the optimum across the whole datacenter. In general, Baidu is very proud of the advances it has made, it has real confidence in its vision and route forward, and from what I heard, its architectural ambitions are big.
JD.com (formerly 360buy.com) is the largest direct e-commerce company in China and (only) had about $10 billion (US) in revenue last year, with 100% CAGR. As the GM there said, its growth has to slow sometime, or in 5 years it'll be the biggest company in the world. I think it is the closest equivalent to Amazon out there, with similar ambitions. It is in the process of transforming to a self-built, self-managed datacenter infrastructure. It is a company I am going to keep my eyes on.
Tencent is expanding into some interesting new businesses. Sure, people know about the Tencent cloud services that the Chinese government will be using, but Tencent also has some interesting and unique cloud services coming. Let's just say even I am interested in using them. And of course, while Tencent is already the largest Web 2.0 company in China, its new services promise to push it to new scale and new markets.
Extra! Extra! Read all about it …
And then there was press. I had a very enjoyable conversation with Yuan Shaolong, editor at WatchStor, that I think ran way over. Amazingly – we discovered we have the same favorite band, even half a world away from each other. The results are here, though I'm not sure if Google Translate messed a few things up, or if there was some miscommunication, but in general, I think most of the basics are right: http://translate.google.com/translate?hl=en&sl=zh-CN&u=http://tech.watchstor.com/storage-module-144394.htm&prev=/search%3Fq%3Drobert%2Bober%2BLSI%26client%3Dfirefox-a%26rls%3Dorg.mozilla:en-US:official%26biw%3D1346%26bih%3D619
I just keep learning new things every time I go to China. I suspect that has as much to do with how quickly things are changing as with how much new stuff there is to learn. So I expect it won't be too long until I go to China again…
Tags: Abhi Talwalkar, Alibaba, Amazon, Baidu, China, China Cloud Computing Conference, China National Convention Center, China Telecom, datacenter, Hadoop, hyperscale, JD.com, WatchStor, web 2.0, Yahoo
One of the big challenges I see so many IT managers struggling with is how to deal with the almost exponential growth of data that has to be stored, accessed and protected – on IT budgets that are flat or growing at rates far lower than the nonstop increases in storage volumes.
I've found that it doesn't seem to matter if it is a departmental or small business datacenter, or a hyperscale datacenter with many thousands of servers. The data growth continues to outpace the budgets.
At LSI we call this disparity between the IT budget and the growth in needs the "data deluge gap."
Of course, smaller datacenters have different issues than the hyperscale datacenters. However, no matter the datacenter size, concerns generally center on TCO. This, of course, includes both CapEx and OpEx for the storage systems.
Itâ€™s a good feeling to know that we are tackling these datacenter growth and operations issues head-on for many different environments â€“ large and small.
LSI has developed and is starting to provide a new shared DAS (sDAS) architecture that supports the sharing of storage across multiple servers. We call it the LSI® Syncro™ architecture, and it really is the next step in the evolution of DAS. Our Syncro solutions deliver increased uptime, help reduce overall costs, increase agility, and are designed for ease of deployment. The fact that the Syncro architecture is built on our proven MegaRAID® technology means that our customers can trust it to work in all types of environments.
The Syncro architecture is a very exciting new capability that addresses storage and data protection needs for numerous datacenter environments. Our first product, Syncro MX-B, is targeted at hyperscale datacenter environments, including Web 2.0 and cloud. I will be blogging about that offering in the near future. We will soon be announcing details of our Syncro CS product line, previously known as High Availability DAS, for small and medium businesses, and I will blog about what it can mean for our customers and users.
Both of these initial versions of the Syncro architecture are very exciting, and I really like to watch how datacenter managers react when they find out about these game-changing capabilities.
We say that "with the LSI Syncro architecture you take DAS out of the box and make it sharable and scalable. The LSI Syncro architecture helps make your storage Simple. Smart. On." Our tagline for Syncro is "The Smarter Way to ON.™" It really is.
To learn more, please visit the LSI Shared Storage Solutions web page: http://www.lsi.com/solutions/Pages/SharedStorage.aspx