When looking at the near future of data center technology, there are two very important trends to consider. First—the adoption of public and private cloud computing continues to grow more pervasive. Enterprises, software developers, and home users are all making the transition to cloud-based models for services and storage.
Second—devices, data, and network demand are all projected to grow at explosive rates over the next few years. By 2020, the digital universe – the data we create and copy annually – will reach 44 zettabytes, or 44 trillion gigabytes[1]. There will be an estimated 24 billion IP-connected devices online (up from 13 billion in 2013). Network expansion is expected to more than triple in that same timeframe, from 63 million new server ports to 206 million[2].
These looming demand increases in particular are driving the development of the software-defined datacenter (SDDC), an architecture that can deliver cloud-based services with optimized capacity, efficiency, and flexibility.
The evolution of the SDDC is being enabled by the standardization of IT hardware infrastructure. Core IT resources (compute, network, and storage) are abstracted from the underlying hardware and organized into resource-specific pools that can span multiple physical locations.
These virtualized resources are overlaid with advanced management capabilities, which allow IT resources such as computing cycles, storage, and network to be allocated on-demand and at-scale for specific software requirements. Automated provisioning and orchestration functions boost the efficiency of cloud-based applications while reducing the burden on IT.
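As a simplified illustration of on-demand allocation against virtualized resource pools – the pool names, capacities, and functions here are hypothetical, not any vendor's API – provisioning can be sketched like this:

```python
# Illustrative sketch of allocating from virtualized resource pools.
# All names and capacities are hypothetical, not a real SDDC interface.

class ResourcePool:
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity   # total units in the pool
        self.allocated = 0         # units currently in use

    def available(self):
        return self.capacity - self.allocated

    def allocate(self, units):
        if units > self.available():
            raise RuntimeError(f"{self.name} pool exhausted")
        self.allocated += units

    def release(self, units):
        self.allocated = max(0, self.allocated - units)

# Abstracted pools for the three core resource silos.
pools = {
    "compute": ResourcePool("compute", capacity=128),   # vCPUs
    "storage": ResourcePool("storage", capacity=4096),  # GB
    "network": ResourcePool("network", capacity=40),    # Gbit/s
}

def provision(app, requirements):
    """Allocate each required resource, rolling back on failure."""
    granted = []
    try:
        for resource, units in requirements.items():
            pools[resource].allocate(units)
            granted.append((resource, units))
    except RuntimeError:
        for resource, units in granted:   # roll back partial grants
            pools[resource].release(units)
        raise
    return granted

provision("mail-service", {"compute": 8, "storage": 256, "network": 2})
print(pools["compute"].available())   # vCPUs left for other workloads
```

The rollback on failure mirrors what an orchestration layer has to guarantee: an application either gets its full set of resources or none, so a half-provisioned request never strands capacity.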
Software-defined datacenters will enable IT administrators to efficiently allocate resources on demand and track usage for ease of billing to internal business units. They also offer developers potential time-to-market advantages: the ability to quickly release new software and rapidly scale capacity up and down with demand over an app’s natural lifecycle.
How does SDDC work?
The primary technical objective of SDDC is to create a virtualized pool of the three main component silos in the traditional IT infrastructure stack (compute, network, and storage) with the ability to scale across these components as needed. This re-imagined data center architecture will allow IT managers to deploy hardware resources in support of applications and more effectively manage the lifecycles of individual hardware components, without ever disrupting application uptime.
Another benefit of a comprehensively virtual architecture is that it can offer capabilities beyond those of a top-down control structure, where the software merely simplifies the functions of subordinate hardware systems. Instead, software-defined datacenters offer a dynamic feedback loop between the resource layers of the data center and the operating software: these layers interact through applied analytics, enabling automated controls and real-time IT management. To achieve this, however, the underlying hardware platform must be sufficiently intelligent and capable of integrating with centralized software control.
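That feedback loop can be sketched as a simple control loop in which utilization telemetry from the resource layer drives automated scaling decisions. This is a hypothetical illustration, not a real SDDC controller; the thresholds and step sizes are invented for the example:

```python
# Hypothetical sketch of an analytics-driven control loop: the management
# layer reads utilization telemetry from a resource pool and scales it up
# or down automatically. Thresholds and step sizes are illustrative only.

SCALE_UP_THRESHOLD = 0.80    # add capacity above 80% utilization
SCALE_DOWN_THRESHOLD = 0.30  # reclaim capacity below 30% utilization

def control_step(capacity, demand):
    """Return the new capacity after one pass of the feedback loop."""
    utilization = demand / capacity
    if utilization > SCALE_UP_THRESHOLD:
        capacity *= 2                                # scale out before contention hurts apps
    elif utilization < SCALE_DOWN_THRESHOLD:
        capacity = max(demand * 2, capacity // 2)    # reclaim idle capacity
    return capacity

# Simulate telemetry samples arriving from the resource layer.
capacity = 100
for demand in [50, 85, 170, 40, 20]:
    capacity = control_step(capacity, demand)
    print(f"demand={demand:4d} -> capacity={capacity}")
```

The point of the sketch is the direction of the arrows: hardware reports up, software decides, and the decision changes the hardware's allocation before the next sample, which is exactly what a top-down-only control structure cannot do.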
In 2014, the digital universe will grow at a rate equal to an astounding 1.7 megabytes a minute for every person on Earth[1]. With explosive growth in user demand a practical inevitability over the next few years, SDDC will provide a standards-based, converged infrastructure that can offer new data center capabilities, greater efficiency, and improved flexibility for administrators of private and public cloud services.
To learn more about SDDC and the changes it’s driving in the datacenter, download our whitepaper.
Where did my email go?
This week I was dragged into the virtualized cloud kicking and screaming … well, sort of. LSI has moved me, and all my co-workers, from my nice, safe local Exchange server to one in the amorphous, mysterious cloud. Scary. My IT department says the new cloud email is a great thing. They are now promising me unlimited email storage. Long gone will be the days of harrowing emails telling me I am approaching my storage limit and soon will be unable to send new email.
With cloud email, I can keep everything forever! I am not quite sure that saving mountains of email will be a good thing :-). Other than having to redirect my tablet and smartphone to this new service, update my webmail bookmark and empty my email inbox, there was not much I had to do. So far, so good. I have not experienced any challenges or performance issues. A key reason is flash storage.
To be sure, virtualization is a great tool for improving physical server utilization and flexibility as well as reducing power, cooling and datacenter footprint costs. That’s why the use of virtualization for email, databases and desktops is growing dramatically. But virtualized servers are only as effective as the storage performance that supports them. If, as a datacenter manager, your clients cannot access their application data quickly or boot their virtual desktop in a reasonable time, your company’s productivity and client satisfaction can drop dramatically.
Today, most applications likely run on virtualized servers. The upside of server virtualization is that a company can improve server utilization and run more applications on fewer physical servers. This can reduce power consumption, make more efficient use of datacenter floor space and make it easier to configure servers and deploy applications. The cloud also helps streamline application development, allowing companies to more efficiently and cost effectively test software applications across a broad set of configurations and operating systems.
A heated dispute – storage contention
Once application testing is complete, a virtual server’s configuration and image can be put on a virtual shelf until they are needed again – freeing up memory, processing, storage and other resources on the physical server for new virtual servers with just a few keystrokes. But with virtualization and the cloud there can be downsides, like slow performance – especially storage performance.
When a number of virtual servers all use the same physical storage, they can end up fighting over storage bandwidth and capacity, a problem generally known as storage contention. These internecine battles can slow application response to a frustratingly glacial pace and lead to issues like VDI Boot and Login Storm that can stretch the time it takes for users to log in to tens of minutes.
Flash helps alleviate slowdowns in storage performance
Here is where flash comes to the rescue. New flash storage solutions are being deployed to help improve virtualized storage performance and alleviate productivity-sapping slowdowns caused by VDI Boot and Login Storm — the crush of end users booting up or logging in within a small window that overwhelms the server with data requests and degrades response times. Flash can be used as primary storage inside servers running virtual machines to dramatically speed storage response time. Flash can also be deployed as an intelligent cache for DAS- or SAN-connected storage and even as an external shared storage pool.
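The caching role flash plays here can be illustrated with a toy model. The latencies are illustrative round numbers, not measurements of any product, and the simple LRU policy stands in for whatever a real caching solution actually uses:

```python
# Toy model of a flash read cache in front of slower disk storage.
# Latencies are illustrative, not measured: ~0.1 ms for a flash read
# vs. ~10 ms for a disk seek. An LRU policy keeps hot blocks in flash.

from collections import OrderedDict

FLASH_LATENCY_MS = 0.1
DISK_LATENCY_MS = 10.0

class FlashCache:
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()   # block id -> data, in LRU order

    def read(self, block):
        """Return the simulated latency of reading one block."""
        if block in self.blocks:
            self.blocks.move_to_end(block)     # mark as recently used
            return FLASH_LATENCY_MS
        # Miss: fetch from disk, cache it, evict the LRU block if full.
        if len(self.blocks) >= self.capacity:
            self.blocks.popitem(last=False)
        self.blocks[block] = f"data-{block}"
        return DISK_LATENCY_MS

# A "login storm": many users re-reading the same small set of
# boot/profile blocks within a short window.
cache = FlashCache(capacity_blocks=64)
total = sum(cache.read(block) for _ in range(100) for block in range(10))
print(f"avg read latency: {total / 1000:.2f} ms")
```

Because a boot or login storm hammers a small, shared working set, all but the first pass is served from the flash tier, which is why caching even a modest amount of flash can collapse tens-of-minutes logins back to normal.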
It’s clear that virtualization will require higher storage performance and better, more cost-effective ways to deploy flash storage. But how much flash you need depends on your particular virtualization challenge, configuration and, of course, budget: while flash storage is extremely fast, it is costlier than disk-based storage. So choosing the right storage acceleration solution – such as LSI® Nytro™ Application Acceleration – can be as important as choosing the right cloud provider for your company’s email.
While my email is now stored in the cloud in Timbuktu, I know the flash storage solutions in that datacenter help keep my mail quickly accessible 24/7 whether I access it from my computer, tablet or smartphone, giving my productivity a boost. I can be assured that every action item I am sent will quickly make it to my inbox and be added to my ever-growing to-do list. Now my next big challenge is to improve my own response performance to those email requests!