It seems like our smartphones are getting bigger and bigger with each generation. Sometimes I see people holding what appears to be a tablet computer up next to their head. I doubt they know how ridiculous they look to the rest of us, and I wonder what pants today have pockets that big.
I certainly like the convenience of my smartphone's instant-on capability, but I still need my portable computer, with its big screen and keyboard, as a separate device.
A few years ago, SATA-IO, the standards body, added a new feature to the Serial ATA (SATA) specification designed to further reduce power consumption in portable computer products. This new feature, DevSleep, lets systems with solid state drives (SSDs) act more like smartphones: you can go days without plugging in to recharge, then turn the system on instantly and see the latest email, social media updates, news and events.
Why not just switch the system off?
When most PC users think about switching off their system, they dread waiting for the operating system to boot back up. That is one of the key advantages of replacing the hard disk drive (HDD) in the system with a faster SSD. However, in our instant gratification society, we hate to wait even seconds for web pages to come up, so waiting minutes for your PC to turn on and boot up can feel like an eternity. Therefore, many people choose to leave the system on to save those precious moments... but at the expense of battery life.
Can I get this today?
To further extend battery life, the new DevSleep feature requires a signal change on the SATA connector. This change is currently supported only in new Intel® Haswell chipset-based platforms announced this June. What's more, the SSD in these systems must support the DevSleep feature and monitor the signal on the SATA connector. Most systems that support DevSleep will likely be very low-power notebooks that already ship with an SSD installed using a small mSATA, M.2, or similar edge connector. Therefore, the signal change on the SATA interface will not immediately affect the rest of the SSDs designed for desktop systems shipping through retail and online sources. Note that not all SSDs are created equal and, while many claim support for DevSleep, be sure to look at the fine print to compare the actual power draw when in DevSleep.
At Computex last month, LSI announced support for the DevSleep feature and staged demonstrations showing a 400x reduction in idle power. It should be noted that a 400x reduction in power does not directly translate to a 400x increase in battery life, but any reduction in power will give you more time on the battery, and that will certainly benefit any user who often works without a power cord.
Not likely. But you might be led to believe that solving your computer data security problems is entirely possible when someone tells you that TCG Opal is the key. According to its website, "The Trusted Computing Group (TCG) is a not-for-profit organization formed to develop, define and promote open, vendor-neutral, global industry standards, supportive of a hardware-based root of trust, for interoperable trusted computing platforms."
That might take a bit to digest, but think of TCG as a group of companies creating standards to simplify deployment and increase adoption of data security. The consortium has two better-known specifications called TCG Enterprise and TCG Opal.
Sorting through the alphabet soup of data security
"Our SED with TCG Opal provides FDE." While this might look like a spoonful of alphabet soup, it is music to the ears of a corporate IT manager. Let me break it down for those who just hear fingernails on the chalkboard. A self-encrypting drive (SED) is one that embeds a hardware-based encryption engine in the storage device. One chief benefit is that the hardware engine performs the encryption, preserving full performance of the host CPU. An SED can be a hard disk drive (HDD) or a solid state drive (SSD). True, traditional software encryption can secure data going to the storage device, but it consumes precious host CPU bandwidth. The related term, full drive encryption (FDE), is used to describe any drive (HDD or SSD) that stores data in an encrypted form. This can be through either software-based (host CPU) or hardware-based (an SED) encryption.
Most people would assume that if their work laptop were lost or stolen, they would suffer only some lost productivity for a short time and about $1,500 in hardware costs. However, a study by Intel and the Ponemon Institute found that the cost of a lost laptop totaled nearly $50,000 when you account for lost IP, legal costs, customer notifications, lost business, harm to reputation, and damages associated with compromising confidential customer information. When the data stored on the laptop is encrypted, this cost is reduced by nearly $20,000. This difference certainly supports the need for better security for these mobile platforms.
When considering a security solution for this valuable data, you have to decide between a hardware-based SED and a host-based software solution. The primary problem with software solutions is they require the host CPU to do all of the encryption. This detracts from the CPU's core computing work, leaving users with a slower computer or forcing them to pay for greater CPU performance. Another drawback of many software encryption solutions is that they can be turned off by the computer user, leaving data in the clear and vulnerable. Since hardware-based encryption is native to the HDD or SSD, it cannot be disabled by the end user.
In April 2013, LSI and a few other storage companies worked with the Ponemon Institute to better understand the value of hardware-based encryption. You can read about the details in the study here, but the quick summary is that hardware-based encryption solutions can offer a 75% total cost savings over software-based solutions, on average.
When is this available?
At the Computex Taipei 2013 show earlier this month, LSI announced availability of a firmware update for SandForce® controllers that adds support for TCG Opal. The LSI suite at the show featured TCG Opal demonstrations using self-encrypting SSDs provided by SandForce Driven™ member companies, including Kingston, A-DATA, Avant and Edge. (Contact SSD manufacturers directly for product availability.)
I often think about green, environmental impact, and what we're doing to the environment. One major reason I became an engineer was to leave the world a little better than when I arrived. I've gotten sidetracked a few times, but I've tried to help, even if just a little.
The good people in LSI's EHS (Environment, Health & Safety) asked me a question the other day about carbon footprint, energy impact, and materials use. Which got me thinking... OK – I know most people in LSI don't really think of ourselves as a "green tech" company. But we are – really. No foolin'. We are having a big impact on the global power consumption and material consumption of the IT industry. And I mean that in a good way.
There are many ways to look at this, from what we enable datacenters to do, to what we enable integrators to do, all the way to hard-core technology improvements and massive changes in what it's possible to do.
Back in 2008 I got to speak at the AlwaysOn GoingGreen conference. (I was lucky enough to be on just after Elon Musk – he's a lot more famous now with Tesla doing so well.)
http://www.smartplanet.com/video/making-the-case-for-green-it/305467 (at 2:09 in the video)
The massive consumption of IT equipment – and all the ancillary metal, plastic, wiring, etc. that goes with it – consumes energy as it's shipped and moved halfway around the world and, more importantly, then gets scrapped out quickly. This has been a concern for me for quite a while. I mean – think about that. As an industry we are generating about 9 million servers a year, and about 3 million of those go into hyperscale datacenters. Many of those are scrapped on a 2, 3 or 4 year cycle – so in steady state, maybe 1 million to 2 million a year are scrapped. Worse – that many servers use an amazing amount of energy (even as they have advanced the state of the art unbelievably since 2008). And frankly, you and I are responsible for using all that power. Did you know thousands of servers are activated every time you make a Google® query from your phone?
I want to take a look at the basic silicon improvements we make, the impact of disk architecture improvements, SSDs, system-level and efficiency improvements, and also where we're going in the near future with eliminating scrap in hard drives and batteries. In reality, it's the massive pressure on work/$ that has made us optimize everything – being able to do much more work at a lower cost, when so much of the cost is the energy and material that goes into the products, is what forces our hand. But the result is a real, profound impact on our carbon footprint that we should be proud of.
Sure, we have a general silicon roadmap where each node enables reduced power, even as some standards and improvements actually increase individual device power. For example, our transition from a 28nm semiconductor process to 14nm FinFET can literally cut the power consumption of a chip in half. But that's small potatoes.
How about Ethernet? It's everywhere – right? Did you know servers often have 4 Ethernet ports, and that there are a matching 4 ports on a network switch? LSI pioneered something called Energy Efficient Ethernet (EEE). We're also one of the biggest manufacturers of Ethernet PHYs – the part that drives the cable – and we come standard in everything from personal computers to servers to enterprise switches. The savings are hard to estimate, because they depend very much on how much traffic there is, but you can realistically save watts per interface link, and there are often 256 links in a rack. 500 Watts per rack is no joke, and in some datacenters it adds up to 1 or 2 MegaWatts.
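If you want to sanity-check that claim, the back-of-the-envelope arithmetic looks something like the Python sketch below. The per-link saving and rack count are illustrative assumptions, not measured figures.

```python
# Rough EEE savings estimate -- illustrative assumptions, not measured data.
watts_saved_per_link = 2            # assume roughly 2 W saved per mostly idle link
links_per_rack = 256                # links per rack, as noted above
racks_per_datacenter = 2500         # assumed size of a large facility

per_rack_watts = watts_saved_per_link * links_per_rack            # ~512 W per rack
per_dc_megawatts = per_rack_watts * racks_per_datacenter / 1e6    # ~1.3 MW total

print(f"~{per_rack_watts} W per rack, ~{per_dc_megawatts:.1f} MW per datacenter")
```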
How about something a little bigger and more specific? Hard disk drives. Did you know a typical hyperscale datacenter has between 1 million and 1.5 million disk drives? Each one of those consumes about 9 Watts, and most have 2 TBytes of capacity. So for easy math, 1 million drives is about 9 MegaWatts (!?) and about 2 Exabytes of capacity (remember – data is often replicated 3 or more times). Data capacity in these facilities needs to grow about 50% per year. So if we did nothing, we would need to go from 1 million drives to 1.5 million drives: 9 MegaWatts goes to 13.5 MegaWatts. Wow! Instead, our high-linearity, low-noise preamp (PA) and read channel designs are allowing drives to go to 4 TBytes per drive. (Sure, the chip itself may use slightly more power, but that's not the point – what it enables is a profound difference.) So to get that 50% increase in capacity we could actually reduce the number of drives deployed, with a net savings of 6.75 MegaWatts. Consider that an average US home, with air conditioning, uses about 1 kiloWatt. That's almost 7,000 homes. In reality they won't get deployed that way, but it will still be a huge savings: instead of buying another 0.5 million drives they would buy 0.25 million drives, with a net savings of 2.2 MegaWatts. That's still HUGE! (Way to go, guys!) How many datacenters are doing that? Dozens. So that's easily 20 or 30 MegaWatts globally. Did I say we saved them money too? A lot of money.
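For anyone who wants to check my math, here is the same round-number arithmetic written out as a small Python sketch, using exactly the figures quoted above.

```python
# Round-number check of the drive-count and power math above.
drives_today = 1_000_000
watts_per_drive = 9
tb_old, tb_new = 2, 4                      # 2 TB drives today, 4 TB drives enabled
growth = 1.5                               # capacity needs to grow 50%

capacity_needed_tb = drives_today * tb_old * growth        # 3 Exabytes

drives_do_nothing = capacity_needed_tb / tb_old            # 1.5 million drives
drives_with_4tb = capacity_needed_tb / tb_new              # 0.75 million drives

mw_do_nothing = drives_do_nothing * watts_per_drive / 1e6  # 13.5 MW
mw_with_4tb = drives_with_4tb * watts_per_drive / 1e6      # 6.75 MW

print(f"Net savings vs. doing nothing: {mw_do_nothing - mw_with_4tb:.2f} MW")
```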
SSDs don't always get the credit they deserve. Yes, they really are fast, and they are awesome in your laptop, but they also end up being much lower power than hard drives. Our controllers were in about half the flash solutions shipped last year. Think tens of millions. If you just assume they were all laptop SSDs (at least half were not), then that's another 20 MegaWatts in savings.
Did you know that in a traditional datacenter, about 30% of the power going into the building is used for air conditioning? It doesn't actually get used on the IT equipment at all, but is used to remove the heat that the IT equipment generates. We design our solutions so they can accommodate 40C ambient inlet air (that's a little over 100F... hot). What that means is that the 30% of power used for the air conditioners disappears. Gone. That's not theoretical either. Most of the large social media, search engine, web shopping, and web portal companies are using our solutions this way. That's a 30% reduction in the power of storage solutions globally. Again, it's MegaWatts in savings. And mega money savings too.
But let's really get to the big hitters: improved work per server. Yep – we do that. In fact, adding a Nytro™ MegaRAID® solution will almost always give you 4x the work out of a server. It's a slam dunk if you're running a database. You heard me – 1 server doing the work that it previously took 4 servers to do. Not only is that a huge savings in dollars (especially if you pay for software licenses!) but it's a massive savings in power. You can replace 4 servers with 1, saving at least 900 Watts, and that lone server that's left is actually dissipating less power too, because it's actively using fewer HDDs and using flash for most traffic instead. If you go a step further and use Nytro WarpDrive flash cards in the servers, you can get much more – 6 to 8 times the work. (Yes, sometimes up to 10x, but let's not get too excited.) If you think that's just theoretical again, check your Facebook® account, or download something from iTunes®. Those two services are the biggest users of PCIe® flash in the world. Why? It works cost effectively. And in case you haven't noticed, those two companies like to make money, not spend it. So again, we're talking about MegaWatts of savings – arguably on the order of 150 MegaWatts. Yeah, that's pretty theoretical, because they couldn't really do the same work otherwise, but if you had to do the work in a traditional way, it would be around that.
It's hard to be more precise than giving round numbers at these massive scales, but the numbers are definitely in the right zone. I can say with a straight face that we save the world tens, and maybe even hundreds, of MegaWatts per year. But no one sees that, and not many people even think about it. Still – I'd say LSI is a green hero.
Hey – we're not done by a long shot. Let's just look at scrap. If you read my earlier post on false disk failure, you'll see some scary numbers (http://blog.lsi.com/what-is-false-disk-failure-and-why-is-it-a-problem/). A normal hyperscale datacenter can expect 40-60 disks per day to be mistakenly scrapped out. That's around 20,000 disk drives a year that should not have been scrapped, from just one web company. Think of the material waste, shipping waste, manufacturing waste, and eWaste issues. Wow – all for nothing. We're working on solutions to that. And batteries. Ugly, recycle-only, heavy-metal eWaste batteries. They are necessary for RAID-protected storage systems, and much of the world's data is protected that way – the battery is needed to save metadata and transient writes in the event of a power failure or server failure. We ship millions a year. (Sorry, mother earth.) But we're working diligently to make that a thing of the past. And that will also result in big savings for datacenters in both materials and recycling costs.
Can we do more? Sure. I know I am trying to get us the core technologies that will help reduce power consumption, raise capability and performance, and reduce waste. But we'll never be done with that march of technology. (Which is a good thing if engineering is your career...)
I still often think about green, environmental impact, and what we're doing to the environment. And I guess in my own small way, I am leaving the world a little better than when I arrived. And I think we at LSI should at least take a moment and pat ourselves on the back for that. You have to celebrate the small victories, you know? Even as the fight goes on.
I want to warn you, there is some thick background information here first. But don't worry. I'll get to the meat of the topic and that's this: Ultimately, I think that PCIe® cards will evolve to more external, rack-level, pooled flash solutions, without sacrificing all their great attributes today. This is just my opinion, but other leaders in flash are going down this path too...
I've been working on enterprise flash storage since 2007 – mulling over how to make it work. Endurance, capacity, cost and performance have all been concerns we've grappled with. Of course the flash is changing too as the nodes change: 60nm, 50nm, 35nm, 24nm, 20nm... and single-level cell (SLC) to multi-level cell (MLC) to triple-level cell (TLC) and all the variants of these "trimmed" for specific use cases. The spec "endurance" has gone from 1 million program/erase cycles (PE) to 3,000, and in some cases 500.
It's worth pointing out that almost all the "magic" that has been developed around flash was already scoped out in 2007. It just takes a while for a whole new industry to mature. Individual die capacity increased, meaning fewer die are needed for a solution – and that means less parallel bandwidth for data transfer... And the "requirement" for state-of-the-art single-operation write latency has fallen well below the write latency of the flash itself. (What the...? Yeah – I'll talk about that later in some other blog. But flash is ~1,500µs write latency, where state-of-the-art flash cards are ~50µs.) When I describe the state of the technology it sounds pretty pessimistic. I'm not. We've overcome a lot.
We built our first PCIe card solution at LSI in 2009. It wasn't perfect, but it was better than anything else out there in many ways. We've learned a lot in the years since – both from making them, and from dealing with customers and users of our own solutions and our competitors'. We're lucky to be an important player in storage, so in general the big OEMs, large enterprises and the hyperscale datacenters all want to talk with us – not just about what we have or can sell, but what we could have and what we could do. They're generous enough to share what works and what doesn't, what the values of solutions are, and what the pitfalls are too. Honestly? It's the hyperscale datacenters that are in the lead, both practically and in vision.
If you haven't nodded off to sleep yet, that's a long-winded way of saying – things have changed fast, and, boy, we've learned a lot in just a few years.
Most important thing we've learned...
Most importantly, we've learned it's latency that matters. No one is pushing the IOPS limits of flash, and no one is pushing the bandwidth limits of flash. But they sure are pushing the latency limits.
PCIe cards are great, but...
We've gotten lots of feedback, and one of the biggest things we've learned is – PCIe flash cards are awesome. They radically change the performance profile of most applications, especially databases, allowing servers to run efficiently and the actual work done by a server to multiply 4x to 10x (and in a few extreme cases 100x). So the feedback we get from large users is "PCIe cards are fantastic. We're so thankful they came along. But..." There's always a "but," right?
It tends to be a pretty long list of frustrations, and they differ depending on the type of datacenter using them. We're not the only ones hearing it. To be clear, none of these are stopping people from deploying PCIe flash... the attraction is just too compelling. But the problems are real, and they have real implications, and the market is asking for real solutions.
Of course, everyone wants these fixed without affecting single-operation latency, or increasing cost, etc. That's what we're here for though – right? Solve the impossible?
A quick summary is in order. It's not looking good. For a given solution, flash is getting less reliable, there is less bandwidth available at capacity because there are fewer die, we're driving latency way below the actual write latency of flash, and we're not satisfied with the best solutions we have for all the reasons above.
If you think these through enough, you start to consider one basic path. It also turns out we're not the only ones realizing this. Where will PCIe flash solutions evolve over the next 2, 3, 4 years? The basic goals are:
One easy answer would be – that's a flash SAN or NAS. But that's not the answer. Not many customers want a flash SAN or NAS – not for their new infrastructure. More importantly, all the data is at the wrong end of the straw, and the poor server is left sucking hard. Remember – this is flash, and people use flash for latency. Today these SAN-type flash devices have 4x-10x worse latency than PCIe cards. Ouch. You have to suck the data through a relatively low-bandwidth interconnect, after passing through both the storage and network stacks. And there is interaction between the I/O threads of various servers and applications – you have to wait in line for that resource. It's true there is a lot of startup energy in this space. It seems to make sense if you're a startup, because SAN/NAS is what people use today, and there's lots of money spent in that market today. However, it's not what the market is asking for.
Another easy answer is NVMe SSDs. Right? Everyone wants them – right? Well, OEMs at least. Front-bay PCIe SSDs (HDD form factor or NVMe – lots of names) that crowd out your disk drive bays. But they don't fix the problems. The extra mechanicals and form factor are more expensive, and just make replacing the cards every 5 years a few minutes faster. Wow. With NVMe SSDs, you can fit fewer HDDs – not good. They also provide uniformly bad cooling, and hard-limit power to 9W or 25W per device. And to protect the storage in these devices, you need enough of them that you can RAID or otherwise protect the data. Once you have enough of those for protection, they give you awesome capacity, IOPS and bandwidth – too much, in fact – but that's not what applications need. They need low latency for the working set of data.
What do I think the PCIe replacement solutions in the near future will look like? You need to pool the flash across servers (to optimize bandwidth and resource usage, and allocate appropriate capacity). You need to protect against failures/errors and limit the span of failure, commit writes at very low latency (lower than native flash) and maintain low-latency, bottleneck-free physical links to each server... To me that implies:
That means the performance looks exactly as if each server had multiple PCIe cards. But the capacity and bandwidth resources are shared, and systems can remain resilient. So ultimately, I think that PCIe cards will evolve to more external, rack-level, pooled flash solutions, without sacrificing all their great attributes today. This is just my opinion, but as I say – other leaders in flash are going down this path too...
What's your opinion?
It may sound crazy, but hard disk drives (HDDs) do not actually have a delete command. Now we all know HDDs have a fixed capacity, so over time the older data must somehow get removed, right? Actually it is not removed, but overwritten. The operating system (OS) uses a reference table to track the locations (addresses) of all data on the HDD. This table tells the OS which spots on the HDD are used and which are free. When the OS or a user deletes a file from the system, the OS simply marks the corresponding spot in the table as free, making it available to store new data.
The HDD is told nothing about this change, and it does not need to know since it would not do anything with that information. When the OS is ready to store new data in that location, it just sends the data to the HDD and tells it to write to that spot, directly overwriting the prior data. It is simple and efficient, and no delete command is required.
However, with the advent of NAND flash-based solid state drives (SSDs) a new problem emerged. In my blog, Gassing up your SSD, I explain how NAND flash memory pages cannot be directly overwritten with new data, but must first be erased at the block level through a process called garbage collection (GC). I further describe how the SSD uses non-user space in the flash memory (over provisioning or OP) to improve performance and longevity of the SSD. In addition, any user space not consumed by the user becomes what we call dynamic over provisioning – dynamic because it changes as the amount of stored data changes. When less data is stored by the user, the amount of dynamic OP increases, further improving performance and endurance. The problem I alluded to earlier is caused by the lack of a delete command. Without a delete command, every SSD will eventually fill up with data, both valid and invalid, eliminating any dynamic OP. The result would be the lowest possible performance at that factory OP level. So unlike HDDs, SSDs need to know what data is invalid in order to provide optimum performance and endurance.
Keeping your SSD TRIM
A number of years ago, the storage industry got together and developed a solution between the OS and the SSD by creating a new SATA command called TRIM. It is not a command that forces the SSD to immediately erase data, as some people believe. Actually, the TRIM command can be thought of as a message from the OS about which previously used addresses on the SSD are no longer holding valid data. The SSD takes those addresses and updates its own internal map of its flash memory to mark those locations as invalid. With this information, the SSD no longer moves that invalid data during the GC process, eliminating the wasted time of rewriting invalid data to new flash pages. It also reduces the number of write cycles on the flash, increasing the SSD's endurance. Another benefit of the TRIM command is that more space is available for dynamic OP.
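If it helps to see the bookkeeping, here is a toy Python model of what I just described. It is only a sketch of the mapping-table idea – nothing like real SSD firmware – but it shows why trimmed data never needs to be copied again.

```python
# Toy model of the TRIM bookkeeping described above -- not real SSD firmware.
class ToySSD:
    def __init__(self):
        self.map = {}             # logical block address -> data (valid entries only)

    def write(self, lba, data):
        self.map[lba] = data      # an overwrite simply remaps the LBA to new pages

    def trim(self, lbas):
        # The OS reports that these LBAs no longer hold valid data.
        for lba in lbas:
            self.map.pop(lba, None)   # mark invalid; GC can now skip these pages

    def garbage_collect(self):
        # Only still-valid data would be copied to a fresh block.
        return list(self.map.items())

ssd = ToySSD()
ssd.write(100, "old file")
ssd.write(200, "keep me")
ssd.trim([100])                   # file deleted in the OS, TRIM sent to the drive
print(ssd.garbage_collect())      # only (200, "keep me") gets rewritten
```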
Today, most current operating systems and SSDs support TRIM, and all SandForce Driven™ member SSDs have always supported TRIM. Note that most RAID environments do not support TRIM, although some RAID 0 configurations have claimed to support it. I have presented on this topic in detail previously. You can view the presentation in full here. In my next blog I will explain how there may be an alternate solution using SandForce Driven member SSDs.
The term global warming can be very polarizing in a conversation and both sides of the argument have mountains of material that support or discredit the overall situation. The most devout believers in global warming point to the average temperature increases in the Earthâ€™s atmosphere over the last 100+ years. They maintain the rise is primarily caused by increased greenhouse gases from humans burning fossil fuels and deforestation.
The opposition generally agrees with the measured increase in temperature over that time, but claims that increase is part of a natural cycle of the planet and not something humans can significantly impact one way or another. The US Energy Information Administration estimates that 90% of the world's marketed energy consumption comes from non-renewable energy sources like fossil fuels. Our internet-driven lives run through datacenters that are well known to consume large quantities of power. No matter which side of the global warming argument you support, most people agree that wasting power is not a good long-term position. Therefore, if the power consumed by datacenters can be reduced, especially as we live in an increasingly digitized world, this would benefit all mankind.
When we look at the most power-hungry components of a datacenter, we find mainly server and storage systems. However, people sometimes forget that those systems require cooling to counteract the heat they generate, and that cooling consumes still more energy. So anything that can store data more efficiently and quickly will reduce both the initial energy consumption and the energy to cool those systems. As datacenters demand faster data storage, they are shifting to solid state drives (SSDs). SSDs generally provide higher performance per watt of power consumed than hard disk drives, but there is still more that can be done.
Reducing data to help turn down the heat
The good news is that there's a way to reduce the amount of data that reaches the flash memory of the SSD. The unique DuraWrite™ technology found in all LSI® SandForce® flash controllers reduces the amount of data written to the flash memory, cutting the time it takes to complete the writes and therefore reducing power consumption below the levels of other SSD technologies. That, in turn, reduces the cooling needed, further reducing overall power consumption. Note that this data reduction is "loss-less," meaning 100% of what is saved is returned to the host, unlike MPEG, JPEG, and MP3 files, which tolerate some amount of data loss to reduce file sizes.
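DuraWrite's internals are proprietary, so I can't show them here, but the "loss-less" idea itself is easy to demonstrate with any lossless compressor. The Python sketch below uses zlib purely as a stand-in: fewer bytes reach the media, yet 100% of the original data comes back.

```python
# Loss-less data reduction illustrated with zlib as a stand-in;
# DuraWrite's actual technique is proprietary and not shown here.
import zlib

host_data = b"log entry: user=alice action=login status=ok\n" * 1000

reduced = zlib.compress(host_data)       # fewer bytes would reach the flash
restored = zlib.decompress(reduced)      # 100% of the original comes back

print(len(host_data), "->", len(reduced), "bytes written")
assert restored == host_data             # unlike MPEG/JPEG/MP3, nothing is lost
```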
Today you can find many datacenters already using SandForce Driven SSDs and LSI Nytro™ application acceleration products (which use DuraWrite technology as well). When we start to see datacenters deploying these flash storage products by the millions, you will certainly be able to measure the reduction in power consumed by datacenters. Unfortunately, LSI will not be able to claim it stopped global warming, but at least we, and our customers, can say we did something to help defer the end result.
Have you ever run out of gas in your car? Do you often risk running your gas tank dry? Hopefully you are more cautious than that and you start searching for a gas station when you get down to a quarter tank. You do this because you want plenty of cushion in case something comes up that prevents you from getting to a station before it is too late.
The reason most people stretch their tank is to maximize travel between station visits. The downside to pushing the envelope to "E" is you can end up stranded with a dead vehicle waiting for AAA to bring you some gas.
Now, most people know you don't put gas in a solid state drive (SSD), but the pros and cons associated with how much you leave in the "tank" are very relevant to SSDs.
To understand how these two seemingly unrelated technologies are similar, we first need to drill into some technical SSD details. To start, SSDs act, and often look, like traditional hard disk drives (HDDs), but they do not record data in the same way. SSDs today typically use NAND flash memory to store data and a flash controller to connect the memory with the host computer. The flash controller can write a page of data (often 4,096 bytes) directly to the flash memory, but cannot overwrite the same page of data without first erasing it. The erase cycle cannot expunge only a single page. Instead, it erases a whole block of data (usually 128 pages). Because the stored data is sometimes updated randomly across the flash, the erase cycle for NAND flash requires a process called garbage collection.
Garbage collection is just dumping the trash
Garbage collection starts when a flash block is full of data, usually a mix of valid (good) and invalid (older, replaced) data. The invalid data must be tossed out to make room for new data, so the flash controller copies the valid data of a flash block to a previously erased block, and skips copying the invalid data of that block. The final step is to erase the original whole block, preparing it for new data to be written.
Before and during garbage collection, some data – the valid data copied during garbage collection plus the (typically) multiple stale copies of invalid data – exists in two or more locations at once, so the controller ends up writing more to the flash than the host sent, a phenomenon known as write amplification. To store this extra data not counted by the operating system, the flash controller needs some spare capacity beyond what the operating system knows about. This is called over-provisioning (OP), and it is a critical part of every NAND flash-based SSD.
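A simplified way to see why OP matters: when garbage collection has to recycle blocks that are still mostly full of valid data, it rewrites a lot just to free a little. The toy Python model below captures only that relationship – it is not how any particular controller behaves – and the link to OP is that more spare area lets blocks empty out further before they are collected.

```python
# Toy write-amplification model: WA = flash writes per host write.
# To reclaim a block whose pages are a fraction `valid` still in use, GC must
# copy those valid pages before erasing, so roughly WA = 1 / (1 - valid).
# More over-provisioning means blocks are emptier (lower `valid`) by the time
# GC picks them, which drives WA down.
def write_amplification(valid_fraction_at_gc):
    return 1.0 / (1.0 - valid_fraction_at_gc)

for valid in (0.9, 0.7, 0.5, 0.25):
    print(f"blocks {valid:.0%} valid at GC -> WA ~ {write_amplification(valid):.1f}")
```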
Over-provisioning is like the gas that remains in your tank
While every SSD has some amount of OP, some will have more or less than others. The amount of OP varies depending on trade-offs made between total storage capacity and benefits in performance and endurance. The less OP allocated in an SSD, the more information a user can store. This is like the driver who will take their tank of gas clear down to near-empty just to maximize the total number of miles between station visits.
What many SSD users don't realize is there are major benefits to NOT stretching this OP area too thin. When you allocate more space for OP, you achieve lower write amplification, which translates to higher performance during writes and longer endurance of the flash memory. This is like the more cautious driver who visits the gas station more often, gaining the flexibility to pick a more cost-effective station and to handle last-minute deviations in travel plans that end up burning more fuel than originally anticipated.
The choice is yours
Most SSD users do not realize they have full control of how much OP is configured in their SSD. So even if you buy an SSD with "0%" OP, you can dedicate some of the user space back to OP for the SSD.
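The arithmetic is straightforward. OP is conventionally quoted as spare space relative to the user-visible capacity, so leaving part of the drive unpartitioned (and never written) effectively raises it. The numbers below are only an example, and they assume a TRIM-aware drive that can treat the untouched space as spare.

```python
# Rough OP arithmetic -- example numbers, assuming untouched space acts as spare.
def op_percent(physical_gb, user_visible_gb):
    """OP% is quoted as spare space relative to user-visible space."""
    return (physical_gb - user_visible_gb) / user_visible_gb * 100

# A "0%" OP drive with 128 GiB of flash sold as 128 GB already hides ~7% spare,
# simply from the GiB-vs-GB difference (128 GiB is about 137.4 GB).
print(f"{op_percent(137.4, 128):.1f}% OP")

# Partition only 100 GB of that same drive and the rest becomes extra OP.
print(f"{op_percent(137.4, 100):.1f}% OP")
```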
A more detailed explanation of how OP works and what 0% OP really means was presented at the Flash Memory Summit 2012; you can view the presentation here: http://www.lsi.com/downloads/Public/Flash%20Storage%20Processors/LSI_PRS_FMS2012_TE21_Smith.pdf
It pays to be the cautious driver who fills the gas tank long before you get to empty. When it comes to both performance and endurance, your SSD will cover a lot more ground if you treat the over-provisioning space the same way â€“ keeping more in reserve.
I've been travelling to China quite a bit over the last year or so. I'm sitting in Shenzhen right now (if you know Chinese internet companies, you'll know who I'm visiting). The growth is staggering. I've had a bit of a trains, planes and automobiles experience this trip, and that's exposed me to parts of China I never would have seen otherwise. Just to accommodate sheer population growth and the modest increase in wealth, there is construction everywhere – a press of people and energy, constant traffic jams, unending urban centers, and most everything is new. Very new. It must be exciting to be part of that explosive growth. What a market. I mean – come on – there are 1.3 billion potential users in China.
The amazing thing for me is the rapid growth of hyperscale datacenters in China, which is truly exponential. Their infrastructure growth has been 200%-300% CAGR for the past few years. It's also fantastic walking into a building in China, say Baidu, and feeling very much at home – just as if you had walked into Facebook or Google. It's the same young vibe, energy, and ambition to change how the world does things. And it's also the same pleasure – talking to architects who are super-sharp, have few technical prejudices, and have very little vanity – just a will to get to business and solve problems. Polite, but blunt. We're lucky that they recognize LSI as a leader, and are willing to spend time to listen to our ideas, and to give us theirs.
Even their infrastructure has a similar feel to the US hyperscale datacenters. The same, only different. ;-)
A lot of these guys are growing revenue at 50% per year, several getting 50% gross margin. Those are nice numbers in any country. One has hundreds of billions of dollars in revenue. And they're starting to push out of China. So far their pushes into Japan have not gone well, but other countries should be better. They all have unique business models. "We" in the US like to say things like "Alibaba is the Chinese eBay" or "Sina Weibo is the Chinese Twitter"... But that's not true – they all have more hybrid, unique business models, and so their datacenter goals, revenue and growth have a slightly different profile. And there are some very cool services that simply are not available elsewhere. (You listening, Apple®, Google®, Twitter®, Facebook®?) But they are all expanding their services, products and user base. Interestingly, there is very little public cloud in China, so there are no real equivalents to Amazon's services or Microsoft's Azure. I have heard about current development of that kind of model with the government as the initial customer. We'll see how that goes.
Hundreds of thousands of servers. They're not the scale of Google, but they sure are the scale of Facebook, Amazon, Microsoft... It's a serious market for an outfit like LSI. Really, it's a very similar scale now to the US market: close to 1 million servers installed among the main 4 players, and exabytes of data (we've blown past mere petabytes). Interestingly, they still use many co-location facilities, but that will change. More important – they're all planning to roughly double their infrastructure in the next 1-2 years. They have to – their growth rates are crazy.
Often 5 or 6 distinct platforms, just like the US hyperscale datacenters: database platforms, storage platforms, analytics platforms, archival platforms, web server platforms... But they tend to be a little more like the racks of traditional servers that enterprises buy, with integrated disk bays, still a lot of 1G Ethernet, and they are still mostly from established OEMs. In fact, I just ran into one OEM's American GM, whom I happen to know, in Tencent's offices today. The typical servers have 12 HDDs in drive bays, though they are starting to look at SSDs as part of the storage platform. They do use PCIe® flash cards in some platforms, but the performance requirements are not as extreme as you might imagine. Reasonably low latency and consistent latency are what they value most from these flash cards – not maximum IOPS or bandwidth – very similar to their American counterparts. I think hyperscale datacenters are sophisticated in understanding what they need from flash, and in not requiring more than that. Enterprise could learn a thing or two.
Some server platforms have RAIDed HDDs, but most are direct-map drives using a high-availability (HA) layer across the server center – Hadoop® HDFS or self-developed Hadoop-like platforms. Some have also started to deploy microserver archival "bit buckets": a small ARM® SoC with 4 HDDs totaling 12 TBytes of storage, giving densities like 72 TBytes of file storage in 2U of rack. While I can only find about 5,000 of those in China, and they are first-generation experiments, it's the start of a growing wave of archival solutions based on lower-performance ARM servers. The feedback is clear – they're not perfect yet, but the writing is on the wall. (If you're wondering about the math, that's 5,000 x 12 TBytes = 60 Petabytes...)
Yes, it's important, but maybe more than we're used to. It's harder to get licenses for power in China. So it's really important to stay within the envelope of power your datacenter has. You simply can't get more. That means they have to deploy solutions that do more in the same power profile, especially as they move out of co-located datacenters into private ones. Annually, 50% more users supported, more storage capacity, more performance, more services, all in the same power. That's not so easy. I would expect solar power in their future, just as Apple has done.
Here's where it gets interesting. They are developing a cousin to OpenCompute that's called Scorpio. It's Tencent, Alibaba, Baidu, and China Telecom so far driving the standard. The goals are similar to OpenCompute, but more aligned to standardized sub-systems that can be co-mingled from multiple vendors. There is some harmonization and coordination between OpenCompute and Scorpio, and in fact the Scorpio companies are members of OpenCompute. But where OpenCompute is trying to change the complete architecture of scale-out clusters, Scorpio is much more pragmatic – some would say less ambitious. They've finished version 1 and rolled out about 200 racks as a "test case" to learn from. Baidu was the guinea pig. That's around 6,000 servers. They weren't expecting more from version 1. They're trying to learn. They've made mistakes, learned a lot, and are working on version 2.
Even if it's not exciting, it will have an impact because of the sheer size of deployments these guys are getting ready to roll out in the next few years. They see the progression as: 1) they were using standard equipment, 2) they're experimenting and learning from trial runs of Scorpio versions 1 and 2, and then 3) they'll work on new architectures that are efficient, powerful, and different.
Information is pretty sketchy if you are not one of the member companies or one of their direct vendors. We were just invited to join Scorpio by one of the founders, and would be the first group outside of China to do so. If that all works out, I'll have a much better idea of the details, and hopefully can influence the standards to be better for these hyperscale datacenter applications. Between OpenCompute and Scorpio we'll be seeing a major shift in the industry – a shift that will undoubtedly be disturbing to a lot of current players. It makes me nervous, even though I'm excited about it. One thing is sure – just as server market volume is migrating from traditional enterprise to hyperscale datacenters (25-30% of the server market and growing quickly), we're starting to see a migration from US-based hyperscale datacenters to Chinese ones. They have to grow just to stay still. I mean – come on – there are 1.3 billion potential users in China...
There's no need to wait for higher speed: server builders can take advantage of 12Gb/s SAS now, even as HDD and SSD makers continue to tweak, tune and otherwise prepare their 12Gb/s SAS products for market. The next generation of SAS without drives that support it? What gives?
It's simple. LSI is already producing 12Gb/s ROC and IOC solutions, meaning that customers can take advantage of 12Gb/s SAS performance today with currently shipping systems and storage. As for the numbers, LSI 12Gb/s SAS enables performance increases of up to 45% in throughput and up to 58% in IOPS when compared to 6Gb/s SAS.
True, 12Gb/s SAS isn't a Big Bang Disruption in storage systems; rather it's an evolutionary change, but a big step forward. It may not be clear why it matters so much, so I want to briefly explain. In the latest-generation PCIe 3 systems, 6Gb/s SAS is the bottleneck that prevents systems from achieving the full PCIe 3 throughput of 6,400 MB/s.
With 12Gb/s SAS, customers will be able to take full advantage of the performance of PCIe 3 systems. Earlier this month at the CeBIT computer expo in Hanover, Germany, we announced that we are the first to ship production-level 12Gb/s SAS ROC (RAID-on-Chip) and IOC (I/O Controller) devices to OEM customers. This convergence of new technologies and the expansion of existing capabilities create significant improvements for datacenters of all kinds.
At CeBIT, we demonstrated our 12Gb/s SAS solutions with the unique DataBolt™ feature and how, with DataBolt, systems with 6Gb/s SAS HDDs can achieve 12Gb/s SAS performance.
DataBolt uses bandwidth aggregation to accelerate throughput. Most importantly, customers don't have to wait for the next inflection in drive design to get the highest possible performance and connectivity.
Most people fully understand that electronics are useless without power, but what happens when devices lose power in the middle of operating? That answer is highly dependent on a number of variables including what type of electronic device is in question.
For solid state drives (SSDs) the answer depends on factors such as whether an uninterruptable power supply (UPS) is connected, what controller or flash processor is used, the design of the power circuit of the SSD, and the type of memory. If an SSD is in the middle of a write operation to the flash memory and power to the SSD is disconnected, many bad things could happen if the right safety measures are not in place. Many users do not think about all the non-user initiated operations the SSDs may be performing like background garbage collection that could be under way when the power fails. Without the correct protection, in most cases data will be corrupted.
According to the Nielsen company, 108.4 million viewers were tuned into the 2013 Super Bowl in New Orleans only to be shocked to witness the power go down for 34 minutes in the middle of the game. If power can be lost during such an incredibly high profile event such as this, it can happen just about anywhere.
Inside the New Orleans Superdome stadium operations and broadcast server rooms
Enterprise computing environments typically have multiple servers with data connections and lots of storage. Over the past few years, a larger percentage of the storage is kept on SSDs for the very active or "hot" data. This greatly improves data access time and reduces overall latency of the system. Enterprise servers are often connected to UPS systems that supply the connected devices with temporary power during a power failure.
Usually this is enough power to support uninterrupted system operations until power is restored, or at least until technicians can complete their current work and shut down safely. However, there are many times when UPS systems are not deployed or fail to operate properly themselves. In those cases, the server will experience a power failure as abrupt as if someone had yanked the power cord from the wall socket.
The LSI® SandForce® flash controllers are at the heart of many popular SSDs sold today. The flash controller connects the host computer with the flash memory to store user data in fast non-volatile memory. The SandForce flash controllers are specifically engineered to operate in different environments, and the SF-2500/2600 FSPs are designed to provide the high level of data integrity required for enterprise applications. In the area of power failure protection, they include a combination of firmware (FW) and hardware circuitry that monitors the power coming into the SSD. In the event of a power failure or even a brown-out, the flash controller is alerted to the situation and hold-up capacitors in the SSD provide the necessary power and time for the controller to complete pending writes to the flash memory. This same circuit is also designed to prevent the risk of lower page corruption with multi-level cell (MLC) memory.
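Conceptually, the sequence works something like the toy model below. To be clear, this is only an illustration of spending hold-up energy to commit pending writes – it is not SandForce firmware, and the numbers are made up.

```python
# Highly simplified conceptual model of hold-up behavior on power loss --
# not SandForce firmware, just the sequence described above.
import queue

pending_writes = queue.Queue()
for chunk in ("metadata update", "user page 1", "user page 2"):
    pending_writes.put(chunk)

def on_power_fail(holdup_energy_units):
    """Voltage monitor fires; spend capacitor energy flushing pending writes."""
    while not pending_writes.empty() and holdup_energy_units > 0:
        chunk = pending_writes.get()
        holdup_energy_units -= 1           # each flash program costs some energy
        print("committed to flash:", chunk)
    if pending_writes.empty():
        print("all pending data safe before power is gone")

on_power_fail(holdup_energy_units=3)
```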
Watch out for SSD solutions that provide backup capacitors, but lack the necessary support circuitry and special firmware to ensure the data is fully committed to the flash memory before the power runs out. Even if these other special circuits are present, only truly enterprise SSDs that are meticulously designed and tested to withstand power failures are up to the task of storing and protecting highly critical data.
In the control room and down on the field
The usage patterns of non-enterprise systems like notebooks and ultrabooks call for a different power failure support mechanism. Realize that when you have a notebook or ultrabook system, you have a built-in mini-UPS. A power outage from the wall socket has no impact on the system until the battery gets low. At that point the operating system will tell the computer to shut down, and that is ample warning for the SSD to safely shut down and ensure the integrity of the data. But what if the operating system locks up and does not warn the SSD, or the system is an A/C-powered desktop without a battery?
The LSI SF-2100/2200 FSPs are purpose-built for these client environments and operate with the assumption that power could disappear at any point in time. They use special FW techniques so that even without a battery present, as is the case with desktop systems, they greatly limit the potential for data loss.
The naked facts
It should be clear that the answer to the original question is highly dependent upon the flash controller at the heart of the SSD. Without having the critical features discussed above and designed into the LSI SandForce flash controllers, it is very possible to lose data during a power failure. The LSI SandForce flash controllers are engineered to withstand power failures like the one that hit New Orleans at the Super Bowl, but don't expect them to fix wardrobe malfunctions.