It is always good to hear the opinions of your customers and end users, and in that respect June was a banner month for LSI® SandForce® flash controllers.

In a survey by IT Brand Pulse, an independent product testing and validation lab, that solicited responses from more than 1 million members of online groups and other sources, LSI SandForce controllers ranked at the top of all six SSD controller chip sub-categories: market, price, performance, reliability, service and support, and innovation. Last August, the LSI SandForce controllers won in three of the six sub-categories, so we’re thrilled to see momentum building.

Winning all six awards is no easy task. Some of the sub-categories could be considered mutually exclusive, requiring customers to make trade-offs among product attributes. For example, often a product with the best price is considered to have skimped on quality compared to pricier solutions. A product with screaming performance, ironically, is seen as something of a market laggard because it usually does not carry the best price. So it is exciting to strike the right balance among all six measures and sweep the product category. You can find more details on the awards here: http://itbrandpulse.com/research/brand-leader-program/225-ssd-controller-chips-2013

For those of us in product marketing, winning product awards voted on by your peers can bring on a feeling similar to the warm afterglow parents bask in when they hear their child has made the honor roll or been named valedictorian of his or her graduating class.

So please pardon us, for a moment, as we beam with pride.



It seems like our smartphones are getting bigger and bigger with each generation. Sometimes I see people holding what appears to be a tablet computer up next to their head. I doubt they know how ridiculous they look to the rest of us, and I wonder what pants today have pockets that big.

I certainly do like the convenience of the instant-on capabilities my smartphone gives me, but I still need my portable computer, with its big screen and keyboard, separate from my phone.

A few years ago, SATA-IO, the standards body, added a new feature to the Serial ATA (SATA) specification designed to further reduce power consumption in portable computer products. This new feature, DevSleep, enables systems with solid state drives (SSDs) to act more like smartphones, letting you go days without plugging in to recharge and then instantly turn them on and see all the latest email, social media updates, news and events.

Why not just switch the system off?
When most PC users think about switching off their system, they dread waiting for the operating system to boot back up. That is one of the key advantages of replacing the hard disk drive (HDD) in the system with a faster SSD. However, in our instant gratification society, we hate to wait even seconds for web pages to come up, so waiting minutes for your PC to turn on and boot up can feel like an eternity. Therefore, many people choose to leave the system on to save those precious moments… but at the expense of battery life.

Can I get this today?
The new DevSleep feature requires a signal change on the SATA connector to deliver this extra battery life. This change is currently supported only in new Intel® Haswell chipset-based platforms announced this June. What’s more, the SSD in these systems must support the DevSleep feature and monitor that signal on the SATA connector. Most systems that support DevSleep will likely be very low-power notebooks that already ship with an SSD installed using a small mSATA, M.2 or similar edge connector. Therefore, the signal change on the SATA interface will not immediately affect the rest of the SSDs designed for desktop systems shipping through retail and online sources. Note that not all SSDs are created equal: while many claim support for DevSleep, be sure to read the fine print and compare the actual power draw when in DevSleep.

At Computex last month, LSI announced support for the DevSleep feature and staged demonstrations showing a 400x reduction in idle power. It should be noted that a 400x reduction in power does not directly translate to a 400x increase in battery life, but any reduction in power will give you more time on the battery, and that will certainly benefit any user who often works without a power cord.
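To put that in perspective, here is a rough back-of-the-envelope model (in Python) of what an idle-power reduction means for battery life. The battery capacity, platform idle power and SSD slumber power below are illustrative assumptions, not measured LSI numbers; only the 400x factor comes from the demonstration described above.

# Rough battery-life model for an idle notebook, before and after DevSleep.
# All numbers below are illustrative assumptions, not LSI measurements.

BATTERY_WH = 50.0          # assumed notebook battery capacity (watt-hours)
SYSTEM_IDLE_W = 3.0        # assumed platform idle power excluding the SSD
SSD_SLUMBER_W = 0.200      # assumed SSD power in SATA Slumber (200 mW)
SSD_DEVSLEEP_W = SSD_SLUMBER_W / 400   # the demonstrated 400x reduction

def idle_hours(ssd_watts: float) -> float:
    """Hours of idle runtime for the assumed battery and platform power."""
    return BATTERY_WH / (SYSTEM_IDLE_W + ssd_watts)

print(f"Idle runtime, SATA Slumber: {idle_hours(SSD_SLUMBER_W):.1f} h")
print(f"Idle runtime, DevSleep:     {idle_hours(SSD_DEVSLEEP_W):.1f} h")
# The SSD's 400x drop only shaves the SSD's share of the idle budget,
# which is why total battery life improves by percentages, not by 400x.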

 



Not likely. But you might think that solving your computer data security problems is entirely possible when someone tells you that TCG Opal is the key. According to its website, “The Trusted Computing Group (TCG) is a not-for-profit organization formed to develop, define and promote open, vendor-neutral, global industry standards, supportive of a hardware-based root of trust, for interoperable trusted computing platforms.”

That might take a bit to digest, but think of TCG as a group of companies creating standards to simplify deployment and increase adoption of data security. The consortium has two better-known specifications, TCG Enterprise and TCG Opal.

Sorting through the alphabet soup of data security
“Our SED with TCG Opal provides FDE.” While this might look like a spoonful of alphabet soup, it is music to the ears of a corporate IT manager. Let me break it down for those who just hear fingernails on the chalkboard. A self-encrypting drive (SED) is one that embeds a hardware-based encryption engine in the storage device. One chief benefit is that the hardware engine performs the encryption, preserving full performance of the host CPU. An SED can be a hard disk drive (HDD) or a solid state drive (SSD). True, traditional software encryption can secure data going to the storage device, but it consumes precious host CPU bandwidth. The related term, full drive encryption (FDE), is used to describe any drive (HDD or SSD) that stores data in an encrypted form. This can be through either software-based (host CPU) or hardware-based (an SED) encryption.

Most people would assume that if their work laptop were lost or stolen, they would suffer only some lost productivity for a short time and about $1,500 in hardware costs. However, a study by Intel and the Ponemon Institute found that the cost of a lost laptop totaled nearly $50,000 when you account for lost IP, legal costs, customer notifications, lost business, harm to reputation, and damages associated with compromising confidential customer information. When the data stored on the laptop is encrypted, this cost is reduced by nearly $20,000. This difference certainly supports the need for better security for these mobile platforms.
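To see how those numbers add up across a laptop fleet, here is a simple Python sketch. The per-laptop cost and the encryption savings are the figures quoted above from the study; the fleet size and annual loss rate are hypothetical assumptions you would replace with your own.

# Back-of-the-envelope exposure model using the Ponemon/Intel figures quoted
# above. Fleet size and annual loss rate are hypothetical assumptions.

COST_PER_LOST_LAPTOP = 50_000      # approximate figure from the study cited above
ENCRYPTION_SAVINGS = 20_000        # approximate reduction when the data is encrypted
FLEET_SIZE = 1_000                 # assumed number of corporate laptops
ANNUAL_LOSS_RATE = 0.05            # assumed 5% lost or stolen per year

lost_per_year = FLEET_SIZE * ANNUAL_LOSS_RATE
exposure_plain = lost_per_year * COST_PER_LOST_LAPTOP
exposure_fde = lost_per_year * (COST_PER_LOST_LAPTOP - ENCRYPTION_SAVINGS)

print(f"Expected annual exposure, unencrypted: ${exposure_plain:,.0f}")
print(f"Expected annual exposure, with FDE:    ${exposure_fde:,.0f}")
print(f"Expected annual savings from FDE:      ${exposure_plain - exposure_fde:,.0f}")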

When considering a security solution for this valuable data, you have to decide between a hardware-based SED and a host-based software solution. The primary problem with software solutions is they require the host CPU to do all of the encryption. This detracts from the CPU’s core computing work, leaving users with a slower computer or forcing them to pay for greater CPU performance. Another drawback of many software encryption solutions is that they can be turned off by the computer user, leaving data in the clear and vulnerable. Since hardware-based encryption is native to the HDD or SSD, it cannot be disabled by the end user.

In April 2013, LSI and a few other storage companies worked with the Ponemon Institute to better understand the value of hardware-based encryption. You can read about the details in the study here, but the quick summary is that hardware-based encryption solutions can offer a 75% total cost savings over software-based solutions, on average.

When is this available?
At the Computex Taipei 2013 show earlier this month, LSI announced availability of a firmware update for SandForce® controllers that adds support for TCG Opal. The LSI suite at the show featured TCG Opal demonstrations using self-encrypting SSDs provided by SandForce Driven™ member companies, including Kingston, A-DATA, Avant and Edge. (Contact SSD manufacturers directly for product availability.)

 



Imagine a bathtub full of water. You ask someone to empty the tub while you turn your back for a moment. When you look again and see the water is gone, do you just assume someone pulled the drain plug?

I think most people would, but what about the other ways of removing the water, like siphoning it off or bailing it out with buckets? In a typical bathroom you are not likely to see these other methods used, but that does not mean they do not exist. The point is that just because you see a certain result does not necessarily mean the obvious solution was used.

I see a lot of confusion in forum posts from SandForce Driven™ SSD users and reviewers over how the LSI® DuraWrite™ data reduction and advanced garbage collection technology relates to the SATA TRIM command. In my earlier blog on TRIM I covered this topic in great detail, but in simple terms the operating system uses the TRIM command to inform an SSD what information is outdated and invalid. Without the TRIM command the SSD assumes all of the user capacity is valid data. I explained in my blog Gassing up your SSD that creating more free space through over provisioning or using less of the total capacity enables the SSD to operate more efficiently by reducing the write amplification, which leads to increased performance and flash memory endurance. So without TRIM the SSD operates at its lowest level of efficiency for a particular level of over provisioning.

Will you drown in invalid data without TRIM?
TRIM is a way to increase the free space on an SSD – what we call “dynamic over provisioning” – and DuraWrite technology is another method to increase the free space. Since DuraWrite technology is dependent upon the entropy (randomness) of the data, some users will get more free space than others depending on what data they store. Because the technology works on the aggregate of all data stored, boot SSDs holding an operating system can still achieve some level of dynamic over provisioning even when all other files are at the highest entropy, e.g., encrypted or compressed files.

With an older operating system or in an environment that does not support TRIM (most RAID configurations), DuraWrite technology can provide enough free space to offer the same benefits as having TRIM fully operational. In cases where both TRIM and DuraWrite technology are operating, the combined result may not be as noticeable as when they’re working independently since there are diminishing returns when the free space grows to greater than half of the SSD storage capacity.
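If it helps to see the arithmetic, here is a toy Python sketch of the “free space the controller can work with” under the scenarios above. The capacities and the data-reduction ratio are placeholder assumptions; DuraWrite’s real reduction depends entirely on the entropy of your data.

# Toy model of the free space the controller can use under three scenarios.
# Capacities and the data-reduction ratio are placeholder assumptions;
# DuraWrite's actual reduction varies with the entropy of the stored data.

PHYSICAL_GB = 256        # raw NAND in the drive (assumed)
USER_GB = 240            # capacity exposed to the OS (assumed)
LIVE_DATA_GB = 120       # data the user actually has stored (assumed)
REDUCTION_RATIO = 0.75   # assumed: data reduction stores 75% of the host bytes

def free_space(valid_gb: float) -> float:
    """NAND the controller can treat as spare space for garbage collection."""
    return PHYSICAL_GB - valid_gb

# 1. No TRIM, incompressible data: the drive must eventually treat every user LBA as valid.
no_trim = free_space(USER_GB)

# 2. TRIM working: only the live data counts as valid.
with_trim = free_space(LIVE_DATA_GB)

# 3. No TRIM, but data reduction shrinks what actually lands in flash.
with_durawrite = free_space(USER_GB * REDUCTION_RATIO)

for label, gb in [("No TRIM", no_trim), ("TRIM", with_trim),
                  ("DuraWrite, no TRIM", with_durawrite)]:
    print(f"{label:>20}: {gb:5.1f} GB of working free space")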

So the next time you fill your bathtub, think about all the ways you can get the water out of the tub without using the drain. That will help you remember that both TRIM and DuraWrite technology can improve SSD performance using different approaches to the same problem. If that analogy does not work for you, consider the different ways to produce a furless feline, and think about what opening graphic image I might have used for a more jolting effect. Although in that case you might not have seen this blog since that image would likely have gotten us banned from Google® “safe for work” searches.

I presented on this topic in detail at the Flash Memory Summit in 2011. You can read it in full here: http://www.lsi.com/downloads/Public/Flash%20Storage%20Processors/LSI_PRS_FMS2011_T1A_Smith.pdf

 



I want to warn you, there is some thick background information here first. But don’t worry. I’ll get to the meat of the topic and that’s this: Ultimately, I think that PCIe® cards will evolve to more external, rack-level, pooled flash solutions, without sacrificing all their great attributes today. This is just my opinion, but other leaders in flash are going down this path too…

I’ve been working on enterprise flash storage since 2007 – mulling over how to make it work. Endurance, capacity, cost and performance have all been concerns to grapple with. Of course the flash is changing too as the process nodes change: 60nm, 50nm, 35nm, 24nm, 20nm… and single-level cell (SLC) to multi-level cell (MLC) to triple-level cell (TLC), and all the variants of these “trimmed” for specific use cases. Specified endurance has gone from 1 million program/erase (P/E) cycles to 3,000, and in some cases 500.

It’s worth pointing out that almost all the “magic” that has been developed around flash was already scoped out in 2007. It just takes a while for a whole new industry to mature. Individual die capacity has increased, meaning fewer die are needed for a solution – and that means less parallel bandwidth for data transfer… And the “requirement” for state-of-the-art single-operation write latency has fallen well below the write latency of the flash itself. (What the ?? Yeah – I’ll talk about that later in some other blog. But flash is ~1500µs write latency, where state-of-the-art flash cards are ~50µs.) When I describe the state of technology it sounds pretty pessimistic. I’m not. We’ve overcome a lot.

We built our first PCIe card solution at LSI in 2009. It wasn’t perfect, but it was better than anything else out there in many ways. We’ve learned a lot in the years since – both from making them and from dealing with customers and users – about our own solutions and our competitors’. We’re lucky to be an important player in storage, so in general the big OEMs, large enterprises and the hyperscale datacenters all want to talk with us – not just about what we have or can sell, but about what we could have and what we could do. They’re generous enough to share what works and what doesn’t, what the value of a solution is, and what the pitfalls are. Honestly? It’s the hyperscale datacenters in the lead, both practically and in vision.

If you haven’t nodded off to sleep yet, that’s a long-winded way of saying – things have changed fast, and, boy, we’ve learned a lot in just a few years.

Most important thing we’ve learned…
Most importantly, we’ve learned it’s latency that matters. No one is pushing the IOPS limits of flash, and no one is pushing the bandwidth limits of flash. But they sure are pushing the latency limits.

PCIe cards are great, but…
We’ve gotten lots of feedback, and one of the biggest things we’ve learned is – PCIe flash cards are awesome. They radically change the performance profiles of most applications, especially databases, allowing servers to run efficiently and multiplying the actual work done by a server 4x to 10x (and in a few extreme cases 100x). So the feedback we get from large users is “PCIe cards are fantastic. We’re so thankful they came along. But…” There’s always a “but,” right??

It tends to be a pretty long list of frustrations, and they differ depending on the type of datacenter using them. We’re not the only ones hearing it. To be clear, none of these are stopping people from deploying PCIe flash… the attraction is just too compelling. But the problems are real, and they have real implications, and the market is asking for real solutions.

  • Stranded capacity & IOPS
    • Some “leftover” space is always needed in a PCIe card. Databases don’t do well when they run out of storage! But you still pay for that unused capacity.
    • All the IOPS and bandwidth are rarely used – sure, latency is met, but there is capability left on the table.
    • Not enough capacity on a card – It’s hard to figure out how much flash a server/application will need. But there is no flexibility. If my working set goes one byte over the card capacity, well, that’s a problem.
  • Stranded data on server fail
    • If a server fails – all that valuable hot data is unavailable. Worse – it all needs to be reconstructed when the server does come online because it will be stale. It takes quite a while to rebuild 2 TBytes of interesting data. Hours to days.
  • PCIe flash storage is a separate storage domain vs. disks and boot.
    • Have to explicitly manage LUNs, move data to make it hot.
    • Often have to manage via different APIs and management portals.
    • Applications may even have to be re-written to use different APIs, depending on the vendor.
  • Depending on the vendor, performance doesn’t scale.
    • One card gives an awesome performance improvement. Two cards don’t give quite the same improvement.
    • Three or four cards don’t give any improvement at all. Performance maxes out somewhere below the level of two cards. It turns out drivers and server onloaded code create resource bottlenecks, but this is more a competitor’s problem than ours.
  • Depending on the vendor, performance sags over time.
    • More and more computation (latency) is needed in the server as flash wears and needs more error correction.
    • This is more a competitor’s problem than ours.
  • It’s hard to get cards in servers.
    • A PCIe card is a card – right? Not really. Getting a high-capacity card into a half-height, half-length PCIe form factor is tough, but doable. However, running that card has problems.
    • It may need more than 25W of power to run at full performance – the slot may or may not provide it. Flash burns power proportionally to activity, and writes/erases are especially power-intensive. It’s really hard to remove more than 25W with air cooling in a slot.
    • The air is preheated, or the slot doesn’t get good airflow. It ends up being a server-by-server, slot-by-slot qualification process. (Yes, slot by slot…) As trivial as this sounds, it’s actually one of the biggest problems.

Of course, everyone wants these fixed without affecting single operation latency, or increasing cost, etc. That’s what we’re here for though – right? Solve the impossible?

A quick summary is in order. It’s not looking good. For a given solution, flash is getting less reliable, there is less bandwidth available at capacity because there are fewer die, we’re driving latency way below the actual write latency of flash, and we’re not satisfied with the best solutions we have for all the reasons above.

The implications
If you think these through enough, you start to consider one basic path. It also turns out we’re not the only ones realizing this. Where will PCIe flash solutions evolve over the next 2, 3, 4 years? The basic goals are:

  • Unified storage infrastructure for boot, flash, and HDDs
  • Pooling of storage so that resources can be allocated/shared
  • Low latency, high performance as if those resources were DAS attached, or PCIe card flash
  • Bonus points for file store with a global name space

One easy answer would be – that’s a flash SAN or NAS. But that’s not the answer. Not many customers want a flash SAN or NAS – not for their new infrastructure – but more importantly, all the data is at the wrong end of the straw. The poor server is left sucking hard. Remember – this is flash, and people use flash for latency. Today these SAN-type flash devices have 4x-10x worse latency than PCIe cards. Ouch. You have to suck the data through a relatively low-bandwidth interconnect, after passing through both the storage and network stacks. And there is interaction between the I/O threads of various servers and applications – you have to wait in line for that resource. It’s true there is a lot of startup energy in this space. It seems to make sense if you’re a startup, because SAN/NAS is what people use today, and there’s lots of money spent in that market today. However, it’s not what the market is asking for.

Another easy answer is NVMe SSDs. Right? Everyone wants them – right? Well, OEMs at least. Front-bay PCIe SSDs (HDD form factor or NVMe – lots of names) that crowd out your disk drive bays. But they don’t fix the problems. The extra mechanicals and form factor are more expensive, and just make replacing the cards every 5 years a few minutes faster. Wow. With NVMe SSDs you can fit fewer HDDs – not good. They also provide uniformly bad cooling, and impose a hard power limit of 9W or 25W per device. And to protect the storage in these devices, you need enough of them to RAID or otherwise protect the data. Once you have enough of those for protection, they give you awesome capacity, IOPS and bandwidth – too much, in fact – but that’s not what applications need. They need low latency for the working set of data.

What do I think the PCIe replacement solutions in the near future will look like? You need to pool the flash across servers (to optimize bandwidth and resource usage, and allocate appropriate capacity). You need to protect against failures/errors and limit the span of failure, commit writes at very low latency (lower than native flash) and maintain low-latency, bottleneck-free physical links to each server… To me that implies:

    • Small enclosure per rack handling ~32 or more servers
    • Enclosure manages temperature and cooling optimally for performance/endurance
    • Remote configuration/management of the resources allocated to each server
    • Ability to re-assign resources from one server to another in the event of server/VM blue-screen
    • Low-latency/high-bandwidth physical cable or backplane from each server to the enclosure
    • Replaceable inexpensive flash modules in case of failure
    • Protection across all modules (erasure coding) to allow continuous operation at very high bandwidth
    • NV memory to commit writes with extremely low latency
    • Ultimately – integrated with the whole storage architecture at the rack, the same APIs, drivers, etc.

That means the performance looks exactly as if each server had multiple PCIe cards. But the capacity and bandwidth resources are shared, and systems can remain resilient. So ultimately, I think that PCIe cards will evolve to more external, rack-level, pooled flash solutions, without sacrificing all their great attributes today. This is just my opinion, but as I say – other leaders in flash are going down this path too…
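For the curious, here is a toy Python sketch of how capacity in such a rack-level enclosure might be pooled and handed out to servers. The module counts, erasure-code geometry and per-server requests are all made-up assumptions, not a product definition.

# Toy capacity planner for a rack-level pooled-flash enclosure like the one
# sketched above. Module count, module size, erasure-code geometry and the
# per-server requests are all made-up assumptions.

MODULES = 24                 # replaceable flash modules in the enclosure
MODULE_TB = 2.0              # raw capacity per module
EC_DATA, EC_PARITY = 10, 2   # assumed 10+2 erasure coding across modules

raw_tb = MODULES * MODULE_TB
usable_tb = raw_tb * EC_DATA / (EC_DATA + EC_PARITY)

# Per-server allocations (TB) requested by some of the ~32 attached servers.
requests = {"db-01": 4.0, "db-02": 4.0, "analytics-01": 8.0, "web-pool": 6.0}

allocated = 0.0
for server, ask in requests.items():
    if allocated + ask <= usable_tb:
        allocated += ask
        print(f"grant {ask:4.1f} TB to {server}")
    else:
        print(f"defer {server}: only {usable_tb - allocated:.1f} TB left")

print(f"usable {usable_tb:.1f} TB of {raw_tb:.1f} TB raw; "
      f"{usable_tb - allocated:.1f} TB still unallocated and poolable")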

What’s your opinion?



Big data and Hadoop are all about exploiting new value and opportunities with data. In financial trading, business and some areas of science, it’s all about being fastest or first to take advantage of the data. The bigger the data sets, the smarter the analytics. The next competitive edge with big data comes when you layer in flash acceleration. The challenge is scaling performance in Hadoop clusters.

The most cost-effective option emerging for breaking through disk I/O bottlenecks to scale performance is to use high-performance read/write flash acceleration cards for caching. This is essentially a way to get more work for less cost by bringing data closer to the processing. The LSI® Nytro™ product has been shown during testing to reduce the time it takes to complete Hadoop software framework jobs by up to 33%.

Flash cache cards increase Hadoop application performance
Combining flash cache acceleration cards with Hadoop software is a big opportunity for end users and suppliers. LSI estimates that less than 10% of Hadoop software installations today incorporate flash acceleration [1]. This will grow rapidly as companies see the increased productivity and ROI of flash to accelerate their systems. And Hadoop software adoption is also growing fast. IDC predicts a CAGR of as much as 60% by 2016 [2]. Drivers include IT security, e-commerce, fraud detection and mobile data user management. Gartner predicts that Hadoop software will be in two-thirds of advanced analytics products by 2015 [3]. Many thousands of Hadoop software clusters are already deployed.

Where flash makes the most immediate sense is with those who have smaller clusters doing lots of in-place batch processing. Hadoop is purpose-built for analyzing a variety of data, whether structured, semi-structured or unstructured, without the need to define a schema or otherwise anticipate results in advance. Hadoop enables scaling that allows an unprecedented volume of data to be analyzed quickly and cost-effectively on clusters of commodity servers. Speed gains are about data proximity. This is why flash cache acceleration typically delivers the highest performance gains when the card is placed directly in the server on the PCI Express® (PCIe) bus.

Combining the best of flash and HDDs to drive higher performance and storage capacity
PCIe flash cache cards are now available with multiple terabytes of NAND flash storage, which substantially increases the hit rate. We offer a solution with both onboard flash modules and Serial-Attached SCSI (SAS) interfaces to enable high-performance direct-attached storage (DAS) configurations consisting of solid state and hard disk drive storage. This couples the low-latency performance benefits of flash with the capacity and cost-per-gigabyte advantages of HDDs.
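A quick Python sketch shows why the cache hit rate matters so much. The latency figures below are rough assumptions for illustration, not measured values, but the shape of the result holds: the larger the flash cache, the higher the hit rate, and the closer average latency gets to flash speed.

# Simple average-latency model for HDD reads fronted by a flash cache.
# The latency figures and hit rates are rough assumptions for illustration.

HDD_LATENCY_MS = 8.0        # assumed average random-read latency of an HDD
FLASH_LATENCY_MS = 0.1      # assumed read latency of the PCIe flash cache

def effective_latency(hit_rate: float) -> float:
    """Average read latency when hit_rate of requests are served from flash."""
    return hit_rate * FLASH_LATENCY_MS + (1 - hit_rate) * HDD_LATENCY_MS

for hit_rate in (0.0, 0.5, 0.9, 0.99):
    ms = effective_latency(hit_rate)
    print(f"hit rate {hit_rate:4.0%}: {ms:5.2f} ms average, "
          f"{HDD_LATENCY_MS / ms:4.1f}x faster than HDD alone")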

To keep the processor close to the data, Hadoop uses servers with DAS. And to get the data even closer to the processor, the servers are usually equipped with significant amounts of random access memory (RAM). An additional benefit: smart implementation of Hadoop and flash components can reduce the overall server footprint and simplify scaling, with some solutions enabling up to 128 devices to share a very high-bandwidth interface. Most commodity servers provide eight or fewer SATA ports for disks, limiting expandability.

Hadoop is great, but flash-accelerated Hadoop is best. It’s an effective way, as you work to extract full value from big data, to secure a competitive edge.

  1. Based on internal LSI research.
  2. “IDC Worldwide Hadoop-MapReduce Ecosystem Software 2012-2016 Forecast,” May 2012.
  3. “Gartner Predicts 2013: Business Intelligence and Analytics Need to Scale Up to Support Explosive Growth in Data Sources,” December 2012.



It may sound crazy, but hard disk drives (HDDs) do not have a delete command. Now we all know HDDs have a fixed capacity, so over time the older data must somehow get removed, right? Actually it is not removed, but overwritten. The operating system (OS) uses a reference table to track the locations (addresses) of all data on the HDD. This table tells the OS which spots on the HDD are used and which are free. When the OS or a user deletes a file from the system, the OS simply marks the corresponding spot in the table as free, making it available to store new data.

The HDD is told nothing about this change, and it does not need to know since it would not do anything with that information. When the OS is ready to store new data in that location, it just sends the data to the HDD and tells it to write to that spot, directly overwriting the prior data. It is simple and efficient, and no delete command is required.
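Here is a minimal Python sketch of that bookkeeping. It is not any real file system’s on-disk format, just an illustration that “delete” only touches the OS table while the HDD keeps overwriting sectors in place.

# Minimal sketch of the bookkeeping described above: the OS tracks which
# sectors are free, "deleting" a file only flips table entries, and the HDD
# simply overwrites old sectors later. Not any real file system's format.

class TinyFS:
    def __init__(self, num_sectors: int):
        self.free = list(range(num_sectors))  # the OS-side allocation table
        self.files = {}                       # filename -> list of sector numbers
        self.disk = {}                        # what the "HDD" actually holds

    def write(self, name: str, blocks: list[bytes]) -> None:
        sectors = [self.free.pop(0) for _ in blocks]
        self.files[name] = sectors
        for lba, data in zip(sectors, blocks):
            self.disk[lba] = data             # HDD overwrites in place, no erase step

    def delete(self, name: str) -> None:
        # Only the OS table changes; the HDD itself is never told anything.
        self.free = self.files.pop(name) + self.free

fs = TinyFS(num_sectors=8)
fs.write("a.txt", [b"old data"])
fs.delete("a.txt")                  # sector marked free, old bytes still on the platter
fs.write("b.txt", [b"new data"])    # the OS reuses the sector; the HDD just overwrites it
print(fs.disk)                      # {0: b'new data'}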

Enter SSDs
However, with the advent of NAND flash-based solid state drives (SSDs) a new problem emerged. In my blog, Gassing up your SSD, I explain how NAND flash memory pages cannot be directly overwritten with new data, but must first be erased at the block level through a process called garbage collection (GC). I further describe how the SSD uses non-user space in the flash memory (over provisioning or OP) to improve performance and longevity of the SSD. In addition, any user space not consumed by the user becomes what we call dynamic over provisioning – dynamic because it changes as the amount of stored data changes.

When less data is stored by the user, the amount of dynamic OP increases, further improving performance and endurance. The problem I alluded to earlier is caused by the lack of a delete command. Without a delete command, every SSD will eventually fill up with data, both valid and invalid, eliminating any dynamic OP. The result would be the lowest possible performance at that factory OP level. So unlike HDDs, SSDs need to know what data is invalid in order to provide optimum performance and endurance.

Keeping your SSD TRIM
A number of years ago, the storage industry got together and developed a solution between the OS and the SSD by creating a new SATA command called TRIM. It is not a command that forces the SSD to immediately erase data like some people believe. Actually the TRIM command can be thought of as a message from the OS about what previously used addresses on the SSD are no longer holding valid data. The SSD takes those addresses and updates its own internal map of its flash memory to mark those locations as invalid. With this information, the SSD no longer moves that invalid data during the GC process, eliminating wasted time rewriting invalid data to new flash pages. It also reduces the number of write cycles on the flash, increasing the SSD’s endurance. Another benefit of the TRIM command is that more space is available for dynamic OP.
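Here is a simplified Python model of that map update. It is not SandForce firmware, just an illustration of what TRIM changes: trimmed pages drop out of the valid set, so garbage collection never wastes time copying them.

# Simplified model of the mapping behavior described above; this is not
# SandForce's actual implementation, only an illustration of what TRIM changes.

class TinySSDMap:
    def __init__(self):
        self.map = {}            # logical address -> (block, page)
        self.valid = set()       # physical pages holding valid data

    def host_write(self, lba: int, phys: tuple[int, int]) -> None:
        old = self.map.get(lba)
        if old:
            self.valid.discard(old)      # the old copy becomes invalid
        self.map[lba] = phys
        self.valid.add(phys)

    def trim(self, lbas: list[int]) -> None:
        # TRIM only tells the SSD these addresses no longer hold valid data.
        for lba in lbas:
            phys = self.map.pop(lba, None)
            if phys:
                self.valid.discard(phys)

    def pages_to_copy(self, block: int, pages_per_block: int) -> list:
        # Garbage collection relocates only pages still marked valid,
        # so TRIMmed data is never rewritten.
        return [(block, p) for p in range(pages_per_block)
                if (block, p) in self.valid]

ssd = TinySSDMap()
ssd.host_write(100, (0, 0))
ssd.host_write(101, (0, 1))
ssd.trim([101])                        # the OS deleted the file behind LBA 101
print(ssd.pages_to_copy(block=0, pages_per_block=2))   # -> [(0, 0)] only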

Today, most current operating systems and SSDs support TRIM, and all SandForce Driven™ member SSDs have always supported TRIM. Note that most RAID environments do not support TRIM, although some RAID 0 configurations have claimed to support it. I have presented on this topic in detail previously. You can view the presentation in full here. In my next blog I will explain how there may be an alternate solution using SandForce Driven member SSDs.



The term global warming can be very polarizing in a conversation, and both sides of the argument have mountains of material that support or discredit the overall claim. The most devout believers in global warming point to the average temperature increases in the Earth’s atmosphere over the last 100+ years. They maintain the rise is caused primarily by increased greenhouse gases from humans burning fossil fuels and from deforestation.

The opposition generally agrees with the measured increase in temperature over that time, but claims the increase is part of a natural cycle of the planet and not something humans can significantly impact one way or another. The US Energy Information Administration estimates that 90% of the world’s marketed energy consumption comes from non-renewable sources like fossil fuels. Our internet-driven lives run through datacenters that are well known to consume large quantities of power. No matter which side of the global warming argument you support, most people agree that wasting power is not a good long-term position. Therefore, reducing the power consumed by datacenters, especially as we live in an increasingly digitized world, would benefit all mankind.

When we look at the most power-hungry components of a datacenter, we find mainly server and storage systems. However, people sometimes forget that those systems require cooling to counteract the heat they generate, and the cooling itself consumes even more energy. So anything that can store data more efficiently and quickly will reduce both the initial energy consumption and the energy needed to cool those systems. As datacenters demand faster data storage, they are shifting to solid state drives (SSDs). SSDs generally provide higher performance per watt than hard disk drives, but there is still more that can be done.

Reducing data to help turn down the heat
The good news is that there’s a way to reduce the amount of data that reaches the flash memory of the SSD. The unique DuraWrite™ technology found in all LSI® SandForce® flash controllers reduces the amount of data written to the flash memory, which cuts the time it takes to complete the writes and therefore lowers power consumption below the levels of other SSD technologies. That, in turn, reduces the cooling needed, further cutting overall power consumption. Note that this data reduction is lossless, meaning 100% of what is saved is returned to the host – unlike MPEG, JPEG and MP3 files, which tolerate some amount of data loss to reduce file sizes.
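DuraWrite’s internal method is proprietary, but the general principle of lossless data reduction is easy to illustrate in Python with zlib: the savings depend on the entropy of the data, and every byte comes back intact.

# zlib stands in here only to illustrate the general principle; it is not
# DuraWrite. Lossless reduction depends on data entropy, so already-compressed
# or encrypted data yields little or no savings.

import os
import zlib

samples = {
    "repetitive log text": b"2013-06-01 INFO request served OK\n" * 3000,
    "random (high entropy)": os.urandom(100_000),
}

for name, data in samples.items():
    reduced = zlib.compress(data)
    saved = 1 - len(reduced) / len(data)
    assert zlib.decompress(reduced) == data          # lossless: bytes round-trip exactly
    print(f"{name:>22}: {len(data):>7} B in, {len(reduced):>7} B written, "
          f"{saved:6.1%} fewer bytes to the flash")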

Today you can find many datacenters already using SandForce Driven SSDs and LSI Nytro™ application acceleration products (which use DuraWrite technology as well). When we start to see datacenters deploying these flash storage products by the millions, you will certainly be able to measure the reduction in power consumed by datacenters. Unfortunately, LSI will not be able to claim it stopped global warming, but at least we, and our customers, can say we did something to help defer the end result.

 



Have you ever run out of gas in your car? Do you often risk running your gas tank dry? Hopefully you are more cautious than that and you start searching for a gas station when you get down to a ¼ tank. You do this because you want plenty of cushion in case something comes up that prevents you from getting to a station before it is too late.

The reason most people stretch their tank is to maximize travel between station visits. The downside to pushing the envelope to “E” is you can end up stranded with a dead vehicle waiting for AAA to bring you some gas.

Now most people know you don’t put gas in a solid state drive (SSD), but the pros and cons associated with how much you leave in the “tank” are very relevant to SSDs.

To understand how these two seemingly unrelated technologies are similar, we first need to drill into some technical SSD details. To start, SSDs act, and often look, like traditional hard disk drives (HDDs), but they do not record data in the same way. SSDs today typically use NAND flash memory to store data and a flash controller to connect the memory with the host computer. The flash controller can write a page of data (often 4,096 bytes) directly to the flash memory, but cannot overwrite the same page of data without first erasing it. The erase cycle cannot expunge only a single page. Instead, it erases a whole block of data (usually 128 pages). Because the stored data is sometimes updated randomly across the flash, the erase cycle for NAND flash requires a process called garbage collection.

Garbage collection is just dumping the trash
Garbage collection starts when a flash block is full of data, usually a mix of valid (good) and invalid (older, replaced) data. The invalid data must be tossed out to make room for new data, so the flash controller copies the valid data of a flash block to a previously erased block, and skips copying the invalid data of that block. The final step is to erase the original whole block, preparing it for new data to be written.

Before and during garbage collection, some data exists in two or more locations at once: the valid data copied during garbage collection, plus the (typically) multiple copies of invalid data waiting to be erased. The extra flash writes needed to copy that valid data are known as write amplification. To store this extra data not counted by the operating system, the flash controller needs some spare capacity beyond what the operating system knows about. This is called over-provisioning (OP), and it is a critical part of every NAND flash-based SSD.
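Here is a small Python sketch of the write amplification produced by a single garbage-collection cycle. The valid-page counts are illustrative, but they show why blocks with less valid data left in them are cheaper to reclaim, and why more spare capacity helps the controller find such blocks.

# Write amplification for one garbage-collection cycle, as described above:
# erasing a 128-page block whose remaining valid pages must be copied first.
# The valid-page counts are illustrative assumptions.

PAGES_PER_BLOCK = 128

def write_amplification(valid_pages: int) -> float:
    """Flash page writes per host page write when reclaiming one block.

    Freeing the block yields (PAGES_PER_BLOCK - valid_pages) pages for new
    host data, but the controller also has to rewrite the valid pages.
    """
    freed_for_host = PAGES_PER_BLOCK - valid_pages
    total_flash_writes = freed_for_host + valid_pages   # host data + copied data
    return total_flash_writes / freed_for_host

for valid in (16, 64, 112):     # lightly, half, and heavily "still valid" blocks
    print(f"{valid:3d} valid pages left in block -> write amplification = "
          f"{write_amplification(valid):.2f}")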

Over-provisioning is like the gas that remains in your tank
While every SSD has some amount of OP, some have more than others. The amount of OP varies depending on trade-offs made between total storage capacity and benefits in performance and endurance. The less OP allocated in an SSD, the more information a user can store. This is like the driver who runs the tank down to near-empty just to maximize the total number of miles between station visits.

What many SSD users don’t realize is there are major benefits to NOT stretching this OP area too thin. When you allocate more space for OP, you achieve a lower write amplification, which translates to a higher performance during writes and longer endurance of the flash memory. This is like the driver who is more cautious and visits the gas station more often to enable greater flexibility in selecting a more cost-effective station, and allows for last-minute deviations in travel plans that end up burning more fuel than originally anticipated.

The choice is yours
Most SSD users do not realize they have full control of how much OP is configured in their SSD. So even if you buy an SSD with “0%” OP, you can dedicate some of the user space back to OP for the SSD.
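Here is a short Python example of how OP is typically quoted: spare capacity divided by the capacity exposed to the user. The drive sizes are examples only, but even a nominally “0%” drive keeps the gap between binary-sized NAND and decimal-marketed capacity as spare area, and any user space you leave unused adds dynamic OP on top of that.

# How over-provisioning is typically quoted: spare capacity divided by the
# capacity exposed to the user. The drive sizes below are just examples.

GIB = 2**30   # NAND is built in binary gigabytes (GiB)
GB = 10**9    # drives are marketed in decimal gigabytes (GB)

def op_percent(physical_bytes: float, user_bytes: float) -> float:
    return (physical_bytes - user_bytes) / user_bytes * 100

# A nominally "0% OP" drive: 256 GiB of NAND sold as 256 GB of user space
# still keeps the binary/decimal gap as built-in spare area.
print(f"'0% OP' drive:           {op_percent(256 * GIB, 256 * GB):5.1f}% actual OP")

# The same NAND sold as 240 GB, a configuration with more dedicated spare area.
print(f"240 GB configuration:    {op_percent(256 * GIB, 240 * GB):5.1f}% actual OP")

# Leaving 20 GB of that 240 GB unused (or unpartitioned) adds dynamic OP on top.
print(f"240 GB, 20 GB left free: {op_percent(256 * GIB, 220 * GB):5.1f}% effective OP")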

A more detailed presentation of how OP works and what 0% OP really means was presented at the Flash Memory Summit 2012 and can be viewed with this link for your convenience: http://www.lsi.com/downloads/Public/Flash%20Storage%20Processors/LSI_PRS_FMS2012_TE21_Smith.pdf

It pays to be the cautious driver who fills the gas tank long before you get to empty. When it comes to both performance and endurance, your SSD will cover a lot more ground if you treat the over-provisioning space the same way – keeping more in reserve.



Most people fully understand that electronics are useless without power, but what happens when devices lose power in the middle of operating? That answer is highly dependent on a number of variables including what type of electronic device is in question.

For solid state drives (SSDs), the answer depends on factors such as whether an uninterruptable power supply (UPS) is connected, which controller or flash processor is used, the design of the SSD’s power circuit, and the type of memory. If an SSD is in the middle of a write operation to the flash memory and power to the SSD is disconnected, many bad things could happen if the right safety measures are not in place. Many users do not think about all the non-user-initiated operations an SSD may be performing, like background garbage collection, that could be under way when the power fails. Without the correct protection, in most cases data will be corrupted.

According to the Nielsen company, 108.4 million viewers were tuned into the 2013 Super Bowl in New Orleans, only to be shocked to witness the power go down for 34 minutes in the middle of the game. If power can be lost during such an incredibly high-profile event, it can happen just about anywhere.

Inside the New Orleans Superdome stadium operations and broadcast server rooms
Enterprise computing environments typically have multiple servers with data connections and lots of storage. Over the past few years, a growing percentage of that storage has been kept on SSDs for the very active or “hot” data. This greatly improves data access time and reduces the overall latency of the system. Enterprise servers are often connected to UPS systems that supply the connected devices with temporary power during a power failure.

Usually this is enough power to support uninterrupted system operations until power is restored, or at least until technicians can complete their current work and shut down safely. However, there are many times when UPS systems are not deployed or fail to operate properly themselves. In those cases, the server will experience a power failure as abrupt as if someone had yanked the power cord from the wall socket.

The LSI® SandForce® flash controllers are at the heart of many popular SSDs sold today. The flash controller connects the host computer with the flash memory to store user data in fast, non-volatile memory. The SandForce flash controllers are specifically engineered to operate in different environments, and the SF-2500/2600 flash storage processors (FSPs) are designed to provide the high level of data integrity required for enterprise applications. In the area of power failure protection, they include a combination of firmware (FW) and hardware circuitry that monitors the power coming into the SSD. In the event of a power failure or even a brown-out, the flash controller is alerted to the situation, and hold-up capacitors in the SSD provide the necessary power and time for the controller to complete pending writes to the flash memory. This same circuit is also designed to prevent the risk of lower-page corruption with multi-level cell (MLC) memory.
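For a sense of the time budget involved, here is a back-of-the-envelope Python calculation of the hold-up energy in such capacitors. The capacitance, voltage thresholds and write power are assumptions for illustration, not SandForce design values; the physics is simply E = ½CV².

# Back-of-the-envelope hold-up budget for the capacitor scheme described
# above. Capacitance, voltage thresholds and write power are assumptions,
# not SandForce design values; the physics is just E = 1/2 * C * V^2.

CAPACITANCE_F = 0.010      # assumed 10,000 uF of hold-up capacitance
V_START = 5.0              # rail voltage when the power fails
V_MIN = 3.3                # assumed minimum voltage the SSD can still run from
WRITE_POWER_W = 3.0        # assumed power draw while flushing pending writes

usable_joules = 0.5 * CAPACITANCE_F * (V_START**2 - V_MIN**2)
holdup_ms = usable_joules / WRITE_POWER_W * 1000

print(f"Usable energy in hold-up caps: {usable_joules * 1000:.1f} mJ")
print(f"Time to finish pending writes: {holdup_ms:.1f} ms")
# The firmware's job is to guarantee that every in-flight page program
# completes (and lower pages are not left half-programmed) within that window.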

Watch out for SSD solutions that provide backup capacitors, but lack the necessary support circuitry and special firmware to ensure the data is fully committed to the flash memory before the power runs out. Even if these other special circuits are present, only truly enterprise SSDs that are meticulously designed and tested to withstand power failures are up to the task of storing and protecting highly critical data.

In the control room and down on the field
The usage patterns of non-enterprise systems like notebooks and ultrabooks call for a different power failure support mechanism. Realize that when you have a notebook or ultrabook system, you have a built-in mini-UPS. A power outage at the wall socket has no impact on the system until the battery gets low. At that point the operating system will tell the computer to shut down, and that is ample warning for the SSD to safely shut down and ensure the integrity of the data. But what if the operating system locks up and does not warn the SSD, or the system is an AC-powered desktop without a battery?

The LSI SF-2100/2200 FSPs are purpose-built for these client environments and operate with the assumption that power could disappear at any point in time. They use special FW techniques so that even without a battery present, as is the case with desktop systems, they greatly limit the potential for data loss.

The naked facts
It should be clear that the answer to the original question is highly dependent upon the flash controller at the heart of the SSD. Without having the critical features discussed above and designed into the LSI SandForce flash controllers, it is very possible to lose data during a power failure. The LSI SandForce flash controllers are engineered to withstand power failures like the one that hit New Orleans at the Super Bowl, but don’t expect them to fix wardrobe malfunctions.
