All NAND flash-based SSDs use a process called garbage collection (GC) so the flash memory can be rewritten with fresh data, enabling the SSD to function like any other rewritable storage device. The number of rewrites (program/erase cycles) possible with NAND flash memory is finite. That’s why it’s essential to ensure that each P/E cycle really counts – that is, each is performed with top efficiency – to help preserve optimum SSD performance.

Collecting the garbage takes time
In my 2011 Flash Memory Summit presentation, I went into great detail about how GC – the automatic memory management process of clearing invalid data from memory to give new data a clean slate – works in an SSD. Here's a recap: flash memory is organized in groups of pages where data can be written. Once a page is written, it cannot be rewritten until it is erased. Simple enough. But a page can only be erased within a group of typically 128 pages called a block. But wait. The complexity of writing data really starts to escalate when random writes replace previously written data. Random writes put the new data in previously erased pages elsewhere, peppering a block of valid data with patches of invalid data. In order to write new data to these patches, the whole block – all 128 pages – must be erased. But first, all surrounding pages with valid data must be read and then rewritten to blank pages. The newly erased block of blank pages is then ready to save new data.
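To make the page-versus-block distinction concrete, here is a minimal sketch in Python of a single garbage collection pass. The 128-page block size comes from the paragraph above; the data structures and function names are purely illustrative, not how real controller firmware works.

```python
PAGES_PER_BLOCK = 128  # pages are written individually, but erased only a block at a time

class Block:
    def __init__(self):
        # None = blank (erased) page; otherwise a ("valid" | "invalid", data) tuple
        self.pages = [None] * PAGES_PER_BLOCK

    def erase(self):
        # Erase is all-or-nothing: the entire block goes blank at once
        self.pages = [None] * PAGES_PER_BLOCK

def garbage_collect(victim: Block, spare: Block) -> int:
    """Copy the victim block's valid pages into a blank spare block,
    then erase the victim. Returns how many pages had to be rewritten."""
    moved = 0
    for page in victim.pages:
        if page is not None and page[0] == "valid":
            spare.pages[spare.pages.index(None)] = page  # next blank page in the spare
            moved += 1
    victim.erase()
    return moved
```

Every page counted in `moved` is an extra write that competes with new host data for the same path to the flash – which is exactly the bottleneck described next.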

So why's this a problem? This rewriting process shares the same path to the flash memory as new data arriving from the host system. You see the issue – a bottleneck. What you may not know is that this traffic jam can severely degrade overall write performance, sometimes by as much as 90%.

Why not collect the garbage when the SSD is idle?
To improve write performance, many SSDs perform idle-time GC or background GC. When the SSD is idle – not performing reads or writes from the host system – the data paths to the flash memory are open. In a perfect world, the SSD controller would move all valid data into a contiguous group of new blocks so that all the free space would be consolidated into a few very large areas. Then, when new data arrived, the controller would send it directly to the fresh blocks and be spared from having to move data around just to free up space on pages of invalid data. But the world is far from perfect.

No free lunch, even from the garbage can
As might be expected, background or idle-time GC has drawbacks. The two main downsides are:

1. For users of Ultrabooks and other mobile systems, battery power is precious. The longer users can work unplugged between battery charges, the better. To make the most of a single charge, these systems use features like DevSleep to drastically reduce power to internal components not in use. At times when no data is being stored to or retrieved from the SSD, the host system gears down the SSD into a low power state (like DevSleep) to reduce power draw. In this state, an SSD with background or idle-time GC has no power to perform GC.

The upshot is that the SSD will be very slow when the system turns it back on and starts sending new data that must be saved in the spaces not yet cleared out by garbage collection while the SSD was asleep. Alternatively, the SSD may temporarily override the low power command from the host in order to perform the background GC, pulling more power from the battery and shortening the time remaining before it needs to be recharged.

2. When the SSD is performing GC, invalid data is ignored and only valid data is moved before the block is erased. Now imagine a large 2GB file on the SSD that the user plans to delete tomorrow. The SSD has no clue this will happen, so it automatically performs background GC around the 2GB file – and all other data – today, consuming one more of the very limited, precious program/erase cycles for all the flash holding that data. Ideally, the SSD would have waited one more day to garbage collect, the user would have already deleted the file, and the SSD wouldn't have had to move all that data to new locations. No unnecessary data movement. No unnecessary use of a precious program/erase cycle.

Many people don't realize that the number of background reads and writes initiated by the operating system, virus checkers, browsers, etc., far outstrips the number initiated by the computer user. Some users rarely delete files, believing they'll extend the life of their SSD. The truth is, user file deletions are a drop in the bucket. It's not their use of the computer's storage that causes the most wear and tear; background activity from applications and the operating system is the real culprit.

Is there a better option?
What's an SSD user to do? A super-fast foreground GC engine is the best solution. The key is special hardware and firmware integrated into the flash controller that streamlines garbage collection so it can run in the foreground alongside incoming data. The engine also enables high-speed writes to the flash memory. By maintaining high write speeds for GC operations, the SSD can afford to leave all valid data mixed with invalid data, so blocks are not recycled until absolutely necessary, dramatically reducing wear. Plus, the longer the SSD waits to garbage collect pages, the higher the likelihood that other pages of data will have already been made invalid by the operating system or the user. The result is lower write amplification, longer flash endurance, and even higher performance.

All LSI® SandForce® Flash Controllers employ foreground GC to provide these invaluable benefits to the user.



It may sound crazy, but hard disk drives (HDDs) do not have a delete command. Now we all know HDDs have a fixed capacity, so over time the older data must somehow get removed, right? Actually it is not removed, but overwritten. The operating system (OS) uses a reference table to track the locations (addresses) of all data on the HDD. This table tells the OS which spots on the HDD are used and which are free. When the OS or a user deletes a file from the system, the OS simply marks the corresponding spot in the table as free, making it available to store new data.

The HDD is told nothing about this change, and it does not need to know since it would not do anything with that information. When the OS is ready to store new data in that location, it just sends the data to the HDD and tells it to write to that spot, directly overwriting the prior data. It is simple and efficient, and no delete command is required.
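A toy model in Python of that table-only "delete" (hypothetical structures, nothing like a real file system) looks like this:

```python
allocation_table = {}  # logical address -> "used" or "free" (the OS's view)
platter = {}           # logical address -> bytes physically stored on the HDD

def write(addr, data):
    allocation_table[addr] = "used"
    platter[addr] = data              # a direct overwrite; no erase needed first

def delete(addr):
    allocation_table[addr] = "free"   # the HDD itself is never told anything

write(100, b"quarterly report")
delete(100)                            # the old bytes still sit on the platter...
write(100, b"vacation photos")         # ...until new data directly overwrites them
```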

Enter SSDs
However, with the advent of NAND flash-based solid state drives (SSDs) a new problem emerged. In my blog, Gassing up your SSD, I explain how NAND flash memory pages cannot be directly overwritten with new data, but must first be erased at the block level through a process called garbage collection (GC). I further describe how the SSD uses non-user space in the flash memory (over-provisioning, or OP) to improve the performance and longevity of the SSD. In addition, any user space not consumed by the user becomes what we call dynamic over-provisioning – dynamic because it changes as the amount of stored data changes.

When less data is stored by the user, the amount of dynamic OP increases, further improving performance and endurance. The problem I alluded to earlier is caused by the lack of a delete command. Without a delete command, every SSD will eventually fill up with data, both valid and invalid, eliminating any dynamic OP. The result would be the lowest possible performance at that factory OP level. So unlike HDDs, SSDs need to know what data is invalid in order to provide optimum performance and endurance.
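Some back-of-the-envelope arithmetic shows why dynamic OP matters. The capacities below are illustrative (and the GB-versus-GiB subtlety is glossed over); the spare-space-divided-by-user-capacity formula is the common industry convention:

```python
def over_provisioning(raw_flash_gb, user_capacity_gb, stored_data_gb):
    factory_spare = raw_flash_gb - user_capacity_gb    # fixed when the drive is built
    dynamic_spare = user_capacity_gb - stored_data_gb  # user space holding no valid data
    factory_op = 100.0 * factory_spare / user_capacity_gb
    effective_op = 100.0 * (factory_spare + dynamic_spare) / user_capacity_gb
    return round(factory_op, 1), round(effective_op, 1)

# 256 GB of raw flash sold as a 240 GB drive, half full of user data:
print(over_provisioning(256, 240, 120))   # (6.7, 56.7)

# The same drive crammed completely full: only the factory OP is left.
print(over_provisioning(256, 240, 240))   # (6.7, 6.7)
```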

Keeping your SSD TRIM
A number of years ago, the storage industry got together and developed a solution between the OS and the SSD by creating a new SATA command called TRIM. It is not a command that forces the SSD to immediately erase data like some people believe. Actually the TRIM command can be thought of as a message from the OS about what previously used addresses on the SSD are no longer holding valid data. The SSD takes those addresses and updates its own internal map of its flash memory to mark those locations as invalid. With this information, the SSD no longer moves that invalid data during the GC process, eliminating wasted time rewriting invalid data to new flash pages. It also reduces the number of write cycles on the flash, increasing the SSD’s endurance. Another benefit of the TRIM command is that more space is available for dynamic OP.
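As a rough sketch of the idea (hypothetical Python structures, not any controller's actual implementation), TRIM simply flips entries in the drive's own map from valid to invalid:

```python
flash_map = {}  # logical block address -> ("valid" | "invalid", physical location)

def handle_trim(lba_ranges):
    """Mark the trimmed logical addresses invalid; nothing is erased right now.
    Garbage collection will later skip these pages instead of copying them."""
    for start, count in lba_ranges:
        for lba in range(start, start + count):
            if lba in flash_map:
                _state, phys = flash_map[lba]
                flash_map[lba] = ("invalid", phys)

# The OS deletes a file that occupied LBAs 5000-5999 and sends a TRIM for that range:
flash_map.update({lba: ("valid", 100000 + lba) for lba in range(5000, 6000)})
handle_trim([(5000, 1000)])
```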

Today, most current operating systems and SSDs support TRIM, and all SandForce Driven™ member SSDs have always supported TRIM. Note that most RAID environments do not support TRIM, although some RAID 0 configurations have claimed to support it. I have presented on this topic in detail previously. You can view the presentation in full here. In my next blog I will explain how there may be an alternate solution using SandForce Driven member SSDs.



Have you ever run out of gas in your car? Do you often risk running your gas tank dry? Hopefully you are more cautious than that and you start searching for a gas station when you get down to a ¼ tank. You do this because you want plenty of cushion in case something comes up that prevents you from getting to a station before it is too late.

The reason most people stretch their tank is to maximize travel between station visits. The downside to pushing the envelope to “E” is you can end up stranded with a dead vehicle waiting for AAA to bring you some gas.

Now most people know you don't put gas in a solid state drive (SSD), but the pros and cons associated with how much you leave in the "tank" are very relevant to SSDs.

To understand how these two seemingly unrelated technologies are similar, we first need to drill into some technical SSD details. To start, SSDs act, and often look, like traditional hard disk drives (HDDs), but they do not record data in the same way. SSDs today typically use NAND flash memory to store data and a flash controller to connect the memory with the host computer. The flash controller can write a page of data (often 4,096 bytes) directly to the flash memory, but cannot overwrite the same page of data without first erasing it. The erase cycle cannot expunge only a single page. Instead, it erases a whole block of data (usually 128 pages). Because the stored data is sometimes updated randomly across the flash, the erase cycle for NAND flash requires a process called garbage collection.

Garbage collection is just dumping the trash
Garbage collection starts when a flash block is full of data, usually a mix of valid (good) and invalid (older, replaced) data. The invalid data must be tossed out to make room for new data, so the flash controller copies the valid data of a flash block to a previously erased block, and skips copying the invalid data of that block. The final step is to erase the original whole block, preparing it for new data to be written.

Before and during garbage collection, some data – the valid data copied during garbage collection and the (typically) multiple superseded copies of invalid data – exists in two or more locations at once, so the controller ends up writing more data to the flash than the host actually sent, a phenomenon known as write amplification. To store this extra data not counted by the operating system, the flash controller needs some spare capacity beyond what the operating system knows about. This is called over-provisioning (OP), and it is a critical part of every NAND flash-based SSD.
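To put a number on it, here is the usual write-amplification arithmetic for one garbage-collected block. The 96/32 split of valid and invalid pages is just an example; real-world figures depend heavily on OP and workload:

```python
pages_per_block = 128
valid_pages = 96                               # must be copied out before the erase
invalid_pages = pages_per_block - valid_pages  # 32 pages freed for new host data

host_writes = invalid_pages                    # new host pages this erase makes room for
flash_writes = host_writes + valid_pages       # host data plus the relocated pages

write_amplification = flash_writes / host_writes
print(write_amplification)                     # 4.0: four flash writes per host write
```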

Over-provisioning is like the gas that remains in your tank
While every SSD has some amount of OP, some have more than others; the amount depends on trade-offs made between total storage capacity and gains in performance and endurance. The less OP allocated in an SSD, the more information a user can store. This is like the driver who takes the tank of gas clear down to near-empty just to maximize the total number of miles between station visits.

What many SSD users don't realize is that there are major benefits to NOT stretching this OP area too thin. When you allocate more space for OP, you achieve lower write amplification, which translates to higher write performance and longer endurance of the flash memory. This is like the more cautious driver who visits the gas station more often, gaining the flexibility to pick a more cost-effective station and the cushion to absorb last-minute deviations in travel plans that end up burning more fuel than originally anticipated.

The choice is yours
Most SSD users do not realize they have full control of how much OP is configured in their SSD. So even if you buy an SSD with “0%” OP, you can dedicate some of the user space back to OP for the SSD.
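As a rough illustration of that trade-off (illustrative capacities, GB/GiB rounding ignored, and assuming the reclaimed space was secure-erased or never written so the drive can actually treat it as spare), shrinking the partition you expose to the operating system raises the effective OP:

```python
raw_flash = 256        # GB of NAND actually on the drive
factory_user = 240     # capacity as sold by the vendor
partitioned = 220      # capacity you choose to expose to the operating system

op_as_sold = 100.0 * (raw_flash - factory_user) / factory_user
op_after_shrink = 100.0 * (raw_flash - partitioned) / partitioned
print(round(op_as_sold, 1), round(op_after_shrink, 1))   # 6.7 16.4
```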

A more detailed look at how OP works and what 0% OP really means was presented at Flash Memory Summit 2012, and the slides can be viewed at this link for your convenience: http://www.lsi.com/downloads/Public/Flash%20Storage%20Processors/LSI_PRS_FMS2012_TE21_Smith.pdf

It pays to be the cautious driver who fills the gas tank long before you get to empty. When it comes to both performance and endurance, your SSD will cover a lot more ground if you treat the over-provisioning space the same way – keeping more in reserve.
