My “Size matters: Everything you need to know about SSD form factors” blog in January spawned some interesting questions, a number of them on Z-height.

What is a Z-height anyway?
For a solid state drive (SSD), Z-height describes its thickness, which is generally its smallest dimension. The “Z” is one of the three variables – X, Y and Z, corresponding to length, width and height – that describe the measurements of a 3-dimensional object. Strictly speaking, Z-height is a redundant term, since Z by itself already denotes the height of an SSD. Oddly, no one says X-length or Y-width, yet Z-height is widely used.

What’s the state of affairs with SSD Z-height?
The Z-height has typically been associated with the 2.5″ SSD form factor. As I covered in my January form factor blog, the initial dimensions of SSDs were modeled after hard disk drives (HDDs). The 2.5” HDD form factor featured various heights depending on the platter count – the more disks, the greater the capacity and the thicker the HDD. The first 2.5” full capacity HDDs had a maximum Z-height of 19mm, but quickly dropped to a 15mm maximum to enable full-capacity HDDs in thinner laptops. By the time SSDs hit high-volume market acceptance, the dimensional requirements for storage had shrunk even more, to a maximum height of 12.5mm in the 2.5” form factor. Today, the Z-height of most 2.5″ SSDs generally ranges from 5.0mm to 9.5mm.

With printed circuit board (PCB) form factor SSDs—those with no outer case—the Z-height is defined by the thickness of the board and its components, which can be 3mm or less. Some laptops have unique shape or height restrictions for the SSD space allocation. For example, the MacBook Air’s ultra-thin profile requires some of the thinnest SSDs produced.

A new standard in SSD thickness
The platter count of an HDD determines its Z-height. In contrast, an SSD’s Z-height is generally the same regardless of capacity. The mix of SSD form factors deployed in systems is shifting from traditional, encased SSDs to the new bare PCB SSDs. As SSDs move away from the older form factors with their varying heights, consumers and OEM system designers will no longer need to consider Z-height because the thickness of most bare PCB SSDs will be standard.



The introduction of LSI® SF3700 flash controllers has prompted many questions about the PCIe® (PCI Express) interface and how it benefits solid state storage, and there’s no better person to turn to for insights than our resident expert, Jeremy Werner, Sr. Director of Product and Customer Management in LSI’s Flash Components Division (SandForce):

Most client-based SSDs have used SATA in the past, while PCIe was mainly used for enterprise applications. Why is the PCIe interface becoming so popular for the client market?

Jeremy: Over the past few decades, the performance of host interfaces for client devices has steadily climbed. Parallel ATA (PATA) interface speed grew from 33MB/s to 100MB/s, while the performance of the Serial ATA (SATA) connection rose from 1.5Gb/s to 6Gb/s. Today, some solid state drives (SSDs) use the PCIe Gen2 x4 (second-generation speeds with four data communication lanes) interface, supporting up to 20Gb/s (in each direction). Because the PCIe interface can simultaneously read and write (full duplex) and SATA can only read or write at one time (half-duplex), PCIe can potentially double the 20Gb/s speeds in a mixed (read and write) workload, making it nearly seven times faster than SATA.
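
To put rough numbers on that comparison, here is a minimal back-of-the-envelope sketch using only the line rates cited above and ignoring protocol overhead; the constants simply restate the figures Jeremy mentions.

```python
# Back-of-the-envelope comparison using the line rates cited above
# (raw Gb/s, protocol overhead ignored).

SATA_GBPS = 6.0               # SATA 6Gb/s, half-duplex: read OR write at any moment
PCIE_GEN2_LANE_GBPS = 5.0     # PCIe Gen2 raw rate per lane
LANES = 4                     # x4 link

pcie_each_direction = PCIE_GEN2_LANE_GBPS * LANES   # 20 Gb/s in each direction
pcie_mixed_workload = pcie_each_direction * 2       # full duplex: reads + writes together

print(f"PCIe Gen2 x4, one direction:  {pcie_each_direction:.0f} Gb/s")
print(f"PCIe Gen2 x4, mixed workload: {pcie_mixed_workload:.0f} Gb/s")
print(f"Advantage over SATA:          {pcie_mixed_workload / SATA_GBPS:.1f}x")  # ~6.7x
```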

Will the PCIe interface replace SATA for SSDs?

Jeremy: Eventually the replacement is likely, but it will probably take many years in the single-drive client PC market, given two hindrances. First, some single-drive client platforms must use a common HDD and SSD connection to give users the choice between the two devices. Because the 6Gb/s SATA interface already delivers far more speed than hard disk drives can use, there is no immediate need for HDDs to move to the faster PCIe connection, leaving SATA as the common interface for those platforms. Second, the older personal computers already in consumers’ homes that need an SSD upgrade support only SATA storage devices, so there’s no opportunity for PCIe in that upgrade market.

By contrast, the enterprise storage market, and even some higher-end client systems, will migrate quickly to PCIe since they will see significant speed increases and can more easily integrate PCIe SSD solutions available now.

It is noteworthy that some standards, like M.2 and SATA Express, have defined a single connector that supports SATA or PCIe devices.  The recently announced LSI SF3700 is one example of an SSD controller that supports both of those interfaces on an M.2 board.

What is meant by the terms “x1, x2, x4, x16” when referencing a particular PCIe interface?

Jeremy: These numbers are the PCIe lane counts in the connection. Either the host (computer) or the device (SSD) could limit the number of lanes used. The theoretical maximum speed of the connection (not including protocol overhead) is the number of lanes multiplied by the speed of each lane.

What is protocol overhead?

Jeremy: PCIe, like many bus interfaces, uses a transfer encoding scheme – a set number of data bits is represented by a slightly larger number of transmitted bits called a symbol. The extra bits in the symbol are the overhead needed to manage the transmitted user data. PCIe Gen3 uses a more efficient 128b/130b encoding (about 1.5% overhead) instead of the 8b/10b encoding (20% overhead) of PCIe Gen2, improving link efficiency by roughly 23%.
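
As a simple illustration of how lane count and encoding overhead combine, here is a small sketch; the per-lane rates and encodings are the ones discussed above, and the helper name is just for illustration.

```python
# Usable bandwidth = lanes x raw per-lane rate x encoding efficiency.
# PCIe Gen2 uses 8b/10b encoding; PCIe Gen3 uses 128b/130b.

def usable_gbps(lanes: int, lane_rate_gbps: float, data_bits: int, symbol_bits: int) -> float:
    """Bandwidth left over after transfer-encoding overhead, in Gb/s."""
    return lanes * lane_rate_gbps * data_bits / symbol_bits

gen2_lane = usable_gbps(1, 5.0, 8, 10)      # 4.00 Gb/s usable per Gen2 lane
gen3_lane = usable_gbps(1, 8.0, 128, 130)   # ~7.88 Gb/s usable per Gen3 lane

efficiency_gain = (128 / 130) / (8 / 10) - 1
print(f"Gen3 vs Gen2 encoding efficiency gain: {efficiency_gain:.0%}")   # ~23%
print(f"Usable per lane: Gen2 {gen2_lane:.2f} Gb/s, Gen3 {gen3_lane:.2f} Gb/s")
```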

What is defined in the PCIe 2.0 and 3.0 specifications, and do end users really care?

Jeremy: Although each PCIe Gen3 lane is faster than PCIe Gen2 (8Gb/s vs 5Gb/s, respectively), lanes can be combined to boost performance in both versions. The changes most relevant to consumers pertain to higher speeds. For example, today consumer SSDs top out at 150K random read IOPS at 4KB data transfer sizes. That translates to about 600MB/s, which is insufficient to saturate a PCIe Gen2 x2 link, so consumers would see little benefit from a PCIe Gen3 solution over PCIe Gen2. The maximum performance of PCIe Gen2 x4 and PCIe Gen3 x2 devices is almost identical because of the different transfer encoding schemes mentioned previously.
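
Here is a quick sketch of the arithmetic behind that example, using the IOPS and transfer-size figures above; the efficiency factors are the encoding rates from the previous answer.

```python
# 150K random read IOPS at 4KB each, compared against usable PCIe link bandwidth.

iops = 150_000
transfer_kb = 4
ssd_mb_per_s = iops * transfer_kb / 1000       # 600 MB/s

def link_mb_per_s(lanes: int, lane_rate_gbps: float, efficiency: float) -> float:
    """Usable link bandwidth in MB/s (Gb/s -> MB/s: multiply by 1000, divide by 8)."""
    return lanes * lane_rate_gbps * efficiency * 1000 / 8

gen2_x2 = link_mb_per_s(2, 5.0, 8 / 10)        # ~1000 MB/s
gen2_x4 = link_mb_per_s(4, 5.0, 8 / 10)        # ~2000 MB/s
gen3_x2 = link_mb_per_s(2, 8.0, 128 / 130)     # ~1970 MB/s -- nearly the same as Gen2 x4

print(f"SSD: {ssd_mb_per_s:.0f} MB/s vs Gen2 x2 link: {gen2_x2:.0f} MB/s")
print(f"Gen2 x4: {gen2_x4:.0f} MB/s, Gen3 x2: {gen3_x2:.0f} MB/s")
```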

Are there mandatory features that must be supported in any of these specifications?

Jeremy: Yes, but nearly all of these features have little impact on performance, so most users have no interest in the specs. It’s important to keep in mind that the PCIe speeds I’ve cited are defined as the maximums, and the spec has no minimum speed requirement. This means a PCIe Gen3 solution might support only a maximum of 5Gb/s, but still be considered a PCIe Gen3 solution if it meets the necessary specifications. So buyers need to be aware of the actual speed rating of any PCIe solution.

Is a PCIe Gen3 SSD faster than a PCIe Gen2 SSD?

Jeremy: Not necessarily. For example, a PCIe Gen2 x4 SSD is capable of higher speeds than a PCIe Gen3 x1 SSD. However, bottlenecks other than the front-end PCIe interface will limit the performance of many SSDs. Examples of other choke points include the bandwidth of the flash, the processing/throughput of the controller, the power or thermal limitations of the drive and its environment, and the ability to remove heat from that environment. All of these factors can, and typically do, prevent the interface from reaching its full steady-state performance potential.

In what form factors are PCIe cards available?

Jeremy: PCIe cards are typically plug-in products, much like SSDs, graphics cards and host bus adapters. PCIe SSDs come in many form factors, with the most popular called “half-height, half-length.” But the popularity of the new, tiny M.2 form factor is growing, driven by rising demand for smaller consumer computers. Other PCIe form factors resemble traditional hard disk drives, such as SFF-8639, a 2.5” form factor that provides four PCIe lanes and is hot pluggable. What’s more, its socket is also compatible with the SAS and SATA interfaces. Adoption of the SATA Express 2.5” form factor has been limited, but it could be given a boost by new capabilities like SRIS (Separate Refclk Independent SSC), which enables the use of lower-cost interconnect cables between the device and host.

Are all M.2 cards the same?

Jeremy: No. All SSD M.2 cards are 22mm wide (some WAN cards are 30mm wide), but the specification allows for different lengths (30, 42, 60, 80 and 110mm). What’s more, the cards can be single- or double-sided to account for differences in product thickness. They are also compatible with two different sockets (socket 2 and socket 3). SSDs compatible with both socket types, or only socket 2, can connect up to two PCIe lanes (x2), while SSDs compatible with only socket 3 can connect up to four (x4).
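
As a quick reference, here is a small, purely illustrative snippet (not from any spec or tool; the names are my own) that decodes the common four-digit M.2 size designation and captures the socket-to-lane rules Jeremy describes.

```python
# Illustrative only: decode an M.2 SSD size designation (width + length in mm)
# and look up the maximum PCIe lane count per socket type, per the rules above.

M2_SSD_WIDTH_MM = 22
M2_LENGTHS_MM = (30, 42, 60, 80, 110)
SOCKET_MAX_PCIE_LANES = {2: 2, 3: 4}      # socket 2 -> x2, socket 3 -> x4

def decode_m2_size(designation: str) -> tuple[int, int]:
    """'2280' -> (22, 80), i.e. 22mm wide by 80mm long."""
    width, length = int(designation[:2]), int(designation[2:])
    if width != M2_SSD_WIDTH_MM or length not in M2_LENGTHS_MM:
        raise ValueError("not a standard M.2 SSD size")
    return width, length

print(decode_m2_size("2280"))             # (22, 80)
print(SOCKET_MAX_PCIE_LANES[3], "lanes")  # a socket 3 slot can offer up to 4 lanes
```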

Summary
In my last few blogs, I covered various aspects of SSD form factors and included many images of the types that Jeremy mentioned above. I also delve deeper into details of the M.2 form factor in my blog “M.2: Is this the Prince of SSD form factors?” One thing about PCIe is certain: It is the next step in the evolution of computer interfaces and will give rise to more SSDs with higher performance, lower power consumption and better reliability.

 



Solid state drive (SSD) makers have introduced many new layout form factors that are not possible with hard disk drives (HDDs). My blog Size matters: Everything you need to know about SSD form factors talks about the many current SSD form factors, but I gave the new M.2 form factor only a glimpse. The specification and its history merit a deeper look.

The history
A few years ago the PCI Special Interest Group (PCI-SIG), teaming with the Serial ATA International Organization (SATA-IO), started to develop a new form factor standard to replace Mini-PCIe and mSATA, since specifications from both of these organizations are required to build SATA M.2 SSDs. The new layout and connector would be used for applications including WiFi, WWAN, USB, PCIe and SATA, with SSD implementations using either the PCIe or SATA interface. The groups set out to create a narrower connector that supports higher data rates, a lower profile and boards of varying lengths to accommodate very small notebook computers.

This new form factor also aimed to support micro servers and similar high-density systems by enabling the deployment of dozens of M.2 boards. Unique notches in the edge connector known as “keys” would be used to differentiate the wide array of products using the M.2 connector and prevent the insertion of incompatible products into the wrong socket.

The name change
Initially the M.2 form factor was called the Next Generation Form Factor, or NGFF for short. NGFF was designed to follow the dimensional specifications of M.2, a separate specification that the PCI-SIG was defining at the time. Soon after NGFF was announced, confusion reigned between the two identical form factors, prompting the renaming of NGFF to M.2. Many people in the industry have been slow to adopt the new M.2 name, and you often see articles that describe these solutions as “M.2, formerly known as NGFF.”

The keys
In the world of connectors or sockets, a “key” prevents the insertion of a connector into an incompatible socket to ensure the proper mating of connectors and sockets. The M.2 specification has defined 11 key configurations, seven for use sometime in the future. A socket can only have one key, but the plug-in cards can have keyways cut for multiple keys if they support those socket types. Of the four defined keys available for current use, two support SSDs. Key ID B (pins 12-19) gives PCIe SSDs up to two lanes of connectivity and key ID M (pins 59-66) provides PCIe SSDs with up to four lanes of connectivity. Both can accommodate SATA devices. All of the key patterns are uniquely configured so that the card cannot be flipped over and inserted incorrectly.

Unfortunately these keys alone do not tell the user enough about an SSD to help in selecting a replacement or upgrade drive. For example, a computer with an M.2 socket that supports PCIe x2 features a B key, so no M.2 board that requires PCIe x4 (M key) can fit. However, even though a SATA M.2 card with a B key will physically fit in the same socket, it will not work if the motherboard routes only PCIe signals, not SATA, to that socket. Given this signal incompatibility, users need to carefully check the socket specifications, either printed on the motherboard or listed in the system configuration information, to see whether the socket supports PCIe or SATA.
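
The sketch below is a hypothetical helper, not a real tool, that restates the point: a physical fit is not enough, because both the key and the interface signals wired to the socket must match the card. The key-to-pin and key-to-lane values come from the description above.

```python
# Illustrative sketch (hypothetical helper): a physical fit is not enough --
# the key must match AND the socket must carry the interface the card speaks.

SSD_KEY_INFO = {            # key ID -> (pin positions, max PCIe lanes), per the text
    "B": ((12, 19), 2),
    "M": ((59, 66), 4),
}

def check_card(card_keys: set, card_interface: str, socket_key: str, socket_interface: str):
    """Return (fits_physically, actually_works)."""
    fits = socket_key in card_keys                        # keying check
    works = fits and card_interface == socket_interface   # signal check (SATA vs PCIe)
    return fits, works

# A B-keyed SATA M.2 SSD in a B-keyed socket wired only for PCIe: fits, but won't work.
print(check_card({"B"}, "SATA", "B", "PCIe"))             # (True, False)
```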

The profile and lengths
Pin spacing on the M.2 card connector is denser than in prior connector specifications, enabling a narrower board and, in turn, thinner and lighter mobile computing systems. What’s more, the M.2 specification defines module types with components populating only one side of the board, leaving enough space between the main system board and the module for other components. The number of flash chips an SSD uses varies with its storage capacity: the lower the capacity requirement, the shorter the module can be, leaving system manufacturers more space for other components.

It’s all in the name
When I hear people call this specification “M.2, formerly known as NGFF,” I cannot help but think about the time when the rock artist Prince changed his name to an unpronounceable symbol and everyone was stuck calling him The Artist Formerly Known as Prince. In his case, I believe he was going for the publicity the confusion would generate.

As for the renaming of NGFF to M.2, I really don’t think that was the goal. In fact I believe it was intended to simplify brand identity by eliminating a second name for the same specification. No matter what we call this new form factor, it appears destined to thrive in both the mobile computing and high-density server markets.



Most users have no idea that reading electronic information from a data storage medium like a hard disk drive (HDD) or solid state drive (SSD) is plagued with read errors. For this reason, error correction codes (ECC) are used to fix the random bit errors that arise during the reading process before incorrect data is returned to the user. But error correction codes can only handle so many errors at one time. If data errors exceed the ECC limits, the data goes uncorrected and, without some form of backup, is lost forever. More recent ECC algorithms like the LSI SHIELD error correction technology go a lot further to protect user data than prior solutions.

What happens to the data when the ECC fails?
If the ECC fails, only a backup protection mechanism can recover the data. There are three alternatives. First, users should always back up their critical data, since ECC failure is just one of many threats that can destroy data or render it inaccessible. Others include natural disasters (earthquakes, tornadoes, flooding and so on) that cause heavy damage to buildings and their contents, lightning strikes that can burn out a computer without adequate electrical protection, and of course computer theft. Any backup system should be automated, or at least run consistently if it is manual. Industry reports cite that fewer than 10% of computer users back up their data. That is not very comforting.

The second solution is to employ a RAID (Redundant Array of Independent Disks) array that uses multiple storage devices, with one or more of the drives providing parity information for redundancy. That way, if one drive fails, the remaining drives hold enough parity information to restore the original data. This type of system is very common in enterprise environments – think work computers – but rarely used in home systems or laptop PCs.
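
To make the parity idea concrete, here is a minimal sketch of single-parity (XOR) redundancy, the principle behind common RAID levels; it is an illustration of the concept, not an implementation of any particular RAID product.

```python
# Single-parity redundancy: parity = XOR of all data blocks.
# If any one data block is lost, XOR-ing the survivors with the parity rebuilds it.

def xor_blocks(*blocks: bytes) -> bytes:
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

drive0, drive1, drive2 = b"AAAA", b"BBBB", b"CCCC"   # data striped across three drives
parity = xor_blocks(drive0, drive1, drive2)          # stored on the redundant drive

rebuilt_drive1 = xor_blocks(drive0, drive2, parity)  # drive1 failed; rebuild it
assert rebuilt_drive1 == drive1
print("drive1 recovered from parity")
```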

Is the third solution simple, automatic, and operable in a single-drive environment?
Yes. Yes. And Yes. LSI® SandForce® flash and SSD controllers have a feature called RAISE™ data protection that meets all of these needs. Introduced in 2009 with the first SandForce controller, RAISE technology stands for Redundant Array of Independent Silicon Elements. It sounds like RAID, and acts something like RAID, but protects data using a single drive. With RAISE technology, the individual flash die act like the drives in a RAID array. The original RAISE level 1 technology protects against single page and block failures in the flash. These types of failures are beyond the protection of the ECC, but RAISE technology can recover the data.

With the introduction of the SF3700 this month, RAISE technology now offers more flexibility to deliver greater data protection. With the original RAISE level 1, a full flash die had to be allocated solely to protect user data. In small-capacity configurations, like 64GB, RAISE level 1 consumed too much over provisioning and therefore had to be disabled or, with RAISE left on, the available user capacity reduced to 60GB or 55GB. With a new enhancement in the SF3700, no such tradeoff is necessary. The new Fractional RAISE option for this first level of protection uses only a small portion of a die to protect user data, even in the smallest configurations, while preserving over provisioning (OP). This is important because, as I explained in my blog Gassing up your SSD, the more space you allocate for OP, the lower the write amplification, which translates to higher write performance and longer flash endurance.
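
Here is a rough, illustrative calculation of why that matters, using the capacities cited above and ignoring the binary-versus-decimal gigabyte distinction for simplicity: the more of the raw flash that is held back from the user, the more room there is for RAISE and OP.

```python
# Illustrative only (binary vs decimal GB ignored): shrinking the advertised user
# capacity frees raw flash for RAISE and over provisioning (OP), and more OP means
# lower write amplification.

def reserved_percent(raw_gb: float, user_gb: float) -> float:
    """Raw flash held back from the user, as a percentage of user capacity."""
    return (raw_gb - user_gb) / user_gb * 100

raw_flash_gb = 64
for user_gb in (64, 60, 55):
    print(f"{user_gb}GB user capacity -> {reserved_percent(raw_flash_gb, user_gb):.1f}% "
          "held back for RAISE + OP")
```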

Stronger data protection with RAISE level 2
A new RAISE level 2 capability offers even stronger data protection, safeguarding against multiple simultaneous page and block failures as well as a full die failure. If a die fails, the SandForce controller recovers the user data. RAISE level 2 includes an Auto-Reallocation feature that can be set up to automatically redistribute and protect user data in the event of a subsequent die failure. Because maintaining protection against a second die failure would reduce the available OP area, RAISE level 2 can instead be configured to simply drop back to RAISE level 1 protection without sacrificing any OP space.

Another new capability is an additional (9th) flash channel that lets the manufacturer populate an extra flash package with one die, providing full RAISE level 1 protection while maintaining maximum user data capacity such as 64GB, 128GB or 256GB. Without the 9th channel option, the SSD would have to sacrifice a few GB of capacity (reducing available user capacity to 60GB, 120GB and 240GB, respectively) because RAISE requires extra storage space.

Although all these new features cannot protect against a would-be thief or catastrophic drive failures from electrical surges or natural disasters, the probability of those events is much lower than that of a simple ECC failure. That’s why you are best served by an SSD with RAISE technology to automatically protect against the more common ECC failures, plus at least a periodic backup of your system to protect your data against those far more serious events.



Each new generation of NAND flash memory reduces the fabrication geometry – the dimension of the smallest part of an integrated circuit used to build up the components inside the chip. That means there are fewer electrons storing the data, leading to increased errors and a shorter life for the flash memory. No need to worry. Today’s flash memory depends upon the intelligence and capabilities of the solid state drive (SSD) controller to help keep errors in check and get the longest life possible from flash memory, making it usable in compute environments like laptop computers and enterprise datacenters.

Today’s volume NAND flash memory uses 20nm and 19nm manufacturing processes, but the next generation will be in the 16nm range. Some experts speculate that today’s controllers will struggle to work with this next generation of flash memory and support the high number of write cycles required in datacenters. Also, the current multi-level cell (MLC) flash memory is transitioning to triple-level cell (TLC), which has an even shorter life expectancy and higher error rates.

Can sub-20nm flash survive in the datacenter?
Yes, but it will take a flash memory controller with smarts the industry has never seen before. How intelligent? The controller will need to stretch the life of sub-20nm flash beyond the flash manufacturer’s specifications and correct far more errors than ever before, while still maintaining high throughput and very low latency. And to protect against occasional error correction failures, the SSD will need some kind of internal redundancy (backup) of the data.

When will such a controller materialize?
Now.

LSI this week introduced the third generation of its flagship SSD and flash memory controller, called the SandForce SF3700. The controller is newly engineered and architected to solve the lifespan, performance, and reliability challenges of deploying sub-20nm flash memory in today’s performance-hungry enterprise datacenters. The SandForce SF3700 also enables longer periods between battery recharges for power-sipping client laptop and ultrabook systems. It all happens in a single ASIC package. The SandForce SF3700 is the first SSD controller to include both PCIe and SATA host interfaces natively in one chip to give customers of SSD manufacturers an easy migration path as more of them move to the faster PCIe host interface.

How does the SandForce SF3700 controller make sub-20nm flash excel in the datacenter?
Our new controller builds on the award-winning capabilities of the current SandForce SSD and flash controllers. We’ve refined our DuraWrite™ data reduction technology to streamline the way it picks blocks, collects garbage and reduces the write count. You’ll like the result: longer flash endurance and higher read and write speeds.

The SandForce SF3700 includes SHIELD™ error correction, which applies LDPC and DSP technology in unique ways to correct the higher error rates of new generations of flash memory. SHIELD technology uses a multi-level error correction schema to optimize the time it takes to return correct data. Also, with its exclusive Adaptive Code Rate feature, SHIELD leverages DuraWrite technology’s ability to span the internal NAND flash boundary between the user data space and the flash manufacturer’s dedicated ECC field. Other controllers use only one ECC code rate for the flash memory – the single largest size, designed to support the flash at the end of its life. Early in the flash’s life a much smaller ECC is required, so SHIELD technology scales down the ECC accordingly, diverting the remaining free space to additional over provisioning. As the flash ages, SHIELD gradually increases the ECC size to correct the growing number of failures, but does not use the largest ECC size until the flash is nearly at the end of its life.
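
The sketch below is a purely conceptual illustration of the Adaptive Code Rate idea as described above, not LSI’s actual algorithm: start with a small ECC field while the flash is young, grow it as errors rise, and return the unused portion of the manufacturer’s ECC area as extra over provisioning. All thresholds and sizes are made up for illustration.

```python
# Conceptual illustration only -- not LSI's algorithm. Pick an ECC size tier from the
# observed raw bit error rate (RBER) and divert the unused ECC area to extra OP.

ECC_AREA_BYTES = 128   # assumed per-page ECC field reserved by the flash manufacturer

def ecc_bytes_needed(rber: float) -> int:
    if rber < 1e-4:
        return 40                  # young flash: a small code is enough
    if rber < 1e-3:
        return 80                  # mid-life: medium code
    return ECC_AREA_BYTES          # near end of life: the full, largest code

for rber in (1e-5, 5e-4, 5e-3):    # flash aging over time
    ecc = ecc_bytes_needed(rber)
    spare = ECC_AREA_BYTES - ecc
    print(f"RBER {rber:.0e}: {ecc}B ECC, {spare}B returned as extra over provisioning")
```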

Why is this good? Greater over provisioning over the life of the SSD improves performance and increases endurance. SHIELD also allows the ECC field to grow even larger after the flash reaches its specified end of life. The big takeaway: all of these SHIELD capabilities extend flash write endurance many times beyond the manufacturer’s specification. In fact, at the 2013 Flash Memory Summit in Santa Clara, CA, SHIELD was shown to extend the endurance of a particular Micron NAND flash by nearly six times.

That’s not all. The SandForce SF3700 controller’s RAISE™ data reliability feature now offers stronger protection, including protection against a full die failure and more options for protecting data on SSDs with low capacities (e.g., 32GB and 64GB) and binary capacities (e.g., 256GB rather than 240GB).

So what about end user systems?
The beauty of all SandForce flash and SSD controllers is their onboard firmware, which takes the one common hardware component – the ASIC – and adapts it to the user’s storage environment. For example, in client applications the firmware helps the controller conserve power so that users of laptop and ultrabook systems can remain unplugged longer between battery recharges. In contrast, enterprise environments require the highest possible performance and lowest latency. That higher performance draws more power, a tradeoff the enterprise is willing to make for the fastest time-to-data. The firmware makes other similar tradeoffs based on which storage environment it is serving.

Although most people consider enterprise and client storage needs to be very different, we think the new SandForce SF3700 flash and SSD controller delivers a balance of power and performance that any user hanging ten can appreciate.
