A customer recently asked me if the SF3700, our latest flash controller, supports SATA Express and fired away with a bunch of other questions about the standard. The depth of his curiosity suggested a broader need for education on the basics of the standard.

To help me with the following overview of SATA Express, I recruited Sumit Puri, Sr. Director of Strategic Marketing for the Flash Components Division at LSI (SandForce). Sumit is a longtime contributor to many storage standards bodies and has been working with SATA-IO – the group responsible for SATA Express – for many years. He has first-hand knowledge of SATA-IO’s work.

Here are his insights into some of the fundamentals of SATA Express.

What is SATA Express?
Sumit: There’s quite a bit of confusion in the industry about what SATA Express defines. In simple terms, SATA Express is a specification for a new connector type that enables the routing of both PCIe® and SATA signals. SATA Express is not a command or signaling protocol. It should really be thought of as a connector that mates with legacy SATA cables and new PCIe cables.

Sumit Puri 

Why was SATA Express created?
Sumit: SATA Express was developed to help smooth the transition from the legacy SATA interface to the new PCIe interface. SATA Express gives system vendors a common connector that supports both traditional SATA and PCIe signaling and helps OEMs streamline connector inventory and reduce related costs.

What is the protocol used in SATA Express?
Sumit: One of the misconceptions about SATA Express is that it’s a protocol specification. Rather, as I mentioned, it’s a mechanical specification for a connector and the matching cabling. Protocols that run over SATA Express include SATA, AHCI and NVMe.

What are the form factors for SATA Express?
Sumit: SATA Express defines connectors for both a 2.5” drive and the host system. SATA Express connects the drive and system using SATA cables or the newly defined PCIe cables.

What connector configuration is used for SATA Express?
Sumit: Because SATA Express supports both SATA and PCIe signaling as well as the legacy SATA connectors, there are multiple configuration options available to motherboard and device manufacturers. The image below shows plug (a), which is built for attaching to a PCIe device. Socket (b) would be part of a cable assembly for receiving plug (a) or a standard SATA plug, and socket (c) would mount to a backplane or motherboard for receiving plug (a) or a standard SATA plug. The last two connectors are a mating pair designed to enable cabling (e) to connect to motherboards (d).

When will hosts begin supporting SATA Express?

Sumit: We expect systems to begin using SATA Express connectors early this year. They will primarily be deployed in desktop environments, which require cabling. In contrast, we expect limited use of SATA Express in notebook and other portable systems that are moving to cableless card-edge connector designs like the recently minted M.2 form factor. We also expect to see scant use of SATA Express in enterprise backplanes. Enterprise customers will likely transition to other connectors that support higher speed PCIe signaling like the SFF-8639, a new connector that was originally included in the SATA Express specification but has since been removed.

Will LSI support SATA Express?
Sumit: Absolutely. Our SF3700 flash controller will be fully compatible with the newly defined SATA Express connector and support either SATA or PCIe. Our current SF-2000 SATA flash controllers support SATA cabling used on SATA Express, but not PCIe.

Will LSI also support SRIS?
Sumit: PCIe devices enabled with SRIS (Separate Refclk Independent SSC) can self-clock, so they need no reference clock from the host, allowing system builders to use lower-cost PCIe cables. SRIS is an important cost-saving feature for cabling that carries PCIe signaling; it does not apply to card-edge connector designs. Today the SF3700 supports PCIe connectivity, and LSI will support SRIS in future releases of the SF3700 and other products.

Why is it called SATA Express?
Sumit: SATA Express blends the names of the two interfaces and captures the hybridization of the physical interconnects. The name reflects the ability of legacy SATA connectors to support higher PCIe data rates, simplifying the transition to PCIe devices. SATA Express can pull double duty, supporting both PCIe and SATA signaling in the same motherboard socket. The same SATA Express socket accepts both traditional SATA and new PCIe cables and links to either a legacy SATA or SATA Express device connector.

How fast can SATA Express run?
Sumit: The PCIe interface defines the top SATA Express speed. A PCIe Gen2 x2 device supports up to 900 MB/s of throughput, and a PCIe Gen3 x2 device up to 1800 MB/s – both significantly higher than the roughly 550 MB/s speed ceiling of today’s SATA devices.
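For those who like to check the math, here is a rough back-of-the-envelope sketch of where those throughput ceilings come from; the protocol-overhead factors are assumptions chosen for illustration, not figures from any specification.

```python
# Rough PCIe/SATA bandwidth math behind the figures quoted above. The protocol
# overhead factors are assumptions used only to land near the commonly cited numbers.

def usable_mb_s(transfer_gt_s, encoding_efficiency, lanes, protocol_overhead):
    """Approximate one-direction usable bandwidth in MB/s for a serial link."""
    bytes_per_lane = transfer_gt_s * 1e9 * encoding_efficiency / 8   # bits -> bytes
    return bytes_per_lane * lanes * (1 - protocol_overhead) / 1e6

print(usable_mb_s(5, 8/10, lanes=2, protocol_overhead=0.10))    # PCIe Gen2 x2 ~ 900 MB/s
print(usable_mb_s(8, 128/130, lanes=2, protocol_overhead=0.10)) # PCIe Gen3 x2 ~ 1770 MB/s
print(usable_mb_s(6, 8/10, lanes=1, protocol_overhead=0.08))    # SATA 6Gb/s   ~ 550 MB/s
```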

Is SATA Express similar to M.2?
Sumit: There are two key similarities. Both support SATA and PCIe on the same host connector, and both are designed to help transition from SATA to PCIe over time.

SATA Express delivers the future of connector speeds today
SATA Express was born of the stuff of all great inventions: necessity. The challenge SATA-IO faced in doubling SATA 6Gb/s speeds was herculean. The undertaking would have taken too long to deliver the next-generation connection speeds that PCIe already answers, and it would have required an overhaul of the SATA standard. Even in the brightest scenario, the effort would have produced a power guzzler at a time when greater power efficiency is a must for system builders. SATA-IO found a better path, an elegant bridge to PCIe speeds in the form of SATA Express.



Solid state drive (SSD) makers have introduced many new layout form factors that are not possible with hard disk drives (HDDs). My blog Size matters: Everything you need to know about SSD form factors talks about the many current SSD form factors, but I gave the new M.2 form factor only a glimpse. The specification and its history merit a deeper look.

The history
A few years ago the PCI Special Interest Group (PCI-SIG), teaming with The Serial ATA International Organization (SATA-IO), started to develop a new form factor standard to replace Mini-PCIe and mSATA since specifications from both of these organizations are required to build SATA M.2 SSDs.  The new layout and connector would be used for applications including WiFi, WWAN, USB, PCIe and SATA, with SSD implementations using either PCIe or SATA interfaces. The groups set out to create a narrower connector that supports higher data rates, a lower profile and boards of varying lengths to accommodate various very small notebook computers.

This new form factor also aimed to support micro servers and similar high-density systems by enabling the deployment of dozens of M.2 boards. Unique notches in the edge connector known as “keys” would be used to differentiate the wide array of products using the M.2 connector and prevent the insertion of incompatible products into the wrong socket.

The name change
Initially the M.2 form factor was called Next Generation Form Factor, or NGFF for short. NGFF was designed to follow the dimensional specification that the PCI-SIG was defining at the time under the separate name M.2. Soon after NGFF was announced, confusion reigned between the two identical form factors, prompting the renaming of NGFF to M.2. Many people in the industry have been slow to adopt the new M.2 name, and you often see articles that describe these solutions as M.2, formerly known as NGFF.

The keys
In the world of connectors or sockets, a “key” prevents the insertion of a connector into an incompatible socket to ensure the proper mating of connectors and sockets. The M.2 specification has defined 11 key configurations, seven for use sometime in the future. A socket can only have one key, but the plug-in cards can have keyways cut for multiple keys if they support those socket types. Of the four defined keys available for current use, two support SSDs. Key ID B (pins 12-19) gives PCIe SSDs up to two lanes of connectivity and key ID M (pins 59-66) provides PCIe SSDs with up to four lanes of connectivity. Both can accommodate SATA devices. All of the key patterns are uniquely configured so that the card cannot be flipped over and inserted incorrectly.

Unfortunately these keys alone do not tell the user enough about an SSD to help in selecting replacement or upgrade drives. For example, a computer with an M.2 socket wired for PCIe x2 features a B key, so no M.2 board with a PCIe x4 requirement (M key) can fit. However, a SATA M.2 card with a B key will also fit in that same socket, even though the host will never see SATA signals on the motherboard’s PCIe-only socket. Because of this signal incompatibility, users need to carefully read the socket specifications, either printed on the motherboard or included in the system configuration information, to see whether the socket is PCIe or SATA.
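To make the keying rules concrete, here is a minimal sketch; the dictionaries and helper functions are hypothetical and only illustrate why a mechanical fit does not guarantee a working drive.

```python
# A minimal sketch of the M.2 keying rules described above. The data structures and
# helper names are hypothetical, used only to show why a card that fits may still
# not work electrically.

SOCKET_KEYS = {
    "B": {"pins": (12, 19), "interfaces": {"SATA", "PCIe x2"}},
    "M": {"pins": (59, 66), "interfaces": {"SATA", "PCIe x4"}},
}

def fits(card_keys, socket_key):
    """A card fits mechanically if one of its notches matches the socket's single key."""
    return socket_key in card_keys

def works(card_keys, card_interface, socket_key, socket_interface):
    """The card only works if it also speaks the interface the socket is wired for."""
    return fits(card_keys, socket_key) and card_interface == socket_interface

# A SATA M.2 card with a B-key notch fits a B-keyed socket wired for PCIe x2,
# but the host will never recognize it:
print(fits({"B"}, "B"))                        # True  (mechanical fit)
print(works({"B"}, "SATA", "B", "PCIe x2"))    # False (signal mismatch)
```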

The profile and lengths
Pin spacing on the M.2 card connector is denser than in prior connector specifications, enabling a narrower board and thinner, lighter mobile computing systems. What’s more, the M.2 specification defines a module with components populating only one side of the board, leaving enough space between the main system board and the module for other components. The number of flash chips used by SSDs varies with storage capacity. The lower the capacity requirement of an SSD, the shorter the module can be, leaving system manufacturers more space for other components.

It’s all in the name
When I hear people call this specification by the name M.2, formerly known as NGFF, I cannot help but think about the time when the rock artist Prince changed his name to an unpronounceable symbol and everyone was stuck calling him The Artist Formerly Known as Prince. In his case I believe he was going for the publicity of the confusion.

As for the renaming of NGFF to M.2, I really don’t think that was the goal. In fact I believe it was intended to simplify brand identity by eliminating a second name for the same specification. No matter what we call this new form factor, it appears destined to thrive in both the mobile computing and high-density server markets.



The term ”form factor” is used in the computer industry to describe the shape and size of computer components, like drives, motherboards and power supplies. When hard disk drives (HDDs) initially made their way into microprocessor-based computers, they used magnetic platters up to 8 inches in diameter. Because that was the largest single component inside the HDD, it defined the minimum width of the HDD housing—the metal box around the guts of the drive.

The height was dictated by the number of platters stacked on the motor (about 14 for the largest configurations). Over time the standard platter diameter shrank, which allowed the HDD width to decrease as well. The computer industry used the platter diameter to describe HDD form factors, and those contours shrank over the years. Those 8” HDDs for datacenter storage and desktop PCs shed size to 5.25” and then to today’s 3.5”, and laptop HDDs, starting at 2.5”, are now as small as 1.8”.

What defines an SSD form factor?
When solid state drives (SSDs) first started replacing HDDs, they had to fit into computer chassis or laptop drive bays (mounting location) built for HDDs, so they had to conform to HDD dimensions. The two SSDs shown below are form factor identical twins—without the outer casing—to 1.8” and 2.5” HDDs. The SSDs also use standard SATA connectors, but note that the SATA connector for 1.8” devices is narrower than the 2.5” devices to accommodate the smaller width.

Internal circuit board of 1.8″ and 2.5″ SSD without the case

 

However, there’s no requirement for the SSD to match the shape of a typical HDD form factor. In fact some of the early SSDs slid into the high-speed PCIe slots inside the computer chassis, not into the drive bays. A PCIe® SSD card solution resembles an add-in graphics card and installs the same way in a PCIe slot since the physical interface is PCIe.

SSD manufactured as a PCIe expansion card

 

The largest component of an SSD is a flash memory chip so, depending on how many flash chips are used, manufacturers have virtually limitless options in defining dimensions. JEDEC (Joint Electron Device Engineering Council) defines technical standards for the electronics industry, including SSD form factors. JEDEC defined the MO-297 standard, which establishes parameters for the layout, connector locations and dimensions of 54mm x 39mm Serial ATA (SATA) SSDs, so they can use the same connector as standard 2.5” HDDs but fit into a much smaller space.

MO-297 with standard width (2.5”) SATA connector

 

The most important element of an SSD form factor is the interface connector, the conduit to the host computer. In the early days of SSDs, that connector was typically the same SATA connector used with HDDs. But over time the width of some SSDs became smaller than the SATA connector itself, driving the need for new connectors.

Standard SATA connector is too large for small form factors

Card edge connectors – the part of a computer board that plugs into a computer – emerged to enable smaller designs and to further reduce manufacturing and component costs by requiring the installation of only a single female socket on the host as a receptor for the edge of the SSD’s printed circuit board. (The original 2.5” and 1.8” SSD SATA connector required both a male and female plastic connector to mate the SSD to the computer).

With standardization of these connectors critical to ensuring interoperability among different manufacturers, a few organizations have defined standards for these new connectors. JEDEC defined the MO-300 (50.8mm x 29.85mm), which uses a mini-SATA (mSATA) connector, the same physical connector as mini PCI Express, although the two are not electrically compatible. SSD manufacturers have used that same mSATA edge connector and board width, but customized the length to accommodate more flash chips for higher capacity SSDs.

MO-300 mSATA (left) and custom-length mSATA (right)

 

In 2012 a new, even smaller form factor was introduced as Next Generation Form Factor (NGFF), but was later renamed to M.2. The M.2 standard defines a long list of optional board sizes, and the connector supports both SATA and PCIe electrical interfaces. The keyways or notches on the connector can help determine the interface and number of PCIe lanes possible to the board. However, that gets into more details than we have space to cover here, so we will save that for a future blog.

M.2 form factor supports PCIe and SATA, with many board size options

Apple® MacBook Air® and some MacBook Pro systems use an SSD with a connector and dimensions that closely resemble those of the M.2 form factor. In fact Apple MacBook systems have used a number of different connectors and interfaces for their SSDs over the years. Apple used a custom connector with SATA signals from 2010 through 2012 and in 2013 switched to a custom connector with PCIe signals.

 

MacBook Air SSDs with custom SATA and PCIe connectors

 

 

In some cases, standard SSD form factor configurations are not an option, so SSD manufacturers have taken it upon themselves to create custom board and interface configurations that meet those less typical needs.

Various custom SSD form factors and connectors

And finally there’s the ubiquitous USB-based connection. While USB flash drives have been around for nearly a decade, many people do not realize the performance of these devices can vary by 10 to 20 times. Typically a USB flash drive is used to make data portable—replacing the old floppy disk. In those cases the speed of the device is not critical since it is used infrequently.

Now with the high-speed USB 3 interface, a SATA-to-USB 3 bridge chip, and a high-performance flash controller like the LSI® SandForce® controller, these external devices can operate as primary system SSDs, performing as fast as a standard SSD inside the system. The primary advantages of these SSDs are removability and transportability combined with high-speed operation.

USB host interface containing high-speed SSDs

If there’s one constant in life, it’s demand for ever smaller storage form factors that prompt changes in circuit layout, connector position and, of course, dimensions. New connectors proposed for future generations of storage devices like the SFF-8639 specification will enable multiple interfaces and data path channels on the same connector. While the SFF-8639 does not technically define the device to which it connects, the connector itself is rather large, so the form factor of the SSD will need to be big enough to hold the connector. That’s why the primary SFF-8639 market is datacenters that use back-plane connectors and racks of storage devices. A similar connector – like SFF-8639, very large and built to support multiple data paths – is the SATA Express connector. I will save the details of that connector for an upcoming blog.

The sky’s the limit for SSD shapes and sizes. Without a spinning platter inside a box, designers can let their imaginations run wild. Creative people in the industry will continue to find new applications for SSDs that were previously restricted by the internal components of HDDs. That creativity and flexibility will take on growing importance as we continue to press datacenters and consumer electronics to do more with less, reminding us that size does in fact matter.



In the spirit of Christmas and the holidays, I thought it would be appropriate for a slight change from the usual blog entry today. In this case you need to think about the classic song “The Twelve Days of Christmas.” I promise not to dive into the origins of the song and the meaning behind it, but I will provide an alternate version of the words to think about when you are with your family singing Christmas carols.

Rather than write out all the 12 separate choruses, I decided to start with the final chorus on the 12th day.

Sung to the tune of “The 12 Days of Christmas”

On the 12th day of Christmas my true love gave to me

Twelve second boot times,

Eleven data reduction benefits,

Ten nm-class flash,

Nine flash channels,

Eight flash chips,

Seven percent over provisioning,

Six gigabits per second,

Five golden edge connectors,

Four PCIe lanes,

Third generation controller,

Two interfaces in one package,

And a SandForce Driven SSD.

Maybe if you were good this year, Santa will bring you a SandForce Driven SSD to revitalize your computer too! Merry Christmas and happy holidays to all!



Most users have no idea that reading electronic information from a data storage medium like a hard disk drive (HDD) or solid state drive (SSD) is plagued with read errors. For this reason error correction codes (ECC) are used to fix the random bit errors that arise during the reading process before the incorrect data is returned to the user. But the error correction codes can only handle so many errors at one time. If data errors exceed the ECC limits, the data goes uncorrected and is lost forever.  More recent ECC algorithms like the LSI SHIELD error correction technology go a lot farther to protect user data than prior solutions.
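To see what “handle only so many errors” means in practice, here is a purely illustrative Hamming(7,4) code. It is not the ECC used inside any SSD controller (those use far stronger BCH or LDPC codes over much larger blocks), but the failure mode is the same: one flipped bit is corrected, two flipped bits exceed the code’s limit.

```python
# A minimal, purely illustrative Hamming(7,4) code: it corrects any single flipped
# bit in a 7-bit codeword, but two flipped bits exceed its limit and the "corrected"
# data comes back wrong -- the same kind of limit (at much larger scale) that SSD ECC
# runs into. This is not the ECC used in any SandForce controller.

def encode(d):                       # d = [d1, d2, d3, d4]
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p4 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p4, d[1], d[2], d[3]]   # codeword positions 1..7

def decode(c):
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # checks positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # checks positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # checks positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s3  # 0 = no error, else 1-based error position
    if syndrome:
        c[syndrome - 1] ^= 1         # flip the suspect bit
    return [c[2], c[4], c[5], c[6]]  # recovered data bits

data = [1, 0, 1, 1]
cw = encode(data)
cw[2] ^= 1                           # one bit error: within the code's limit
print(decode(list(cw)) == data)      # True
cw[5] ^= 1                           # a second error: beyond the code's limit
print(decode(list(cw)) == data)      # False -- the data is silently wrong
```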

What happens to the data when the ECC fails?
If the ECC fails, only a backup protection mechanism will recover the data. There are three alternatives. First, users should always back up their critical data, since beyond ECC failure other threats can destroy data or render it inaccessible: natural disasters (earthquakes, tornadoes, flooding, etc.) that cause heavy damage to buildings and their contents, lightning strikes that can burn up a computer without adequate electrical protection, and of course computer theft. Any backup system should be either automated or, if manual, at least run consistently. Industry reports cite that less than 10% of computer users back up their data. That is not very comforting.

The second solution is to employ a RAID (Redundant Array of Independent Disks) array that uses multiple storage devices, with one or more of the drives providing parity information for redundancy. That way, if one drive fails, the remaining drives hold enough parity information to restore the original data. This type of system is very common in enterprise environments but hardly used in home systems or laptop PCs.
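Here is a tiny sketch of the XOR parity math that makes that recovery possible; real RAID implementations stripe and rotate parity across drives, so this shows only the core idea.

```python
# A minimal sketch of how XOR parity (the idea behind RAID 5 and, analogously, RAISE)
# rebuilds data when one member fails. Real arrays stripe data and rotate parity;
# this just demonstrates the recovery math on three data "drives" plus one parity "drive".

drives = [b"\x11\x22", b"\x33\x44", b"\x55\x66"]         # user data
parity = bytes(a ^ b ^ c for a, b, c in zip(*drives))    # parity drive

# Drive 1 "fails"; XOR the survivors with the parity to rebuild its contents.
survivors = [drives[0], drives[2], parity]
rebuilt = bytes(a ^ b ^ c for a, b, c in zip(*survivors))
print(rebuilt == drives[1])                               # True
```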

Is the third solution simple, automatic, and operable in a single-drive environment?
Yes. Yes. And Yes. LSI® SandForce® flash and SSD controllers have a feature called RAISE™ data protection that meets all of these needs. Introduced in 2009 with the first SandForce controller, RAISE technology stands for Redundant Array of Independent Silicon Elements. It sounds like RAID, and acts something like RAID, but protects data using a single drive. With RAISE technology, the individual flash die act like the drives in a RAID array. The original RAISE level 1 technology protects against single page and block failures in the flash. These types of failures are beyond the protection of the ECC, but RAISE technology can recover the data.

With the introduction of the SF3700 this month, RAISE technology now offers more flexibility to deliver greater data protection. With the original RAISE level 1, the space of a full flash die had to be allocated solely to protect user data. In small-capacity configurations, like 64GB, RAISE level 1 required too much over provisioning and therefore had to be disabled or, with RAISE left on, the available user capacity reduced to 60GB or 55GB. With a new enhancement in the SF3700, no such tradeoff is necessary. The new Fractional RAISE option for this first level of protection uses only a small portion of a die to protect user data in even the smallest configurations and preserves over provisioning (OP). This is important because, as I explained in my blog titled Gassing up your SSD, the more space you allocate for OP, the lower the write amplification, which translates to higher performance during writes and longer endurance of the flash memory.
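A quick, illustrative calculation shows where those 64GB/60GB/55GB capacity points come from: raw NAND is sized in binary gigabytes (GiB) while user capacity is quoted in decimal GB, and over provisioning is the difference between the two.

```python
# A rough sketch of the over-provisioning (OP) math behind these capacity points.
# The standard formula is (physical - user) / user; the figures are illustrative.

RAW_GIB = 64
physical_gb = RAW_GIB * (2**30) / 1e9            # 64 GiB of flash is about 68.7 GB

for user_gb in (64, 60, 55):
    op = (physical_gb - user_gb) / user_gb
    print(f"{user_gb} GB user capacity -> {op:.0%} over provisioning")

# 64 GB user capacity -> 7% over provisioning
# 60 GB user capacity -> 15% over provisioning
# 55 GB user capacity -> 25% over provisioning
```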

Stronger data protection with RAISE level 2
A new RAISE level 2 capability offers even stronger data protection, safeguarding against multiple, simultaneous page and block failures, as well as a full die failure. If a die fails, the SandForce controller recovers the user data. RAISE level 2 includes Auto-Reallocation, which can be set up to automatically redistribute and protect user data in the event of a subsequent die failure. Because the option to protect against a second die failure would reduce the available OP area, the RAISE level 2 feature can be set up to simply drop back to RAISE level 1 protection without sacrificing any OP space.

Another new capability is an additional (ninth) flash channel that lets the manufacturer populate an extra flash package with one die, providing full RAISE level 1 protection while maintaining maximum user data capacity such as 64GB, 128GB, 256GB, etc. Without the ninth-channel option, the SSD would be forced to sacrifice a few GB of capacity (reducing available user capacity to 60GB, 120GB and 240GB, respectively) because RAISE requires extra storage space.

Although all these new features cannot protect against the would-be thief or catastrophic drive failures from electrical surges or natural disasters, the probability of those events is much lower than a simple ECC failure. That’s why you would be best served by an SSD with RAISE technology to automatically protect against the more common ECC failures, plus a backup copy of your system made at least periodically to protect your data against those far more serious events.



Last week at LSI’s annual Accelerating Innovation Summit (AIS) the company took the wraps off a vision that should lead its technical direction for the next few years.

The LSI keynote featured a video of three situations as they might evolve in the future:

  • A man falls from a bicycle in a foreign country and needs medical attention
  • A bullet train stops before hitting a tree that fell across its tracks
  • A hacker is prevented from accessing secure information using identity theft

I’ll focus on just one of these to show how LSI expects the future to develop.  In the bicycle accident scenario, a businessman falls to the ground while riding a bicycle in a foreign country.  Security cameras that have been upgraded to understand what they see notify an emergency services agency which sends an ambulance to the scene.  The paramedic performs a retinal scan on the victim, using it to retrieve his medical records, including his DNA sequence, from the web.

The businessman’s wearable body monitoring system also communicates with the paramedic’s instruments to share his vital signs.  All of this information is used by cloud-based computers to determine a course of action which, in the video, requires an injection that has been custom-tuned to the victim’s current situation, his medical history, and his genetic makeup.

That’s a pretty tall order, and it will require several advances in the state of the art, but LSI is using this and other scenarios to work with its clients and translate this vision into the products of the future.

What are the key requirements to make this happen? LSI CEO Abhi Talwalkar told the audience that we need to create a society supported by preventive, predictive and assisted analytics to move in a direction where the general welfare is aided by all that the Internet and advanced computing have to offer. Since data is growing at an exponential rate, he argued that this will require the instant retrieval of interlinked data objects at scale. Everything that is key to solving the task must be immediately available, and must be quickly analyzed to provide a solution to the problem at hand. The key will be the ability to process interlinked pieces of data that have not been previously structured to handle any particular situation.

To achieve this we will need larger-scale computing resources than are currently available, all closely interconnected, all operating at very high speeds. LSI hopes to tap into these needs through its strengths in networking and communications chips, its HDD, server and storage connectivity chips and boards for large-scale data, and its flash controller and PCIe SSD expertise for high performance.

LSI brought to AIS several of the customers and partners it is working with to develop these technologies. Speakers from Intel, Microsoft, IBM, Toshiba, Ericsson and others showed how they are working with LSI’s various technologies to improve the performance of their own systems. On the exhibition floor, booths from LSI and many of its clients demonstrated new technologies that performed everything from high-speed stock market analysis to fast flash management.

It’s pretty exciting to see a company that has a clear vision of its future and is committed to moving its entire ecosystem ahead to make that happen and help companies manage their business more effectively during what LSI calls the “Datacentric Era.” LSI has certainly put a lot of effort into creating a vision and determining where its talents can be brought to bear to improve our lives in the future.



The lifeblood of any online retailer is the speed of its IT infrastructure. Shoppers aren’t infinitely patient. Sluggish infrastructure performance can make shoppers wait precious seconds longer than they can stand, sending them fleeing to other sites for a faster purchase. Our federal government’s halting rollout of the Health Insurance Marketplace website is a glaring example of what can happen when IT infrastructure isn’t solid. A few bad user experiences that go viral can be damaging enough. Tens of thousands can be crippling.  

In hyperscale datacenters, any number of problems including network issues, insufficient scaling and inconsistent management can undermine end users’ experience. But one that hits home for me is the impact of slow storage on the performance of databases, where the data sits. With the database at the heart of all those online transactions, retailers can ill afford to have their tier of database servers operating at anything less than peak performance.

Slow storage undermines database performance
Typically, Web 2.0 and e-commerce companies run relational databases (RDBs) on these massive server-centric infrastructures. (Take a look at my blog last week to get a feel for the size of these hyperscale datacenter infrastructures.) If you are running that many servers to support millions of users, you are likely using some kind of open-source RDB such as MySQL or other variations. Keep in mind that Oracle 11gR2 likely retails around $30K per core while MySQL is free. But the performance of both, and of most other relational databases, suffers immensely when transactions are retrieving data from storage (or disk). You can only throw so much RAM and CPU power at the performance problem … sooner rather than later you have to deal with slow storage.

Almost everyone in the industry – Web 2.0, cloud, hyperscale and other providers of massive database infrastructures – is lining up to solve this problem the best way they can. How? By deploying flash as the sole storage for database servers and applications. But is low-latency flash enough? For sheer performance it beats rotational disk hands down. But … even flash storage has its limitations, most notably when you are trying to drive ultra-low latencies for write IOs. Most IO accesses by RDBs, which do the transactional processing, are a mix of reads and writes to the storage – typically around 70% reads and 30% writes. These are also typically low queue-depth accesses (less than 4). It is those writes that can really slow things down.

PCIe flash reduces write latencies
The good news is that the right PCIe flash technology in the mix can solve the slowdowns. Some interesting PCIe flash technologies designed to tackle this latency problem are on display at AIS this week. DRAM and in particular NVDRAM are being deployed as a tier in front of flash to really tackle those nasty write latencies.

Among other demos, we’re showing how a Nytro™ 6000 series PCIe flash card helps solve MySQL database performance issues. The typical response time for a small data read (this is what the database will see for a database IO) from an HDD is 5ms. Flash-based devices such as the Nytro WarpDrive® card can complete the same read in less than 50μs on average during testing – a roughly two-orders-of-magnitude improvement in response time. This response time translates into getting many more transactions out of the same infrastructure – but with less space (flash is denser) and a lot less power (flash consumes far less power than HDDs).
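A simple back-of-the-envelope model shows why that read-latency gap matters so much for a transactional workload; the write latencies and the single-outstanding-request assumption below are illustrative only.

```python
# A back-of-the-envelope sketch of why storage latency dominates database throughput,
# using the 70/30 read/write mix mentioned above and the ~5 ms HDD vs ~50 us flash
# read figures. The write latencies here are assumptions for illustration only.

def avg_service_time_us(read_us, write_us, read_fraction=0.70):
    return read_fraction * read_us + (1 - read_fraction) * write_us

hdd   = avg_service_time_us(read_us=5000, write_us=5000)   # ~5 ms either way
flash = avg_service_time_us(read_us=50,   write_us=100)    # assumed 100 us writes

# Upper bound on storage-bound IOs per second with one outstanding request:
print(1e6 / hdd,   "IO/s on HDD")     # ~200 IO/s
print(1e6 / flash, "IO/s on flash")   # ~15,000 IO/s
```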

We’re also showing the Nytro 7000 series PCIe flash cards. They reach even lower write latencies than the 6000 series, even at very low queue depths. The 7000 series cards also provide DRAM buffering while maintaining data integrity even in the event of a power loss.

For online retailers and other businesses, higher database speeds mean more than just faster transactions. They can help keep those cash registers ringing.



Try using a sledgehammer to pack 15 pounds of potatoes into a bag with a 5-pound capacity, and what do you end up with? Too much messy and disgusting material crammed into a vessel too small for the job and a lot of sloppy overspill.

Unless you have the right sledgehammer. What does this have to do with computer storage? Plenty. And it all starts with a new data-reshaping capability of LSI® DuraWrite™ technology. Keep the numbers in mind: 15 pounds of data in a 5-pound bag. Or three units of data in a space designed for one. It’s key. More on that later.

What does DuraWrite technology do again?
My blog Write Amplification (Part 2) talks about how DuraWrite data reduction technology can make more space for over provisioning. That translates into faster data transfers, longer flash memory endurance and lower power draw. What it has NOT done is increase the space available for data storage.

Until now.

DuraWrite Virtual Capacity is like the Dr. Who TARDIS
Excuse my Dr. Who phone booth reference, but for those who know the TARDIS, it provides a great analogy. For those unfamiliar with the TARDIS, it is a fictional time machine that looks like a British police call box. It is very small by external appearances, but inside it is vast in its carrying capacity, taking occupants on an odyssey through time.

DuraWrite Virtual Capacity (DVC) is a new feature of our SandForce® SSD controllers, and it’s a bit like the TARDIS. While there is no time travel involved, it does provide a lot more than can be seen. DVC takes advantage of data entropy (randomness) as data is written to the SSD. Some people like to think of it as data compression. Whatever you call it, the end result is the same—less data is written to the flash than what is written from the host. DuraWrite technology alone will increase the over provisioning of an SSD, but DVC increases the user data storage available in an SSD (not the over provisioning).

How much more space can it add?
The efficiency of DVC is inversely related to the entropy. High-entropy data like JPEG, encrypted and similarly compressed files do little to increase data capacity. In contrast, files like Microsoft Office Outlook® PST, Oracle® databases, EXE, and DLL (operating system) files have much lower entropy and can increase usable storage space on the order of two to three times for the same physical flash memory. Yes, I said two to three times. Better still, that translates into a two to three times reduction in the net cost of the flash storage. Again, no typo: two to three times more affordable. Since most enterprise deployments of flash memory are limited by the cost per GB of the flash, this kind of advance has the potential to further accelerate flash memory deployments in the enterprise.
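You can get a feel for the entropy effect with a few lines of Python, using zlib purely as a stand-in; DuraWrite’s hardware data reduction is a different mechanism, and real-world files sit between the two extremes shown here.

```python
# A quick illustration of the entropy effect, using zlib only as a stand-in for
# DuraWrite's (different, hardware-based) data reduction.
import os
import zlib

low_entropy  = b"INSERT INTO orders VALUES (42, 'widget', 19.99);\n" * 1000  # text-like
high_entropy = os.urandom(len(low_entropy))        # stands in for JPEG/encrypted data

for name, data in (("low entropy", low_entropy), ("high entropy", high_entropy)):
    ratio = len(data) / len(zlib.compress(data))
    print(f"{name}: {ratio:.1f}x reduction")
# low entropy: compresses dramatically (repetitive data; real files land nearer 2-3x)
# high entropy: ~1.0x -- already-compressed or encrypted data cannot be reduced
```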

Why hasn’t this been done in the past?
Think TARDIS again. Step into the booth, and take a joy ride through space and time. It happens on-the-fly. Simple … but, only in a fictional world. With on-the-fly data reduction and compression, the process is filled with complexities. The biggest problem is most operating systems don’t understand that the maximum capacity of a primary storage device (hard disk or solid state drive) can increase or decrease over time. However, open source operating systems can address the issue with customized drivers.

The other problem is any storage device that includes data reduction or compression must use a variable mapping table to track the location of the data on the device once data is reduced. Hard disk drives (HDDs) do not require any kind of mapping table because the operating system can write new data over old data. However, the lack of a mapping table prevents HDDs from supporting an on-the-fly data reduction and compression system.

All solid state drives (SSDs) using NAND flash feature a basic mapping table, typically called the flash translation layer (FTL). This mapping table is required because NAND flash memory pages cannot be rewritten directly, but must first be erased in larger blocks. The SSD controller needs to relocate valid data while the old data gets erased. This process, called garbage collection, uses the mapping table. However, a data reduction and compression system requires a mapping table that is variable in size. Most SSDs lack that capability; SSDs using a SandForce controller do not, making them perfectly suited to the job.
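Here is a toy, hypothetical sketch of what such a variable mapping table looks like: each logical block maps not just to a flash location but to a compressed length, so reduced blocks can share space. A real controller also handles garbage collection, wear leveling and power-fail protection, none of which appears here.

```python
# A toy, log-structured store with a variable-size mapping table: every logical block
# address (LBA) maps to (offset, compressed length) in the flash log. Hypothetical
# structure for illustration only; not how any SandForce controller is implemented.
import zlib

class ToyReducingFTL:
    def __init__(self):
        self.flash = bytearray()   # stand-in for the NAND log
        self.table = {}            # LBA -> (offset, length): the variable mapping table

    def write(self, lba, data_4k):
        reduced = zlib.compress(data_4k)
        self.table[lba] = (len(self.flash), len(reduced))
        self.flash += reduced      # any old copy becomes garbage to collect later

    def read(self, lba):
        offset, length = self.table[lba]
        return zlib.decompress(self.flash[offset:offset + length])

ftl = ToyReducingFTL()
page = b"A" * 4096                 # a very low-entropy 4 KB block
ftl.write(0, page)
print(ftl.read(0) == page)         # True
print(ftl.table[0])                # e.g. (0, 29) -- far fewer than 4096 bytes stored
```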

What use cases can be applied to DVC?
DVC can be used to increase usable data storage space or provide more cache capacity flexibility by two to three times. To create more usable data storage space, the operating system must be altered with new primary storage device drivers for it to understand the drive’s maximum capacity, which can fluctuate over time based on how much the data is reduced or compressed.

To support greater cache capacity flexibility, a host controller would manage the flash memory directly as a cache. The controller would isolate the flash memory capacity from the host so the operating system does not even see it. The dynamic cache capacity would increase cache performance at a lower price per GB depending upon the entropy of the data. The LSI Nytro product line and some SandForce Driven® program SSDs already support both of these use cases.

When will this appear in my personal computer?
While DVC is already being deployed and evaluated in enterprise datacenters around the world, the use in personal computers will take a bit longer due to the need to have the operating system changed with new storage device drivers that understand the fluctuating maximum capacity.

When these operating system changes come together, you will not need that sledgehammer to pack more data into your TARDIS (SSD). Now that’s a space odyssey to write home about.



Each new generation of NAND flash memory reduces the fabrication geometry – the dimension of the smallest part of an integrated circuit used to build up the components inside the chip. That means there are fewer electrons storing the data, leading to increased errors and a shorter life for the flash memory. No need to worry. Today’s flash memory depends upon the intelligence and capabilities of the solid state drive (SSD) controller to help keep errors in check and get the longest life possible from flash memory, making it usable in compute environments like laptop computers and enterprise datacenters.

Today’s volume NAND flash memory uses a 20nm and 19nm manufacturing process, but the next generation will be in the 16nm range. Some experts speculate that today’s controllers will struggle to work with this next generation of flash memory to support the high number of write cycles required in datacenters. Also, the current multi-level cell (MLC) flash memory is transitioning to triple-level cell (TLC), which has an even shorter life expectancy and higher error rates.

Can sub-20nm flash survive in the datacenter?
Yes, but it will take a flash memory controller with smarts the industry has never seen before. How intelligent? Sub-20nm flash will need to stretch the life of the flash memory beyond the flash manufacturer’s specifications and correct far more errors than ever before, while still maintaining high throughput and very low latency. And to protect against periodic error correction algorithm failures, the flash will need some kind of redundancy (backup) of the data inside the SSD itself.

When will such a controller materialize?
Now.

LSI this week introduced the third generation of its flagship SSD and flash memory controller, called the SandForce SF3700. The controller is newly engineered and architected to solve the lifespan, performance, and reliability challenges of deploying sub-20nm flash memory in today’s performance-hungry enterprise datacenters. The SandForce SF3700 also enables longer periods between battery recharges for power-sipping client laptop and ultrabook systems. It all happens in a single ASIC package. The SandForce SF3700 is the first SSD controller to include both PCIe and SATA host interfaces natively in one chip to give customers of SSD manufacturers an easy migration path as more of them move to the faster PCIe host interface.

How does the SandForce SF3700 controller make sub-20nm flash excel in the datacenter?
Our new controller builds on the award-winning capabilities of the current SandForce SSD and flash controllers. We’ve refined our DuraWrite™ data reduction technology to streamline the way it picks blocks, collects garbage and reduces the write count. You’ll like the result: longer flash endurance and higher read and write speeds.

The SandForce SF3700 includes SHIELD™ error correction, which applies LDPC and DSP technology in unique ways to correct the higher error rates of new generations of flash memory. SHIELD technology uses a multi-level error correction schema to optimize the time to get the correct data. Also, with its exclusive Adaptive Code Rate feature, SHIELD leverages DuraWrite technology’s ability to span internal NAND flash boundaries between the user data space and the flash manufacturer’s dedicated ECC field. Other controllers use only one ECC code rate for the flash memory – the largest size, designed to support the end of the flash’s life. Early in the flash’s life, a much smaller ECC is required, and SHIELD technology scales down the ECC accordingly, diverting the remaining free space to additional over provisioning. SHIELD gradually increases the ECC size over time as the flash ages and failures increase, but does not use the largest ECC size until the flash is nearly at the end of its life.
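The following sketch models the adaptive code rate idea with made-up numbers: size the ECC for the flash’s current error rate instead of its end-of-life worst case, and divert the saved spare bytes to over provisioning. It is an illustration of the concept only, not the SF3700’s actual policy or parameters.

```python
# An illustrative model of the adaptive-code-rate idea: use stronger ECC only as the
# flash wears out, and hand the unused spare bytes back as extra over provisioning.
# All page sizes, spare sizes and error-rate thresholds below are made up.

PAGE_BYTES = 16384          # user data per flash page (assumed)
SPARE_BYTES = 1872          # spare area the vendor sizes for end-of-life ECC (assumed)

def ecc_bytes_needed(raw_bit_error_rate):
    """Pretend rule: more parity bytes as the raw error rate climbs with wear."""
    if raw_bit_error_rate < 1e-4:
        return 800
    elif raw_bit_error_rate < 1e-3:
        return 1200
    return SPARE_BYTES       # end of life: use everything the vendor reserved

for life_point, ber in (("fresh", 1e-5), ("mid-life", 5e-4), ("end of life", 5e-3)):
    freed = SPARE_BYTES - ecc_bytes_needed(ber)
    print(f"{life_point}: {freed} spare bytes per page diverted to over provisioning")
```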

Why is this good? Greater over provisioning over the life of the SSD improves performance and increases the endurance. SHIELD also allows the ECC field to grow even larger after it reaches its specified end of life. The big takeaway: All of these SHIELD capabilities increase flash write endurance many times beyond the manufacturer’s specification. In fact  at the 2013 Flash Memory Summit exposition in Santa Clara, CA, SHIELD was shown to extend the endurance of a particular Micron NAND flash by nearly six times.

That’s not all. The SandForce SF3700 controller’s RAISE™ data reliability feature now offers stronger protection, including full die failure and more options for protecting data on SSDs with low (e.g., 32GB & 64GB) and binary (256GB vs. 240GB) capacities.

So what about end user systems?
The beauty of all SandForce flash and SSD controllers is their onboard firmware, which takes the one common hardware component – the ASIC – and adapts it to the user’s storage environment. For example, in client applications the firmware helps the controller conserve SSD power so users of laptop and ultrabook systems can remain unplugged longer between battery recharges. In contrast, enterprise environments require the highest possible performance and lowest latency. This higher performance draws more power, a tradeoff the enterprise is willing to make for the fastest time-to-data. The firmware makes other similar tradeoffs based on which storage environment it is serving.

Although most people consider enterprise and client storage needs to be very diverse, we think the new SandForce SF3700 flash and SSD controller showcases the perfect balance of power and performance that any user hanging ten can appreciate.



Back in the 1990s, a new paradigm was forced into space exploration. NASA faced big cost cuts. But grand ambitions for missions to Mars were still on its mind. The problem was it couldn’t dream and spend big. So the NASA mantra became “faster, better, cheaper.” The idea was that the agency could slash costs while still carrying out a wide variety of programs and space missions. This led to some radical rethinks, and some fantastically successful programs that had very outside-the-box solutions. (Bouncing Mars landers anyone?)

That probably sounds familiar to any IT admin. And that spirit is alive at LSI’s AIS – The Accelerating Innovation Summit, which is our annual congress of customers and industry pros, coming up Nov. 20-21 in San Jose. Like the people at Mission Control, they all want to make big things happen… without spending too much.

Take technology and line of business professionals. They need to speed up critical business applications. A lot. Or IT staff for enterprise and mobile networks, who must deliver more work to support the ever-growing number of users, devices and virtualized machines that depend on them. Or consider mega datacenter and cloud service providers, whose customers demand the highest levels of service, yet get that service for free. Or datacenter architects and managers, who need servers, storage and networks to run at ever-greater efficiency even as they grow capability exponentially.

(LSI has been working on many solutions to these problems, some of which I spoke about in this blog.)

It’s all about moving data faster, better, and cheaper. If NASA could do it, we can too. In that vein, here’s a look at some of the topics you can expect AIS to address around doing more work for fewer dollars:

  • Emerging solid state technologies – Flash is dramatically enhancing datacenter efficiency and enabling new use cases. Could emerging solid state technologies such as Phase Change Memory (PCM) and Spin-Torque Transfer (STT) RAM radically change the way we use storage and memory?
  • Hyperscale deployments – Traditional SAN and NAS lack the scalability and economics needed for today’s hyperscale deployments. As businesses begin to emulate hyperscale deployments, they need to scale and manage datacenter infrastructure more effectively. Will software increasingly be used to both manage storage and provide storage services on commodity hardware?
  • Sub-20nm flash – The emergence of sub-20nm flash promises new cost savings for the storage industry. But with reduced data reliability, slower overall access times and much lower intrinsic endurance, is it ready for the datacenter?
  • Triple-Level Cell flash – The move to Multi-Level Cell (MLC) flash helped double the capacity per square millimeter of silicon, and Triple-Level Cell (TLC) promises even higher storage density. But TLC comes at a cost: its working life is much shorter than MLC’s. So what role, if any, will TLC play in the datacenter? Remember – it wasn’t long ago that no one believed MLC could be used in the enterprise.
  • Flash for virtual desktop – Virtual desktop technology has seen significant growth in today’s datacenters. However, storage demands on highly utilized VDI servers can cause unacceptable response times. Can flash help virtual desktop environments achieve the best overall performance to improve end-user productivity while lowering total solution cost?
  • Flash caching – Oracle and storage vendors have started enhancing their products to take advantage of flash caching. How can database administrators implement caching technology running on Oracle® Linux with Oracle Unbreakable Enterprise Kernel, utilizing Oracle Database Smart Flash Cache?
  • Software Defined Networks (SDN) – SDNs promise to make networks more flexible, easier to manage, and programmable. How and why are businesses using SDNs today?  
  • Big data analytics – Gathering, interpreting and correlating multiple data streams as they are created can enhance real-time decision making for industries like financial trading, national security, consumer marketing, and network security. How can specialized silicon greatly reduce the compute power necessary, and make the “real-time” part of real-time analytics possible?
  • Sharable DAS – Datacenters of all sizes are struggling to provide high performance and 24/7 uptime, while reducing TCO. How can DAS-based storage sharing and scaling help meet the growing need for reduced cost and greater ease of use, performance, agility and uptime?
  • 12Gb/s SAS – Applications such as Web 2.0/cloud infrastructure, transaction processing and business intelligence are driving the need for higher-performance storage. How can 12Gb/s SAS meet today’s high-performance challenges for IOPS and bandwidth while providing enterprise-class features, technology maturity and investment protection, even with existing storage devices?

And, I think you’ll find some astounding products, demos, proof of concepts and future solutions in the showcase too – not just from LSI but from partners and fellow travelers in this industry. Hey – that’s my favorite part. I can’t wait to see people’s reactions.

Since they rethought how to do business in 2002, NASA has embarked on nearly 60 Mars missions. Faster, better, cheaper. It can work here in IT too.
