The term “form factor” is used in the computer industry to describe the shape and size of computer components, like drives, motherboards and power supplies. When hard disk drives (HDDs) initially made their way into microprocessor-based computers, they used magnetic platters up to 8 inches in diameter. Because the platter was the largest single component inside the HDD, it defined the minimum width of the HDD housing—the metal box around the guts of the drive.

The height was dictated by the number of platters stacked on the motor (about 14 for the largest configurations). Over time the standard platter diameter shrank, which allowed the HDD width to decrease as well. The computer industry used the platter diameter to describe HDD form factors, and those dimensions kept shrinking over the years. The 8” HDDs for datacenter storage and desktop PCs gave way to 5.25” and then to today’s 3.5”, while laptop HDDs, which started at 2.5”, are now as small as 1.8”.

What defines an SSD form factor?
When solid state drives (SSDs) first started replacing HDDs, they had to fit into computer chassis or laptop drive bays (mounting locations) built for HDDs, so they had to conform to HDD dimensions. The two SSDs shown below are form factor identical twins of 1.8” and 2.5” HDDs, just without the outer casing. The SSDs also use standard SATA connectors, but note that the SATA connector on 1.8” devices is narrower than the one on 2.5” devices to accommodate the smaller width.

Internal circuit board of 1.8″ and 2.5″ SSD without the case


However, there’s no requirement for an SSD to match the shape of a typical HDD form factor. In fact, some of the early SSDs slid into the high-speed PCIe slots inside the computer chassis, not into the drive bays. A PCIe® SSD card resembles an add-in graphics card and installs the same way, in a PCIe slot, since its physical interface is PCIe.

SSD manufactured as a PCIe expansion card


The largest component of an SSD is a flash memory chip, so, depending on how many flash chips are used, manufacturers have virtually limitless options in defining dimensions. JEDEC (Joint Electron Device Engineering Council) defines technical standards for the electronics industry, including SSD form factors. JEDEC defined the MO-297 standard, which establishes parameters for the layout, connector locations and dimensions of 54mm x 39mm Serial ATA (SATA) SSDs, so they can use the same connector as standard 2.5” HDDs but fit into a much smaller space.

MO-297 with standard width (2.5”) SATA connector


The most important element of an SSD form factor is the interface connector, the conduit to the host computer. In the early days of SSDs, that connector was typically the same SATA connector used with HDDs. But over time the width of some SSDs became smaller than the SATA connector itself, driving the need for new connectors.

Standard SATA connector is too large for small form factors

Card edge connectors – the exposed edge of the SSD’s printed circuit board that plugs directly into a socket on the host – emerged to enable smaller designs and to further reduce manufacturing and component costs, since they require only a single female socket on the host as a receptor for the board edge. (The original 2.5” and 1.8” SSD SATA connection required both a male and a female plastic connector to mate the SSD to the computer.)

Because standardizing these connectors is critical to ensuring interoperability among different manufacturers, a few organizations have defined standards for them. JEDEC defined the MO-300 (50.8mm x 29.85mm), which uses a mini-SATA (mSATA) connector, the same physical connector as mini PCI Express, although the two are not electrically compatible. SSD manufacturers have used that same mSATA edge connector and board width, but customized the length to accommodate more flash chips for higher capacity SSDs.

MO-300 mSATA (left) and custom-length mSATA (right)


In 2012 a new, even smaller form factor was introduced as the Next Generation Form Factor (NGFF); it was later renamed M.2. The M.2 standard defines a long list of optional board sizes, and the connector supports both SATA and PCIe electrical interfaces. The keyways, or notches, in the connector indicate which interface and how many PCIe lanes a given board can support. However, that gets into more details than we have space to cover here, so we will save it for a future blog.

M.2 form factor supports PCIe and SATA, with many board size options
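
For quick reference, here is a rough sketch of the most common M.2 socket keys and the interfaces they typically carry. Treat it as a summary of the keying idea described above, not as the full M.2 specification; the lane counts are the commonly quoted maximums, not guarantees for any particular board.

```python
# Illustrative summary of common M.2 module keys -- a sketch, not the full
# M.2/NGFF specification, which defines many more key IDs and options.
M2_KEYS = {
    "B":   {"interfaces": ("SATA", "PCIe"), "max_pcie_lanes": 2},
    "M":   {"interfaces": ("SATA", "PCIe"), "max_pcie_lanes": 4},
    "B+M": {"interfaces": ("SATA", "PCIe"), "max_pcie_lanes": 2},  # fits both B and M sockets
}

def describe(key: str) -> str:
    info = M2_KEYS[key]
    return (f"M.2 key {key}: {'/'.join(info['interfaces'])}, "
            f"up to PCIe x{info['max_pcie_lanes']}")

for key in M2_KEYS:
    print(describe(key))
```

Which interface a given module actually uses still depends on the SSD controller and the host socket; the key only tells you what is mechanically possible.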

Apple® MacBook Air® and some MacBook Pro systems use an SSD with a connector and dimensions that closely resemble those of the M.2 form factor. In fact, Apple MacBook systems have used a number of different connectors and interfaces for their SSDs over the years. Apple used a custom connector with SATA signals from 2010 through 2012, and in 2013 switched to a custom connector with PCIe signals.


MacBook Air SSDs with custom SATA and PCIe connectors


In some cases, standard SSD form factor configurations are not an option, so SSD manufacturers have taken it upon themselves to create custom board and interface configurations that meet those less typical needs.

Various custom SSD form factors and connectors

And finally there’s the ubiquitous USB-based connection. While USB flash drives have been around for nearly a decade, many people do not realize the performance of these devices can vary by 10 to 20 times. Typically a USB flash drive is used to make data portable—replacing the old floppy disk. In those cases the speed of the device is not critical since it is used infrequently.

Now with the high-speed USB 3 interface, a SATA-to-USB 3 bridge chip, and a high-performance flash controller like the LSI® SandForce® controller, these external devices can operate as a primary system SSD, performing as fast as a standard SSD inside the system. The primary advantages of these SSDs are removability and transportability while providing high-speed operation.

USB host interface containing high-speed SSDs
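
A quick back-of-the-envelope check shows why the USB 3 bridge need not be the bottleneck. The figures below are nominal line rates with a rough 8b/10b encoding overhead, not measurements of any particular product.

```python
# Rough usable bandwidth from nominal line rates (assumed figures, not measurements).
# Both USB 3.0 and SATA III use 8b/10b encoding, so usable bandwidth is about
# 80% of the raw line rate before protocol overhead.
def usable_mb_per_s(line_rate_gbps: float, encoding_efficiency: float = 0.8) -> float:
    return line_rate_gbps * encoding_efficiency * 1000 / 8  # Gb/s -> MB/s

usb3 = usable_mb_per_s(5.0)   # USB 3.0 SuperSpeed
sata3 = usable_mb_per_s(6.0)  # SATA III

print(f"USB 3.0: ~{usb3:.0f} MB/s usable, SATA III: ~{sata3:.0f} MB/s usable")
# A typical SATA SSD tops out near the SATA limit, so a SATA-to-USB 3 bridge
# preserves most of that performance -- unlike USB 2.0, at roughly 35 MB/s.
```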

If there’s one constant in life, it’s the demand for ever smaller storage form factors, which prompts changes in circuit layout, connector position and, of course, dimensions. New connectors proposed for future generations of storage devices, like the SFF-8639 specification, will enable multiple interfaces and data path channels on the same connector. While the SFF-8639 does not technically define the device to which it connects, the connector itself is rather large, so the form factor of the SSD will need to be big enough to hold it. That’s why the primary SFF-8639 market is datacenters that use backplane connectors and racks of storage devices. A similar connector – like SFF-8639, very large and built to support multiple data paths – is the SATA Express connector. I will save the details of that connector for an upcoming blog.

The sky’s the limit for SSD shapes and sizes. Without a spinning platter inside a box, designers can let their imaginations run wild. Creative people in the industry will continue to find new applications for SSDs that were previously restricted by the internal components of HDDs. That creativity and flexibility will take on growing importance as we continue to press datacenters and consumer electronics to do more with less, reminding us that size does in fact matter.



In the spirit of Christmas and the holidays, I thought it would be appropriate for a slight change from the usual blog entry today. In this case you need to think about the classic song “The Twelve Days of Christmas.” I promise not to dive into the origins of the song and the meaning behind it, but I will provide an alternate version of the words to think about when you are with your family singing Christmas carols.

Rather than write out all 12 separate choruses, I decided to start with the final chorus on the 12th day.

Sung to the tune of “The 12 Days of Christmas”

On the 12th day of Christmas my true love gave to me

Twelve second boot times,

Eleven data reduction benefits,

Ten nm-class flash,

Nine flash channels,

Eight flash chips,

Seven percent over provisioning,

Six gigabits per second,

Five golden edge connectors,

Four PCIe lanes,

Third generation controller,

Two interfaces in one package,

And a SandForce Driven SSD.

Maybe if you were good this year, Santa will bring you a SandForce Driven SSD to revitalize your computer too! Merry Christmas and happy holidays to all!



Most users have no idea that reading electronic information from a data storage medium like a hard disk drive (HDD) or solid state drive (SSD) is plagued with read errors. For this reason error correction codes (ECC) are used to fix the random bit errors that arise during the reading process before the incorrect data is returned to the user. But error correction codes can only handle so many errors at one time. If data errors exceed the ECC limits, the data goes uncorrected and is lost forever. More recent ECC algorithms like the LSI SHIELD error correction technology go a lot further to protect user data than prior solutions.

What happens to the data when the ECC fails?
If the ECC fails, only a backup protection mechanism will recover the data. There are three alternatives. First, users should always back up their critical data, since ECC failure is not the only threat that can destroy data or render it inaccessible: natural disasters (earthquakes, tornadoes, flooding, etc.) can heavily damage buildings and their contents, lightning can burn up a computer without adequate electrical protection, and of course computers get stolen. Any backup system should be either automated or at least run consistently if it is manual. Industry reports cite that less than 10% of computer users back up their data. That is not very comforting.

The second solution is to employ a RAID (Redundant Array of Independent Disks) array that uses multiple storage devices, with one or more of the drives acting as a parity device to provide redundancy. That way, if one drive fails, the redundant drive provides enough parity information to restore the original data. This type of system is very common in enterprise environments – a work computer, for example – but hardly used in home systems or laptop PCs.
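
To make the parity idea concrete, here is a minimal sketch of how XOR parity lets an array rebuild the contents of a failed drive. The byte strings standing in for drive contents are hypothetical, and real RAID implementations add striping, checksums and much more.

```python
# Minimal XOR-parity illustration: three data "drives" plus one parity drive.
def xor_bytes(*chunks: bytes) -> bytes:
    out = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, b in enumerate(chunk):
            out[i] ^= b
    return bytes(out)

drive0 = b"ACCOUNTS"
drive1 = b"INVOICES"
drive2 = b"PAYROLL!"
parity = xor_bytes(drive0, drive1, drive2)   # stored on the parity drive

# Simulate losing drive1: XOR of everything that survives restores its contents.
recovered = xor_bytes(drive0, drive2, parity)
assert recovered == drive1
print("Recovered from parity:", recovered.decode())
```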

Is the third solution simple, automatic, and operable in a single-drive environment?
Yes. Yes. And Yes. LSI® SandForce® flash and SSD controllers have a feature called RAISE™ data protection that meets all of these needs. Introduced in 2009 with the first SandForce controller, RAISE technology stands for Redundant Array of Independent Silicon Elements. It sounds like RAID, and acts something like RAID, but protects data using a single drive. With RAISE technology, the individual flash die act like the drives in a RAID array. The original RAISE level 1 technology protects against single page and block failures in the flash. These types of failures are beyond the protection of the ECC, but RAISE technology can recover the data.

With the introduction of the SF3700 this month, RAISE technology now offers more flexibility to deliver greater data protection. With the original RAISE level 1, the space of a full flash die had to be allocated solely to protect user data. In small-capacity configurations, like 64GB, RAISE level 1 required too much over provisioning and therefore either had to be disabled or, with RAISE left on, had its available capacity reduced to 60GB or 55GB. With a new enhancement to the SF3700, no such tradeoff is necessary. The new Fractional RAISE option for this first level of protection uses only a small portion of a die to protect user data in even the smallest configurations and preserves over provisioning (OP). This is important because, as I explained in my blog titled Gassing up your SSD, the more space you allocate for OP, the lower the write amplification, which translates to higher performance during writes and longer endurance of the flash memory.
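
To see how trimming user capacity buys over provisioning, here is the usual back-of-the-envelope OP calculation. It assumes the common convention that flash is built in binary gibibytes while user capacity is quoted in decimal gigabytes; the capacities match the examples above.

```python
# Over-provisioning (OP) as a percentage of user capacity: a sketch using the
# common convention that a "64GB" SSD really contains 64 GiB of physical NAND.
GIB = 2**30
GB = 10**9

def op_percent(physical_gib: int, user_gb: int) -> float:
    physical_bytes = physical_gib * GIB
    user_bytes = user_gb * GB
    return (physical_bytes - user_bytes) / user_bytes * 100

for user_gb in (64, 60, 55):
    print(f"64GiB of flash, {user_gb}GB user capacity -> {op_percent(64, user_gb):.1f}% OP")
# Roughly 7%, 15% and 25% OP -- which is why shrinking 64GB to 60GB or 55GB
# was the price of the original RAISE level 1 on small drives.
```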

Stronger data protection with RAISE level 2
A new RAISE level 2 capability offers even stronger data protection, safeguarding against multiple, simultaneous page and block failures, as well as a full die failure. If a die fails, the SandForce controller recovers the user data. RAISE level 2 includes Auto-Reallocation, which can be set up to automatically redistribute and protect user data in the event of a subsequent die failure. Because the option to protect against a second die failure would reduce the available OP area, the RAISE level 2 feature can be set up to simply drop back to RAISE level 1 protection without sacrificing any OP space.

Another new capability is an additional (9th) flash channel that lets the manufacturer populate an extra flash package with one die, providing full RAISE level 1 protection while maintaining maximum user data capacity such as 64GB, 128GB, 256GB, etc. Without the 9th channel option, the SSD would be forced to sacrifice a few GBs of capacity (reducing available user capacity to 60GB, 120GB, 240GB, respectively) because RAISE requires extra storage space.

Although all these new features cannot protect against the would-be thief or catastrophic drive failures from electrical surges or natural disasters, the probability of those events is much lower than a simple ECC failure. That’s why you are best served by an SSD with RAISE technology to automatically protect against the more common ECC failures, and by backing up your system at least periodically to protect your data against those far more serious events.



Last week at LSI’s annual Accelerating Innovation Summit (AIS) the company took the wraps off a vision that should lead its technical direction for the next few years.

In his keynote, LSI CEO Abhi Talwalkar shared a video of three situations as they might evolve in the future:

  • A man falls from a bicycle in a foreign country and needs medical attention
  • A bullet train stops before hitting a tree that fell across its tracks
  • A hacker is prevented from accessing secure information using identity theft

I’ll focus on just one of these to show how LSI expects the future to develop.  In the bicycle accident scenario, a businessman falls to the ground while riding a bicycle in a foreign country.  Security cameras that have been upgraded to understand what they see notify an emergency services agency which sends an ambulance to the scene.  The paramedic performs a retinal scan on the victim, using it to retrieve his medical records, including his DNA sequence, from the web.

The businessman’s wearable body monitoring system also communicates with the paramedic’s instruments to share his vital signs.  All of this information is used by cloud-based computers to determine a course of action which, in the video, requires an injection that has been custom-tuned to the victim’s current situation, his medical history, and his genetic makeup.

That’s a pretty tall order, and it will require several advances in the state of the art, but LSI is using this and other scenarios to work with its clients and translate this vision into the products of the future.

What are the key requirements to make this happen? Talwalkar told the audience that we need to create a society that is supported by preventive, predictive and assisted analytics to move in a direction where the general welfare is assisted by all that the Internet and advanced computing have to offer.  Since data is growing at an exponential rate, he argued that this will require the instant retrieval of interlinked data objects at scale. Everything that is key to solving the task must be immediately available, and must be quickly analyzed to provide a solution to the problem at hand. The key will be the ability to process interlinked pieces of data that have not been previously structured to handle any particular situation.

To achieve this we will need larger-scale computing resources than are currently available, all closely interconnected, all operating at very high speeds. LSI hopes to tap into these needs through its strengths in networking and communications chips for the connectivity, its HDD, server and storage connectivity array chips and boards for large-scale data, and its flash memory controller and PCIe SSD expertise for high performance.

LSI brought to AIS several of the customers and partners it is working with to develop these technologies. Speakers from Intel, Microsoft, IBM, Toshiba, Ericsson and others showed how they are working with LSI’s various technologies to improve the performance of their own systems. On the exhibition floor, booths from LSI and many of its clients demonstrated new technologies that performed everything from high-speed stock market analysis to fast flash management.

It’s pretty exciting to see a company that has a clear vision of its future and is committed to moving its entire ecosystem ahead to make that happen and help companies manage their business more effectively during what LSI calls the “Datacentric Era.” LSI has certainly put a lot of effort into creating a vision and determining where its talents can be brought to bear to improve our lives in the future.



The lifeblood of any online retailer is the speed of its IT infrastructure. Shoppers aren’t infinitely patient. Sluggish infrastructure performance can make shoppers wait precious seconds longer than they can stand, sending them fleeing to other sites for a faster purchase. Our federal government’s halting rollout of the Health Insurance Marketplace website is a glaring example of what can happen when IT infrastructure isn’t solid. A few bad user experiences that go viral can be damaging enough. Tens of thousands can be crippling.  

In hyperscale datacenters, any number of problems including network issues, insufficient scaling and inconsistent management can undermine end users’ experience. But one that hits home for me is the impact of slow storage on the performance of databases, where the data sits. With the database at the heart of all those online transactions, retailers can ill afford to have their tier of database servers operating at anything less than peak performance.

Slow storage undermines database performance
Typically, Web 2.0 and e-commerce companies run relational databases (RDBs) on these massive server-centric infrastructures. (Take a look at my blog last week to get a feel for the size of these hyperscale datacenter infrastructures.) If you are running that many servers to support millions of users, you are likely using some kind of open-source RDB such as MySQL or one of its variations. Keep in mind that Oracle 11gR2 likely retails around $30K per core, but MySQL is free. The performance of both, and of most other relational databases, suffers immensely when transactions are retrieving data from storage (or disk). You can only throw so much RAM and CPU power at the performance problem … sooner rather than later you have to deal with slow storage.

Almost everyone in the industry – Web 2.0, cloud, hyperscale and other providers of massive database infrastructures – is lining up to solve this problem the best way they can. How? By deploying flash as the sole storage for database servers and applications. But is low-latency flash enough? For sheer performance it beats rotational disk hands down. But even flash storage has its limitations, most notably when you are trying to drive ultra-low latencies for write IOs. Most IO accesses by RDBs, which do the transactional processing, are a mix of reads and writes to the storage – specifically, about 70% reads and 30% writes. These are also typically low queue-depth accesses (less than 4). It is those writes that can really slow things down.
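
A tiny weighted-average sketch shows why those 30% writes dominate the picture. The latency figures below are hypothetical placeholders, not measurements of any particular device.

```python
# Average latency for a 70/30 read/write mix (assumed latencies, for illustration).
def mixed_latency_us(read_us: float, write_us: float, read_fraction: float = 0.7) -> float:
    return read_fraction * read_us + (1 - read_fraction) * write_us

# Hypothetical flash device whose writes stall behind flash programming.
print(f"{mixed_latency_us(read_us=100, write_us=1000):.0f} us average")  # 370 us
# Buffering writes in (NV)DRAM ahead of the flash pulls them close to read latency.
print(f"{mixed_latency_us(read_us=100, write_us=120):.0f} us average")   # 106 us
```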

PCIe flash reduces write latencies
The good news is that the right PCIe flash technology in the mix can solve the slowdowns. Some interesting PCIe flash technologies designed to tackle this latency problem are on display at AIS this week. DRAM, and in particular NVDRAM, is being deployed as a tier in front of flash to blunt those nasty write latencies.

Among other demos, we’re showing how a Nytro™ 6000 series PCIe flash card helps solve MySQL database performance issues. The typical response time for a small data read (what the database will see for a database IO) from an HDD is 5ms. Flash-based devices such as the Nytro WarpDrive® card can complete the same read in less than 50μs on average during testing, an improvement of roughly two orders of magnitude in response time. This translates into getting many more transactions out of the same infrastructure – but with less space (flash is denser) and a lot less power (flash consumes far less power than HDDs).
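
Plugging the response times quoted above into a simple serial-access model (one outstanding request at a time, consistent with the low queue depths mentioned earlier) illustrates the gap.

```python
# Convert the quoted response times into IOPS for a single outstanding request.
def serial_iops(latency_seconds: float) -> float:
    return 1.0 / latency_seconds

hdd_iops = serial_iops(0.005)       # 5 ms HDD read
flash_iops = serial_iops(0.000050)  # ~50 us flash read
print(f"HDD: ~{hdd_iops:.0f} IOPS, flash: ~{flash_iops:.0f} IOPS "
      f"(~{flash_iops / hdd_iops:.0f}x)")
```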

We’re also showing the Nytro 7000 series PCIe flash cards. They reach even lower write latencies than the 6000 series, even at very low queue depths. The 7000 series cards also provide DRAM buffering while maintaining data integrity even in the event of a power loss.

For online retailers and other businesses, higher database speeds mean more than just faster transactions. They can help keep those cash registers ringing.



Try using a sledgehammer to pack 15 pounds of potatoes into a bag with a 5-pound capacity, and what do you end up with? Too much messy and disgusting material crammed into a vessel too small for the job and a lot of sloppy overspill.

Unless you have the right sledgehammer. What does this have to do with computer storage? Plenty. And it all starts with a new data-reshaping capability of LSI® DuraWrite™ technology. Keep the numbers in mind: 15 pounds of data in a 5-pound bag. Or three units of data in a space designed for one. It’s key. More on that later.

What does DuraWrite technology do again?
My blog Write Amplification (Part 2) talks about how DuraWrite data reduction technology can make more space for over provisioning. That translates into faster data transfers, longer flash memory endurance and lower power draw. What it has NOT done is increase the space available for data storage.

Until now.

DuraWrite Virtual Capacity is like the Dr. Who TARDIS
Excuse my Dr. Who phone booth reference, but for those who know the TARDIS, it provides a great analogy. For those unfamiliar with the TARDIS, it is a fictional time machine that looks like a British police call box. It is very small by external appearances, but inside it is vast in its carrying capacity, taking occupants on an odyssey through time.

DuraWrite Virtual Capacity (DVC) is a new feature of our SandForce® SSD controllers, and it’s a bit like the TARDIS. While there is no time travel involved, it does provide a lot more than can be seen. DVC takes advantage of data entropy (randomness) as data is written to the SSD. Some people like to think of it as data compression. Whatever you call it, the end result is the same—less data is written to the flash than what is written from the host. DuraWrite technology alone will increase the over provisioning of an SSD, but DVC increases the user data storage available in an SSD (not the over provisioning).

How much more space can it add?
The efficiency of DVC is inversely related to the entropy of the data. High-entropy data like JPEG, encrypted and similarly compressed files does little to increase data capacity. In contrast, files like Microsoft Office Outlook® PST files, Oracle® databases, and EXE and DLL (operating system) files have much lower entropy and can increase usable storage space on the order of two to three times for the same physical flash memory. Yes, I said two to three times. Better still, that translates into a two to three times reduction in the net cost of the flash storage. Again, no typo: two to three times more affordable. Since most enterprise deployments of flash memory are limited by the cost per GB of the flash, this kind of advance has the potential to further accelerate flash memory deployments in the enterprise.
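
You can get a feel for the entropy effect with any general-purpose compressor. The sketch below uses Python’s zlib purely as a stand-in to illustrate the principle; it is not the DuraWrite algorithm, which LSI does not disclose.

```python
import os
import zlib

# Low-entropy data: highly repetitive, like database rows or office files.
low_entropy = b"INSERT INTO orders VALUES (42, 'widget', 19.99);\n" * 1000
# High-entropy data: random bytes stand in for JPEG or encrypted files.
high_entropy = os.urandom(len(low_entropy))

for name, data in (("low entropy", low_entropy), ("high entropy", high_entropy)):
    ratio = len(data) / len(zlib.compress(data))
    print(f"{name}: {ratio:.1f}x reduction")
```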

Why hasn’t this been done in the past?
Think TARDIS again. Step into the booth, and take a joy ride through space and time. It happens on-the-fly. Simple … but, only in a fictional world. With on-the-fly data reduction and compression, the process is filled with complexities. The biggest problem is most operating systems don’t understand that the maximum capacity of a primary storage device (hard disk or solid state drive) can increase or decrease over time. However, open source operating systems can address the issue with customized drivers.

The other problem is any storage device that includes data reduction or compression must use a variable mapping table to track the location of the data on the device once data is reduced. Hard disk drives (HDDs) do not require any kind of mapping table because the operating system can write new data over old data. However, the lack of a mapping table prevents HDDs from supporting an on-the-fly data reduction and compression system.

All solid state drives (SSDs) using NAND flash feature  a basic mapping table, typically called the flash translation layer (FTL). This mapping table is required because NAND flash memory pages cannot be rewritten directly, but must first be erased in larger blocks. The SSD controller needs to relocate valid data while the old data gets erased. This process, called garbage collection, uses the mapping table. However, the data reduction and compression system requires a mapping table that is variable in size. Most SSDs lack that capability, but not those using a SandForce controller, making SSDs with SandForce controllers perfectly suited to the job.
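
Here is a toy illustration of why that mapping table exists: every overwrite lands on a fresh physical page while the old page is only marked invalid until garbage collection reclaims it. Real flash translation layers are vastly more sophisticated; the class and page counts below are invented for clarity.

```python
# Toy flash translation layer (FTL): logical addresses map to physical pages,
# and overwrites never happen in place.
class ToyFTL:
    def __init__(self, num_pages: int):
        self.free_pages = list(range(num_pages))
        self.mapping = {}        # logical block address -> physical page
        self.invalid = set()     # stale pages awaiting garbage collection

    def write(self, lba: int, data: str) -> None:
        if lba in self.mapping:             # overwrite: old page becomes invalid
            self.invalid.add(self.mapping[lba])
        page = self.free_pages.pop(0)       # NAND pages cannot be rewritten in place
        self.mapping[lba] = page
        print(f"LBA {lba} -> physical page {page} ({data!r})")

ftl = ToyFTL(num_pages=8)
ftl.write(5, "v1")
ftl.write(5, "v2")                          # same LBA, new physical page
print("Invalid pages awaiting GC:", ftl.invalid)
```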

What use cases can be applied to DVC?
DVC can be used to increase usable data storage space or provide more cache capacity flexibility by two to three times. To create more usable data storage space, the operating system must be altered with new primary storage device drivers for it to understand the drive’s maximum capacity, which can fluctuate over time based on how much the data is reduced or compressed.

To support greater cache capacity flexibility, a host controller would manage the flash memory directly as a cache. The controller would isolate the flash memory capacity from the host so the operating system does not even see it. The dynamic cache capacity would increase cache performance at a lower price per GB depending upon the entropy of the data. The LSI Nytro product line and some SandForce Driven® program SSDs already support both of these use cases.

When will this appear in my personal computer?
While DVC is already being deployed and evaluated in enterprise datacenters around the world, the use in personal computers will take a bit longer due to the need to have the operating system changed with new storage device drivers that understand the fluctuating maximum capacity.

When these operating system changes come together, you will not need that sledgehammer to pack more data into your TARDIS (SSD). Now that’s a space odyssey to write home about.



Each new generation of NAND flash memory reduces the fabrication geometry – the dimension of the smallest part of an integrated circuit used to build up the components inside the chip. That means there are fewer electrons storing the data, leading to increased errors and a shorter life for the flash memory. No need to worry. Today’s flash memory depends upon the intelligence and capabilities of the solid state drive (SSD) controller to help keep errors in check and get the longest life possible from flash memory, making it usable in compute environments like laptop computers and enterprise datacenters.

Today’s volume NAND flash memory uses a 20nm and 19nm manufacturing process, but the next generation will be in the 16nm range. Some experts speculate that today’s controllers will struggle to work with this next generation of flash memory to support the high number of write cycles required in datacenters. Also, the current multi-level cell (MLC) flash memory is transitioning to triple-level cell (TLC), which has an even shorter life expectancy and higher error rates.

Can sub-20nm flash survive in the datacenter?
Yes, but it will take a flash memory controller with smarts the industry has never seen before. How intelligent? The controller will need to stretch the life of the flash memory beyond the flash manufacturer’s specifications and correct far more errors than ever before, while still maintaining high throughput and very low latency. And to protect against periodic error correction algorithm failures, the SSD will need some kind of redundancy (backup) of the data inside the SSD itself.

When will such a controller materialize?
Now.

LSI this week introduced the third generation of its flagship SSD and flash memory controller, called the SandForce SF3700. The controller is newly engineered and architected to solve the lifespan, performance, and reliability challenges of deploying sub-20nm flash memory in today’s performance-hungry enterprise datacenters. The SandForce SF3700 also enables longer periods between battery recharges for power-sipping client laptop and ultrabook systems. It all happens in a single ASIC package. The SandForce SF3700 is the first SSD controller to include both PCIe and SATA host interfaces natively in one chip to give customers of SSD manufacturers an easy migration path as more of them move to the faster PCIe host interface.

How does the SandForce SF3700 controller make sub-20nm flash excel in the datacenter?
Our new controller builds on the award-winning capabilities of the current SandForce SSD and flash controllers. We’ve refined our DuraWrite™ data reduction technology to streamline the way it picks blocks, collects garbage and reduces the write count. You’ll like the result: longer flash endurance and higher read and write speeds.

The SandForce SF3700 includes SHIELD™ error correction, which applies LDPC and DSP technology in unique ways to correct the higher error rates from the new generations of flash memory. SHIELD technology uses a multi-level error correction schema to optimize the time it takes to get the correct data. Also, with its exclusive Adaptive Code Rate feature, SHIELD leverages DuraWrite technology’s ability to span internal NAND flash boundaries between the user data space and the flash manufacturer’s dedicated ECC field. Other controllers use only one ECC code rate for the flash memory – the single largest size, designed to support the end of the flash’s life. Early in the flash’s life, a much smaller ECC is required, and SHIELD technology scales down the ECC accordingly, diverting the remaining free space to additional over provisioning. SHIELD gradually increases the ECC size over time as the flash ages to correct the increasing failures, but does not use the largest ECC size until the flash is nearly at the end of its life.

Why is this good? Greater over provisioning over the life of the SSD improves performance and increases endurance. SHIELD also allows the ECC field to grow even larger after the flash reaches its specified end of life. The big takeaway: all of these SHIELD capabilities increase flash write endurance many times beyond the manufacturer’s specification. In fact, at the 2013 Flash Memory Summit exposition in Santa Clara, CA, SHIELD was shown to extend the endurance of a particular Micron NAND flash by nearly six times.
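
The general idea behind an adaptive code rate can be sketched with a few lines of arithmetic. The byte counts and the linear wear model below are invented for illustration and are not LSI’s actual SHIELD parameters.

```python
# Illustrative adaptive ECC sizing: start small while the flash is young and
# hand the unused spare bytes back as over-provisioning, then grow the ECC
# field as the flash wears. All numbers are hypothetical.
def ecc_bytes_needed(pe_cycles_used: int, pe_cycles_rated: int,
                     min_ecc: int = 40, max_ecc: int = 120) -> int:
    wear = min(pe_cycles_used / pe_cycles_rated, 1.0)
    return round(min_ecc + wear * (max_ecc - min_ecc))

SPARE_BYTES_PER_PAGE = 128  # hypothetical spare area reserved for ECC
for used in (0, 1000, 2000, 3000):
    ecc = ecc_bytes_needed(used, pe_cycles_rated=3000)
    print(f"{used:>4} P/E cycles: {ecc:>3} ECC bytes, "
          f"{SPARE_BYTES_PER_PAGE - ecc:>3} bytes freed for over-provisioning")
```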

That’s not all. The SandForce SF3700 controller’s RAISE™ data reliability feature now offers stronger protection, including full die failure and more options for protecting data on SSDs with low (e.g., 32GB & 64GB) and binary (256GB vs. 240GB) capacities.

So what about end user systems?
The beauty of all SandForce flash and SSD controllers is their onboard firmware, which takes the one common hardware component – the ASIC – and adapts it to the user’s storage environment. For example, in client applications the firmware helps the controller conserve SSD power to enable users of laptop and ultrabook systems to remain unplugged longer between battery recharges. In contrast, enterprise environments require the highest possible performance and lowest latency. This higher performance draws more power, a tradeoff the enterprise is willing to make for the fastest time-to-data. The firmware makes other similar tradeoffs based on which storage environment it is serving.

Although most people consider enterprise and client storage needs to be very diverse, we think the new SandForce SF3700 flash and SSD controller showcases the perfect balance of power and performance that any user hanging ten can appreciate.



Back in the 1990s, a new paradigm was forced into space exploration. NASA faced big cost cuts. But grand ambitions for missions to Mars were still on its mind. The problem was it couldn’t dream and spend big. So the NASA mantra became “faster, better, cheaper.” The idea was that the agency could slash costs while still carrying out a wide variety of programs and space missions. This led to some radical rethinks, and some fantastically successful programs that had very outside-the-box solutions. (Bouncing Mars landers anyone?)

That probably sounds familiar to any IT admin. And that spirit is alive at LSI’s AIS – The Accelerating Innovation Summit, which is our annual congress of customers and industry pros, coming up Nov. 20-21 in San Jose. Like the people at Mission Control, they all want to make big things happen… without spending too much.

Take technology and line of business professionals. They need to speed up critical business applications. A lot. Or IT staff for enterprise and mobile networks, who must deliver more work to support the ever-growing number of users, devices and virtualized machines that depend on them. Or consider mega datacenter and cloud service providers, whose customers demand the highest levels of service, yet get that service for free. Or datacenter architects and managers, who need servers, storage and networks to run at ever-greater efficiency even as they grow capability exponentially.

(LSI has been working on many solutions to these problems, some of which I spoke about in this blog.)

It’s all about moving data faster, better, and cheaper. If NASA could do it, we can too. In that vein, here’s a look at some of the topics you can expect AIS to address around doing more work for fewer dollars:

  • Emerging solid state technologies – Flash is dramatically enhancing datacenter efficiency and enabling new use cases. Could emerging solid state technologies such as Phase Change Memory (PCM) and Spin-Torque Transfer (STT) RAM radically change the way we use storage and memory?
  • Hyperscale deployments – Traditional SAN and NAS lack the scalability and economics needed for today’s hyperscale deployments. As businesses begin to emulate hyperscale deployments, they need to scale and manage datacenter infrastructure more effectively. Will software increasingly be used to both manage storage and provide storage services on commodity hardware?
  • Sub-20nm flash – The emergence of sub-20nm flash promises new cost savings for the storage industry. But with reduced data reliability, slower overall access times and much lower intrinsic endurance, is it ready for the datacenter?
  • Triple-Level Cell flash – The move to Multi-Level Cell (MLC) flash helped double the capacity per square millimeter of silicon, and Triple-Level Cell (TLC) promises even higher storage density. But TLC comes at a cost: its working life is much shorter than MLC’s. So what role, if any, will TLC play in the datacenter? Remember – it wasn’t long ago no one believed MLC could be used in the enterprise.
  • Flash for virtual desktop – Virtual desktop technology has seen significant growth in today’s datacenters. However, storage demands on highly utilized VDI servers can cause unacceptable response times. Can flash help virtual desktop environments achieve the best overall performance to improve end-user productivity while lowering total solution cost?
  • Flash caching – Oracle and storage vendors have started enhancing their products to take advantage of flash caching. How can database administrators implement caching technology running on Oracle® Linux with Oracle Unbreakable Enterprise Kernel, utilizing Oracle Database Smart Flash Cache?
  • Software Defined Networks (SDN) – SDNs promise to make networks more flexible, easier to manage, and programmable. How and why are businesses using SDNs today?  
  • Big data analytics – Gathering, interpreting and correlating multiple data streams as they are created can enhance real-time decision making for industries like financial trading, national security, consumer marketing, and network security. How can specialized silicon greatly reduce the compute power necessary, and make the “real-time” part of real-time analytics possible?
  • Sharable DAS – Datacenters of all sizes are struggling to provide high performance and 24/7 uptime, while reducing TCO. How can DAS-based storage sharing and scaling help meet the growing need for reduced cost and greater ease of use, performance, agility and uptime?
  • 12Gb/s SAS – Applications such as Web 2.0/cloud infrastructure, transaction processing and business intelligence are driving the need for higher-performance storage. How can 12Gb/s SAS meet today’s high-performance challenges for IOPS and bandwidth while providing enterprise-class features, technology maturity and investment protection, even with existing storage devices?

And, I think you’ll find some astounding products, demos, proof of concepts and future solutions in the showcase too – not just from LSI but from partners and fellow travelers in this industry. Hey – that’s my favorite part. I can’t wait to see people’s reactions.

Since rethinking how it does business, NASA has embarked on dozens of new missions, Mars landers included. Faster, better, cheaper. It can work here in IT too.



All NAND flash-based SSDs use a process called garbage collection (GC) so the flash memory can be rewritten with fresh data, enabling the SSD to function like any other rewritable storage device. The number of rewrites (program/erase cycles) possible with NAND flash memory is finite. That’s why it’s essential to ensure that each P/E cycle really counts – that is, each is performed with top efficiency – to help preserve optimum SSD performance.

Collecting the garbage takes time
In my 2011 Flash Memory Summit presentation, I went into great detail about how GC – the automatic memory management process of clearing invalid data from memory to give new data a clean slate – works in an SSD. Here’s a recap: flash memory is organized in groups of pages where data can be written. Once a page is written, it cannot be rewritten until it is erased. Simple enough. But a page can only be erased within a group of typically 128 pages called a block. But wait – the complexity of writing data really starts to escalate when random writes replace previously written data. Random writes put the new data in previously erased pages elsewhere, peppering a block of valid data with patches of invalid data. In order to write new data to these patches, the whole block – all 128 pages – must be erased. But first all surrounding pages with valid data must be read and then rewritten to blank pages. The newly erased block of blank pages is then ready to save new data.
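
A rough model of that relocation cost makes the trade-off visible: the fewer valid pages left in a block when it is recycled, the cheaper the erase. The page counts below are illustrative assumptions, not figures for any particular SSD.

```python
# Cost of reclaiming one 128-page block: every valid page must be read and
# rewritten elsewhere before the block can be erased.
PAGES_PER_BLOCK = 128

def gc_cost(valid_pages_in_block: int) -> dict:
    return {
        "pages_relocated": valid_pages_in_block,                  # extra internal writes
        "pages_reclaimed": PAGES_PER_BLOCK - valid_pages_in_block,
    }

print(gc_cost(valid_pages_in_block=16))    # 16 moves free up 112 pages: cheap
print(gc_cost(valid_pages_in_block=112))   # 112 moves free up only 16 pages: expensive
```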

So why’s this a problem? This rewriting process shares the same path to the flash memory as new data arriving from the host system. You see the issue – a bottleneck. What you may not know is that this traffic jam can severely degrade overall write performance, sometimes by as much as 90%.

Why not collect the garbage when the SSD is idle?
To improve write performance, many SSDs perform idle-time GC or background GC. When the SSD is idle – not performing reads or writes from the host system – the data paths to the flash memory are open. In a perfect world, the SSD controller would move all valid data into a contiguous group of new blocks so that all the free space would be consolidated into a few very large areas. Then, when new data arrived, the controller would send it directly to the fresh blocks and be spared from having to move data around just to free up space on pages of invalid data. But the world is far from perfect.

No free lunch, even from the garbage can
As might be expected, background or idle-time GC has drawbacks. The two main downsides are:

1. For users of Ultrabooks and other mobile systems, battery power is precious. The longer users can work unplugged between battery charges, the better. To make the most of a single charge, these systems use features like DevSleep to drastically reduce power to internal components not in use. At times when no data is being stored to or retrieved from the SSD, the host system gears down the SSD into a low power state (like DevSleep) to reduce power draw. In this state, an SSD with background or idle-time GC has no power to perform GC.

The upshot is that the SSD will be very slow when the system turns it back on and starts sending new data that must be saved in the spaces not yet cleared out by garbage collection while the SSD was asleep. Alternately, the SSD may temporarily override the low power command from the host in order to perform the background GC, pulling more power from the battery and shortening the time remaining before it needs to be recharged.

2. When the SSD is performing GC, invalid data is ignored and only valid data is moved before the block is erased. Now imagine a large 2GB file on the SSD that the user plans to delete tomorrow. The SSD has no clue this will happen, so it automatically performs background GC around the 2GB file – and all other data – today, consuming one more of the very limited, precious program/erase cycles for all the flash holding that data. Ideally, the SSD would have waited one more day to garbage collect, the user would have already deleted the file, and the SSD wouldn’t have had to move all that data to new locations. No unnecessary data movement. No unnecessary use of a precious program/erase cycle.

Many people don’t realize that the number of background reads and writes initiated by the operating system, virus checkers, browsers, etc., far outstrip the number initiated by the computer user. Some users rarely delete files, believing they’ll extend the life of their SSD. The truth is, user file deletions are a drop in the bucket. It’s not their use of the computer storage that causes the most wear and tear. Background action from applications and the operating system is the real culprit.

Is there a better option?
What’s an SSD user to do? A super-fast foreground GC engine is the best solution. The key is special hardware and firmware integrated into the flash controller that streamlines garbage collection so it can run in the foreground with incoming data. The engine also enables high-speed writes to the flash memory. By maintaining high write speeds for GC operations, the SSD can afford to leave all valid data mixed with invalid data. That way the blocks are not recycled until absolutely necessary, dramatically reducing wear. Plus, the longer the SSD waits to GC pages, the higher the likelihood other pages of data will have already been made invalid by the operating system or user. The result is lower write amplification, longer flash endurance, and even higher performance.
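
The benefit shows up directly in the standard write amplification ratio, which is simply flash writes divided by host writes. The figures below are hypothetical and only show the direction of the effect: the fewer valid pages GC has to relocate, the closer the ratio gets to 1.

```python
# Write amplification = (host writes + GC relocations) / host writes.
def write_amplification(host_pages_written: int, gc_pages_relocated: int) -> float:
    return (host_pages_written + gc_pages_relocated) / host_pages_written

# Eager background GC that recycles mostly valid blocks relocates a lot of data.
print(write_amplification(host_pages_written=1000, gc_pages_relocated=900))  # 1.9
# Waiting until more of those pages have gone stale relocates far less.
print(write_amplification(host_pages_written=1000, gc_pages_relocated=200))  # 1.2
```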

All LSI® SandForce® Flash Controllers employ foreground GC to provide these invaluable benefits to the user.



Where did my email go?
This week I was dragged into the virtualized cloud kicking and screaming … well, sort of. LSI has moved me, and all my co-workers, from my nice, safe local Exchange server to one in the amorphous, mysterious cloud. Scary. My IT department says the new cloud email is a great thing. They are now promising me unlimited email storage. Long gone will be the days of harrowing emails telling me I am approaching my storage limit and soon will be unable to send new email.

With cloud email, I can keep everything forever! I am not quite sure that saving mountains of email will be a good thing :-).  Other than having to redirect my tablet and smartphone to this new service, update my webmail bookmark and empty my email inbox, there was not much I had to do.  So far, so good. I have not experienced any challenges or performance issues.  A key reason is flash storage.

To be sure, virtualization is a great tool for improving physical server utilization and flexibility as well as reducing power, cooling and datacenter footprint costs.  That’s why the use of virtualization for email, databases and desktops is growing dramatically. But virtualized servers are only as effective as the storage performance that supports them. If, as a datacenter manager, your clients cannot access their application data quickly or boot their virtual desktop in a reasonable time, your company’s productivity and client satisfaction can drop dramatically.

Today, applications most likely run on virtualized servers.  The upside of server virtualization is that a company can improve server utilization and run more applications on fewer physical servers.  This can reduce power consumption, make more efficient use of datacenter floor space and make it easier to configure servers and deploy applications. The cloud also helps streamline application development, allowing companies to more efficiently and cost effectively test software applications across a broad set of configurations and operating systems.

A heated dispute – storage contention
Once application testing is complete, a virtual server’s configuration and image can be put on a virtual shelf until they are needed again, freeing up memory, processing, storage and other resources on the physical server for new virtual servers with just a few keystrokes. But with virtualization and the cloud there can be downsides, like slow performance – especially storage performance.

When a number of virtual servers are all using the same physical storage, there can be infighting for storage resources, generally known as storage contention. These internecine battles can slow application response to a frustratingly glacial pace and lead to issues like VDI Boot and Login Storm that can stretch the time it takes for users to log in to tens of minutes.

Flash helps alleviate slowdowns in storage performance
Here is where flash comes to the rescue. New flash storage solutions are being deployed to help improve virtualized storage performance and alleviate productivity-sapping slowdowns caused by VDI Boot and Login Storm — the crush of end users booting up or logging in within a small window that overwhelms the server with data requests and degrades response times. Flash can be used as primary storage inside servers running virtual machines to dramatically speed storage response time. Flash can also be deployed as an intelligent cache for DAS- or SAN-connected storage and even as an external shared storage pool.

It’s clear that virtualization will require higher storage performance and better, more cost-effective ways to deploy flash storage. But how much flash you need depends on your particular virtualization challenge, configuration and, of course, budget: while flash storage is extremely fast, it is costlier than disk-based storage. So choosing the right storage acceleration solution – such as LSI® Nytro™ Application Acceleration – can be as important as choosing the right cloud provider for your company’s email.

While my email is now stored in the cloud in Timbuktu, I know the flash storage solutions in that datacenter help keep my mail quickly accessible 24/7 whether I access it from my computer, tablet or smartphone, giving my productivity a boost. I can be assured that every action item I am sent will quickly make it to my inbox and be added to my ever-growing to-do list. Now my next big challenge is to improve my own response performance to those email requests!

