When implementing an LSI Nytro WarpDrive (NWD) or Nytro MegaRAID (NMR) PCIe flash card in a Linux server, you need to modify quite a few variables to get the best performance out of these cards.

On a Linux server, device assignments can change after a reboot. Sometimes the PCIe flash card is assigned /dev/sda; other times it may be /dev/sdd, or any other device name. This variability can wreak havoc when applying the Linux performance variables. To get around this issue, reference the card by its SCSI address so that the performance settings are always applied to the correct device after each reboot. If you are using a filesystem, use the device’s UUID in the mount entry in /etc/fstab so the mount persists across reboots.
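
For example, a UUID-based entry in /etc/fstab might look like the following sketch (the UUID, mount point and filesystem type here are placeholders – run blkid against your device’s partition to find its actual UUID):

# /etc/fstab entry keyed to the filesystem UUID rather than a /dev/sdX name
UUID=3e6be9de-8139-11d1-9106-a43f08d823a6  /mnt/nytro  ext4  defaults,noatime  0 0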

Cut and paste the script
The first step is to cut and paste the following script, substituting the SCSI address of your PCIe flash card for the address shown in the grep command. You’ll save the script to a file and call it from /etc/rc.local, as described below.

nwd_getdevice.sh
ls -al /dev/disk/by-id |grep 'scsi-3600508e07e726177965e06849461a804 ' |grep /sd > nwddevice.txt
awk '{split($11,arr,"/"); print arr[3]}' nwddevice.txt > nwd1device.txt
variable1=$(cat nwd1device.txt)
echo "4096" > /sys/block/$variable1/queue/nr_requests
echo "512" > /sys/block/$variable1/device/queue_depth
echo "deadline" > /sys/block/$variable1/queue/scheduler
echo "2" > /sys/block/$variable1/queue/rq_affinity
echo 0 > /sys/block/$variable1/queue/rotational
echo 0 > /sys/block/$variable1/queue/add_random
echo 1024 > /sys/block/$variable1/queue/max_sectors_kb
echo 0 > /sys/block/$variable1/queue/nomerges
blockdev --setra 0 /dev/$variable1

The SCSI address in the grep command above needs to be replaced with the SCSI address of your PCIe flash card. To get the address, issue this command:

ls -al /dev/disk/by-id

When you install the Nytro PCIe flash card, Linux assigns the device a name of the form /dev/sdX, where X can be any letter. The output from the ls command above shows the SCSI address for this PCIe device. Don’t use an address containing “-partX”. Be sure to note this SCSI address, since you will need it when you create the script file in the next step. Include a single space between the SCSI address and the closing single quote in the script; the trailing space keeps grep from matching the “-partX” partition entries.
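
For reference, the line of interest in that output looks something like this (illustrative only – the date, SCSI address and device letter will differ on your system):

lrwxrwxrwx 1 root root 9 Jan 10 09:30 scsi-3600508e07e726177965e06849461a804 -> ../../sdc

The string beginning with “scsi-” is the SCSI address, and the ../../sdc target is the device name Linux assigned at this boot.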

Create nwd_getdevice.sh file
Next, copy the code and create a file called “nwd_getdevice.sh” with the modified SCSI address.

After saving this file, change file permissions to “execute” and then place this command in the /etc/rc.local file:

/path/nwd_getdevice.sh
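
For example, assuming the script was saved as /usr/local/bin/nwd_getdevice.sh (a placeholder – adjust the path to wherever you saved the file):

chmod +x /usr/local/bin/nwd_getdevice.sh

Add the /path/nwd_getdevice.sh line before any final “exit 0” statement if your distribution’s rc.local has one; otherwise the script will never run at boot.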

Test the script
To test this script, execute it on the command line exactly as you entered it in the rc.local file. On every subsequent reboot, the settings will be applied to the appropriate device automatically.
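
A quick way to confirm the settings took effect is to read a few of the values back from sysfs. Here sdc is a placeholder for whatever device name your card was assigned on this boot:

cat /sys/block/sdc/queue/scheduler
cat /sys/block/sdc/queue/nr_requests
blockdev --getra /dev/sdc

The scheduler file should show deadline in brackets, nr_requests should read 4096, and the read-ahead value reported by blockdev should be 0.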

Multiple PCIe flash cards
If you plan to deploy multiple LSI PCIe flash cards in the server, the easiest way is to duplicate all of the commands in the nwd_getdevice.sh script and paste them at the end of the file, then replace the SCSI address in the newly pasted block with the SCSI address of the next card. You can repeat this procedure for as many LSI PCIe flash cards as are installed in the server. For example:

nwd_getdevice.sh
ls -al /dev/disk/by-id |grep 'scsi-1stscsiaddr83333365e06849461a804 ' |grep /sd > nwddevice.txt
awk '{split($11,arr,"/"); print arr[3]}' nwddevice.txt > nwd1device.txt
variable1=$(cat nwd1device.txt)
echo "4096" > /sys/block/$variable1/queue/nr_requests
echo "512" > /sys/block/$variable1/device/queue_depth
echo "deadline" > /sys/block/$variable1/queue/scheduler
echo "2" > /sys/block/$variable1/queue/rq_affinity
echo 0 > /sys/block/$variable1/queue/rotational
echo 0 > /sys/block/$variable1/queue/add_random
echo 1024 > /sys/block/$variable1/queue/max_sectors_kb
echo 0 > /sys/block/$variable1/queue/nomerges
blockdev --setra 0 /dev/$variable1
ls -al /dev/disk/by-id |grep 'scsi-2ndscsiaddr1234566666654444444444 ' |grep /sd > nwddevice.txt
awk '{split($11,arr,"/"); print arr[3]}' nwddevice.txt > nwd1device.txt
variable1=$(cat nwd1device.txt)
echo "4096" > /sys/block/$variable1/queue/nr_requests
echo "512" > /sys/block/$variable1/device/queue_depth
echo "deadline" > /sys/block/$variable1/queue/scheduler
echo "2" > /sys/block/$variable1/queue/rq_affinity
echo 0 > /sys/block/$variable1/queue/rotational
echo 0 > /sys/block/$variable1/queue/add_random
echo 1024 > /sys/block/$variable1/queue/max_sectors_kb
echo 0 > /sys/block/$variable1/queue/nomerges
blockdev --setra 0 /dev/$variable1

Final thoughts
The most important step in implementing Nytro PCIe flash cards under Linux is aligning the card on a boundary, which I cover in Part 1 of this series. This step alone can deliver a 3x performance gain or more, based on our in-house tests as well as testing by some of our customers. The rest of this series walks you through setting up these aligned flash cards using a filesystem, ASM or a RAW device and, finally, making the Linux performance settings for the card persist across reboots.

Links to the other posts in this series:

How to maximize PCIe flash performance under Linux

Part 1: Aligning PCIe flash devices
Part 2: Creating the RAW device or filesystem
Part 3: Oracle ASM



My “Size matters: Everything you need to know about SSD form factors” blog in January spawned some interesting questions, a number of them on Z-height.

What is a Z-height anyway?
For a solid state drive (SSD), Z-height describes its thickness and is generally its smallest dimension. Strictly speaking, Z-height is a redundant term, since Z already denotes height: it is one of the three variables – X, Y and Z, synonymous with length, width and height – that describe the dimensions of a 3-dimensional object. Ironically, no one says X-length or Y-width, but Z-height is widely used.

What’s the state of affairs with SSD Z-height?
The Z-height has typically been associated with the 2.5″ SSD form factor. As I covered in my January form factor blog, the initial dimensions of SSDs were modeled after hard disk drives (HDDs). The 2.5” HDD form factor featured various heights depending on the platter count – the more disks, the greater the capacity and the thicker the HDD. The first 2.5” full capacity HDDs had a maximum Z-height of 19mm, but quickly dropped to a 15mm maximum to enable full-capacity HDDs in thinner laptops. By the time SSDs hit high-volume market acceptance, the dimensional requirements for storage had shrunk even more, to a maximum height of 12.5mm in the 2.5” form factor. Today, the Z-height of most 2.5″ SSDs generally ranges from 5.0mm to 9.5mm.

With printed circuit board (PCB) form factor SSDs—those with no outer case—the Z-height is defined by the thickness of the board and its components, which can be 3mm or less. Some laptops have unique shape or height restrictions for the SSD space allocation. For example, the MacBook Air’s ultra-thin profile requires some of the thinnest SSDs produced.

A new standard in SSD thickness
The platter count of an HDD determines its Z-height. In contrast, an SSD’s Z-height is generally the same regardless of capacity. The proportion of SSD form factors deployed in systems is shifting from the traditional, encased SSDs to the new bare PCB SSDs. As SSDs drift away from the older form factors with different heights, consumers and OEM system designers will no longer need to consider Z-height because the thickness of most bare PCB SSDs will be standard.



The introduction of LSI® SF3700 flash controllers has prompted many questions about the PCIe® (PCI Express) interface and how it benefits solid state storage, and there’s no better person to turn to for insights than our resident expert, Jeremy Werner, Sr. Director of Product and Customer Management in LSI’s Flash Components Division (SandForce):

Most client-based SSDs have used SATA in the past, while PCIe was mainly used for enterprise applications. Why is the PCIe interface becoming so popular for the client market?

Jeremy: Over the past few decades, the performance of host interfaces for client devices has steadily climbed. Parallel ATA (PATA) interface speed grew from 33MB/s to 100MB/s, while the performance of the Serial ATA (SATA) connection rose from 1.5Gb/s to 6Gb/s. Today, some solid state drives (SSDs) use the PCIe Gen2 x4 (second-generation speeds with four data communication lanes) interface, supporting up to 20Gb/s (in each direction). Because the PCIe interface can simultaneously read and write (full duplex) and SATA can only read or write at one time (half-duplex), PCIe can potentially double the 20Gb/s speeds in a mixed (read and write) workload, making it nearly seven times faster than SATA.

Will the PCIe interface replace SATA for SSDs?

Jeremy: Eventually the replacement is likely, but it will probably take many years in the single-drive client PC market given two hindrances. First, some single-drive client platforms must use a common HDD and SSD connection to give users the choice between the two devices. And because the 6Gb/s SATA interface delivers much higher speeds than hard disk drives, there is no immediate need for HDDs to move to the faster PCIe connection, leaving SATA as the sole interface for the client market. And, secondly, the older personal computers already in consumers’ homes that need an SSD upgrade support only SATA storage devices, so there’s no opportunity for PCIe in that upgrade market.

By contrast, the enterprise storage market, and even some higher-end client systems, will migrate quickly to PCIe since they will see significant speed increases and can more easily integrate PCIe SSD solutions available now.

It is noteworthy that some standards, like M.2 and SATA Express, have defined a single connector that supports SATA or PCIe devices.  The recently announced LSI SF3700 is one example of an SSD controller that supports both of those interfaces on an M.2 board.

What is meant by the terms “x1, x2, x4, x16” when referencing a particular PCIe interface?

Jeremy: These numbers are the PCIe lane counts in the connection. Either the host (computer) or the device (SSD) could limit the number of lanes used. The theoretical maximum speed of the connection (not including protocol overhead) is the number of lanes multiplied by the speed of each lane.

What is protocol overhead?

Jeremy: PCIe, like many bus interfaces, uses a transfer encoding scheme – a set number of data bits represented by a slightly larger number of bits called a symbol. The additional bits in the symbol constitute the inefficient overhead of metadata required to manage the transmitted user data. PCIe Gen3 features a more efficient data transfer encoding with 128b/130b (about 1.5% overhead) instead of the 8b/10b (20% overhead) of PCIe Gen2, improving the data efficiency of each lane by roughly 23%.
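
As a quick back-of-the-envelope check of what that encoding change means per lane (using the 5GT/s Gen2 and 8GT/s Gen3 raw rates discussed below; bc is used here simply as a calculator):

echo "scale=2; 5*8/10" | bc        # PCIe Gen2 lane: ~4.00 Gb/s of user data
echo "scale=2; 8*128/130" | bc     # PCIe Gen3 lane: ~7.87 Gb/s of user data

That near-doubling per lane is also why, as Jeremy notes below, a PCIe Gen3 x2 link and a PCIe Gen2 x4 link top out at almost the same throughput.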

What is defined in the PCIe 2.0 and 3.0 specifications, and do end users really care?

Jeremy: Although each PCIe Gen3 lane is faster than PCIe Gen2 (8Gb/s vs 5Gb/s, respectively), lanes can be combined to boost performance in both versions. The changes most relevant to consumers pertain to higher speeds. For example, today consumer SSDs top out at 150K random read IOPS at 4KB data transfer sizes. That translates to about 600MB/s, which is insufficient to saturate a PCIe Gen2 x2 link, so consumers would see little benefit from a PCIe Gen3 solution over PCIe Gen2. The maximum performance of PCIe Gen2 x4 and PCIe Gen3 x2 devices is almost identical because of the different transfer encoding schemes mentioned previously.

Are there mandatory features that must be supported in any of these specifications?

Jeremy: Yes, but nearly all of these features have little impact on performance, so most users have no interest in the specs. It’s important to keep in mind that the PCIe speeds I’ve cited are defined as the maximums, and the spec has no minimum speed requirement. This means a PCIe Gen3 solution might support only a maximum of 5Gb/s, but still be considered a PCIe Gen3 solution if it meets the necessary specifications. So buyers need to be aware of the actual speed rating of any PCIe solution.

Is a PCIe Gen3 SSD faster than a PCIe Gen2 SSD?

Jeremy: Not necessarily. For example, a PCIe Gen2 x4 SSD is capable of higher speeds than a PCIe Gen3 x1 SSD. However, bottlenecks other than the front-end PCIe interface will limit the performance of many SSDs. Examples of other choke points include the bandwidth of the flash, the processing/throughput of the controller, the power or thermal limitations of the drive and its environment, and the ability to remove heat from that environment. All of these factors can, and typically do, prevent the interface from reaching its full steady-state performance potential.

In what form factors are PCIe cards available?

Jeremy: PCIe cards are typically referred to as plug-in products, much like SSDs, graphics cards and host-bus adapters. PCIe SSDs come in many form factors, with the most popular called “half-height, half-length.” But the popularity of the new, tiny M.2 form factors is growing, driven by rising demand for smaller consumer computers. There are other PCIe form factors that resemble traditional hard disk drives, such as the SFF-8639, a 2.5” hard disk drive form factor that features four PCIe lanes and is hot pluggable. What’s more, its socket is compatible with the SAS and SATA interfaces. The adoption of the SATA Express 2.5” form factor has been limited, but could be given a boost with the availability of new capabilities like SRIS (Separate Refclk with Independent SSC), which enables the use of lower cost interconnection cables between the device and host.

Are all M.2 cards the same?

Jeremy: No. All SSD M.2 cards are 22 mm wide (while some WAN cards are 30 mm wide), but the specification allows for different lengths (30, 42, 60, 80, and 110 mm). What’s more, the cards can be single- or double-sided to account for differences in the thickness of the products. Also, they are compatible with two different sockets (socket 2 and socket 3). SSDs compatible with both socket types, or only socket 2, can connect only two lanes  (x2), while SSDs compatible with only socket 3 can connect up to four (x4).

Summary
In my last few blogs, I covered various aspects of SSD form factors and included many images of the types that Jeremy mentioned above. I also delve deeper into details of the M.2 form factor in my blog “M.2: Is this the Prince of SSD form factors?” One thing about PCIe is certain: It is the next step in the evolution of computer interfaces and will give rise to more SSDs with higher performance, lower power consumption and better reliability.

 



A customer recently asked me if the SF3700, our latest flash controller, supports SATA Express and fired away with a bunch of other questions about the standard. The depth of his curiosity suggested a broader need for education on the basics of the standard.

To help me with the following overview of SATA Express, I recruited Sumit Puri, Sr. Director of Strategic Marketing for the Flash Components Division at LSI (SandForce). Sumit is a longtime contributor to many storage standards bodies and has been working with SATA-IO – the group responsible for SATA Express – for many years. He has first-hand knowledge of SATA-IO’s work.

Here are his insights into some of the fundamentals of SATA Express.

What is SATA Express?
Sumit: There’s quite a bit of confusion in the industry about what SATA Express defines. In simple terms, SATA Express is a specification for a new connector type that enables the routing of both PCIe® and SATA signals. SATA Express is not a command or signaling protocol. It should really be thought of as a connector that mates with legacy SATA cables and new PCIe cables.

Sumit Puri 

Why was SATA Express created?
Sumit: SATA Express was developed to help smooth the transition from the legacy SATA interface to the new PCIe interface. SATA Express gives system vendors a common connector that supports both traditional SATA and PCIe signaling and helps OEMs streamline connector inventory and reduce related costs.

What is the protocol used in SATA Express?
Sumit: One of the misconceptions about SATA Express is that it’s a protocol specification. Rather, as I mentioned, it’s a mechanical specification for a connector and the matching cabling. Protocols supported over SATA Express include SATA, AHCI and NVMe.

What are the form factors for SATA Express?
Sumit: SATA Express defines connectors for both a 2.5” drive and the host system. SATA Express connects the drive and system using SATA cables or the newly defined PCIe cables.

What connector configuration is used for SATA Express?
Sumit: Because SATA Express supports both SATA and PCIe signaling as well as the legacy SATA connectors, there are multiple configuration options available to motherboard and device manufacturers. The image below shows plug (a), which is built for attaching to a PCIe device. Socket (b) would be part of a cable assembly for receiving plug (a) or a standard SATA plug, and Socket (c) would mount to a backplane or motherboard for receiving plug (a) or a standard SATA plug. The last two connectors are a mating pair designed to enable cabling (e) to connect to motherboards (d).

When will hosts begin supporting SATA Express?

Sumit: We expect systems to begin using SATA Express connectors early this year. They will primarily be deployed in desktop environments, which require cabling. In contrast, we expect limited use of SATA Express in notebook and other portable systems that are moving to cableless card-edge connector designs like the recently minted M.2 form factor. We also expect to see scant use of SATA Express in enterprise backplanes. Enterprise customers will likely transition to other connectors that support higher speed PCIe signaling like the SFF-8639, a new connector that was originally included in the SATA Express specification but has since been removed.

Will LSI support SATA Express?
Sumit: Absolutely. Our SF3700 flash controller will be fully compatible with the newly defined SATA Express connector and support either SATA or PCIe. Our current SF-2000 SATA flash controllers support SATA cabling used on SATA Express, but not PCIe.

Will LSI also support SRIS?
Sumit: PCIe devices enabled with SRIS (Separate Refclk Independent SSC) can self-clock, so they need no reference clock from the host, allowing system builders to use lower cost PCIe cables. SRIS is an important cost-saving feature for cabling that supports PCIe signaling; it does not apply to card-edge connector designs. Today the SF3700 supports PCIe connectivity, and LSI will support SRIS in future releases of the SF3700 and other products.

Why is it called SATA Express?
Sumit: SATA Express blends the names of the two interfaces and captures the hybridization of the physical interconnects. The name reflects the ability of legacy SATA connectors to support higher PCIe data rates, simplifying the transition to PCIe devices. SATA Express can pull double duty, supporting both PCIe and SATA signaling in the same motherboard socket. The same SATA Express socket accepts both traditional SATA and new PCIe cables and links to either a legacy SATA or SATA Express device connector.

How fast can SATA Express run?
Sumit: The PCIe interface defines the top SATA Express speed. A PCIe Gen2 x2 device supports up to 900 MB/s of throughput, and a PCIe Gen3 x2 device up to 1800 MB/s of throughput – both significantly higher than the 550 MB/s speed ceiling of today’s SATA devices.

Is SATA Express similar to M.2?
Sumit: There are two key similarities. Both support SATA and PCIe on the same host connector, and both are designed to help transition from SATA to PCIe over time.

SATA Express delivers the future of connector speeds today
SATA Express was born of the stuff of all great inventions. Necessity. The challenge SATA-IO faced in doubling SATA 6Gb/s speeds was herculean. The undertaking would have taken too long to deliver the next-generation connection speeds that PCIe already provides. It would have been too involved, requiring an overhaul of the SATA standard. Even in the brightest scenario, the effort would have produced a power guzzler at a time when greater power efficiency is a must for system builders. SATA-IO found a better path, an elegant bridge to PCIe speeds, in the form of SATA Express.



The term “form factor” is used in the computer industry to describe the shape and size of computer components, like drives, motherboards and power supplies. When hard disk drives (HDDs) initially made their way into microprocessor-based computers, they used magnetic platters up to 8 inches in diameter. Because that was the largest single component inside the HDD, it defined the minimum width of the HDD housing—the metal box around the guts of the drive.

The height was dictated by the number of platters stacked on the motor (about 14 for the largest configurations). Over time the standard magnetic platter diameter shrank, which allowed the HDD width to decrease as well. The computer industry used the platter diameter to describe HDD form factors, and those contours shrank over the years. Those 8” HDDs for datacenter storage and desktop PCs shed size to 5.25” and then to today’s 3.5”, and laptop HDDs, starting at 2.5”, are now as small as 1.8”.

What defines an SSD form factor?
When solid state drives (SSDs) first started replacing HDDs, they had to fit into computer chassis or laptop drive bays (mounting location) built for HDDs, so they had to conform to HDD dimensions. The two SSDs shown below are form factor identical twins—without the outer casing—to 1.8” and 2.5” HDDs. The SSDs also use standard SATA connectors, but note that the SATA connector for 1.8” devices is narrower than the 2.5” devices to accommodate the smaller width.

Internal circuit board of 1.8″ and 2.5″ SSD without the case

 

However, there’s no requirement for the SSD to match the shape of a typical HDD form factor. In fact some of the early SSDs slid into the high-speed PCIe slots inside the computer chassis, not into the drive bays. A PCIe® SSD card solution resembles an add-in graphics card and installs the same way, in a PCIe slot, since the physical interface is PCIe.

SSD manufactured as a PCIe expansion card

 

The largest component of an SSD is a flash memory chip so, depending on how many flash chips are used, manufacturers have virtually limitless options in defining dimensions. JEDEC (Joint Electron Device Engineering Council) defines technical standards for the electronics industry, including SSD form factors. JEDEC defined the MO-297 standard, which establishes parameters for the layout, connector locations and dimensions of 54mm x 39mm Serial ATA (SATA) SSDs, so they can use the same connector as standard 2.5” HDDs but fit into a much smaller space.

MO-297 with standard width (2.5”) SATA connector

 

The most important element of an SSD form factor is the interface connector, the conduit to the host computer. In the early days of SSDs, that connector was typically the same SATA connector used with HDDs. But over time the width of some SSDs became smaller than the SATA connector itself, driving the need for new connectors.

Standard SATA connector is too large for small form factors

Card edge connectors – the part of a computer board that plugs into a computer – emerged to enable smaller designs and to further reduce manufacturing and component costs by requiring the installation of only a single female socket on the host as a receptor for the edge of the SSD’s printed circuit board. (The original 2.5” and 1.8” SSD SATA connector required both a male and female plastic connector to mate the SSD to the computer).

With standardization of these connectors critical to ensuring interoperability among different manufacturers, a few organizations have defined standards for these new connectors. JEDEC defined the MO-300 (50.8mm x 29.85mm), which uses a mini-SATA (mSATA) connector, the same physical connector as mini PCI Express, although the two are not electrically compatible. SSD manufacturers have used that same mSATA edge connector and board width, but customized the length to accommodate more flash chips for higher capacity SSDs.

MO-300 mSATA (left) and custom-length mSATA (right)

 

In 2012 a new, even smaller form factor was introduced as Next Generation Form Factor (NGFF), but was later renamed to M.2. The M.2 standard defines a long list of optional board sizes, and the connector supports both SATA and PCIe electrical interfaces. The keyways or notches on the connector can help determine the interface and number of PCIe lanes possible to the board. However, that gets into more details than we have space to cover here, so we will save that for a future blog.

M.2 form factor supports PCIe and SATA, with many board size options

Apple® MacBook Air® and some MacBook Pro systems use an SSD with a connector and dimensions that closely resemble those of the M.2 form factor. In fact, Apple MacBook systems have used a number of different connectors and interfaces for their SSDs over the years: Apple used a custom connector with SATA signals from 2010 through 2012 and in 2013 switched to a custom connector with PCIe signals.

 

MacBook Air SSDs with custom SATA and PCIe connectors

 

 

In some cases, standard SSD form factor configurations are not an option, so SSD manufacturers have taken it upon themselves to create custom board and interface configurations that meet those less typical needs.

Various custom SSD form factors and connectors

And finally there’s the ubiquitous USB-based connection. While USB flash drives have been around for nearly a decade, many people do not realize the performance of these devices can vary by 10 to 20 times. Typically a USB flash drive is used to make data portable—replacing the old floppy disk. In those cases the speed of the device is not critical since it is used infrequently.

Now, with the high-speed USB 3 interface, a SATA-to-USB 3 bridge chip, and a high-performance flash controller like the LSI® SandForce® controller, these external devices can operate as primary system SSDs, performing as fast as a standard SSD inside the system. The primary advantages of these SSDs are removability and transportability combined with high-speed operation.

USB host interface containing high-speed SSDs

If there’s one constant in life, it’s demand for ever smaller storage form factors that prompt changes in circuit layout, connector position and, of course, dimensions. New connectors proposed for future generations of storage devices like the SFF-8639 specification will enable multiple interfaces and data path channels on the same connector. While the SFF-8639 does not technically define the device to which it connects, the connector itself is rather large, so the form factor of the SSD will need to be big enough to hold the connector. That’s why the primary SFF-8639 market is datacenters that use back-plane connectors and racks of storage devices. A similar connector – like SFF-8639, very large and built to support multiple data paths – is the SATA Express connector. I will save the details of that connector for an upcoming blog.

The sky’s the limit for SSD shapes and sizes. Without a spinning platter inside a box, designers can let their imaginations run wild. Creative people in the industry will continue to find new applications for SSDs that were previously restricted by the internal components of HDDs. That creativity and flexibility will take on growing importance as we continue to press datacenters and consumer electronics to do more with less, reminding us that size does in fact matter.



I want to warn you, there is some thick background information here first. But don’t worry. I’ll get to the meat of the topic and that’s this: Ultimately, I think that PCIe® cards will evolve to more external, rack-level, pooled flash solutions, without sacrificing all their great attributes today. This is just my opinion, but other leaders in flash are going down this path too…

I’ve been working on enterprise flash storage since 2007 – mulling over how to make it work. Endurance, capacity, cost and performance have all been concerns to grapple with. Of course the flash is changing too as the nodes change: 60nm, 50nm, 35nm, 24nm, 20nm… and single level cell (SLC) to multi level cell (MLC) to triple level cell (TLC) and all the variants of these “trimmed” for specific use cases. The spec “endurance” has gone from 1 million program/erase cycles (PE) to 3,000, and in some cases 500.

It’s worth pointing out that almost all the “magic” that has been developed around flash was already scoped out in 2007. It just takes a while for a whole new industry to mature. Individual die capacity increased, meaning fewer die are needed for a solution – and that means less parallel bandwidth for data transfer… And the “requirement” for state-of-the-art single operation write latency has fallen well below the write latency of the flash itself. (What the ?? Yea – talk about that later in some other blog. But flash is ~1500uS write latency, where state of the art flash cards are ~50uS.) When I describe the state of technology it sounds pretty pessimistic.  I’m not. We’ve overcome a lot.

We built our first PCIe card solution at LSI in 2009. It wasn’t perfect, but it was better than anything else out there in many ways. We’ve learned a lot in the years since – both from making them, and from dealing with customers and users – about our own solutions and our competitors’. We’re lucky to be an important player in storage, so in general the big OEMs, large enterprises and the hyperscale datacenters all want to talk with us – not just about what we have or can sell, but what we could have and what we could do. They’re generous enough to share what works and what doesn’t, what the values of solutions are and what the pitfalls are too. Honestly? It’s the hyperscale datacenters in the lead, both practically and in vision.

If you haven’t  nodded off to sleep yet, that’s a long-winded way of saying – things have changed fast, and, boy, we’ve learned a lot in just a few years.

Most important thing we’ve learned…
Most importantly, we’ve learned it’s latency that matters. No one is pushing the IOPs limits of flash, and no one is pushing the bandwidth limits of flash. But they sure are pushing the latency limits.

PCIe cards are great, but…
We’ve gotten lots of feedback, and one of the biggest things we’ve learned is – PCIe flash cards are awesome. They radically change the performance profiles of most applications, especially databases, allowing servers to run efficiently and multiplying the actual work done by a server 4x to 10x (and in a few extreme cases 100x). So the feedback we get from large users is “PCIe cards are fantastic. We’re so thankful they came along. But…” There’s always a “but,” right??

It tends to be a pretty long list of frustrations, and they differ depending on the type of datacenter using them. We’re not the only ones hearing it. To be clear, none of these are stopping people from deploying PCIe flash… the attraction is just too compelling. But the problems are real, and they have real implications, and the market is asking for real solutions.

  • Stranded capacity & IOPs
    • Some “leftover” space is always needed in a PCIe card. Databases don’t do well when they run out of storage! But you still pay for that unused capacity.
    • All the IOPs and bandwidth are rarely used – sure latency is met, but there is capability left on the table.
    • Not enough capacity on a card – It’s hard to figure out how much flash a server/application will need. But there is no flexibility. If my working set goes one byte over the card capacity, well, that’s a problem.
  • Stranded data on server fail
    • If a server fails – all that valuable hot data is unavailable. Worse – it all needs to be re-constructed when the server does come online because it will be stale. It takes quite a while to rebuild 2TBytes of interesting data. Hours to days.
  • PCIe flash storage is a separate storage domain vs. disks and boot.
    • Have to explicitly manage LUNs, move data to make it hot.
    • Often have to manage via different API’s and management portals.
    • Applications may even have to be re-written to use different APIs, depending on the vendor.
  • Depending on the vendor, performance doesn’t scale.
    • One card gives awesome performance improvement. Two cards don’t  give quite the same improvement.
    • Three or four cards don’t give any improvement at all. Performance maxed out somewhere below 2 cards.  It turns out drivers and server onloaded code create resource bottlenecks, but this is more a competitor’s problem than ours.
  • Depending on the vendor, performance sags over time.
    • More and more computation (latency)  is needed in the server as flash wears and needs more error correction.
    • This is more a competitor’s problem than ours.
  • It’s hard to get cards in servers.
    • A PCIe card is a card – right? Not really. Getting a high capacity card in a half height, half length PCIe form factor is tough, but doable. However, running that card has problems.
    • It may need more than 25W of power to run at full performance – the slot may or may not provide it. Flash burns power proportionately to activity, and writes/erases are especially power-intensive. It’s really hard to remove more than 25W with air cooling in a slot.
    • The air is preheated, or the slot doesn’t get good airflow. It ends up being a server-by-server, slot-by-slot qualification process. (Yes, slot by slot…) As trivial as this sounds, it’s actually one of the biggest problems.

Of course, everyone wants these fixed without affecting single operation latency, or increasing cost, etc. That’s what we’re here for though – right? Solve the impossible?

A quick summary is in order. It’s not looking good. For a given solution, flash is getting less reliable, there is less bandwidth available at capacity because there are fewer die, we’re driving latency way below the actual write latency of flash, and we’re not satisfied with the best solutions we have for all the reasons above.

The implications
If you think these through enough, you start to consider one basic path. It also turns out we’re not the only ones realizing this. Where will PCIe flash solutions evolve over the next 2, 3, 4 years? The basic goals are:

  • Unified storage infrastructure for boot, flash, and HDDs
  • Pooling of storage so that resources can be allocated/shared
  • Low latency, high performance as if those resources were DAS attached, or PCIe card flash
  • Bonus points for file store with a global name space

One easy answer would be – that’s a flash SAN or NAS. But that’s not the answer. Not many customers want a flash SAN or NAS – not for their new infrastructure, but more importantly, all the data is at the wrong end of the straw. The poor server is left sucking hard. Remember – this is flash, and people use flash for latency. Today these SAN type of flash devices have 4x-10x worse latency than PCIe cards. Ouch. You have to suck the data through a relatively low bandwidth interconnect, after passing through both the storage and network stacks. And there is interaction between the I/O threads of various servers and applications – you have to wait in line for that resource. It’s true there is a lot of startup energy in this space.  It seems to make sense if you’re a startup, because SAN/NAS is what people use today, and there’s lots of money spent in that market today. However, it’s not what the market is asking for.

Another easy answer is NVMe SSDs. Right? Everyone wants them – right? Well, OEMs at least. Front bay PCIe SSDs (HDD form factor or NVMe – lots of names) that crowd out your disk drive bays. But they don’t fix the problems. The extra mechanicals and form factor are more expensive, and just make replacing the cards every 5 years a few minutes faster. Wow. With NVMe SSDs, you can fit fewer HDDs – not good. They also provide uniformly bad cooling, and hard-limit power to 9W or 25W per device. But to protect the storage in these devices, you need to have enough of them that you can RAID or otherwise protect. Once you have enough of those for protection, they give you awesome capacity, IOPs and bandwidth – too much, in fact – but that’s not what applications need; they need low latency for the working set of data.

What do I think the PCIe replacement solutions in the near future will look like? You need to pool the flash across servers (to optimize bandwidth and resource usage, and allocate appropriate capacity). You need to protect against failures/errors and limit the span of failure,  commit writes at very low latency (lower than native flash) and maintain low latency, bottleneck-free physical links to each server… To me that implies:

    • Small enclosure per rack handling ~32 or more servers
    • Enclosure manages temperature and cooling optimally for performance/endurance
    • Remote configuration/management of the resources allocated to each server
    • Ability to re-assign resources from one server to another in the event of server/VM blue-screen
    • Low-latency/high-bandwidth physical cable or backplane from each server to the enclosure
    • Replaceable inexpensive flash modules in case of failure
    • Protection across all modules (erasure coding) to allow continuous operation at very high bandwidth
    • NV memory to commit writes with extremely low latency
    • Ultimately – integrated with the whole storage architecture at the rack, the same APIs, drivers, etc.

That means the performance looks exactly as if each server had multiple PCIe cards. But the capacity and bandwidth resources are shared, and systems can remain resilient. So ultimately, I think that PCIe cards will evolve to more external, rack level, pooled flash solutions, without sacrificing all their great attributes today. This is just my opinion, but as I say – other leaders in flash are going down this path too…

What’s your opinion?



Big data and Hadoop are all about exploiting new value and opportunities with data. In financial trading, business and some areas of science, it’s all about being fastest or first to take advantage of the data. The bigger the data sets, the smarter the analytics. The next competitive edge with big data comes when you layer in flash acceleration. The challenge is scaling performance in Hadoop clusters.

The most cost-effective option emerging for breaking through disk I/O bottlenecks to scale performance is to use high-performance read/write flash cache acceleration cards. This is essentially a way to get more work for less cost by bringing data closer to the processing. In testing, the LSI® Nytro™ product has been shown to reduce the time it takes to complete Hadoop software framework jobs by up to 33%.

Flash cache cards increase Hadoop application performance
Combining flash cache acceleration cards with Hadoop software is a big opportunity for end users and suppliers. LSI estimates that less than 10% of Hadoop software installations today incorporate flash acceleration [1]. This will grow rapidly as companies see the increased productivity and ROI of flash to accelerate their systems. And Hadoop software adoption is also growing fast. IDC predicts a CAGR of as much as 60% by 2016 [2]. Drivers include IT security, e-commerce, fraud detection and mobile data user management. Gartner predicts that Hadoop software will be in two-thirds of advanced analytics products by 2015 [3]. Many thousands of Hadoop software clusters are already deployed.

Where flash makes the most immediate sense is with those who have smaller clusters doing lots of in-place batch processing. Hadoop is purpose-built for analyzing a variety of data, whether structured, semi-structured or unstructured, without the need to define a schema or otherwise anticipate results in advance. Hadoop enables scaling that allows an unprecedented volume of data to be analyzed quickly and cost-effectively on clusters of commodity servers. Speed gains are about data proximity. This is why flash cache acceleration typically delivers the highest performance gains when the card is placed directly in the server on the PCI Express® (PCIe) bus.

Combining the best of flash and HDDs to drive higher performance and storage capacity
PCIe flash cache cards are now available with multiple terabytes of NAND flash storage, which substantially increases the hit rate. We offer a solution with both onboard flash modules and Serial-Attached SCSI (SAS) interfaces to enable high-performance direct-attached storage (DAS) configurations consisting of solid state and hard disk drive storage. This couples the low-latency performance benefits of flash with the capacity and cost-per-gigabyte advantages of HDDs.

To keep the processor close to the data, Hadoop uses servers with DAS. And to get the data even closer to the processor, the servers are usually equipped with significant amounts of random access memory (RAM). An additional benefit: Smart implementation of Hadoop and flash components can reduce the overall server footprint and simplify scaling, with some solutions enabling up to 128 devices to share a very high bandwidth interface. Most commodity servers provide eight or fewer SATA ports for disks, limiting expandability.

Hadoop is great, but flash-accelerated Hadoop is best. It’s an effective way, as you work to extract full value from big data, to secure a competitive edge.

  1. Based on internal LSI research.
  2. “IDC Worldwide Hadoop-MapReduce Ecosystem Software 2012-2016 Forecast,” May 2012.
  3. “Gartner Predicts 2013: Business Intelligence and Analytics Need to Scale Up to Support Explosive Growth in Data Sources,” December 2012.



Anyone who knows me knows I like to ask “why?” Maybe I never outgrew the 2-year-old phase. But I also like to ask “why not?” Every now and then you need to rethink everything you know top to bottom because something might have changed.

I’ve been talking to a lot of enterprise datacenter architects and managers lately. They’re interested in using flash in their servers and storage, but they can’t get over all the “problems.”

The conversation goes something like this: Flash is interesting, but it’s crazy expensive $/bit. The prices have to come way down – after all it’s just a commodity part. And I have these $4k servers – why would I put an $8k PCIe card in them – that makes no sense. And the stuff wears out, which is an operational risk for me – disks last forever. Maybe flash isn’t ready for prime time yet.

These arguments are reasonable if you think about flash as a disk replacement, and don’t think through all the follow-on implications.

In contrast I’ve also been spending a lot of time with the biggest datacenters in the world – you know – the ones we all know by brand name. They have at least 200k servers, and anywhere from 1.5 million to 7 million disks. They notice CapEx and OpEx a lot. You multiply anything by that much and it’s noticeable. (My simple example: add 1 LED to each server and, across 200k servers, that adds up to 26K watts of power plus $10K in LED cost.) They are very scientific about cost. More specifically they measure work/$ very carefully. Anything to increase work or reduce $ is very interesting – doing both at once is the holy grail. Already one of those datacenters is completely diskless. Others are part way there, or have the ambition of being there. You might think they’re crazy – how can they spend so much on flash when disks are so much cheaper, and these guys offer their services for free?

When the large datacenters – what I call the hyperscale datacenters – measure cost, they’re looking at purchase cost, including metal racks and enclosures, shipping, service cost (both parts and human expense), as well as operational disruption overhead and the complexity of managing that, the opportunity cost of new systems vs. old systems that are less efficient, and of course facilities expenses – buildings, power, cooling, people… They try to optimize the mix of these.

Let’s look at the arguments against using flash one by one.

Flash is just a commodity part
This is a very big fallacy. It’s not a commodity part, and flash is not all the same. The parts you see in cheap consumer devices deserve their price. In the chip industry, it’s common to have manufacturing fallout; 3% – 10% is reasonable. What’s more the devices come at different performance levels – just look at x86 performance versions of the same design. In the flash business 100% of the devices are sold, used, and find their way into products. Those cheap consumer products are usually the 3%-10% that would be scrap in other industries. (I was once told – with a smile – “those are the parts we sweep off the floor”…)

Each generation of flash (about 18 months between them) and each manufacturer (there are 5, depending how you count) have very different characteristics. There are wild differences in erase time, write time, read time, bandwidth, capacity, endurance, and cost. There is no one supplier that is best at all of these, and leadership moves around. More importantly, in a flash system, how you trade these things off has a huge effect on write latency (#1 impactor on work done), latency outliers (consistent operation), endurance or life span, power consumption, and solution cost. All flash products are not equal – not by a long shot. Even hyperscale datacenters have different types of solutions for different needs.

It’s also important to know that temperature of operation and storage, inter-arrival time of writes, and “over provisioning” (the amount hidden for background use and garbage collection) have profound impacts on lifespan and performance.

$8k PCIe card in a $4k server – really?
I am always stunned by this. No one thinks twice about spending more on virtualization licenses than on hardware, or say $50k for a database license to run on a $4k server. It’s all about what work you need to accomplish, and what’s the best way to accomplish it. It’s no joke that in database applications it’s pretty easy to get 4x the work from a server with a flash solution inserted. You probably won’t get worse than 4x, and as good as 10x. On a purely hardware basis, that makes sense – I can have 1 server @ $4k +  $8K flash vs. 4 servers @ $4k. I just saved $4k CapEx. More importantly, I saved the service contract, power, cooling and admin of 3 servers. If I include virtualization or database licenses, I saved another $150k + annual service contracts on those licenses. That’s easy math. If I worry about users supported rather than work done, I can support as many as 100x users. The math becomes overwhelming. $8K PCIe card in a $4k server? You bet when I think of work/$.

The stuff wears out & disks last forever
It’s true that car tires wear out, and depending on how hard you use them that might be faster or slower. But tires are one of the most important parts in a car’s performance – acceleration, stopping, handling – you couldn’t do any of that without them. The only time you really have catastrophic failure with tires is when you wear them way past any reasonable point – until they are bald and should have been replaced. Flash is like that – you get lots of warning as it’s wearing out, and you get lots of opportunity to operationally plan and replace the flash without disruption. You might need to replace it after 4 or 5 years, but you can plan and do it gracefully. Disks can last “forever,” but they also fail randomly and often.

Reliability statistics across millions of hard drives show somewhere around 2.5% fail annually. And that’s for 1st quality drives. Those are unpredicted, catastrophic failures, and depending on your storage systems that means you need to go into rebuild or replication of TBytes of data, and you have a subsequent degradation in performance (which can completely mess up load balancing of a cluster of 20 to 200 other nodes too), potentially network traffic overhead, and a physical service event that needs to be handled manually and fairly quickly. And really – how often do admins want to take the risk of physically replacing a drive while a system is running? Just one mistake by your tech and it’s all over… Operationally flash is way better, less disruptive, predictable, lower cost, and the follow-on implications are much simpler.

Crazy expensive $/bit
OK – so this argument doesn’t seem so relevant anymore. Even so, in most cases you can’t use much of the disk capacity you have. It will be stranded because you need to have spare space as databases, etc. grow. If you run out of space for db’s the result is catastrophic. If you are driving a system hard, you often don’t have the bandwidth left to actually access that extra capacity. It’s common to only use ½ of the available capacity of drives.

Caching solutions change the equation as well. You can spend money on flash for the performance characteristics, and shift disk drive spend to fewer, higher capacity, slower, more power efficient drives for bulk capacity. Often for the same or similar overall storage spend you can have the same capacity at 4x the system performance. And the space and power consumed and cooling needed for that system is dramatically reduced.

Even so, flash is not going to replace large capacity storage for a long, long time, if ever. Whatever the case, the $/bit is simply not the right metric for evaluating flash. But it’s true, flash is more expensive per bit. It’s simply that in most operational contexts, it more than makes up for that through other savings and work/$ improvements.

So I would argue (and I’m backed up by the biggest hyperscale datacenters in the world) that flash is ready for prime time adoption. Work/$ is the correct metric, but you need to measure from the application down to the storage bits to get that metric. It’s not correct to think about flash as “just a disk replacement” – it changes the entire balance of a solution stack from application performance and responsiveness and cumulative work, to server utilization to power consumption and cooling to maintenance and service to predictable operational stability. It’s not just a small win; it’s a big win. It’s not a fit yet for large pools of archival storage – but even for that a lot of energy is going into trying to make that work. So no – enterprise will not go diskless for quite a while, but it is understandable why hyperscale datacenters want to go diskless. It’s simple math.

Every now and then you need to rethink everything you know top to bottom because something might have changed.
