A customer recently asked me if the SF3700, our latest flash controller, supports SATA Express and fired away with a bunch of other questions about the standard. The depth of his curiosity suggested a broader need for education on the basics of the standard.

To help me with the following overview of SATA Express, I recruited Sumit Puri, Sr. Director of Strategic Marketing for the Flash Components Division at LSI (SandForce). Sumit is a longtime contributor to many storage standards bodies and has been working with SATA-IO – the group responsible for SATA Express – for many years. He has first-hand knowledge of SATA-IO’s work.

Here are his insights into some of the fundamentals of SATA Express.

What is SATA Express?
Sumit: There’s quite a bit of confusion in the industry about what SATA Express defines. In simple terms, SATA Express is a specification for a new connector type that enables the routing of both PCIe® and SATA signals. SATA Express is not a command or signaling protocol. It should really be thought of as a connector that mates with legacy SATA cables and new PCIe cables.

Sumit Puri 

Why was SATA Express created?
Sumit: SATA Express was developed to help smooth the transition from the legacy SATA interface to the new PCIe interface. SATA Express gives system vendors a common connector that supports both traditional SATA and PCIe signaling and helps OEMs streamline connector inventory and reduce related costs.

What is the protocol used in SATA Express?
Sumit: One of the misconceptions about SATA Express is that it’s a protocol specification. Rather, as I mentioned, it’s a mechanical specification for a connector and the matching cabling. Protocols that support SATA Express include SATA, AHCI and NVMe.

What are the form factors for SATA Express?
Sumit: SATA Express defines connectors for both a 2.5” drive and the host system. SATA Express connects the drive and system using SATA cables or the newly defined PCIe cables.

What connector configuration is used for SATA Express?
Sumit: Because SATA Express supports both SATA and PCIe signaling as well as the legacy SATA connectors, there are multiple configuration options available to motherboard and device manufacturers. The image below shows plug (a), which is built for attaching to a PCIe device. Socket (b) would be part of a cable assembly for receiving plug (a) or a standard SATA plug, and socket (c) would mount to a backplane or motherboard for receiving plug (a) or a standard SATA plug. The last two connectors are a mating pair designed to enable cabling (e) to connect to motherboards (d).

When will hosts begin supporting SATA Express?

Sumit: We expect systems to begin using SATA Express connectors early this year. They will primarily be deployed in desktop environments, which require cabling. In contrast, we expect limited use of SATA Express in notebook and other portable systems that are moving to cableless card-edge connector designs like the recently minted M.2 form factor. We also expect to see scant use of SATA Express in enterprise backplanes. Enterprise customers will likely transition to other connectors that support higher speed PCIe signaling like the SFF-8639, a new connector that was originally included in the SATA Express specification but has since been removed.

Will LSI support SATA Express?
Sumit: Absolutely. Our SF3700 flash controller will be fully compatible with the newly defined SATA Express connector and support either SATA or PCIe. Our current SF-2000 SATA flash controllers support SATA cabling used on SATA Express, but not PCIe.

Will LSI also support SRIS?
Sumit: PCIe devices enabled with SRIS (Separate Refclk Independent SSC) can self-clock, so they need no reference clock from the host, allowing system builders to use lower cost PCIe cables. SRIS is an important cost-saving feature for cabling that supports PCIe signaling; it does not apply to card-edge connector designs. Today the SF3700 supports PCIe connectivity, and LSI will support SRIS in future releases of the SF3700 and other products.

Why is it called SATA Express?
Sumit: SATA Express blends the names of the two interfaces – SATA and PCI Express – and captures the hybridization of the physical interconnects. The name reflects the ability of legacy SATA connectors to support higher PCIe data rates, simplifying the transition to PCIe devices. SATA Express can pull double duty, supporting both PCIe and SATA signaling in the same motherboard socket. The same SATA Express socket accepts both traditional SATA and new PCIe cables and links to either a legacy SATA or SATA Express device connector.

How fast can SATA Express run?
Sumit: The PCIe interface defines the top SATA Express speed. A PCIe Gen2 x2 device supports up to 900 MB/s of throughput, and a PCIe Gen3 x2 device up to 1800 MB/s – both significantly higher than the 550 MB/s speed ceiling of today’s SATA devices.
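
For reference, here is a rough back-of-envelope sketch (in Python, purely illustrative) of where those ceilings come from: per-lane line rates and encoding overheads from the PCIe specs. The 900 MB/s and 1800 MB/s figures quoted above also allow for additional link and protocol overhead beyond the encoding shown here.

```python
# Back-of-envelope PCIe lane bandwidth estimate (illustrative only).
# Line rates and encoding overheads come from the PCIe specs; the
# ~900/1800 MB/s figures above also fold in link/protocol overhead.

def pcie_payload_mbps(gen: int, lanes: int) -> float:
    """Approximate usable bandwidth in MB/s for a PCIe link."""
    line_rate_gtps = {2: 5.0, 3: 8.0}[gen]          # GT/s per lane
    encoding = {2: 8 / 10, 3: 128 / 130}[gen]       # 8b/10b vs 128b/130b
    bytes_per_sec = line_rate_gtps * 1e9 * encoding / 8
    return lanes * bytes_per_sec / 1e6

if __name__ == "__main__":
    print(f"PCIe Gen2 x2: ~{pcie_payload_mbps(2, 2):.0f} MB/s before protocol overhead")
    print(f"PCIe Gen3 x2: ~{pcie_payload_mbps(3, 2):.0f} MB/s before protocol overhead")
    print("SATA 6Gb/s:   ~600 MB/s raw, ~550 MB/s in practice")
```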

Is SATA Express similar to M.2?
Sumit: There are two key similarities. Both support SATA and PCIe on the same host connector, and both are designed to help transition from SATA to PCIe over time.

SATA Express delivers the future of connector speeds today
SATA Express was born of the stuff of all great inventions: necessity. The challenge SATA-IO faced in doubling SATA 6Gb/s speeds was herculean. The undertaking would have taken too long to deliver the next-generation connection speeds that PCIe already provides, and it would have been too involved, requiring an overhaul of the SATA standard. Even in the brightest scenario, the effort would have produced a power guzzler at a time when greater power efficiency is a must for system builders. SATA-IO found a better path, an elegant bridge to PCIe speeds in the form of SATA Express.



Pushing your enterprise cluster solution to deliver the highest performance at the lowest cost is key in architecting scale-out datacenters. Administrators must expand their storage to keep pace with their compute power as capacity and processing demands grow.


Diagram of DataBolt technology buffering 6Gb/s SAS media while maintaining a 12Gb/s SAS link.

Beyond price and capacity, storage resources must also deliver enough bandwidth to support these growing demands. Without enough I/O bandwidth, connected servers and users can bottleneck, requiring sophisticated storage tuning to maintain reasonable performance. By using direct attached storage (DAS) server architectures, IT administrators can reduce the complexities and performance latencies associated with storage area networks (SANs). Now, with LSI 12Gb/s SAS or MegaRAID® technology, or both, connected to 12Gb/s SAS expander-based storage enclosures, administrators can leverage DataBolt™ technology to clear I/O bandwidth bottlenecks. The result: better overall resource utilization, while preserving legacy drive investments.

Typically a slower end device would step down the entire 12Gb/s SAS storage subsystem to 6Gb/s SAS speeds. How does DataBolt technology overcome this? Well, without diving too deep into the nuts and bolts, intelligence in the expander buffers data and then transfers it out to the drives at 6Gb/s speeds in order to match the bandwidth between faster hosts and slower SAS or SATA devices.
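
As a rough illustration of why that buffering matters, here is a simplified Python model. It assumes a 4-lane 12Gb/s wide port and drives that each run at 6Gb/s; it is a sketch of the edge-buffering idea only, not a description of the actual DataBolt implementation.

```python
# Simplified model of a 12Gb/s SAS wide port feeding 6Gb/s drives through an
# expander. Illustrative only; not a model of the real DataBolt design.

def aggregate_gbps(host_lanes: int, host_rate: float, n_drives: int,
                   drive_rate: float, edge_buffering: bool) -> float:
    if edge_buffering:
        # Host-side connections run at the full 12Gb/s while the expander
        # streams buffered data to/from the slower drives.
        host_side = host_lanes * host_rate
    else:
        # End-to-end rate matching: each host-side connection drops to the
        # drive's 6Gb/s rate for the duration of the connection.
        host_side = host_lanes * drive_rate
    drive_side = n_drives * drive_rate
    return min(host_side, drive_side)

print(aggregate_gbps(4, 12, 32, 6, edge_buffering=False))  # 24 Gb/s
print(aggregate_gbps(4, 12, 32, 6, edge_buffering=True))   # 48 Gb/s
```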

The DataBolt enabled Hadoop server bandwidth is optimized with 12Gb/s SAS.

So for this demonstration at AIS, we are showcasing two Hadoop Distributed File System (HDFS) servers. Each server houses the newly shipping MegaRAID 9361-8i 12Gb/s SAS RAID controller connected to a drive enclosure featuring a 12Gb/s SAS expander and 32 6Gb/s SAS hard drives. One has DataBolt enabled, while the other has it disabled.

For the benchmarks, we ran DFSIO, which simulates MapReduce workloads and is typically used to detect network performance bottlenecks, tune hardware configurations and measure overall I/O performance.

The primary goal of the DFSIO benchmarks is to saturate the storage arrays with random read workloads in order to ensure maximum performance of a cluster configuration. Our tests resulted in MapReduce jobs completing faster in 12Gb/s mode, and overall throughput increased by 25%.
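
For readers who want to try a similar measurement, here is a minimal sketch of driving a TestDFSIO read pass from Python. The jar path and the file count/size values below are placeholders that vary by Hadoop distribution and cluster; this is not the exact configuration used in the AIS demo.

```python
# Sketch of kicking off a TestDFSIO read pass and pulling out the summary
# lines. Jar location and option values are placeholders.
import subprocess

JOBCLIENT_JAR = "/usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-jobclient-tests.jar"

def run_dfsio_read(nr_files: int = 64, file_size_mb: int = 1000) -> str:
    cmd = [
        "hadoop", "jar", JOBCLIENT_JAR, "TestDFSIO",
        "-read", "-nrFiles", str(nr_files), "-fileSize", str(file_size_mb),
    ]
    proc = subprocess.run(cmd, capture_output=True, text=True, check=True)
    # TestDFSIO prints its summary via the logger, so check both streams.
    return proc.stdout + proc.stderr

if __name__ == "__main__":
    for line in run_dfsio_read().splitlines():
        if "Throughput mb/sec" in line or "Average IO rate" in line:
            print(line.strip())
```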

DataBolt optimization of DFSIO MapReduce tests (MB/s) per cluster slot maps.



Imagine a bathtub full of water and asking someone to empty the tub while you turn your back for a moment. When you look again and see the water is gone, do you just assume someone pulled the drain plug?

I think most people would, but what about the other methods of removing the water like with a siphon, or using buckets to bail out the water? In a typical bathroom you are not likely to see these other methods used, but that does not mean they do not exist. The point is that just because you see a certain result does not necessarily mean the obvious solution was used.

I see a lot of confusion in forum posts from SandForce Driven™ SSD users and reviewers over how the LSI® DuraWrite™ data reduction and advanced garbage collection technology relates to the SATA TRIM command. In my earlier blog on TRIM I covered this topic in great detail, but in simple terms the operating system uses the TRIM command to inform an SSD what information is outdated and invalid. Without the TRIM command the SSD assumes all of the user capacity is valid data. I explained in my blog Gassing up your SSD that creating more free space through over provisioning or using less of the total capacity enables the SSD to operate more efficiently by reducing the write amplification, which leads to increased performance and flash memory endurance. So without TRIM the SSD operates at its lowest level of efficiency for a particular level of over provisioning.

Will you drown in invalid data without TRIM?
TRIM is a way to increase the free space on an SSD – what we call “dynamic over provisioning” – and DuraWrite technology is another method to increase the free space. Since DuraWrite technology is dependent upon the entropy (randomness) of the data, some users will get more free space than others depending on what data they store. Since the technology works on the basis of the aggregate of all data stored, boot SSDs with operating systems can still achieve some level of dynamic over provisioning even when all other files are at the highest entropy, e.g., encrypted or compressed files.

With an older operating system or in an environment that does not support TRIM (most RAID configurations), DuraWrite technology can provide enough free space to offer the same benefits as having TRIM fully operational. In cases where both TRIM and DuraWrite technology are operating, the combined result may not be as noticeable as when they’re working independently since there are diminishing returns when the free space grows to greater than half of the SSD storage capacity.
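
Here is a toy Python model of how those sources of free space combine. The capacities, TRIM amounts and data reduction ratios are placeholder values for illustration, not a model of how DuraWrite technology actually accounts for space.

```python
# Toy model of SSD free space (illustrative numbers only).
# Free space is whatever physical flash is not holding data the controller
# must treat as valid; more free space means lower write amplification.

def free_space_gb(physical_gb: float, user_gb: float, written_gb: float,
                  trimmed_gb: float, data_reduction_ratio: float) -> float:
    # Without TRIM the controller must assume every written LBA is valid.
    valid_logical = written_gb - trimmed_gb
    # Data reduction shrinks the physical footprint of that valid data
    # (a ratio of 1.0 means incompressible, high-entropy data).
    physical_footprint = valid_logical / data_reduction_ratio
    return physical_gb - min(physical_footprint, user_gb)

# 256GB of flash, 240GB exposed to the user, 200GB of data written:
print(free_space_gb(256, 240, 200, trimmed_gb=0,  data_reduction_ratio=1.0))  # 56.0
print(free_space_gb(256, 240, 200, trimmed_gb=50, data_reduction_ratio=1.0))  # 106.0
print(free_space_gb(256, 240, 200, trimmed_gb=0,  data_reduction_ratio=2.0))  # 156.0
```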

So the next time you fill your bathtub, think about all the ways you can get the water out of the tub without using the drain. That will help you remember that both TRIM and DuraWrite technology can improve SSD performance using different approaches to the same problem. If that analogy does not work for you, consider the different ways to produce a furless feline, and think about what opening graphic image I might have used for a more jolting effect. Although in that case you might not have seen this blog since that image would likely have gotten us banned from Google® “safe for work” searches.

I presented on this topic in detail at the Flash Memory Summit in 2011. You can read it in full here: http://www.lsi.com/downloads/Public/Flash%20Storage%20Processors/LSI_PRS_FMS2011_T1A_Smith.pdf

 



I’ve spent a lot of time with hyperscale datacenters around the world trying to understand their problems – and I really don’t care what area those problems fall in, as long as they’re important to the datacenter. What is the #1 Real Problem for many hyperscale datacenters? It’s something you’ve probably never heard about, and probably have not even thought about. It’s called false disk failure. Some hyperscale datacenters have crafted their own solutions – but most have not.

Why is this important, you ask? Many large datacenters today have 1 million to 4 million hard disk drives (HDDs) in active operation. In anyone’s book that’s a lot. It’s also a very interesting statistical sample size of HDDs. Hyperscale datacenters get great pricing on HDDs. Probably better than OEMs get, and certainly better than the $79 for buying 1 HDD at your local Fry’s store. So you would imagine if a disk fails – no one cares – they’re cheap and easy to replace. But the burden of a failed disk is much more than the raw cost of the disk:

  • Disk rebuild and/or data replicate of a 2TB or 3TB drive
    • Performance overhead of a RAID rebuild makes it difficult to justify, and can take days
    • Disk capacity must be added somewhere to compensate: ~$40-$50
  • Redistribute replicated data across many servers
    • Infrastructure overhead to rebalance workloads to other distributed servers
  • Person to service disk: remove and replace
    • And then ensure the HDD data cannot be accessed – wipe it or shred it

Let’s put some scale to this problem, and you’ll begin to understand the issue.  One modest size hyperscale datacenter has been very generous in sharing its real numbers. (When I say modest, they are ~1/4 to 1/2 the size of many other hyperscale datacenters, but they are still huge – more than 200k servers). Other hyperscale datacenters I have checked with say – yep, that’s about right. And one engineer I know at an HDD manufacturer said – “wow – I expected worse than that. That’s pretty good.” To be clear – these are very good HDDs they are using, it’s just that the numbers add up.

The raw data:

RAIDed SAS HDDs

  • 300k SAS HDDs
  • 15-30 SAS HDD failures per day
    • SAS false fail rate is about 30%-45% (10-15 per day)
    • About 1/1000 HDD annual false failure rate

Non-RAIDed (direct map) SATA drives behind HBAs

  • 1.2M SATA HDDs
  • 60-80 SATA HDD failures per day
    • SATA false fail rate is about 40%-55% (24-40 per day)
    • About 1/100 HDD annual false failure rate

What’s interesting is the relative failure rate of SAS drives vs. SATA: it’s about an order of magnitude worse for SATA drives than for SAS. Frankly, some of this is due to protocol differences. SAS allows far more error recovery capabilities, and because SAS drives also tend to be more expensive, I believe manufacturers invest in slightly higher quality electronics and components. I know the electronics we ship into SAS drives are certainly more sophisticated than what we ship into SATA drives.

False fail? What? Yea, that’s an interesting topic. It turns out that about 40% of the time with SAS and about 50% of the time with SATA, the drive didn’t actually fail. It just lost its marbles for a while. When they pull the drive out and put it into a test jig, everything is just fine. And more interesting, when they put the drive back into service, it is no more statistically likely to fail again than any other drive in the datacenter. Why? No one knows for sure. I have my suspicions, though.

I used to work on engine controllers. That’s a very paranoid business. If something goes wrong and someone crashes, you have a lawsuit on your hands. If a controller needs a recall, that’s millions of units to replace, with a multi-hundred dollar module, and hundreds of dollars in labor for each one replaced. No one is willing to take that risk. So we designed very carefully to handle soft errors in memory and registers. We incorporated ECC like servers use, background code checksums and scrubbing, and all sorts of proprietary techniques, including watchdogs and super-fast self-resets that could get operational again in less than a full revolution of the engine.  Why? – the events were statistically rare. The average controller might see 1 or 2 events in its lifetime, and a turn of the ignition would reset that state.  But the events do happen, and so do recalls and lawsuits… HDD controllers don’t have these protections, which is reasonable. It would be an inappropriate cost burden for their price point.

You remember the Toyota Prius accelerator problems? I know that controller was not protected for soft errors. And the source of the problem remained a “mystery.”  Maybe it just lost its marbles for a while? A false fail if you will. Just sayin’.

Back to HDDs. False fail is especially frustrating, because half the HDDs didn’t actually need to be replaced. All the operational costs were paid for no reason. The disk just needed a power cycle reset. (OK, that introduces all sorts of complex management by the RAID controller or application to handle that 10-second power reset cycle and the application traffic created in that time – but we can handle that.)

Daily, this datacenter has to:

  • Physically replace 100 disk drives
    • Individually destroy or recycle the 100 failed drives
    • Replicate or rebuild 200-300 TBytes of data – just think about that
    • Rebalance the application load on at least 100 servers – more likely 100 clusters of servers – maybe 20,000 servers?
    • Handle the network traffic  load of ~200 TBytes of replicated data
      • That’s on the order of 50 hours of 10GBit Ethernet traffic…

And 1/2 of that is for no reason at all.

First – why not rebuild the disk if it’s RAIDed? Usually hyperscale datacenters use clustered applications. A traditional RAID rebuild drives the server performance to ~50%, and for a 2TByte drive, under heavy application load (the definition of a hyperscale datacenter) can truly take up to a week. 50% performance for a week? In a cluster that means the overall cluster is running at ~50% performance. Say 200 nodes in a cluster – that means you just lost ~100 nodes of work, or 50% of cluster performance. It’s much simpler to just take the node with the failed drive offline, get 99.5% cluster performance, and operationally redistribute the workload across multiple nodes (because you have replicated data elsewhere). But after rebuild, the node will have to be re-synced or re-imaged. There are ways to fix all this. We’ll talk about them on another day. Or you can simply run direct mapped storage and unmount the failed drive.
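
A quick sketch of that arithmetic, under the simplifying assumption that work is evenly partitioned and a job finishes at the pace of its slowest node:

```python
# Rough model of cluster throughput during a drive rebuild, assuming evenly
# partitioned work gated by the slowest node (the straggler effect).
# Illustrative only.

def cluster_throughput(nodes: int, degraded_nodes: int, degraded_speed: float,
                       offline_nodes: int) -> float:
    active = nodes - offline_nodes
    slowest = degraded_speed if degraded_nodes > 0 else 1.0
    # Each active node can only contribute at the pace of the slowest peer.
    return active * slowest / nodes

# 200-node cluster, one node rebuilding at ~50% speed:
print(cluster_throughput(200, degraded_nodes=1, degraded_speed=0.5, offline_nodes=0))  # 0.5
# Same cluster with the affected node simply taken offline:
print(cluster_throughput(200, degraded_nodes=0, degraded_speed=1.0, offline_nodes=1))  # 0.995
```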

Next – Why replicate data over the network, and why is that a big deal? For geographic redundancy (say a natural disaster at one facility) and regional locality, hyperscale datacenters need multiple data copies. Often 3 copies so they can do double duty as high-availability copies, or in the case of some erasure coding, 2.2 to 2.5 copies (yea – weird math – how do you have 0.5 copy…). When you lose one copy, you are down to 2, possibly 1. You need to get back to a reliable number again. Fast. Customers are loyal because of your perfect data retention. So you need to replicate that data and re-distribute it across the datacenter on multiple servers. That’s network traffic, and possibly congestion, which affects other aspects of the operations of the datacenter. In this datacenter it’s about 50 hours of 10G Ethernet traffic every day.
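
For the curious, the 50-hour figure falls out of a simple conversion: the replicated terabytes moved over a single saturated 10Gb/s link, ignoring protocol overhead. A quick sketch:

```python
# Time to move replicated data over one saturated 10Gb/s Ethernet link,
# ignoring protocol overhead. Illustrative back-of-envelope math.

def transfer_hours(terabytes: float, link_gbps: float = 10.0) -> float:
    bits = terabytes * 1e12 * 8
    return bits / (link_gbps * 1e9) / 3600

print(f"{transfer_hours(200):.1f} hours")  # ~44 hours for 200 TB
print(f"{transfer_hours(300):.1f} hours")  # ~67 hours for 300 TB
```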

To be fair, there is a new standard in SAS interfaces that will facilitate resetting a disk in-situ. And there is the start of discussion of the same around SATA – but that’s more problematic. Whatever the case, it will be years before the ecosystem is in place to handle the problems this way.

What’s that mean to you?

Well, you can expect something like 1/100 of your drives to really fail this year. And you can expect another 1/100 of your drives to fail this year, but not actually be failed. You’ll still pay all the operational overhead of not actually having a failed drive – rebuilds, disk replacements, management interventions, scheduled downtime/maintenance time, and the OEM replacement price for that drive – what, $600 or so? Depending on your size, that’s either a don’t care or a big deal. There are ways to handle this, and they’re not expensive – much less than the disk carrier you already pay for to allow you to replace that drive – and it can be handled transparently, just a log entry without seeing any performance hiccups. You just need to convince your OEM to carry the solution.
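
As a back-of-envelope illustration, here is what the false-failure half alone can cost per year. The 1/100 rate and the ~$600 replacement price come from the discussion above; the fleet sizes and the added per-incident handling cost are placeholders.

```python
# Back-of-envelope annual cost of false drive failures for a given fleet.
# The 1/100 rate and ~$600 replacement price come from the post; the fleet
# sizes and per-incident handling cost below are placeholders.

def false_failure_cost(drives: int, false_rate: float = 0.01,
                       replacement_cost: float = 600.0,
                       handling_cost: float = 200.0) -> float:
    incidents = drives * false_rate
    return incidents * (replacement_cost + handling_cost)

for fleet in (1_000, 10_000, 100_000):
    print(f"{fleet:>7} drives: ~${false_failure_cost(fleet):,.0f}/year in avoidable cost")
```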
