LSI has long understood that mastering the complexities of interfacing with NAND flash memory is key to optimizing its performance and lifetime. For that reason LSI created a group focused on characterizing NAND flash behavior as it interfaces with LSI flash controllers. I recently spoke with LSI’s expert in this area, Bill Hunt, Engineering Director of Flash Analytics at LSI, to better understand what his group produces for LSI and how that translates into better solutions for our customers.
Q: Is all NAND flash created equal?
Bill: Definitely not. NAND flash specs, performance and ratings not only vary from vendor to vendor, they also vary between process geometries, between models within the same NAND family, and over the production life – especially during early production ramp. Also, NAND vendors intentionally create unequal models of the same part to address their different markets, like client and enterprise. Understanding the difference between NAND types is critical to building a robust solution.
Q: How does NAND vary from vendor to vendor?
Bill: There are really two levels of differences among NAND vendors: differences that result from entirely different architectures, and differences that remain even when vendors share an architecture. For NAND vendors with completely different designs and fab processes, there are many differences in the NAND specifications, including pin-outs, power requirements, block and page layouts, addressing schemes, timing specifications, commands, and read recovery procedures. I could go on.
Some NAND vendors share common designs and fab processes. But even these devices can have significant operational differences from vendor to vendor. Each device can have unique features enabled with different device trims (editor’s note: manufacturing settings), commands, diagnostics, and read recovery steps. Even using standard interfaces like ONFI and Toggle doesn’t guarantee common operations: each vendor has its own interpretation and implementation of these standards.
Q: How does NAND vary from generation to generation?
Bill: Shrinking the process geometry requires a new device architecture, and the new architecture drives changes to the operation and specification of a NAND device. The greatest changes are driven by NAND capacity increases. For example, the size and layout of the planes, blocks, and pages have to be modified to deal with the new architecture and increased capacity. Since the NAND cells are smaller and closer together, the error handling capability also has to increase: the error-correcting code (ECC) requirements, and the resulting spare areas, grow. The NAND also has to change to deal with increased bad block rates. The data rate and performance of each generation must improve to keep up with user demand, which drives changes to the interface timing specifications and adds new feature sets. In general, NAND endurance gets worse with shrinking geometries, so understanding the changes each new generation brings is critical to developing more powerful and effective ECC algorithms.
Q: Does LSI have any dedicated facility to evaluate NAND from different suppliers?
Bill: Yes, LSI’s Flash Analytics lab is dedicated to evaluating and characterizing NAND flash that will be used with LSI flash controllers.
Q: What kinds of testing does LSI do in the Flash Analytics lab?
Bill: The Flash Analytics lab has two main functions. First, we integrate NAND devices into solid-state drives (SSDs) with LSI SandForce controllers to ensure they work well together. Second, we characterize NAND devices to see how the NAND flash performs and operates over the lifetime of the device. We do this in various operational modes. It is critical to understand the behavior of the raw NAND to design and develop solutions with the reliability and performance demanded by the market.
Q: Does LSI test flash memory beyond their rated lifetimes?
Bill: Yes. NAND vendors do not always share their own characterization results beyond the rated endurance limit, so we gather that data ourselves. Typically we perform program-erase cycles on devices until the raw bit error rate becomes very poor or a catastrophic error occurs. We also exceed other specifications, such as retention limits and read disturb limits. Understanding what happens to flash as it ages gives us valuable insight into how devices might fail in real-world scenarios.
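The shape of such an endurance test can be sketched as a simple loop: cycle, read back, measure the raw bit error rate (RBER), stop at a threshold. The sketch below is purely illustrative; the `ToyNandBlock` class and its wear curve are invented stand-ins, since the real lab cycles physical parts on test hardware.

```python
import random

class ToyNandBlock:
    """Toy stand-in for a raw NAND block. The wear curve here is an
    assumption for illustration, not vendor characterization data."""
    def __init__(self):
        self.cycles = 0

    def program_erase_read(self, data: bytes) -> bytes:
        # Assumed wear model: per-bit error probability grows with P/E cycles.
        self.cycles += 1
        bit_err_p = min(1e-9 * self.cycles ** 2, 0.5)
        out = bytearray(data)
        for i in range(len(out)):
            for bit in range(8):
                if random.random() < bit_err_p:
                    out[i] ^= 1 << bit          # flip a bit to model a raw error
        return bytes(out)

def cycles_to_failure(block, page=b"\xa5" * 64, rber_limit=1e-3, max_cycles=100_000):
    """Program-erase until the measured raw bit error rate crosses the limit."""
    while block.cycles < max_cycles:
        readback = block.program_erase_read(page)
        bit_errors = sum(bin(a ^ b).count("1") for a, b in zip(page, readback))
        rber = bit_errors / (len(page) * 8)     # errors per bit read
        if rber > rber_limit:
            return block.cycles
    return block.cycles

random.seed(42)
worn_out_at = cycles_to_failure(ToyNandBlock())
```

A real harness would also sweep retention bakes and read-disturb counts at each cycle point, as described above, rather than cycling alone.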
Q: What type of data is generated from all the tests that are conducted?
Bill: We generate a characterization report for each device we test. The report compares our results to the vendor specifications, including graphs of error rates vs. program-erase cycles for different retention limits and error correction limits. It also evaluates the effects of read disturb over the endurance and retention lifetimes. Other sections include an analysis of the physical location of errors and of read recovery effectiveness. We also evaluate the impact on performance over the life of the drive.
Q: What does LSI do with this data?
Bill: First, we use it to validate the flash vendor specifications. Second, we use it to design and optimize our LSI SandForce flash controller designs. In particular, we use the data to optimize our error recovery and SHIELD technology. We also use it to evaluate possible trade-offs: for example, to trade performance for increasing the endurance of the NAND and extending its life. Last, we share information with customers whenever possible. The goal of collecting this data is to develop the most advanced ECC possible to increase SSD reliability, endurance, and performance.
Q: Is LSI able to generate better products because we collect this data?
Bill: The information we gather in the LSI Flash Analytics lab has certainly helped improve our products. Our testing has improved quality by verifying that NAND parts meet the vendor specs. When we show our data to a NAND vendor, they are more motivated to share their detailed data with us. Our lab is also equipped to run specific tests to help diagnose problems seen with our products during qualification and production; for example, we have run tests to evaluate read recovery issues and physical location stress. We also gather raw data during our characterization testing that is used by our product architecture team. The raw data is fed into simulation models and used to optimize our flash channel and SHIELD technology. In a nutshell, our improved understanding of flash memory helps us build better flash controllers – which helps our customers build better SSDs.
Q: Does LSI work closely with NAND vendors on this analysis?
Bill: Yes, we have regular meetings with all of the NAND flash vendors that our flash controllers support. We work closely with NAND vendors to make sure we have the latest information. We not only make sure we have the latest roadmaps, datasheets and application notes, but we get clarifications about flash operations, quality and performance. We share the characterization data we collect and get insight into our results. We also keep the NAND vendors informed about our controller roadmap and features so they can make sure their products track with ours.
Mastering NAND flash memory is critical to flash controller development success
Any NAND flash memory controller developer would be remiss not to perform in-depth testing and characterization of this very complex technology. And any company that supports more than one flash vendor must understand the differences between manufacturers in order to design and tune its controller to support the widest selection of NAND flash, giving its customers the greatest flexibility.
Ever since SandForce introduced data reduction technology with the DuraWrite™ feature in 2009, some users have been confused about how it works and questioned whether it delivers the benefits we claim. Some even believe there are downsides to using DuraWrite with an SSD. In this blog, I will dispel those misconceptions.
Data reduction technology refresher
Four of my previous blogs cover the many advantages of using data reduction technology like DuraWrite:
In a nutshell, data reduction technology reduces the size of data written to the flash memory, but returns 100% of the original data when reading it back from the flash. This reduction in the required storage space helps accelerate reads and writes, extend the life of the flash and increase the dynamic over provisioning (OP).
What is incompressible data?
Data is incompressible when data reduction technology is unable to reduce the size of a dataset – in which case the technology offers no benefit for the user. File types that are altogether or mostly incompressible include MPEG, JPEG, ZIP and encrypted files. However, data reduction technology is applied to an entire SSD, so the free space resulting from the smaller, compressed files increases OP for all file types, even incompressible files.
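A rough way to see the difference between compressible and incompressible data is to run both through a general-purpose compressor. The sketch below uses Python’s `zlib` purely as a stand-in; DuraWrite’s actual reduction algorithm is not public, and random bytes stand in for already-compressed or encrypted content.

```python
import os
import zlib

# Repetitive, text-like data: highly compressible
text_like = b"the quick brown fox jumps over the lazy dog " * 100

# Random bytes model already-reduced content (JPEG, ZIP, encrypted files)
random_like = os.urandom(4096)

print(len(text_like), len(zlib.compress(text_like)))      # shrinks dramatically
print(len(random_like), len(zlib.compress(random_like)))  # stays about the same size
```

The second case is why data reduction offers no direct benefit on such files: there is simply no redundancy left to remove.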
The images below help illustrate this process. The image on the left represents a standard 256GB SSD filled to about 80% capacity with a typical operating system, applications and user data. The remaining 20% of free space is automatically used by the SSD as dynamic OP. The image on the right shows how the same data stored on a data reduction-capable SSD can nearly double the available OP, because in this example the operating system, applications and half of the user data can be reduced.
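The effect can be sketched with a toy model. The capacities below are illustrative, and the 50% reduction ratio is an assumption for the example, not a DuraWrite specification.

```python
def dynamic_op_pct(capacity_gb: float, os_apps_gb: float, user_gb: float,
                   incompressible_frac: float, reduction_ratio: float) -> float:
    """Free space (dynamic OP) as a percent of capacity, in a simplified model.

    reduction_ratio is stored/original size for compressible data
    (1.0 = no data reduction applied).
    """
    incompressible = user_gb * incompressible_frac
    compressible = os_apps_gb + user_gb - incompressible
    stored = compressible * reduction_ratio + incompressible
    return 100 * (capacity_gb - stored) / capacity_gb

# Standard 256GB SSD, ~80% full: about 20% dynamic OP
op_standard = dynamic_op_pct(256, 128, 76.8, 0.5, 1.0)

# Same data with an assumed 2:1 reduction of compressible data:
# OP more than doubles in this toy model, even though half the
# user data is incompressible
op_reduced = dynamic_op_pct(256, 128, 76.8, 0.5, 0.5)
```

The incompressible files take up the same space either way; the extra OP comes entirely from shrinking everything else.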
Why is dynamic OP so important?
OP is the lifeblood of a flash memory-based SSD (nearly all SSDs available today). Without OP the SSD could not operate. Allocating more space for OP increases an SSD’s performance and endurance, and also reduces its power consumption. In the illustrations above, both SSDs are storing about 30% of their user data as incompressible files like MPEG movies and JPEG images. As I mentioned, data reduction technology can’t compress those files, but the rest of the data can be reduced. The result is that the SSD with data reduction delivers higher overall performance than the standard SSD even with incompressible data present.
Misconception 1: Data reduction technology is a trick
There’s no trickery with data reduction technology. The process is simple: It reduces the size of data differently depending on the content, increasing SSD speed and endurance.
Misconception 2: Users with movie, picture, and audio files will not benefit from data reduction
As illustrated above, as long as an operating system and other applications are stored on the SSD, there will be at least some increase in dynamic OP and performance despite the incompressible files.
Misconception 3: Testing with all incompressible data delivers worst-case performance
Given that a typical SSD stores an operating system, programs and other data files, an SSD test that writes only incompressible data to the device would underestimate the performance of the SSD in user deployments.
Data reduction technology delivers
Data reduction technology, like LSI® SandForce® DuraWrite, is often misunderstood to the point that users believe they would be better off without it. The truth is, with data reduction technology, nearly every user will see performance and endurance gains with their SSD regardless of how much incompressible data is stored.
My “Size matters: Everything you need to know about SSD form factors” blog in January spawned some interesting questions, a number of them on Z-height.
What is a Z-height anyway?
For a solid state drive (SSD), Z-height describes its thickness, which is generally its smallest dimension. The term is technically redundant: “Z” is one of the three variables – X, Y and Z, synonymous with length, width and height – that describe the measurements of a 3-dimensional object, so “Z” by itself already denotes height. Curiously, no one says X-length or Y-width, but Z-height is widely used.
What’s the state of affairs with SSD Z-height?
The Z-height has typically been associated with the 2.5″ SSD form factor. As I covered in my January form factor blog, the initial dimensions of SSDs were modeled after hard disk drives (HDDs). The 2.5” HDD form factor featured various heights depending on the platter count – the more disks, the greater the capacity and the thicker the HDD. The first 2.5” full capacity HDDs had a maximum Z-height of 19mm, but quickly dropped to a 15mm maximum to enable full-capacity HDDs in thinner laptops. By the time SSDs hit high-volume market acceptance, the dimensional requirements for storage had shrunk even more, to a maximum height of 12.5mm in the 2.5” form factor. Today, the Z-height of most 2.5″ SSDs generally ranges from 5.0mm to 9.5mm.
With printed circuit board (PCB) form factor SSDs—those with no outer case—the Z-height is defined by the thickness of the board and its components, which can be 3mm or less. Some laptops have unique shape or height restrictions for the SSD space allocation. For example, the MacBook Air’s ultra-thin profile requires some of the thinnest SSDs produced.
A new standard in SSD thickness
The platter count of an HDD determines its Z-height. In contrast, an SSD’s Z-height is generally the same regardless of capacity. The proportion of SSD form factors deployed in systems is shifting from the traditional, encased SSDs to the new bare PCB SSDs. As SSDs drift away from the older form factors with different heights, consumers and OEM system designers will no longer need to consider Z-height because the thickness of most bare PCB SSDs will be standard.
The introduction of LSI® SF3700 flash controllers has prompted many questions about the PCIe® (PCI Express) interface and how it benefits solid state storage, and there’s no better person to turn to for insights than our resident expert, Jeremy Werner, Sr. Director of Product and Customer Management in LSI’s Flash Components Division (SandForce):
Most client-based SSDs have used SATA in the past, while PCIe was mainly used for enterprise applications. Why is the PCIe interface becoming so popular for the client market?
Jeremy: Over the past few decades, the performance of host interfaces for client devices has steadily climbed. Parallel ATA (PATA) interface speed grew from 33MB/s to 100MB/s, while the performance of the Serial ATA (SATA) connection rose from 1.5Gb/s to 6Gb/s. Today, some solid state drives (SSDs) use the PCIe Gen2 x4 (second-generation speeds with four data communication lanes) interface, supporting up to 20Gb/s (in each direction). Because the PCIe interface can simultaneously read and write (full duplex) and SATA can only read or write at one time (half-duplex), PCIe can potentially double the 20Gb/s speeds in a mixed (read and write) workload, making it nearly seven times faster than SATA.
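The “nearly seven times” figure follows from simple arithmetic, sketched below (line rates only; real-world throughput is lower once protocol overhead is included):

```python
SATA_GBPS = 6.0            # SATA III line rate; half duplex (read OR write)
PCIE_G2_LANE_GBPS = 5.0    # PCIe Gen2 line rate per lane, per direction

# PCIe Gen2 x4: four lanes, full duplex (read AND write simultaneously)
pcie_per_direction = 4 * PCIE_G2_LANE_GBPS    # 20 Gb/s each way
pcie_mixed = 2 * pcie_per_direction           # 40 Gb/s aggregate, mixed workload

ratio = pcie_mixed / SATA_GBPS                # ~6.7x, i.e. nearly seven times
```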
Will the PCIe interface replace SATA for SSDs?
Jeremy: Eventually the replacement is likely, but it will probably take many years in the single-drive client PC market, given two hindrances. First, some single-drive client platforms must use a common HDD and SSD connection to give users the choice between the two devices, and because the 6Gb/s SATA interface already delivers much higher speeds than hard disk drives can use, there is no immediate need for HDDs to move to the faster PCIe connection, leaving SATA as the common interface for that market. Second, the older personal computers already in consumers’ homes that need an SSD upgrade support only SATA storage devices, so there’s no opportunity for PCIe in that upgrade market.
By contrast, the enterprise storage market, and even some higher-end client systems, will migrate quickly to PCIe since they will see significant speed increases and can more easily integrate PCIe SSD solutions available now.
It is noteworthy that some standards, like M.2 and SATA Express, have defined a single connector that supports SATA or PCIe devices. The recently announced LSI SF3700 is one example of an SSD controller that supports both of those interfaces on an M.2 board.
What is meant by the terms “x1, x2, x4, x16” when referencing a particular PCIe interface?
Jeremy: These numbers are the PCIe lane counts in the connection. Either the host (computer) or the device (SSD) could limit the number of lanes used. The theoretical maximum speed of the connection (not including protocol overhead) is the number of lanes multiplied by the speed of each lane.
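As a sketch, the raw (pre-encoding) link speed is just that product:

```python
# Per-lane line rates in Gb/s, keyed by PCIe generation
PCIE_LANE_GBPS = {1: 2.5, 2: 5.0, 3: 8.0}

def raw_link_gbps(generation: int, lanes: int) -> float:
    """Theoretical maximum link speed: lanes x per-lane rate,
    before protocol/encoding overhead."""
    return PCIE_LANE_GBPS[generation] * lanes

gen2_x4 = raw_link_gbps(2, 4)   # 20.0 Gb/s, the figure quoted earlier
```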
What is protocol overhead?
Jeremy: PCIe, like many bus interfaces, uses a transfer encoding scheme – a set number of data bits is represented by a slightly larger number of transmitted bits called a symbol. The additional bits in the symbol are the overhead required to manage the transmitted user data. PCIe Gen3 uses a more efficient 128b/130b encoding (about 1.5% overhead) instead of the 8b/10b (20% overhead) of PCIe Gen2, which by itself increases usable data transfer speed by roughly 23%.
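The arithmetic behind the encoding comparison is straightforward; the sketch below uses the 128b/130b line code that the PCIe 3.0 specification defines for Gen3:

```python
def usable_gbps(line_rate_gbps: float, data_bits: int, symbol_bits: int) -> float:
    """Data throughput per lane after transfer-encoding overhead."""
    return line_rate_gbps * data_bits / symbol_bits

gen2_lane = usable_gbps(5.0, 8, 10)      # 8b/10b: 4.0 Gb/s of data per lane
gen3_lane = usable_gbps(8.0, 128, 130)   # 128b/130b: ~7.88 Gb/s of data per lane

# Efficiency gain from the encoding change alone (independent of line rate):
encoding_gain = (128 / 130) / (8 / 10) - 1   # ~0.23, i.e. roughly 23%
```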
What is defined in the PCIe 2.0 and 3.0 specifications, and do end users really care?
Jeremy: Although each PCIe Gen3 lane is faster than PCIe Gen2 (8Gb/s vs 5Gb/s, respectively), lanes can be combined to boost performance in both versions. The changes most relevant to consumers pertain to higher speeds. For example, today consumer SSDs top out at 150K random read IOPS at 4KB data transfer sizes. That translates to about 600MB/s, which is insufficient to saturate a PCIe Gen2 x2 link, so consumers would see little benefit from a PCIe Gen3 solution over PCIe Gen2. The maximum performance of PCIe Gen2 x4 and PCIe Gen3 x2 devices is almost identical because of the different transfer encoding schemes mentioned previously.
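The ~600MB/s figure is simple arithmetic (assuming 4KB transfers of 4,096 bytes and decimal MB):

```python
def iops_to_mb_per_s(iops: int, transfer_bytes: int) -> float:
    """Convert an IOPS rating at a given transfer size to MB/s (decimal MB)."""
    return iops * transfer_bytes / 1_000_000

throughput = iops_to_mb_per_s(150_000, 4096)   # ~614 MB/s

# PCIe Gen2 x2 usable bandwidth: 2 lanes x 5 Gb/s x 0.8 (8b/10b) = 8 Gb/s
gen2_x2_mb_per_s = 2 * 5.0 * 0.8 * 1000 / 8    # 1000 MB/s

link_saturated = throughput >= gen2_x2_mb_per_s   # False: headroom remains
```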
Are there mandatory features that must be supported in any of these specifications?
Jeremy: Yes, but nearly all of these features have little impact on performance, so most users have no interest in the specs. It’s important to keep in mind that the PCIe speeds I’ve cited are defined as the maximums, and the spec has no minimum speed requirement. This means a PCIe Gen3 solution might support only a maximum of 5Gb/s, but still be considered a PCIe Gen3 solution if it meets the necessary specifications. So buyers need to be aware of the actual speed rating of any PCIe solution.
Is a PCIe Gen3 SSD faster than a PCIe Gen2 SSD?
Jeremy: Not necessarily. For example, a PCIe Gen2 x4 SSD is capable of higher speeds than a PCIe Gen3 x1 SSD. However, bottlenecks other than the front-end PCIe interface will limit the performance of many SSDs. Examples of other choke points include the bandwidth of the flash, the processing/throughput of the controller, the power or thermal limitations of the drive and its environment, and the ability to remove heat from that environment. All of these factors can, and typically do, prevent the interface from reaching its full steady-state performance potential.
In what form factors are PCIe cards available?
Jeremy: PCIe cards are typically referred to as plug-in products, much like SSDs, graphics cards and host-bus adapters. PCIe SSDs come in many form factors, with the most popular called “half-height, half-length.” But the popularity of the new, tiny M.2 form factors is growing, driven by rising demand for smaller consumer computers. There are other PCIe form factors that resemble traditional hard disk drives, such as the SFF-8639, a 2.5” hard disk drive form factor that features four PCIe lanes and is hot pluggable. What’s more, its socket is compatible with the SAS and SATA interfaces. The adoption of the SATA Express 2.5” form factor has been limited, but could be given a boost with the availability of new capabilities like SRIS (Separate Refclk with Independent SSC), which enables the use of lower cost interconnection cables between the device and host.
Are all M.2 cards the same?
Jeremy: No. All SSD M.2 cards are 22 mm wide (while some WAN cards are 30 mm wide), but the specification allows for different lengths (30, 42, 60, 80, and 110 mm). What’s more, the cards can be single- or double-sided to account for differences in the thickness of the products. Also, they are compatible with two different sockets (socket 2 and socket 3). SSDs compatible with both socket types, or only socket 2, can connect only two lanes (x2), while SSDs compatible with only socket 3 can connect up to four (x4).
In my last few blogs, I covered various aspects of SSD form factors and included many images of the types that Jeremy mentioned above. I also delve deeper into details of the M.2 form factor in my blog “M.2: Is this the Prince of SSD form factors?” One thing about PCIe is certain: It is the next step in the evolution of computer interfaces and will give rise to more SSDs with higher performance, lower power consumption and better reliability.
I was asked some interesting questions recently by CEO & CIO, a Chinese business magazine. The questions ranged from how Chinese Internet giants like Alibaba, Baidu and Tencent differ from other customers and what leading technologies big Internet companies have created to questions about emerging technologies such as software-defined storage (SDS) and software-defined datacenters (SDDC) and changes in the ecosystem of datacenter hardware, software and service providers. These were great questions. Sometimes you need the press or someone outside the industry to ask a question that makes you step back and think about what’s going on.
I thought you might be interested, so this blog, the first of a 3-part series covering the interview, shares details of the first two questions.
CEO & CIO: In recent years, Internet companies have built ultra large-scale datacenters. Compared with traditional enterprises, they also take the lead in developing datacenter technology. From an industry perspective, what are the three leading technologies of ultra large-scale Internet data centers in your opinion? Please describe them.
These hyperscale datacenters have made so many innovations and important contributions to the industry, in hardware, software and mechanical engineering, that choosing just three is difficult. While I would prefer to single out hardware innovations, I would suggest the following three, because they have changed our world and our industry and are changing our hardware and businesses:
Autonomous behavior and orchestration
An architect at Microsoft once told me, “If we had to hire admins for our datacenter in a normal enterprise way, we would hire all the IT admins in the world, and still not have enough.” There are now around 1 million servers in Microsoft datacenters. Hyperscale datacenters have had to develop autonomous, self-managing, sometimes self-deploying datacenter infrastructure simply to expand. They are pioneering datacenter technology for scale – innovating, learning by trial and error, and evolving their practices to drive more work/$. Their practices are specialized but beginning to be emulated by the broader IT industry. OpenStack is the best example of how that specialized knowledge and capability is being packaged and deployed broadly in the industry. At LSI, we’re working with both hyperscale and orchestration solutions to make better autonomous infrastructure.
High availability at datacenter level vs. machine level
As systems get bigger they have more components and more failure modes, and they become more complex and expensive to keep reliable. As storage is used more, and more aggressively, drives tend to fail more often: they are simply being worked harder. And yet there is continued pressure to reduce costs and complexity. By the time hyperscale datacenters had evolved to massive scale – hundreds of thousands of servers across multiple datacenters – they had created solutions for absolute reliability, even as individual systems got less expensive, less complex and much less reliable. This is what has enabled the very low cost structures of the cloud, and made it a reliable resource.
These solutions are well timed too, as more enterprise organizations need to maintain on-premises data across multiple datacenters with absolute reliability. The traditional view that a single server requires 99.999% reliability is giving way to a more pragmatic view of maintaining high reliability at the macro level – across the entire datacenter. This approach accepts the failure of individual systems and components even as it maintains data center level reliability. Of course – there are currently operational issues with this approach. LSI has been working with hyperscale datacenters and OEMs to engineer improved operational efficiency and resilience, and minimized impact of individual component failure, while still relying on the datacenter high-availability (HA) layer for reliability.
Big data
It’s such an overused term. It’s difficult to believe the term barely existed a few years ago. The gift of Hadoop® to the industry – an open source attempt to reproduce Google® MapReduce and the Google File System – has truly changed our world, and unbelievably quickly. Today, Hadoop and the other big data applications enable search, analytics, advertising, peta-scale reliable file systems, genomics research and more; even services like Apple® Siri run on Hadoop. Big data has changed the concept of analytics from statistical sampling to analysis of all data. And it has already enabled breakthroughs and changes in research, where relationships and patterns are looked for empirically, rather than derived from theories.
Overall, I think big data has been one of the most transformational technologies this century. Big data has changed the focus from compute to storage as the primary enabler in the datacenter. Our embedded hard disk controllers, SAS (Serial Attached SCSI) host bus adaptors and RAID controllers have been at the heart of this evolution. The next evolutionary step in big data is the broad adoption of graph analysis, which integrates the relationship of data, not just the data itself.
CEO & CIO: Due to cloud computing, mobile connectivity and big data, the traditional IT ecosystem or industrial chain is changing. What are the three most important changes in LSI’s current cooperation with the ecosystem chain? How does LSI see the changes in the various links of the traditional ecosystem chain? What new links are worth attention? Please give some examples.
Cloud computing and the explosion of data driven by mobile devices and media has and continues to change our industry and ecosystem contributors dramatically. It’s true the enterprise market (customers, OEMs, technology, applications and use cases) has been pretty stable for 10-20 years, but as cloud computing has become a significant portion of the server market, it has increasingly affected ecosystem suppliers like LSI.
Timing: It’s no longer enough to follow Intel’s tick-tock product roadmap. Development cycles for datacenter solutions used to be 3 to 5 years, but these cycles are becoming shorter. Now demand for solutions is closer to 6 months, forcing hardware vendors to plan and execute to far tighter development cycles. Hyperscale datacenters also need to be able to expand resources very quickly, as customer demand dictates. As a result they incorporate new architectures, solutions and specifications out of cycle with the traditional Intel roadmap changes. This has also disrupted the ecosystem.
End customers: Hyperscale datacenters now have purchasing power in the ecosystem, with single purchase orders sometimes amounting to 5% of the server market. While OEMs still are incredibly important, they are not driving large-scale deployments or innovating and evolving nearly as fast. The result is more hyperscale design-win opportunities for component or sub-system vendors if they offer something unique or a real solution to an important problem. This also may shift profit pools away from OEMs to strong, nimble technology solution innovators. It also has the potential to reduce overall profit pools for the whole ecosystem, which is a potential threat to innovation speed and re-investment.
New players: Traditionally, a few OEMs and ISVs globally have owned most of the datacenter market. However, the supply chain of the hyperscale cloud companies has changed that. Leading datacenters have architected, specified or even built (in Google’s case) their own infrastructure, though many large cloud datacenters have been equipped with hyperscale-specific systems from Dell and HP. But more and more systems built exactly to datacenter specifications are coming from suppliers like Quanta. Newer network suppliers like Arista have increased market share. Some new hyperscale solution vendors have emerged, like Nebula. And software has shifted to open source, sometimes supported for-pay by companies copying the Redhat® Linux model – companies like Cloudera, Mirantis or United Stack. Personally, I am still waiting for the first 3rd-party hardware service emulating a Linux support and service company to appear.
Open initiatives: Yes, we’ve seen Hadoop and its derivatives deployed everywhere now – even in traditional industries like oil and gas, pharmacology, genomics, etc. And we’ve seen the emergence of open-source alternatives to traditional databases being deployed, like Cassandra. But now we’re seeing new initiatives like Open Compute and OpenStack. Sure these are helpful to hyperscale datacenters, but they are also enabling smaller companies and universities to deploy hyperscale-like infrastructure and get the same kind of automated control, efficiency and cost structures that hyperscale datacenters enjoy. (Of course they don’t get fully there on any front, but it’s a lot closer.) This trend has the potential to hurt OEM and ISV business models and markets and establish new entrants – even as we see Quanta, TYAN, Foxconn, Wistron and others tentatively entering the broader market through these open initiatives.
New architectures and new algorithms: There is a clear movement toward pooled resources (or rack scale architecture, or disaggregated servers). Developing pooled resource solutions has become a partnership between core IP providers like Intel and LSI with the largest hyperscale datacenter architects. Traditionally new architectures were driven by OEMs, but that is not so true anymore. We are seeing new technologies emerge to enable these rack-scale architectures (RSA) – technologies like silicon photonics, pooled storage, software-defined networks (SDN), and we will soon see pooled main memory and new nonvolatile main memories in the rack.
We are also seeing the first tries at new processor architectures about to enter the datacenter: ARM 64 for cool/cold storage and the web tier, and OpenPower P8 for high power processing – multithreaded, multi-issue, pooled memory processing monsters. This is exciting to watch. There is also an emerging interest in application acceleration: general-purpose computing on graphics processing units (GPGPU), regular expression processors (regex) for live stream analytics, etc. We are also seeing the first generation of graph analysis deployed at massive scale in real time.
Innovation: The pace of innovation appears to be accelerating, although maybe I’m just getting older. But the easy gains are done. On one hand, datacenters need exponentially more compute and storage, and they need to operate 10x to 1000x more quickly. On the other, memory, processor cores, disks and flash technologies are getting no faster. The only way to fill that gap is through innovation. So it’s no surprise there are lots of interesting things happening at OEMs and ISVs, chip and solution companies, as well as open source community and startups. This is what makes it such an interesting time and industry.
Consumption shifts: We are seeing a decline in laptop and personal computer shipments, a drop that naturally is reducing storage demand in those markets. Laptops are also seeing a shift to SSD from HDD. This has been good for LSI, as our footprint in laptop HDDs had been small, but our presence in laptop SSDs is very strong. Smart phones and tablets are driving more cloud content, traffic and reliance on cloud storage. We have seen a dramatic increase in large HDDs for cloud storage, a trend that seems to be picking up speed, and we believe the cloud HDD market will be very healthy and will see the emergence of new, cloud-specific HDDs that are radically different and specifically designed for cool and cold storage.
There is also an explosion of SSD and PCIe flash cards in cloud computing for databases, caches, low-latency access and virtual machine (VM) enablement. Many applications that we take for granted would not be possible without these extreme low-latency, high-capacity flash products. But very few companies can make a viable storage system from flash at an acceptable cost, opening up an opportunity for many startups to experiment with different solutions.
Summary: So I believe the biggest hyperscale innovations are autonomous behavior and orchestration, HA at the datacenter level vs. machine level, and big data. These are radically changing the whole industry. And what are those changes for our industry and ecosystem? You name it: timing, end customers, new players, open initiatives, new architectures and algorithms, innovation, and consumption patterns. All that’s staying the same are legacy products and solutions.
These were great questions. Sometimes you need the press or someone outside the industry to ask a question that makes you step back and think about what’s going on. Great questions.
I was recently speaking to a customer about data reduction technology and I remembered a conversation I had with my mother when I was a teenager. She used to complain how chaotic my bedroom looked, and one time I told her “I was illustrating the second law of thermodynamics” for my physics class. I was referring to the mess and the tendency of things to evolve towards the state of maximum entropy, or randomness. I have to admit I only used that line once with my mom because it pissed her off and she likened me to an intelligent donkey.
I never expected those early lessons in theoretical physics to be useful in the real world, but as it turns out entropy can be a significant factor in determining solid state drive (SSD) performance. When an SSD employs data reduction technology, the degree of entropy or randomness in the data stream becomes inversely related to endurance and performance—the lower the data entropy, the higher the endurance and performance of the SSD.
Entropy affects data reduction
In this context I am defining entropy as the degree of randomness in data stored by an SSD. Theoretically, minimal or nonexistent entropy would be characterized by data bits of all ones or all zeros, and maximum entropy by a completely random series of ones and zeros. In practice, the entropy of what we often call real-world data falls somewhere in between these two extremes. Today we have hardware engines and software algorithms that can perform deduplication, string substitution and other advanced procedures that can reduce files to a fraction of their original size with no loss of information. The greater the predictability of data – that is, the lower the entropy – the more it can be reduced. In fact, some data can be reduced by 95% or more!
Files such as documents, presentations and email generally contain repeated data patterns with low randomness, so are readily reducible. In contrast, video files (which are usually compressed) and encrypted files (which are inherently random) are poor candidates for data reduction.
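You can see this relationship for yourself with nothing more exotic than a general-purpose compressor. This sketch uses Python’s stock zlib as a stand-in for a data reduction engine (it is not the algorithm an SSD controller uses, just an illustration of how entropy bounds reducibility):

```python
import os
import zlib

def reduction_ratio(data: bytes) -> float:
    """Percentage by which zlib shrinks the input (higher = lower entropy)."""
    compressed = zlib.compress(data, level=9)
    return 100.0 * (1 - len(compressed) / len(data))

low_entropy = b"\x00" * 65536                  # all zeros: minimal entropy
text_like   = b"the quick brown fox " * 3277   # repetitive text: low entropy
random_data = os.urandom(65536)                # like encrypted data: maximum entropy

print(f"all zeros:     {reduction_ratio(low_entropy):6.1f}% reduction")
print(f"repeated text: {reduction_ratio(text_like):6.1f}% reduction")
print(f"random bytes:  {reduction_ratio(random_data):6.1f}% reduction")
```

The all-zeros and repetitive-text buffers shrink by well over 90%, while the random buffer barely compresses at all (zlib typically makes it slightly larger) – exactly the low-entropy vs. high-entropy split described above.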
A reminder is in order not to confuse random data with random I/O. Random (and sequential) I/O describes the way data is accessed on the storage media. The mix of random vs. sequential I/O also influences performance, but in a different way than entropy does, as described in my blog “Teasing out the lies in SSD benchmarking.”
Why data reduction matters in an SSD
The NAND flash memory inside SSDs is very sensitive to the cumulative amount of data written to it. The more data written to flash, the shorter the SSD’s service life and the sooner its performance will degrade. Writing less data, therefore, means better endurance and performance. You can read more about this topic in my two blogs “Can data reduction technology substitute for TRIM” and “Write Amplification – Part 2.”
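As a back-of-envelope sketch of why writing less matters (my own illustration with made-up but plausible numbers, not a datasheet calculation): the flash can absorb roughly capacity times rated P/E cycles of writes, and each host write is inflated – or, with data reduction, deflated – by the write amplification factor.

```python
def drive_lifetime_years(capacity_gb: float, pe_cycles: float,
                         write_amplification: float,
                         host_writes_gb_per_day: float) -> float:
    """Rough SSD endurance estimate: total flash write budget divided by
    the actual flash writes per day (host writes times write amplification)."""
    total_flash_writes_gb = capacity_gb * pe_cycles
    flash_writes_per_day = host_writes_gb_per_day * write_amplification
    return total_flash_writes_gb / flash_writes_per_day / 365

# Hypothetical 256 GB drive rated for 3,000 P/E cycles, 40 GB host writes/day:
print(drive_lifetime_years(256, 3000, 2.5, 40))  # typical controller, WA > 1
print(drive_lifetime_years(256, 3000, 0.6, 40))  # data reduction can push WA < 1
```

Cutting the effective write amplification from 2.5 to 0.6 in this toy model roughly quadruples the estimated service life, which is the intuition behind the blogs cited above.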
Real-world examples in client computing
Take an encrypted text document. The file started out as mostly text with some background formatting data. All things considered, the original text file is fairly simple and organized. The encryption, by design, turns the data into almost completely random gibberish with almost no predictability to the file. The original text file, then, has low entropy and the encrypted file high entropy.
Intel Labs examined entropy in the context of compressibility as background research to support its Intel SSD 520 Series. The following chart summarizes Intel’s findings for the kinds of data commonly found on client storage drives, and the amount of compression that might be achieved:
According to Intel, “75% of the file types observed can be typically compressed 60% or more.” Granted, the kind of files found on drives varies widely according to the type of user. Home systems might contain more compressed audio and video, for example – poor candidates, as we mentioned, for data reduction. But after examining hundreds of systems from a wide range of environments, LSI estimates that the entropy of typical user data averages about 50%, suggesting that many users would see at least a moderate improvement in performance and endurance from data reduction because most data can be reduced before it is written to the SSD.
Real-world examples in the enterprise
Enterprise IT managers might be surprised at the extent to which data reduction technology can increase workload performance. While gauging the level of improvement with any precision would require data-specific benchmarking, sample data can provide useful insights. LSI examined the entropy of various data types, shown in the chart below. I found the high reducibility of the Oracle® database file very surprising because I had previously been told by database engineers that I should expect higher entropy. I later came to understand these enterprise databases are designed for speed, not capacity optimization. Therefore it is faster to store the data in its raw form rather than use a software compression application to compress and decompress the database on the fly and slow it down.
Putting it all together
The chief goals of PC and laptop users and IT managers have long been, and remain, to maximize the performance and lifespan of storage devices – SSDs and HDDs – and at a competitive price point. The challenge for SSD users is to find a device that delivers on all three fronts. LSI® SandForce® DuraWrite™ technology helps give SSD users exactly what they want. By reducing the amount of data written to flash memory, DuraWrite increases SSD endurance and performance without additional cost – even if it doesn’t help organize your teenager’s bedroom.
A customer recently asked me if the SF3700, our latest flash controller, supports SATA Express and fired away with a bunch of other questions about the standard. The depth of his curiosity suggested a broader need for education on the basics of the standard.
To help me with the following overview of SATA Express, I recruited Sumit Puri, Sr. Director of Strategic Marketing for the Flash Components Division at LSI (SandForce). Sumit is a longtime contributor to many storage standards bodies and has been working with SATA-IO – the group responsible for SATA Express – for many years. He has first-hand knowledge of SATA-IO’s work.
Here are his insights into some of the fundamentals of SATA Express.
What is SATA Express?
Sumit: There’s quite a bit of confusion in the industry about what SATA Express defines. In simple terms, SATA Express is a specification for a new connector type that enables the routing of both PCIe® and SATA signals. SATA Express is not a command or signaling protocol. It should really be thought of as a connector that mates with legacy SATA cables and new PCIe cables.
Why was SATA Express created?
Sumit: SATA Express was developed to help smooth the transition from the legacy SATA interface to the new PCIe interface. SATA Express gives system vendors a common connector that supports both traditional SATA and PCIe signaling and helps OEMs streamline connector inventory and reduce related costs.
What is the protocol used in SATA Express?
Sumit: One of the misconceptions about SATA Express is that it’s a protocol specification. Rather, as I mentioned, it’s a mechanical specification for a connector and the matching cabling. Protocols that can run over SATA Express include SATA, AHCI and NVMe.
What are the form factors for SATA Express?
Sumit: SATA Express defines connectors for both a 2.5” drive and the host system. SATA Express connects the drive and system using SATA cables or the newly defined PCIe cables.
What connector configuration is used for SATA Express?
Sumit: Because SATA Express supports both SATA and PCIe signaling as well as the legacy SATA connectors, there are multiple configuration options available to motherboard and device manufacturers. The image below shows plug (a), which is built for attaching to a PCIe device. Socket (b) would be part of a cable assembly for receiving plug (a) or a standard SATA plug, and socket (c) would mount to a backplane or motherboard for receiving plug (a) or a standard SATA plug. The last two connectors are a mating pair designed to enable cabling (e) to connect to motherboards (d).
When will hosts begin supporting SATA Express?
Sumit: We expect systems to begin using SATA Express connectors early this year. They will primarily be deployed in desktop environments, which require cabling. In contrast, we expect limited use of SATA Express in notebook and other portable systems that are moving to cableless card-edge connector designs like the recently minted M.2 form factor. We also expect to see scant use of SATA Express in enterprise backplanes. Enterprise customers will likely transition to other connectors that support higher speed PCIe signaling like the SFF-8639, a new connector that was originally included in the SATA Express specification but has since been removed.
Will LSI support SATA Express?
Sumit: Absolutely. Our SF3700 flash controller will be fully compatible with the newly defined SATA Express connector and support either SATA or PCIe. Our current SF-2000 SATA flash controllers support SATA cabling used on SATA Express, but not PCIe.
Will LSI also support SRIS?
Sumit: PCIe devices enabled with SRIS (Separate Refclk Independent SSC) can self-clock, so they need no reference clock from the host, allowing system builders to use lower cost PCIe cables. SRIS is an important cost-saving feature for cabling that supports PCIe signaling; it does not apply to card-edge connector designs. Today the SF3700 supports PCIe connectivity, and LSI will support SRIS in future releases of the SF3700 and other products.
Why is it called SATA Express?
Sumit: SATA Express blends the names of the two connectors and captures the hybridization of the physical interconnects. The name reflects the ability of legacy SATA connectors to support higher PCIe data rates to simplify the transition to PCIe devices. SATA Express can pull double duty, supporting both PCIe and SATA signaling in the same motherboard socket. The same SATA Express socket accepts both traditional SATA and new PCIe cables and links to either a legacy SATA or SATA Express device connector.
How fast can SATA Express run?
Sumit: The PCIe interface defines the top SATA Express speed. A PCIe Gen2 x2 device supports up to 900 MB/s of throughput, and a PCIe Gen3 x2 device up to 1800 MB/s – both significantly higher than the 550 MB/s speed ceiling of today’s SATA devices.
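The figures Sumit quotes include protocol overhead; the raw line-rate arithmetic behind them is easy to sketch (my own back-of-envelope, not text from the PCIe specification):

```python
def pcie_lane_mbps(gigatransfers_per_sec: float, encoding_efficiency: float) -> float:
    """Raw per-lane bandwidth in MB/s after line-encoding overhead."""
    # 1 GT/s is 1e9 bits/s on the wire; divide by 8 for bytes.
    return gigatransfers_per_sec * 1e9 * encoding_efficiency / 8 / 1e6

gen2 = pcie_lane_mbps(5.0, 8 / 10)     # Gen2: 5 GT/s with 8b/10b encoding
gen3 = pcie_lane_mbps(8.0, 128 / 130)  # Gen3: 8 GT/s with 128b/130b encoding

print(f"Gen2 x2: {2 * gen2:.0f} MB/s raw")  # ~1000 MB/s before packet/protocol overhead
print(f"Gen3 x2: {2 * gen3:.0f} MB/s raw")  # ~1969 MB/s before packet/protocol overhead
```

Subtracting packet and protocol overhead from those raw numbers lands close to the 900 MB/s and 1800 MB/s figures quoted for delivered throughput.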
Is SATA Express similar to M.2?
Sumit: There are two key similarities. Both support SATA and PCIe on the same host connector, and both are designed to help transition from SATA to PCIe over time.
SATA Express delivers the future of connector speeds today
SATA Express was born of the stuff of all great inventions. Necessity. The challenge SATA-IO faced in doubling SATA 6Gb/s speeds was herculean. The undertaking would have been too time-consuming to deliver the next-generation connection speeds that PCIe already answers, and too involved, requiring an overhaul of the SATA standard. Even in the brightest scenario, the effort would have produced a power guzzler at a time when greater power efficiency is a must for system builders. SATA-IO found a better path, an elegant bridge to PCIe speeds in the form of SATA Express.
Solid state drive (SSD) makers have introduced many new layout form factors that are not possible with hard disk drives (HDDs). My blog Size matters: Everything you need to know about SSD form factors talks about the many current SSD form factors, but I gave the new M.2 form factor only a glimpse. The specification and its history merit a deeper look.
A few years ago the PCI Special Interest Group (PCI-SIG), teaming with the Serial ATA International Organization (SATA-IO), started to develop a new form factor standard to replace Mini-PCIe and mSATA, since specifications from both of these organizations are required to build SATA M.2 SSDs. The new layout and connector would be used for applications including WiFi, WWAN, USB, PCIe and SATA, with SSD implementations using either PCIe or SATA interfaces. The groups set out to create a narrower connector that supports higher data rates, a lower profile and boards of varying lengths to accommodate very small notebook computers.
This new form factor also aimed to support micro servers and similar high-density systems by enabling the deployment of dozens of M.2 boards. Unique notches in the edge connector known as “keys” would be used to differentiate the wide array of products using the M.2 connector and prevent the insertion of incompatible products into the wrong socket.
The name change
Initially the M.2 form factor was called Next Generation Form Factor, or NGFF for short. NGFF was designed to follow the dimensional specifications being defined at that time by the PCI-SIG under the name M.2. Soon after NGFF was announced, confusion between the two names for the identical form factor reigned, prompting the renaming of NGFF to M.2. Many people in the industry have been slow to adopt the new M.2 name, and you often see articles that describe these solutions as “M.2, formerly known as NGFF.”
The keys
In the world of connectors or sockets, a “key” prevents the insertion of a connector into an incompatible socket to ensure the proper mating of connectors and sockets. The M.2 specification has defined 11 key configurations, seven for use sometime in the future. A socket can only have one key, but the plug-in cards can have keyways cut for multiple keys if they support those socket types. Of the four defined keys available for current use, two support SSDs. Key ID B (pins 12-19) gives PCIe SSDs up to two lanes of connectivity and key ID M (pins 59-66) provides PCIe SSDs with up to four lanes of connectivity. Both can accommodate SATA devices. All of the key patterns are uniquely configured so that the card cannot be flipped over and inserted incorrectly.
Unfortunately these keys alone do not tell the user enough about an SSD to help in selecting a replacement or upgrade drive. For example, a computer with an M.2 socket wired for PCIe x2 features a B key, so no M.2 board requiring PCIe x4 (M key) can fit. However, even though a SATA M.2 card with a B key will fit in the same socket, the host’s PCIe interface will not recognize the card’s SATA signals. Because of this signal incompatibility, users need to carefully read other socket specifications, either printed on the motherboard or included in the system configuration information, to see whether the socket is PCIe or SATA.
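The two SSD-relevant keys can be summarized in a small lookup table. This is just a sketch of the scheme described above (the pin ranges and lane counts come from the text; whether a given socket actually routes PCIe or SATA is the separate check the paragraph warns about):

```python
# M.2 key IDs that apply to SSDs, per the description above.
M2_SSD_KEYS = {
    "B": {"notch_pins": range(12, 20), "max_pcie_lanes": 2, "sata": True},
    "M": {"notch_pins": range(59, 67), "max_pcie_lanes": 4, "sata": True},
}

def max_pcie_lanes(card_keys: list[str]) -> int:
    """A card notched for multiple keys fits a socket with any one of them;
    the best case is the widest PCIe link among its keys."""
    return max(M2_SSD_KEYS[k]["max_pcie_lanes"] for k in card_keys)

print(max_pcie_lanes(["B"]))       # B-only card: up to x2
print(max_pcie_lanes(["B", "M"]))  # card notched for B and M: up to x4
```

Note that the model says nothing about SATA vs. PCIe signaling on a given motherboard socket, which is exactly why the keys alone are not enough when choosing a drive.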
The profile and lengths
Pin spacing on the M.2 card connector is denser than in prior connector specifications, enabling a narrower board and, in turn, thinner and lighter mobile computing systems. What’s more, the M.2 specification defines a module with components populating only one side of the board, leaving enough space between the main system board and the module for other components. The number of flash chips used by an SSD varies with its storage capacity: the lower the capacity requirement, the shorter the module can be, leaving system manufacturers more space for other components.
It’s all in the name
When I hear people call this specification by the name M.2, formerly known as NGFF, I cannot help but think about the time when the rock artist Prince changed his name to an unpronounceable symbol and everyone was stuck calling him The Artist Formerly Known as Prince. In his case I believe he was going for the publicity of the confusion.
As for the renaming of NGFF to M.2, I really don’t think that was the goal. In fact I believe it was intended to simplify brand identity by eliminating a second name for the same specification. No matter what we call this new form factor, it appears destined to thrive in both the mobile computing and high-density server markets.
The term ”form factor” is used in the computer industry to describe the shape and size of computer components, like drives, motherboards and power supplies. When hard disk drives (HDDs) initially made their way into microprocessor-based computers, they used magnetic platters up to 8 inches in diameter. Because that was the largest single component inside the HDD, it defined the minimum width of the HDD housing—the metal box around the guts of the drive.
The height was dictated by the number of platters stacked on the motor (about 14 for the largest configurations). Over time the standard magnetic platter diameter shrank, which allowed the HDD width to decrease as well. The computer industry used the platter diameter to describe HDD form factors, and those contours shrank over the years. The 8” HDDs for datacenter storage and desktop PCs shed size to 5.25” and then to today’s 3.5”, while laptop HDDs, starting at 2.5”, are now as small as 1.8”.
What defines an SSD form factor?
When solid state drives (SSDs) first started replacing HDDs, they had to fit into computer chassis or laptop drive bays (mounting location) built for HDDs, so they had to conform to HDD dimensions. The two SSDs shown below are form factor identical twins—without the outer casing—to 1.8” and 2.5” HDDs. The SSDs also use standard SATA connectors, but note that the SATA connector for 1.8” devices is narrower than the 2.5” devices to accommodate the smaller width.
However, there’s no requirement for the SSD to match the shape of a typical HDD form factor. In fact some of the early SSDs slid into the high-speed PCIe slots inside the computer chassis, not into the drive bays. A PCIe® SSD card solution resembles an add-in graphic card and installs the same way in the PCIe slot since the physical interface is PCIe.
The largest component of an SSD is a flash memory chip so, depending on how many flash chips are used, manufacturers have virtually limitless options in defining dimensions. JEDEC (Joint Electron Device Engineering Council) defines technical standards for the electronics industry, including SSD form factors. JEDEC defined the MO-297 standard, which establishes parameters for the layout, connector locations and dimensions of 54mm x 39mm Serial ATA (SATA) SSDs so they can use the same connector as standard 2.5” HDDs but fit into a much smaller space.
The most important element of an SSD form factor is the interface connector, the conduit to the host computer. In the early days of SSDs, that connector was typically the same SATA connector used with HDDs. But over time the width of some SSDs became smaller than the SATA connector itself, driving the need for new connectors.
Card edge connectors – the part of a computer board that plugs into a computer – emerged to enable smaller designs and to further reduce manufacturing and component costs by requiring the installation of only a single female socket on the host as a receptor for the edge of the SSD’s printed circuit board. (The original 2.5” and 1.8” SSD SATA connector required both a male and female plastic connector to mate the SSD to the computer).
With standardization of these connectors critical to ensuring interoperability among different manufacturers, a few organizations have defined standards for these new connectors. JEDEC defined the MO-300 (50.8mm x 29.85mm), which uses a mini-SATA (mSATA) connector, the same physical connector as mini PCI Express, although the two are not electrically compatible. SSD manufacturers have used that same mSATA edge connector and board width, but customized the length to accommodate more flash chips for higher capacity SSDs.
In 2012 a new, even smaller form factor was introduced as Next Generation Form Factor (NGFF), but was later renamed to M.2. The M.2 standard defines a long list of optional board sizes, and the connector supports both SATA and PCIe electrical interfaces. The keyways or notches on the connector can help determine the interface and number of PCIe lanes possible to the board. However, that gets into more details than we have space to cover here, so we will save that for a future blog.
Apple® MacBook Air® and some MacBook Pro systems use an SSD with a connector and dimensions that closely resemble those of the M.2 form factor. In fact Apple MacBook systems have used a number of different connectors and interfaces for their SSDs over the years. Apple used a custom connector with SATA signals from 2010 through 2012 and in 2013 switched to a custom connector with PCIe signals.
In some cases, standard SSD form factor configurations are not an option, so SSD manufacturers have taken it upon themselves to create custom board and interface configurations that meet those less typical needs.
And finally there’s the ubiquitous USB-based connection. While USB flash drives have been around for nearly a decade, many people do not realize the performance of these devices can vary by 10 to 20 times. Typically a USB flash drive is used to make data portable—replacing the old floppy disk. In those cases the speed of the device is not critical since it is used infrequently.
Now with the high speed USB 3 interface, a SATA-to-USB 3 bridge chip, and a high-performance flash controller like the LSI® SandForce® controller, these external devices can operate as primary system SSDs, performing as fast as a standard SSD inside the system. The primary advantages of these SSDs are removability and transportability combined with high-speed operation.
If there’s one constant in life, it’s demand for ever smaller storage form factors that prompt changes in circuit layout, connector position and, of course, dimensions. New connectors proposed for future generations of storage devices like the SFF-8639 specification will enable multiple interfaces and data path channels on the same connector. While the SFF-8639 does not technically define the device to which it connects, the connector itself is rather large, so the form factor of the SSD will need to be big enough to hold the connector. That’s why the primary SFF-8639 market is datacenters that use back-plane connectors and racks of storage devices. A similar connector – like SFF-8639, very large and built to support multiple data paths – is the SATA Express connector. I will save the details of that connector for an upcoming blog.
The sky’s the limit for SSD shapes and sizes. Without a spinning platter inside a box, designers can let their imaginations run wild. Creative people in the industry will continue to find new applications for SSDs that were previously restricted by the internal components of HDDs. That creativity and flexibility will take on growing importance as we continue to press datacenters and consumer electronics to do more with less, reminding us that size does in fact matter.
In the spirit of Christmas and the holidays, I thought it would be appropriate for a slight change from the usual blog entry today. In this case you need to think about the classic song “The Twelve Days of Christmas.” I promise not to dive into the origins of the song and the meaning behind it, but I will provide an alternate version of the words to think about when you are with your family singing Christmas carols.
Rather than write out all the 12 separate choruses, I decided to start with the final chorus on the 12th day.
Sung to the tune of “The 12 Days of Christmas”
On the 12th day of Christmas my true love gave to me
Twelve second boot times,
Eleven data reduction benefits,
Ten nm-class flash,
Nine flash channels,
Eight flash chips,
Seven percent over provisioning,
Six gigabits per second,
Five golden edge connectors,
Four PCIe lanes,
Third generation controller,
Two interfaces in one package,
And a SandForce Driven SSD.
Maybe if you were good this year, Santa will bring you a SandForce Driven SSD to revitalize your computer too! Merry Christmas and happy holidays to all!