I started working years ago to engage large datacenters, learn what their problems are and try to craft solutions for them. It’s taken years, but we engaged them, learned, changed how we thought about storage and began creating solutions that are now being deployed at scale.

We’ve started to do the same with the Chinese Internet giants. They’re growing at an incredible rate.  They have similar problems, but it’s surprising how different their solution approaches are. Each one is unique. And we’re constantly learning from these guys.

So to wrap up the blog series on my interview with CEO & CIO magazine, here are the last two questions, which explain a bit more.

CEO & CIO: Please use examples to tell the stories about the forward-looking technologies and architectures that LSI has jointly developed with Internet giants.

While our host bus adapters (HBAs) and MegaRAID® solutions have been part of the hyperscale Internet companies’ infrastructure since the beginning, we have only recently worked very closely with them to drive joint innovation. In 2009 I led the first LSI engagement with what we then called “mega datacenters.” It took a while to understand what they were doing and why. By 2010 we realized there were specialized needs, and began to imagine new hardware products that worked with these datacenters. Out of this work came the realization that flash was important for efficiency and capability, and the “invention” of the LSI® Nytro™ product portfolio. (More products are in the pipeline.) We have worked closely with hyperscale datacenters to evolve and tune these solutions, to the point where Nytro products have become the backbone of their main revenue platforms. Facebook has been a vitally important partner in evolving our Nytro platform – teaching us what was truly needed – and now much of their infrastructure runs on LSI products. These same products are a good fit for other hyperscale customers, and we are steadily winning many of the large ones.

Looking forward, we are partnered with several Internet giants in the U.S. and China to work on cold storage solutions and, more importantly, shared DAS (distributed DAS, or D-DAS) solutions, and we have been demonstrating prototypes. These solutions enable pooled and rack-scale architectures, and can be made to work tightly with software-defined datacenters (SDDCs). They simplify management and resource allocation, making task deployment easier and more efficient. Shared DAS solutions increase infrastructure efficiency and improve lifecycle management of components. And they have the potential to radically improve application performance and infrastructure costs.

Looking further into the future, we see even more radical changes in silicon supporting transport protocols and storage models, and in rack-scale architectures supporting storage and pooled memory. And cold storage is a huge, though some would say boring, problem that we are also focused on: storing lots of data for free and using no power to do it… but I really can’t talk about any of that.

CEO & CIO: LSI maintains good contact with big Internet companies in China. What are the biggest differences between dealing with these Internet enterprises and dealing with traditional partners?

Yes, we have a very good relationship with the large Chinese Internet companies. In fact, I will be visiting Tencent, Alibaba and Baidu in a few weeks. One of the CTOs I would even call a friend; that is, we have fun talking together about the future.

These meetings have evolved. The first meetings LSI had about two years ago were sales calls, or support for OEM storage solutions. These accomplished very little. Once we began visiting as architects speaking to architects, real dialogs began. Our CEO has been spending time in China meeting with these Internet companies both to learn, and to make it clear that they are important to us, and we want a chance to solve their problems. But the most interesting conversations have been the architectural ones. There have been very clear changes in the two years I have traveled within China – from standard enterprise to hyperscale architectures.

We’ve received fascinating feedback on architecture, use, application profiles, platforms, problems and goals. We have strong engagement with the U.S. Internet giants. At the highest level, the Chinese Internet companies have similar problems and goals. But the details quickly diverge because of revenue per user, resources, power availability, datacenter ownership and Internet company age. The use of flash is very different.

The Chinese Internet giants are at an amazing change point. Most are ready for explosive growth of infrastructure and deployment of cloud services. Most are changing from standard OEM systems and architectures to self-designed hyperscale systems after experimenting with Scorpio and microserver deployments. Several, like JD.com (an Amazon-like company), are moving from hosted to self-built infrastructure. And there seems to be a general realization that the datacenter has changed from a compute-centric model to a dataflow model, where storage and network dictate how much work gets done more than the CPU does. These giants are leveraging their experience and capability to move very quickly, and in a few cases are working to create true pooled rack-level architectures much like Facebook and Google have started in the U.S. In fact, Baidu is similar to Facebook in this approach, but differs in its longer-term goals for the architecture.

The Chinese companies are amazingly diverse, even within one datacenter, and arguments on architectural direction are raging within these Internet giants – it’s healthy and exciting. However, the innovations that are coming are similar to those developed by large U.S. Internet companies. Personally, I have found these Internet companies much more exciting and satisfying to work with than traditional OEMs. The speed and cadence of advancement, the recognition of problems and their importance, and the focus on efficiency and optimization have all been much more exciting. And the youthful mentality and approach to problems, unburdened by “the way we’ve always done this,” has been wonderful.

Also see these blogs of mine over the past year, where you can read more about some of these changes:

Postcard from Shenzhen: China’s hyperscale datacenter growth, mixed with a more traditional approach
China in the clouds, again
China: A lot of talk about resource pooling, a better name for disaggregation

Or see them (and others) all here.

Summary: So it’s taken years, but we engaged U.S. Internet giants, learned about their problems, changed how we thought about storage and began creating solutions that are now being deployed at scale. And we’re constantly learning from these guys. Constantly, because their problems are constantly changing.

We’ve now started to do the same with the Chinese Internet giants. They have similar problems, and will need similar solutions, but they are not the same. And just like the U.S. Internet giants, each one is unique.

One of the coolest parts of my job is talking with customers and partners about their production environment challenges around database technology.  A topic of particular interest lately is in-memory database (IMDB) systems and their integration into an existing environment.

The need for speed
Much of the media coverage of IMDB integrations is heavily focused on speed and loaded with terms like real-time processing, on-demand analytics and memory speed.  But zeroing in on the performance benefits comes at the expense of so many other key aspects of IMDBs. The technology needs to be evaluated as a whole.

Granted, in-memory databases can store data structures in DRAM with latency that is measured in nanoseconds. (Latency of disk-based technology, comparatively, is glacial – clocked in milliseconds.)  Depending on the workload and the vendor’s database engine architecture, DRAM processing can improve database performance by as much as 50X-100X.
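
For a back-of-envelope sense of that gap, here is a quick sketch. The latency figures are ballpark assumptions for illustration, not measured numbers:

```python
# Rough, assumed latency figures -- illustrative only, not benchmarks.
DRAM_ACCESS_NS = 100          # DRAM access, on the order of nanoseconds
DISK_ACCESS_NS = 5_000_000    # disk seek + rotation, on the order of milliseconds

raw_gap = DISK_ACCESS_NS / DRAM_ACCESS_NS
print(f"Raw media latency gap: ~{raw_gap:,.0f}x")   # ~50,000x

# Yet end-to-end database speedups land closer to 50x-100x, because query
# parsing, locking, logging and network round trips don't shrink with the media.
```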

How durable is it?
Keep in mind that most relational database systems conform to the ACID (Atomicity, Consistency, Isolation, Durability) properties of transactions. (You can find a more thorough investigation of these properties in this paper – “The Transaction Concept: Virtues and Limitations” – authored by database pioneer Jim Gray.) The durability property naturally raises a question: how is data protected from DRAM failures when things go haywire, and what is the recovery experience like? Relational databases implement the durability property to ensure transaction information is permanently captured, even across power loss or hardware failure.

The commonly used WAL (Write Ahead Logging) method ensures that the transaction data is written to a log file (persisted on non-volatile storage) before it is committed and subsequently written to a data file (persisted on non-volatile storage). When the database engine restarts after a failure, it switches to recovery mode to read the log file and determine if the transactions should be rolled forward (committed) or rolled back (cancelled), depending on their state at the time of failure.
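
As a rough illustration of that write-ahead sequence, here is a minimal sketch in Python. The record format, file names and recovery policy are simplified assumptions for this example, not any vendor’s engine:

```python
import json
import os

LOG_FILE = "wal.log"      # append-only log on non-volatile storage (assumed name)
DATA_FILE = "data.json"   # data file the log protects (assumed name)

def commit(txn_id, changes, state):
    """Write-ahead: the log record is made durable BEFORE the data file is touched."""
    record = {"txn": txn_id, "changes": changes, "status": "commit"}
    with open(LOG_FILE, "a") as log:
        log.write(json.dumps(record) + "\n")
        log.flush()
        os.fsync(log.fileno())    # record must survive power loss before we proceed
    state.update(changes)
    with open(DATA_FILE, "w") as data:
        json.dump(state, data)    # only now is it safe to update the data file

def recover():
    """On restart, replay the log: committed records roll forward, the rest roll back."""
    state = {}
    if os.path.exists(LOG_FILE):
        with open(LOG_FILE) as log:
            for line in log:
                record = json.loads(line)
                if record.get("status") == "commit":
                    state.update(record["changes"])   # roll forward
                # records never marked committed are simply not applied (rollback)
    return state
```

The essential guarantee is the ordering: the log record is fsync’d before the data file changes, so recovery always has enough information to roll forward or roll back.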

Current in-memory database systems do support durability, though implementations vary by vendor.  Here is a sampling of durability techniques they use:

  • WAL (Write Ahead Logging)
    • Traditional method described above using a log file.
    • Changes are persisted to non-volatile storage that is used for recovery.
  • Replication
    • Data is copied to more than one location, and can be across different nodes.
    • Recovery can be handled using failover to alternate nodes.
  • Snapshots
    • Database snapshots are taken at intervals.
    • Previous snapshots can be used for recovery (see the sketch after this list).
  • Data Tiering
    • Frequently accessed data resides only in in-memory DRAM structures.
    • Archival or less frequently accessed data resides only on non-volatile storage.
    • Replication can be used as well.
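
To make the snapshot technique concrete, here is a minimal sketch. The directory layout and file naming below are assumptions for illustration only:

```python
import glob
import json
import os
import time

SNAP_DIR = "snapshots"   # snapshot location on non-volatile storage (assumed)

def take_snapshot(state):
    """Persist the full in-memory state; run at an interval set by policy."""
    os.makedirs(SNAP_DIR, exist_ok=True)
    path = os.path.join(SNAP_DIR, f"snap-{int(time.time())}.json")
    with open(path, "w") as f:
        json.dump(state, f)
    return path

def recover_from_latest():
    """Load the most recent snapshot; start empty if none exists."""
    snaps = sorted(glob.glob(os.path.join(SNAP_DIR, "snap-*.json")))
    if not snaps:
        return {}
    with open(snaps[-1]) as f:
        return json.load(f)
```

The trade-off is visible in the sketch: snapshots alone lose whatever changed after the last interval, which is why they are typically combined with logging or replication.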

Shopping tip: Consider durability when evaluating your options
If changes in your data environment are frequent and require greater persistence and consistency, be sure to also consider durability when evaluating and comparing vendor implementations.  Durability is no less important than query speed.  Different implementations may or may not be a good fit and in some cases might require additional hardware that can increase cost.

It’s easy to get swept away by all the media attention about how in-memory databases deliver blazing performance, but customers often tell me they would gladly give up some performance for rock-solid stability and interoperability.

For our part, LSI enterprise PCIe® flash storage solutions not only perform well but also include DuraClass™ technology, which can increase the endurance, reliability and power efficiency of non-volatile storage used for in-memory database systems.
