
During the past few years, the deployment of cloud architectures has accelerated to support various consumer and enterprise applications such as email, word processing, enterprise resource planning, customer relationship management and the like. Traditionally co-located servers, storage and networking have moved to the cloud en masse, delivered as a service, with overlying applications that have been, and remain, largely insensitive to delay and jitter.

But the fast-emerging next generation of business applications requires much tighter service level agreements (SLAs) from cloud providers. The Internet of Things, smart grids, immersive communications, hosted clients and gaming are good examples. These use cases tend to be marked by periods of high interactivity, during which delay and jitter across the network, compute and storage must be minimized. The rest of the time, the applications run in a steady state and require only minimal SLAs from the infrastructure resources.

Emerging use cases drive demand for two-tier cloud architectures
These emerging use cases are driving the rise of two-tier cloud architectures. The key for these architectures to succeed is efficiency: they must be cost-effective to deploy and guarantee a tight SLA for applications while leaving the rest of the carrier and cloud infrastructure unchanged. What’s more, the application service needs to move closer to the end user, but only for the duration of the real-time interaction. These measures help ensure that the customer’s application-specific requirements for delay and jitter are met without requiring major upgrades to the carrier or cloud infrastructures.

In this two-tier cloud architecture, the first cloud tier, also referred to in the industry as the centralized cloud, is where the applications typically reside. The second cloud tier is invoked on demand, and the application's virtual machine, along with its relevant network, application and storage data, shifts to this tier. Keep in mind that the second tier can be instantiated as part of an existing service provider network element or as a stand-alone infrastructure element closer to the end user.

A connected patient heart monitor provides a useful example. During most of its operational time, the device may be collecting data only periodically, with no need for any interaction with medical staff. But when the heart monitor detects an abnormality, the application hosted in the cloud must instantly be moved closer to the user to provide interactivity. For this use case, the second-tier cloud must host the application, assess the patient's condition, retrieve relevant historical information and alert the medical staff for a possible medical response.
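To make that escalation concrete, here is a minimal, purely illustrative Python sketch of the decision logic such a device-facing service might use; the class names, thresholds and orchestrator calls are hypothetical stand-ins for whatever telemetry and orchestration interfaces an actual deployment would expose.

```python
# Illustrative only: names, thresholds and the orchestrator calls are hypothetical.
from dataclasses import dataclass

ABNORMAL_BPM_LOW, ABNORMAL_BPM_HIGH = 40, 150  # assumed alert thresholds

@dataclass
class Reading:
    patient_id: str
    bpm: int

class TierOrchestrator:
    """Stand-in for whatever API moves the application between cloud tiers."""
    def escalate_to_tier2(self, patient_id: str) -> None:
        # In a real system: migrate the VM, pull recent history, notify staff.
        print(f"escalating patient {patient_id} to second-tier (edge) cloud")

    def settle_to_tier1(self, patient_id: str) -> None:
        print(f"patient {patient_id} steady; first-tier cloud is sufficient")

def handle(reading: Reading, orch: TierOrchestrator) -> None:
    if reading.bpm < ABNORMAL_BPM_LOW or reading.bpm > ABNORMAL_BPM_HIGH:
        orch.escalate_to_tier2(reading.patient_id)   # interactive period: tight SLA
    else:
        orch.settle_to_tier1(reading.patient_id)     # steady state: minimal SLA

handle(Reading("patient-17", bpm=172), TierOrchestrator())
```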

The key, then, is to move applications from tier-one to tier-two clouds seamlessly. LSI® Axxia® multi-core communication processors feature architectural scalability for network acceleration and compute cluster capabilities that provide this seamless bridge between the two clouds. For two-tier cloud architectures to thrive, they need three fundamental elements:

a. On-demand resource provisioning
Many cloud datacenters are squarely focused on deploying end-to-end resource provisioning tools to improve efficiency. Not the least among these is the fast-growing end-to-end orchestration ecosystem for OpenStack® software, though there are many proprietary solutions as well. End-to-end orchestration tools need to be aware of all the second-tier cloud datacenter components, and in some cases OpenStack is even being deployed to boot up second-tier cloud components. However, a big challenge remains: maintaining steady-state operation and the full capabilities of the various distributed second-tier cloud components.
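As a rough illustration of what booting a second-tier component on demand can look like with OpenStack, here is a minimal sketch using the openstacksdk Python library; the cloud name "tier2-edge" and the image, flavor and network names are assumptions for the example, not part of any particular deployment.

```python
# Minimal sketch using openstacksdk; resource names are illustrative assumptions.
import openstack

# "tier2-edge" is assumed to be a second-tier site defined in clouds.yaml.
conn = openstack.connect(cloud="tier2-edge")

image = conn.compute.find_image("hosted-app-image")
flavor = conn.compute.find_flavor("edge.small")
network = conn.network.find_network("edge-net")

# Boot the application's server at the second tier only when the SLA demands it.
server = conn.compute.create_server(
    name="hosted-app-tier2",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)  # block until the server is ACTIVE
print(server.status)
```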

b. Efficient virtual machine movements
For tiered cloud architectures to thrive, they must also transfer enough network, application and storage data to sustain continued operation of the application at the second tier. However, many of today's virtual machine migration solutions are not geared to moving datacenter resources efficiently. In a two-tier cloud architecture, the virtual machine migration may traverse many hops of carrier infrastructure, increasing total cost of ownership (TCO). In addition, complete virtual machine images must be transferred before the destination host can start the machines, extending the time it takes for the second tier to take control. The upshot is that optimized solutions need to be developed to enable seamless virtual machine migration.
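For a sense of the mechanics, the sketch below uses the libvirt Python bindings to live-migrate a guest, copying memory while it keeps running rather than shipping the whole image before startup; the host URIs and the guest name "hosted-app" are hypothetical.

```python
# Minimal live-migration sketch using libvirt-python; host and guest names are hypothetical.
import libvirt

src = libvirt.open("qemu+ssh://tier1-host/system")   # first-tier hypervisor
dst = libvirt.open("qemu+ssh://tier2-host/system")   # second-tier hypervisor

dom = src.lookupByName("hosted-app")

# Live (pre-copy) migration: memory pages are copied while the guest keeps
# running, then the guest pauses briefly for the final switchover.
flags = libvirt.VIR_MIGRATE_LIVE | libvirt.VIR_MIGRATE_PEER2PEER
dom.migrate(dst, flags, None, None, 0)

src.close()
dst.close()
```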

c. Network and storage acceleration for resource-constrained second-tier clouds
Unlike the first cloud tier, the second cloud tier is bound to be resource-constrained, requiring significant data acceleration at both the networking and storage layers. A 16-core, fully SMP ARM®-based processor like the LSI Axxia 5500, with its processor cores and, more importantly, its fully programmable acceleration engines for offloading security, deep packet inspection, traffic management and other functions, is well suited to network acceleration of the second cloud tier. Keep in mind that specific acceleration needs vary with the location of the second-tier cloud. For example, the acceleration requirements would differ depending on whether the second tier is part of a service provider access aggregation router or located on a remote lamp post. The need for security acceleration, in particular, increases tremendously in cases where data associated with particular events must be authenticated before further processing. To support these varied acceleration needs, the second cloud tier can be built out of fairly homogeneous, scalable ARM-based hardware components with differing acceleration builds tuned to the specific tasks running on them.
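One illustrative way to express such per-location acceleration builds is as configuration selected at provisioning time, sketched below in Python; the profile fields and location names are invented for the example and do not correspond to an actual Axxia SDK.

```python
# Illustrative configuration only; field and location names are assumptions,
# not an Axxia SDK. The idea: homogeneous ARM-based nodes, differing offload builds.
from dataclasses import dataclass

@dataclass(frozen=True)
class AccelProfile:
    security_offload: bool        # authenticate event data before further processing
    deep_packet_inspection: bool
    traffic_management: bool
    storage_acceleration: bool

PROFILES = {
    # Co-located with an access aggregation router: heavy traffic handling.
    "aggregation-router": AccelProfile(True, True, True, False),
    # Remote, resource-constrained site (e.g. a lamp post): lean, security-first.
    "remote-lamp-post": AccelProfile(True, False, False, True),
}

def select_profile(location: str) -> AccelProfile:
    """Pick the acceleration build to load onto an otherwise identical node."""
    return PROFILES[location]

print(select_profile("remote-lamp-post"))
```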

Momentum for greater connectivity builds
Momentum is building behind billions of connected things and machines across industrial and consumer applications, creating a more connected, interactive world. Two-tier clouds and other innovative architectures are emerging at an accelerated pace to meet demand for this higher order of connectivity. And it is solutions like the LSI Axxia processor that promise to enable the scalable, flexible acceleration required for these emerging two-tier cloud architectures.

 



Have you ever seen the old BBC TV show “Connections”? It’s a little old now, but I loved how it followed threads through time, and I marveled at the surprising historical depth of important “inventions.” I think we need to remember that as engineers and technologists. We get caught up in the short-term tactical delivery of technology. We don’t see the sometimes immense ripples in society from our work – even years later.

I got a flurry of emails yesterday, arranging an anniversary get-together in August at the Apple campus. Why? It’s the 20th anniversary of the Newton. Ok – so this has nothing to do with LSI really, but it does have a lot to do with our everyday lives. More than you think.

So you either know the Newton and think it was a failure (think Trudeau's famous handwriting cartoon), or you don't and you're wondering what the *bleep* I'm talking about. Sometimes things that don't seem very significant early on end up having profound consequences. And I admit, the Newton was a failure – too expensive, not quite good enough, and the world couldn't yet grasp the concept of a general-purpose computer in your hand.

But oh – you could smell the future and get a tantalizing hint of what it would be. Remember – we’re talking 1993 here.

First – why does Rob Ober care? It’s personal. While I didn’t remotely help create the Newton, I did help bring it to market, mature the technology, and set the stage for the future (well – it’s not the future any more – it’s now). I was at Apple wrapping up the creation of the PowerPC processor and architecture, and the first Power Macs. I have a great memory around that time of getting the first Power Mac booted. Someone had the great idea of running the beta 68K emulator (to run standard Mac stuff). That was great, it worked, and then someone else said – wait – I have an Apple II emulator for the 68K Mac. So we had the very first PowerPC Mac running 68K code as a Mac to emulate a 6502 as an Apple II … and we played for hours. I also have a very clear memory of that PowerPC Mac standing shoulder-to-shoulder with the Robotron game in the Valley Green 5 building break room. It was a state-of-the-art video game and looked like this.

Yea, that shows you it was a while ago. (But it was a good game.)

A guy named Shane Robison pulled me over (yea, the same HP CTO, now CEO of FusionIO) to come fix some things on the super-hush Newton program. In the end, I took over responsibility for the processors, custom chips, communication stacks and hardware, plastics and tooling, display, touch screen, power supply, wireless, NiMH and LiION batteries…  A lot.  We pushed the limits of state of the art on all those fronts. It was a really important wonderful/terrible part of my career. I learned an amazing amount.

(If you’re interested in viewing a Newton from today’s perspective, there is a fascinating review here: http://techland.time.com/2012/06/01/newton-reconsidered/)

Let me start with some boring effects. We were using the ARM processor because of its low power. But. It wasn't perfect, and ARM itself was on the edge of insolvency. We invested a sizable chunk of money and gave ARM guidance on how to transition from the ARM6 to the ARM7 to the ARM9. ARM is alive today because of that, and the ARM9 is still in hundreds of millions of products. We also worked with DEC to create the StrongARM processor family, which became XScale at Intel, then went to Marvell, and also bootstrapped Atom, and, and…

The Newton needed non-volatile storage. Disks were immense, expensive and power-hungry. 2-1/2” disk? Didn’t exist.  3-1/2” was small. The only remotely cost-effective technology was called NAND flash, which was fundamentally incompatible with program execution, and nightmarish for data storage/retrieval, and unbelievably expensive per bit. I think the early Newtons were 8 Mbytes? (that’s mega not giga…). The team figured out how to make that work. Yep – that was the first use of Toshiba NAND for program/data. (I’ve been playing with flash for storage since then.)

Then some more interesting things…

I wired the Apple campus with wireless LAN base stations (it would be 6 years until Wifi, and 802.11 wasn’t even dreamt up yet) and built the wireless LAN receivers into Newtons, gave them to the Apple execs and set up their mail to be forwarded. You couldn’t even do that on laptops. We could be anywhere in the campus and instantly receive and send emails. More – we could browse the (rudimentary) web. I also worked with RIM (yea – Research In Motion – Blackberry) and Metricom to use their wireless wide area net technology to give Newtons access to email and the Web anywhere in the Bay Area. Quite a few times I was driving to meetings, wasn’t sure where to go, so pulled over and looked up the meeting in my Newton calendar, then checked the address on my browser with MapQuest. 1995. Sound familiar?

We also spent time pitching FedEx on the idea of a Newton-based tablet to manage inventory (integrated bar code scanner), accept signatures on screen with tablet and pen (even the upside-down thing to hand it to the customer), show route maps, and cellularly send all that info back and forth for live tracking. FedEx was stunned by the concept. Sound familiar? I still have the proposal book with industrial designs in my garage. Yes, another Silicon Valley garage. Here's what FedEx rolled out 10 years later… which is ultimately pretty similar to our proposal.

And don’t forget Object Programming. (You remember when OOPS was a high-tech term?) I’m not really a software guy – just not my thing – but I loved programming on the Newton. In 10 minutes you could actually bang out a useful, great-looking program. Personally, I think the world would have been way better off if those object libraries had been folded into the Java object library. Even so, I get a nostalgic feel when I do iOS programming.

I even built a one-off proto that had cellphone guts inside the plastic of the Newton. (OK – it was chunky, but the smallest phones at the time were HUGE.) I could make phone calls from the contacts or calendar or emails, and send and receive SMS messages, and rudimentary MMS messages before there was such a thing – I used it just like a very overweight iPhone (OK – more like the big Samsung Galaxy phones). I could even, in a pinch, do data over the GSM network – email, web, etc. It was around that time Nokia came calling and asked about our UI, our OS, our ability to use data over the GSM network… Those talks fell apart, but it was serious enough that I made trips to Nokia's mothership in Helsinki and Tampere a few times. (That's north even for a Canadian boy…)

And then years later I got a phone call from one of the key people at Apple – Mike Culbert (who, sadly, recently passed away) – to ask about cellular/baseband chipsets and solutions. He knew I knew the technology. I introduced him to my friends at Infineon (now Intel Mobile) for a discussion on a mystery project… Those parts ended up in the iPhone. A lot of the same people and technology, just way more advanced…

iPad? Sure. A lot of the same people were involved in a Newton that never saw the light of day. The BIC. Here it is with the iPad. Again – 15 years apart.

And do you remember the $100 laptop (OLPC)? As a founding board member, I brought an eMate kids' Newton laptop to show the team early on. And of course the debate on disk vs. flash followed the same path as it had in Newton. Here they are together, separated by more than 10 years. And then of course, OLPC has direct genetic parentage of netbooks, which then led to Ultrabooks… (Did you know at one point Apple was considering joining OLPC and offering Darwin/OS X as the OS? Didn't last long.)

And then there are the people. Off the top of my head there were founders or key movers of Palm, Xbox, Kindle, Hotmail, Yahoo, Netscape, Android, WebTV (think most set-top boxes), the Danger phone (you remember the Sidekick?), Evernote, Mercedes research and a bunch of others. And some friends who became well-known VCs. And I still have a lot of super-talented friends from that time, many of whom are still at Apple.

Sometimes things that don’t seem very significant have profound follow-on consequences. I think we need to remember that as engineers and technologists. We don’t see the sometimes immense ripples in society from our work – even years later. Today we’re planting the seeds for all those great things in the future. I admit, the Newton was a failure, but oh – you could smell the future and get a tantalizing hint of what it would be. Remember – we’re talking 1993 here.



Walking the Great Wall before visits to some of China’s hyperscale datacenters

I’ve been travelling to China quite a bit over the last year or so. I’m sitting in Shenzhen right now (If you know Chinese internet companies, you’ll know who I’m visiting). The growth is staggering. I’ve had a bit of a trains, planes, automobiles experience this trip, and that’s exposed me to parts of China I never would have seen otherwise. Just to accommodate sheer population growth and the modest increase in wealth, there is construction everywhere – a press of people and energy, constant traffic jams, unending urban centers, and most everything is new. Very new. It must be exciting to be part of that explosive growth. What a market.  I mean – come on – there are 1.3 billion potential users in China.

The amazing thing for me is the rapid growth of hyperscale datacenters in China, which is truly exponential. Their infrastructure growth has been 200%-300% CAGR for the past few years. It’s also fantastic walking into a building in China, say Baidu, and feeling very much at home – just like you walked into Facebook or Google. It’s the same young vibe, energy, and ambition to change how the world does things. And it’s also the same pleasure – talking to architects who are super-sharp, have few technical prejudices, and have very little vanity – just a will to get to business and solve problems. Polite, but blunt. We’re lucky that they recognize LSI as a leader, and are willing to spend time to listen to our ideas, and to give us theirs.

Even their infrastructure has a similar feel to the US hyperscale datacenters. The same only different.  ;-)

Alibaba (top and bottom) and Baidu visitor badges

Profitability
A lot of these guys are growing revenue at 50% per year, and several are getting 50% gross margin. Those are nice numbers in any country. One has hundreds of billions of dollars in revenue. And they're starting to push out of China. So far their pushes into Japan have not gone well, but other countries should be better. They all have unique business models. "We" in the US like to say things like "Alibaba is the Chinese eBay" or "Sina Weibo is the Chinese Twitter"… but that's not true – they all have more hybrid, unique business models, and so their datacenter goals, revenue and growth have slightly different profiles. And there are some very cool services that simply are not available elsewhere. (You listening, Apple®, Google®, Twitter®, Facebook®?) But they are all expanding their services, products and user bases. Interestingly, there is very little public cloud in China, so there are no real equivalents to Amazon's services or Microsoft's Azure. I have heard that that kind of model is now being developed, with the government as the initial customer. We'll see how that goes.

Scale
Hundreds of thousands of servers. They're not the scale of Google, but they sure are the scale of Facebook, Amazon, Microsoft… It's a serious market for an outfit like LSI. Really, it's a very similar scale to the US market now: close to 1 million servers installed among the main 4 players, and exabytes of data (we've blown past mere petabytes). Interestingly, they still use many co-location facilities, but that will change. More important, they're all planning to roughly double their infrastructure in the next 1-2 years. They have to; their growth rates are crazy.

Platforms
Often 5 or 6 distinct platforms, just like the US hyperscale datacenters: database platforms, storage platforms, analytics platforms, archival platforms, web server platforms… But they tend to look a little more like the racks of traditional servers that enterprises buy, with integrated disk bays, still a lot of 1G Ethernet, and still mostly from established OEMs. In fact, I just ran into one OEM's American GM, whom I happen to know, in Tencent's offices today. The typical servers have 12 HDDs in drive bays, though they are starting to look at SSDs as part of the storage platform. They do use PCIe® flash cards in some platforms, but the performance requirements are not as extreme as you might imagine. Reasonably low latency and consistent latency are the premium they are looking for from these flash cards – not maximum IOPS or bandwidth – very similar to their American counterparts. I think hyperscale datacenters are sophisticated in understanding what they need from flash, and in not requiring more than that. Enterprise could learn a thing or two.

Some server platforms have RAIDed HDDs, but most are direct-mapped drives with a high-availability (HA) layer across the server center – Hadoop® HDFS or self-developed Hadoop-like platforms. Some have also started to deploy microserver archival "bit buckets": a small ARM® SoC with 4 HDDs totaling 12 TBytes of storage, giving densities like 72 TBytes of file storage in 2U of rack. While I can only find about 5,000 of those in China – first-generation experiments – it's the start of a growing wave of archival solutions based on lower-performance ARM servers. The feedback is clear: they're not perfect yet, but the writing is on the wall. (If you're wondering about the math, that's 5,000 x 12 TBytes = 60 petabytes…)
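As a quick back-of-the-envelope check of those figures in Python – the 3 TByte drive size and the six microserver nodes per 2U are assumptions implied by, not stated in, the numbers above:

```python
# Back-of-the-envelope check; 3 TB drives and 6 nodes per 2U are assumptions.
drive_tb = 3
drives_per_node = 4
node_tb = drives_per_node * drive_tb          # 12 TB per ARM microserver node

nodes_per_2u = 6                              # implied by 72 TB / 12 TB
density_tb_per_2u = nodes_per_2u * node_tb    # 72 TB of file storage in 2U

deployed_nodes = 5_000
fleet_pb = deployed_nodes * node_tb / 1_000   # ~60 PB across the first-generation fleet

print(node_tb, density_tb_per_2u, fleet_pb)   # 12 72 60.0
```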

Power
Yes, power is important, but maybe more so here than we're used to. It's harder to get licenses for power in China, so it's really important to stay within the power envelope your datacenter has. You simply can't get more. That means they have to deploy solutions that do more in the same power profile, especially as they move out of co-located datacenters into private ones. Annually: 50% more users supported, more storage capacity, more performance, more services, all in the same power. That's not so easy. I would expect solar power in their future, just as Apple has done.

Scorpio
Here’s where it gets interesting. They are developing a cousin to OpenCompute that’s called Scorpio. It’s Tencent, Alibaba, Baidu, and China Telecom so far driving the standard.  The goals are similar to OpenCompute, but more aligned to standardized sub-systems that can be co-mingled from multiple vendors. There is some harmonization and coordination between OpenCompute and Scorpio, and in fact the Scorpio companies are members of OpenCompute. But where OpenCompute is trying to change the complete architecture of scale-out clusters, Scorpio is much more pragmatic – some would say less ambitious. They’ve finished version 1 and rolled out about 200 racks as a “test case” to learn from. Baidu was the guinea pig. That’s around 6,000 servers. They weren’t expecting more from version 1. They’re trying to learn. They’ve made mistakes, learned a lot, and are working on version 2.

Even if it's not exciting, it will have an impact because of the sheer size of the deployments these guys are getting ready to roll out in the next few years. They see the progression as: 1) they were using standard equipment; 2) they're experimenting and learning from trial runs of Scorpio versions 1 and 2; and 3) then they'll work on new architectures that are efficient, powerful, and different.

Information is pretty sketchy if you are not one of the member companies or one of their direct vendors. We were just invited to join Scorpio by one of the founders, and would be the first group outside of China to do so. If that all works out, I’ll have a much better idea of the details, and hopefully can influence the standards to be better for these hyperscale datacenter applications. Between OpenCompute and Scorpio we’ll be seeing a major shift in the industry – a shift that will undoubtedly be disturbing to a lot of current players. It makes me nervous, even though I’m excited about it. One thing is sure – just as the server market volume is migrating from traditional enterprise to hyperscale datacenter (25-30% of the server market and growing quickly), we’re starting to see a migration to Chinese hyperscale datacenters from US-based ones. They have to grow just to stay still. I mean – come on – there are 1.3 billion potential users in China….
