The problem with multicore processors isn't that they have a lot of cores. I hope my IC designer colleagues don't jump me when I say that having more than one core on a chip is a simple matter of cut and paste. The tricky part is getting all those cores to work together – a coordinated, efficient effort is key. After all, if it were enough for the cores to work independently, we would just use multiple single-core processors. To be sure, the devil is in the details of connecting cores and managing how they share resources.
A key value of a multicore processor is using the processing muscle of additional cores – all working on a problem at the same time – to accelerate system performance. Basically, two heads are better than one. And 16 are even better. That is, if they don't get in each other's way. When multiple cores are working on one job, they need to deftly hand off information to each other and to other on-chip resources like memory and I/O. Managing and streamlining the movement of all that information to minimize delays can require complex traffic management. If one core or another resource becomes a bottleneck, the entire performance benefit of multiple cores can be lost.
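The "if they don't get in each other's way" caveat can be made concrete with Amdahl's law: if some fraction of a job is inherently serial – a contended bus, a shared lock, a single busy core – then adding cores gives rapidly diminishing returns. A minimal sketch (the 10% serial fraction is an illustrative assumption, not a measured workload):

```python
# Amdahl's law: speedup from N cores when a fraction `serial` of the
# work cannot be parallelized (e.g., a shared resource bottleneck).
def speedup(cores: int, serial: float) -> float:
    return 1.0 / (serial + (1.0 - serial) / cores)

if __name__ == "__main__":
    for n in (2, 16):
        # With 10% of the work serialized, 16 cores fall well short of 16x.
        print(f"{n} cores, 10% serial work: {speedup(n, 0.10):.2f}x speedup")
```

With just 10% of the work serialized, 16 cores deliver only a 6.4x speedup – which is why the interconnect and traffic management matter as much as the core count.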
The challenge of cache coherence
Another complexity of coordinating multiple cores is cache coherence – the process of ensuring the consistency of data stored in each processor's cache memory. Processors store frequently accessed information in this small, fast memory so they don't have to access it again and again from slower storage such as main memory or disks. For example, if a core is running an application for ordering products online, it might load the inventory record for a particular product from disk into cache, modify it, and then write it back to disk when the transaction is complete.
The rub arises when more than one core caches the same data. If two cores were running the online ordering application, they might both cache the same inventory record. Both cores might then execute a transaction to sell the last unit of that product and not detect that the product is sold out. In a system with coherent cache, when one core makes any changes to cached data, all other cores storing the same data are notified that their cache is outdated, prompting an update for consistency. Tracking all cached data and making sure it is coherent is a formidable effort requiring highly sophisticated cache management.
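The oversold-inventory scenario above is the classic lost-update problem, and it can be sketched in a few lines of Python. Threads stand in for cores and a cached local variable stands in for a stale cache line; the barrier just forces the bad interleaving deterministically. This is an illustrative analogy, not how hardware coherence protocols are implemented:

```python
import threading

# Two "cores" each cache a copy of the same inventory record and try to
# sell the last unit. The barrier forces both stale reads to happen
# before either write -- the lost update that coherence exists to prevent.
inventory = {"units": 1}
sales = []
barrier = threading.Barrier(2)

def sell_with_stale_cache():
    cached_units = inventory["units"]   # both threads read 1...
    barrier.wait()                      # ...before either one writes
    if cached_units > 0:
        inventory["units"] = cached_units - 1
        sales.append("sold")

threads = [threading.Thread(target=sell_with_stale_cache) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sales, inventory)  # ['sold', 'sold'] {'units': 0} -- oversold!

# With exclusive access -- roughly what a coherent cache grants one core
# at a time for a line it is writing -- the second seller sees the
# updated value and declines.
inventory["units"] = 1
sales.clear()
lock = threading.Lock()

def sell_coherently():
    with lock:
        if inventory["units"] > 0:
            inventory["units"] -= 1
            sales.append("sold")

threads = [threading.Thread(target=sell_coherently) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sales, inventory)  # ['sold'] {'units': 0} -- only one sale
```

Hardware does the equivalent of that lock automatically, for every cache line, across every core – which is why coherence logic is such a large part of a multicore design.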
A third challenge in getting multicore design right is choosing the number and type of cores. Networking system workloads consist of varying tasks. Some are large, complex tasks that require powerful general-purpose cores running complex programs. Others are very simple, quick tasks that are executed millions of times a second and are best handled by specialized compute engines. And of course there are tasks that fall between these extremes. Getting the right number and mix of compute engines requires detailed understanding of the applications the multicore processor will be used in. Too many cores and the processor consumes too much power. Too few of one type of core and the others sit idle, wasting cost and, again, power.
Striking the right balance of interconnect, cache coherence and cores
The problem with multicore processors is getting the right combination of interconnect, cache coherence and number and type of cores. LSI's latest solution to the multicore challenge for enterprise networking is the Axxia® 4500 family of processors. For general-purpose processing, the Axxia 4500 features up to 4 ARM® Cortex™-A15 cores that deliver high performance and power efficiency in a standard Linux programming environment. For special-purpose packet processing, the new chips offer up to 50 Gb/s packet processing and acceleration engines for security encryption, deep packet inspection, traffic management and other networking functions. Connecting all these compute resources is the ARM CoreLink™ CCN-504 interconnect with integrated cache coherence and quality-of-service technologies for efficient on-chip communications.
No, you are not about to read some Luddite rant about how smart phones are destroying our society. I love smart phones and most of you do too. It's remarkable how quickly we have gone from arguing over the definition of a smart phone to not being able to live without them. In fact, the rapid adoption of smart phones has led to the problem I am going to talk about: smart phones can overwhelm dumb wireless networks.
Many of the networks that carry the wireless data to and from our smart phones are built with chips that were designed before Apple released the first iPhone® in June of 2007. It takes a year or two to get a new semiconductor chip designed and built. Then another year or two for network equipment manufacturers to get their products into the market. By the time that new equipment has been deployed into networks around the world, five or six years have passed since chip designers decided what features their networking chips would have.
Even the latest 4G networks are built with chips that were designed before Apple invited everyone to store their music libraries in the cloud and before Vine enabled every kid with a phone to create and distribute videos. Today's networks were not designed with these wireless data applications in mind and they are struggling to keep up.
Making dumb networks smarter
The problem is proving hard to solve because data traffic is growing faster than the obvious ways to cope with it. Network operators can't simply deliver more network capacity. Available spectrum is limited, as is the capital to invest in expanded networks. The seemingly inevitable improvements in technology performance aren't enough to solve the problem either. Demand for data traffic is growing faster than Moore's law can answer. Doing more of the same thing or doing the same thing faster isn't enough. Networking companies need to figure out new ways to handle data. We need to make dumb networks smarter.
When I say "dumb networks," I am referring to the fact that most of the existing wireless data networks were designed to move a packet of data from point A to point B in a reasonably short time. That's a fine approach when wireless networks can easily carry occasional stock updates and photo uploads from a few million early adopters. But now, when 90% of handset sales are iPhones or Android® phones, the networks have become overwhelmed with data. Treating data packets with equal importance – whether they are part of a VOIP phone call, business-critical data or the 40 thousandth download of a cute panda video – doesn't make sense anymore.
Prioritizing data for higher speed
As networks get smarter, they will be able to triage data – for example, identifying voice packets to maintain call quality. Smart networks will know if the same video has been downloaded 5 times in the last minute, and will store it locally to speed the next download. Smart networks will know if a business user has contracted for a guaranteed level of service and prioritize those packets accordingly. Smart networks will know if an application update can wait until times of the day when the volume of network traffic is lower. Smart networks will know if a flow of packets contains virus software that could damage your phone or the network itself.
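The triage idea above boils down to classifying packets and serving the most urgent class first. A toy sketch of that scheduler follows; the traffic classes and priority values are assumptions made up for illustration, not any operator's or vendor's actual QoS scheme:

```python
import heapq

# Illustrative traffic classes: lower number = more urgent.
# These names and rankings are assumptions for the sketch.
PRIORITY = {"voice": 0, "business_sla": 1, "video": 2, "app_update": 3}

def classify(packet: dict) -> int:
    """Map a packet to a priority class; unknown traffic gets best effort."""
    return PRIORITY.get(packet["kind"], 2)

# Arrival order is scrambled; a sequence number breaks priority ties
# so packets of the same class stay in FIFO order.
queue = []
arrivals = [
    {"kind": "app_update"},
    {"kind": "voice"},
    {"kind": "video"},
    {"kind": "business_sla"},
]
for seq, pkt in enumerate(arrivals):
    heapq.heappush(queue, (classify(pkt), seq, pkt))

order = [heapq.heappop(queue)[2]["kind"] for _ in range(len(queue))]
print(order)  # ['voice', 'business_sla', 'video', 'app_update']
```

Voice goes out first and the deferrable software update goes out last – exactly the triage described above, minus the deep packet inspection needed to do the classification on real traffic.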
To be smart about the data being transported, networks need a higher level of real-time analytical intelligence. We are now seeing the introduction of networking chips and equipment designed in the era of the smart phone. Networks are now gaining the ability to distinguish the nature of the data contained in a packet and to make smart decisions about the way the data is delivered. Networks are, in a word, becoming smarter – better able to manage the crush of data coursing through them every day. Smart networks may soon be able to stand up to smart phones, and perhaps even outwit them.