Why what happens in an Internet minute really matters

This might not be the year the world ends, as some say the Mayans prophesied, but it may seem like it for a telecommunications industry under pressure to meet booming demand for Internet resources.


By some projections, the explosion in adoption of Internet-connected devices such as laptops, tablets, smartphones, Blu-ray players and Internet TVs could strain network bandwidth beyond capacity as soon as this year, causing service outages as available bandwidth is completely consumed by peak demand.








Just consider how much Internet activity is already happening every minute of every day.


By Intel’s estimates, nearly 640,000 GB of global IP data is transferred every 60 seconds. In that same span of time, some 1,300 mobile users sign up for new service. More than 204 million emails are sent. Amazon rings up about $83,000 in sales. Around 20 million photos are viewed and 3,000 uploaded on Flickr. At least 6 million Facebook pages are viewed around the world. And more than 1.3 million video clips are watched on YouTube. By some estimates, Netflix alone, which is shifting from a DVD rental to a video streaming model, accounts for nearly 30 percent of U.S. peak download traffic and 22 percent overall.
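To put the scale of that first figure in perspective, here is a quick back-of-the-envelope conversion into an average aggregate line rate, assuming decimal units (1 GB = 8 billion bits):

```python
# Convert Intel's ~640,000 GB-per-minute estimate into an average
# aggregate line rate (assumes decimal units: 1 GB = 8e9 bits).
gb_per_minute = 640_000
bits_per_second = gb_per_minute * 8e9 / 60
print(f"~{bits_per_second / 1e12:.0f} Tbps")  # ~85 Tbps of sustained global IP traffic
```

In other words, the global network is already carrying on the order of 85 terabits every second, around the clock.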


With so many connected devices coming online, it is not surprising that these figures pale in comparison to what we can expect in just a few years. The number of networked devices today has already matched the global population of about 6.5 billion, and that is expected to double by 2015. Similarly, worldwide mobile data traffic is expected to increase 18-fold over the next five years, driven by a jump in streamed content, mobile connections, more powerful devices, faster mobile speeds and the proliferation of mobile video, according to the recently released Cisco Visual Networking Index (VNI) Global Mobile Data Traffic Forecast for 2011 to 2016.
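For a sense of the growth rate that forecast implies, a quick sketch (assuming smooth year-over-year compounding) shows that an 18-fold increase over five years works out to roughly 78 percent compound annual growth:

```python
# Implied compound annual growth rate (CAGR) for 18x growth in 5 years.
growth_factor, years = 18, 5
cagr = growth_factor ** (1 / years) - 1
print(f"~{cagr:.0%} per year")  # ~78% per year
```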


All of this means substantially more capacity needs to be brought online fairly quickly, and many industry pundits are now calling for the private sector to drive network infrastructure projects that would mirror government initiatives to improve critical infrastructure, such as roads and bridges.


As network operators (hopefully) move in this direction, they should strongly consider whether it makes sense to keep investing in traditional, telecom-service-centric equipment. While that equipment served its purpose in its day, it is generally custom built to optimize network throughput by increasing “speeds and feeds.” Increased performance is always a good thing, but because traditional telecommunications gear is based on special-purpose hardware, it is rather inflexible and expensive to scale.


An alternative approach that’s gaining steam is the idea of deploying customer- and application-centric equipment that is software-focused and gives service providers more flexibility to quickly develop and deploy a broad array of content and data services aimed at connected devices. Based on general-purpose server technology, this approach simplifies the convergence of voice, data and applications because one platform does all of the processing. Basically, the same hardware platform can be used to launch next-generation services simply by adding open, standards-based software.


A standards-based software approach yields even more benefit with what Intel calls a “4:1 workload consolidation strategy.” This is a framework that enables any platform built on Intel processors to execute four workloads (application, control plane, data plane and signal processing) simultaneously on a basic networking element, such as a wireless base station. This is possible today because of Moore’s Law, which observes that the number of transistors that can be placed on an integrated circuit doubles roughly every two years. Processor performance has grown exponentially as a result, to the point where a single general-purpose architecture can support the consolidation of multiple networking workloads.
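To make the consolidation idea concrete, here is a minimal sketch of how the four workloads could be pinned to dedicated cores on a general-purpose multicore server. The workload names come from the paragraph above, but the core assignments are illustrative assumptions (not Intel’s actual partitioning), and the affinity call is Linux-only:

```python
import os
from multiprocessing import Process

# Illustrative core assignments for the four consolidated workloads
# (assumes a Linux host with at least 8 cores; adjust to your hardware).
WORKLOAD_CORES = {
    "application":       {0, 1},
    "control_plane":     {2, 3},
    "data_plane":        {4, 5, 6},
    "signal_processing": {7},
}

def run_workload(name, cores):
    # Pin this worker process to its dedicated cores (Linux-only API).
    os.sched_setaffinity(0, cores)
    print(f"{name} pinned to cores {sorted(cores)}")
    # ...the real control/data/signal processing loop would run here...

if __name__ == "__main__":
    procs = [Process(target=run_workload, args=(name, cores))
             for name, cores in WORKLOAD_CORES.items()]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```

The point is not the code itself but the shape of the design: one commodity box, four isolated workloads, no special-purpose silicon.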


This approach has already proven successful in major data centers, and service providers are excited by its implications because it opens up many operational and business opportunities. From an operational perspective, the 4:1 approach enables them to more affordably keep pace with the worldwide trend toward ubiquitous connectivity, which is the primary driver in the telecom industry right now. Service providers know that we are getting much closer to the day when billions of devices – from toasters to television sets and automobiles – tap embedded processors to connect to the Internet, each other and the cloud, creating one giant network of Intelligent Systems. According to an IDC report published in September 2011, Worldwide Intelligent Systems 2011-2015 Forecast: The Next Big Opportunity, the market for intelligent systems is developing rapidly, with more than 1.8 billion connected devices and more than $1 trillion in revenue this year. By 2015, IDC expects that market to double to nearly 4 billion connected devices and more than $2 trillion in revenue. Video already accounts for about 40 percent of mobile traffic and is expected to make up 90 percent of mobile traffic within a decade, according to other analysts.


Service providers also know this incredible growth means there will be unparalleled demand for network and wireless bandwidth, and the traditional approach of investing billions of dollars to upgrade costly telecommunications equipment just isn’t sustainable.


Investing in a standards-based software infrastructure not only allows service providers to boost bandwidth more affordably, it also enables them to look at their businesses in new ways. The flexibility the software approach provides lets them push out applications of their own, based on visibility into customer connectivity habits. For example, every new mobile phone subscriber could receive a list of the “10 Most Popular Apps” for their specific device. Some of the applications could be free; others might be delivered for a small fee. The net result would be increased customer loyalty, higher revenues and a bump in customer upgrades to premium plans.
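As a toy illustration of that “top apps per device” idea, a provider could rank apps from the usage events it already observes. Everything here (the event log, the device and app names, the helper function) is hypothetical:

```python
from collections import Counter

# Hypothetical usage log a provider might derive from network-level
# visibility: (device_model, app_name) pairs. Illustrative data only.
usage_events = [
    ("PhoneX", "maps"), ("PhoneX", "video"), ("PhoneX", "maps"),
    ("TabletY", "reader"), ("TabletY", "video"), ("PhoneX", "music"),
]

def top_apps(events, device, n=10):
    """Rank the most-used apps for one device model."""
    counts = Counter(app for dev, app in events if dev == device)
    return [app for app, _ in counts.most_common(n)]

print(top_apps(usage_events, "PhoneX"))  # ['maps', 'video', 'music']
```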


Service providers are in a unique position in that they have access to both subscriber and network information. By deploying more Intelligent Systems, service providers can see how, when and where subscribers are using their devices and applications on the network, and customize applications for them that can be monetized along the way.


All of this said, it is important to note that a move to standards-based platforms will present a challenge for traditional telecommunications equipment manufacturers (TEMs). Their old model of providing closed platforms locked in service providers and required long equipment cost-recovery paths. TEMs will be challenged to meet service provider demand for more cost-effective equipment and to remain relevant in the new environment. But as with the rest of the industry, there will be opportunities for TEMs that adapt and deliver products at price points that make sense for their customers.


There is time for the entire telecommunications industry to adapt. Time is not necessarily running out, as some interpreters of the Mayan calendar might suggest. Yet there is a clear need for the private sector to break out of its traditional, proprietary way of thinking and collaborate around standards-based solutions that address emerging challenges with network bandwidth. Because if we move too slowly or wait too long, the march toward ubiquitous connectivity could come to a halt – in an Internet Minute.


John Healy is Director of Strategic Marketing for Intel’s Communications Infrastructure Division. Based in Phoenix, Ariz., and an engineering graduate of the University of Limerick in Ireland, Healy has been in the communications and embedded industry for 22 years. He has held various engineering and line management positions with responsibility for network deployment and construction, transmission planning, switching and maintenance. He has also held positions in finance, information systems, planning and sales management during this time. Healy oversees strategic business planning and marketing for Intel’s communications infrastructure market segments.