Today’s communications service providers face a challenge similar to the one that shook enterprise data centers over the past decade and a half: the imperative to move from siloed, proprietary application stacks to a virtualized, cloud-based architecture that can provide a unified and sustainable platform for mixed workloads. And workloads are the key difference here: enterprise data center workloads consist of apps like Oracle, Exchange and SAP, while communications service provider workloads are the network functions that deliver wireless and wireline calls, text messages, and streaming media, along with services such as VPNs and firewalls.
To extend this analogy: at the beginning of the enterprise cloud transformation, the challenge was that databases and enterprise apps sat on top of inefficient, purpose-built technology stacks. This was primarily due to their inability to share CPU, storage and networking resources, and to the OPEX required to deploy and manage them. Often, different groups of people administered these different application environments, magnifying the inefficiency and complexity. So when virtualization came along, it was revolutionary: it unlocked the ability to port enterprise apps onto a cloud-based platform built on industry-standard x86 hardware, abstracted into pools of compute, storage and networking services.
This same revolution is happening on the network operations side of telecom operators and other communications service providers like cable companies, except that instead of enterprise apps, it is virtualized network functions, or VNFs, that are moving onto virtualized, or “software-defined”, platforms. Network functions are not familiar to most people in IT; they include things like the Evolved Packet Core (EPC), which ensures that multiple traffic types, like voice and data, can be managed in the same network environment, and Customer Premise Equipment (CPE), which includes equipment like the set-top boxes issued to customers by cable companies. Virtualizing the platform these functions run on has multiple benefits, including reducing the amount of hardware to manage and providing the software automation necessary to keep these increasingly complex and demanding environments up and running and delivering on customer SLAs. But this is no ordinary undertaking: it is highly specialized, and even more complicated than its enterprise counterpart.
The Business Challenge
Telecom operators are under pressure from multiple sources, including (1) saturated markets with more mobile phones than people, (2) prepaid markets where it is increasingly easy for subscribers to switch carriers, and (3) over-the-top (OTT) competitors such as Facebook, Netflix and Google, which are able to deliver services like voice, text and video over IP networks. These OTT competitors have the advantage of agility: the ability to spin up and spin down new services in a matter of weeks or even days, versus months in the case of most traditional telecoms. These “born in the Web” OTT providers are not burdened with the obligation to support legacy businesses as the traditional telecom providers are, and can therefore focus on new innovation geared at driving new services over the Internet. The result of all these pressures for telecoms is not only margin erosion and customer churn, but also a massive change in technology strategy: one the likes of which they have never experienced before, and one which, quite frankly, they have not traditionally had the skills at scale to undertake, all while keeping business as usual running.
The new challenge for telecoms is to provide a customer experience that meets or exceeds that which can be delivered by the competition. And the way to achieve this is through a combination of operational agility and customer intimacy.
Software-Defined Network Infrastructure for Maximum Efficiency
Operational agility requires new approaches in a time of increasing network speeds, content mixes and customer expectations. Throwing more hardware at the problem is no guarantee that a provider will be able to deliver video or voice at the quality demanded by subscribers. In fact, adding more hardware is exactly the wrong way to go as it adds more operational cost into the network infrastructure and increases the need for manual, error-prone management processes.
Software-defined infrastructure makes it possible to support a multitude of workloads with a variety of resource requirements side-by-side. This is because a modern, software-defined infrastructure has two very important characteristics. The first is programmability. A programmable infrastructure is one that can support multiple virtualization environments and be reconfigured easily to accommodate different workloads. For instance, some VNFs are designed to run on Linux KVM virtual machines on OpenStack, while others are designed to run on VMware ESX virtual machines, and yet others run in Docker containers. A truly robust network environment will be able to support any combination of these operating environments by providing the means to program the necessary behaviors into the infrastructure on the fly. This is much faster and more efficient than adding new hardware into the environment, but this capability requires highly advanced software capabilities.
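As a minimal sketch of what that programmability might look like, consider a hypothetical orchestrator with pluggable virtualization backends. The `VNFDescriptor` class, the backend names, and the `deploy` dispatcher below are all invented for illustration; real NFV orchestrators expose far richer APIs.

```python
from dataclasses import dataclass

# Hypothetical descriptor for a VNF and the runtime it was built for.
@dataclass
class VNFDescriptor:
    name: str
    runtime: str  # e.g. "kvm-openstack", "vmware-esx", "docker"

# Illustrative handlers, one per supported virtualization environment.
def deploy_kvm(vnf):    return f"{vnf.name}: launched as KVM VM via OpenStack"
def deploy_esx(vnf):    return f"{vnf.name}: launched as VMware ESX VM"
def deploy_docker(vnf): return f"{vnf.name}: launched as Docker container"

BACKENDS = {
    "kvm-openstack": deploy_kvm,
    "vmware-esx": deploy_esx,
    "docker": deploy_docker,
}

def deploy(vnf: VNFDescriptor) -> str:
    """Dispatch a VNF to whichever virtualization backend it targets."""
    try:
        return BACKENDS[vnf.runtime](vnf)
    except KeyError:
        raise ValueError(f"No backend for runtime {vnf.runtime!r}")

# Mixed workloads side-by-side on one programmable platform:
for vnf in [VNFDescriptor("vEPC", "kvm-openstack"),
            VNFDescriptor("vCPE", "docker")]:
    print(deploy(vnf))
```

The point of the sketch is the dispatch table: adding support for a new operating environment means registering a new handler in software, not installing new purpose-built hardware.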
The second important characteristic of a software-defined infrastructure is service assurance. Customers expect seamless voice, video, and media quality and data protection. A truly carrier-grade infrastructure will deliver on these expectations by quickly analyzing the root causes of component failures, remediating those failures before they impact subscriber services, and ultimately, predicting and avoiding outages and performance issues before they occur. All of this can only be accomplished through automated software.
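The detect-and-remediate cycle described above can be sketched as a simple loop. The component names, the packet-loss metric, and the threshold below are invented for illustration; a real carrier-grade assurance system would also do root-cause analysis and prediction.

```python
# Illustrative health metrics: component name -> packet loss, in percent.
metrics = {"leaf-switch-01": 0.02, "compute-node-07": 4.8, "vEPC-gw-02": 0.1}

LOSS_THRESHOLD = 1.0  # hypothetical SLA limit, in percent

def remediate(component: str) -> str:
    """Stand-in for an automated fix, e.g. migrating workloads off a
    degraded node before subscribers notice any impact."""
    return f"migrated workloads off {component}"

def assurance_pass(metrics: dict) -> list:
    """One pass of the assurance loop: find components breaching the SLA
    threshold and remediate them before they affect subscriber services."""
    actions = []
    for component, loss in metrics.items():
        if loss > LOSS_THRESHOLD:
            actions.append(remediate(component))
    return actions

print(assurance_pass(metrics))
```

Even in this toy form, the loop shows why assurance must be automated software: at carrier scale, no human operator can watch every metric and act before an SLA is breached.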
The D.I.Y. versus the Pragmatist
All telecom providers today recognize the necessity to virtualize their networks and make them more software-defined. But different operators are taking different approaches. Some operators, typically larger ones, have embarked on ambitious, multiyear transformational initiatives to build entirely new infrastructures from the ground up. Others have decided to take a more incremental approach by picking one or maybe two network functions and virtualizing them on a pilot basis, with plans to roll them into production once they have been proven.
Neither approach is right or wrong; the choice depends entirely on an operator’s technical capabilities, willingness to invest in innovation, and appetite for risk. Both approaches should result in the kind of cloud-based environment described above: one that is software-defined, that can support multiple workloads side-by-side, and that provides the ability to rapidly and easily provision new workloads in response to customer demand. Such an environment should be completely agnostic to the underlying hardware and provide openness and choice at every layer of the stack.
Data Analytics for Competitive Advantage
Communications service providers sit on mountains of high-quality data that can be used in a variety of ways. The data a telecom holds about its subscribers paints a far more accurate picture of user preferences and behaviors than the curated data users provide to social media sites like Facebook. Access to this richer and more objective data set gives communications service providers a competitive advantage.
Data unlocks many exciting use cases, including network optimization, location-based marketing, investment planning, fraud detection, and new areas such as smart cities, smart cars and smart homes. For providers who use it effectively, telecom data analytics promises competitive differentiation and the ability to move into adjacent markets.
As with most other industries today, telecoms are just beginning to understand the power of big data. As this area evolves, the ability to generate insights from data will grow. Right now, the important realization is that the ability to collect and analyze real-time data about devices, equipment and subscribers, combined with the power of network virtualization to efficiently transport and aggregate that data, puts telecoms in an enviable position.
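As a toy illustration of that collect-and-aggregate idea, the snippet below rolls up hypothetical subscriber traffic by cell tower. The event fields are invented for illustration; a production pipeline would run on a streaming platform rather than an in-memory list.

```python
from collections import defaultdict

# Hypothetical stream of subscriber device events: (cell_id, bytes_used).
events = [("cell-A", 1200), ("cell-B", 300), ("cell-A", 800), ("cell-C", 50)]

def aggregate_by_cell(events):
    """Aggregate traffic per cell tower, the kind of rollup that could feed
    network optimization or location-based insight."""
    totals = defaultdict(int)
    for cell_id, nbytes in events:
        totals[cell_id] += nbytes
    return dict(totals)

print(aggregate_by_cell(events))
```

A rollup like this, computed continuously over live network data rather than a static list, is what turns raw subscriber events into the insights discussed above.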
The communications service provider of tomorrow will be at the center of a new data-centric paradigm. As the Internet of Things matures and sensor-enabled devices generate and capture more data than ever, telecoms are perfectly positioned to be the orchestrators in the symphony of big data. To succeed, they will need a platform that can manage an increasing velocity, variety and volume of data with a minimum of operating expense and a maximum of speed: the type of platform only a software-defined, virtualized network infrastructure can offer. The race is on, and the opportunity is incredible.