Rolling out new, more compelling services quickly and with guaranteed quality is critical for carriers facing mounting threats from over-the-top (OTT) providers. The prospect of lowering total cost of ownership (TCO) while doing so is driving many to adopt strategies for virtualizing the underlying network.
At its core, Network Functions Virtualization (NFV) shifts network control to software to gain a host of operational and economic advantages: it migrates networking functions to virtualized architectures to speed innovation, accommodate growth, and maintain a competitive edge into the future.
Target benefits include:
- Simplified provisioning
- Greater network elasticity
- Increased automation (simplified operations)
- More fluid and efficient resource allocation and utilization through service chaining
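The service-chaining idea in the last bullet can be pictured as ordered composition of virtual network functions (VNFs) along a packet's path. A minimal sketch, with illustrative function names (firewall, NAT) that are not from the original:

```python
# Minimal sketch of NFV service chaining: a packet traverses an ordered
# chain of virtual network functions. All names/values are illustrative.

def firewall(packet):
    # Drop packets to a blocked port; pass everything else through.
    if packet["dst_port"] in {23}:  # e.g. block telnet
        return None
    return packet

def nat(packet):
    # Rewrite the private source address to a public one.
    return dict(packet, src_ip="203.0.113.10")

def apply_chain(packet, chain):
    """Apply each VNF in order; a None result means the packet was dropped."""
    for vnf in chain:
        packet = vnf(packet)
        if packet is None:
            return None
    return packet

chain = [firewall, nat]  # the service chain, in traversal order
out = apply_chain({"src_ip": "10.0.0.5", "dst_port": 443}, chain)
print(out)  # → {'src_ip': '203.0.113.10', 'dst_port': 443}
```

Because the chain is just an ordered list, reallocating resources or re-ordering functions is a data change rather than a hardware change, which is what makes service chaining "fluid."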
But as with most emerging markets, growth forecasts for NFV are all over the map. The most aggressive include:
- Mind Commerce estimating global spending on NFV will grow at a CAGR of 46% through 2019 with revenues reaching $1.3 billion
- Dell’Oro Group projecting the market could represent $2 billion in equipment sales alone by 2018
- Doyle Research envisioning rapid momentum resulting in a $5 billion market by 2018 (including software, servers, and storage)
As always, “how much, how soon” depends on how well the industry rises to new challenges. NFV and its counterpart, Software Defined Networking (SDN), pose a fundamental conundrum: they render what is known about performance, reliability, and security unknown again.
Operators must weigh the tradeoffs between openness and performance, flexibility and control, and quality and cost. Tried-and-true network elements must be evaluated from the ground up to determine (1) whether the business case supports migration, and (2) how performance might be impacted.
Before putting subscriber satisfaction at risk by moving critical pieces around, efforts must be made to inform decisions about what, when, and how best to go about virtualizing proven network functions.
At a minimum, performance must be equal to that of the legacy system for migration to be considered successful. For mobile network operators, defining a process and quantifying results involves three key pieces:
- Validating new elements of the virtualized network environment
- Validating performance throughout the migration process
- Regaining visibility into blind spots created during the migration process
Let’s look at each one-by-one.
Validating the New Infrastructure
With virtualization almost sure to happen in stages, validation must be ongoing. “Lab to live” strategies should include regression testing to ensure network functions don’t get broken and the expected performance is maintained during migration. New common elements of the virtualized architecture will not only determine the performance of the system as a whole, but potentially introduce bottlenecks and security vulnerabilities along the way.
At each layer of a virtualized function, unique aspects must be explored:
- At the hardware level, commercial servers vary widely in features, performance characteristics, and requirements for CPU brand and type, memory, and so on.
With more than one server platform often in play, testing ensures consistent and predictable performance as virtual machines (VMs) are deployed and moved from one platform to another.
- Virtual switches (vSwitches) also vary greatly, with some packaged with hypervisors and others functioning standalone and favoring proprietary technology. In evaluating their options, operators will need to weigh performance, throughput, and functionality against resource utilization, beginning with baselining I/O performance, then piling virtual functions on top of the vSwitch. Careful attention must be given to tuning the system to accommodate the intended workload (data plane, control plane, signaling).
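The baseline-then-load approach described above boils down to comparing measured throughput against the bare-vSwitch baseline as functions are stacked on. A sketch, with all figures and the 20% budget as hypothetical placeholders for real measurements:

```python
# Sketch of vSwitch evaluation: record I/O throughput of the bare
# vSwitch, re-measure as virtual functions are stacked on top, and
# report degradation. All numbers are hypothetical placeholders.

def degradation(baseline_gbps, measured_gbps):
    """Fractional throughput loss relative to the bare-vSwitch baseline."""
    return (baseline_gbps - measured_gbps) / baseline_gbps

baseline = 9.6  # Gbps through the bare vSwitch (hypothetical)
runs = {
    "1 VNF":  9.1,
    "2 VNFs": 8.2,
    "4 VNFs": 6.0,
}

for label, gbps in runs.items():
    loss = degradation(baseline, gbps)
    flag = "  <-- exceeds 20% budget" if loss > 0.20 else ""
    print(f"{label}: {gbps:.1f} Gbps ({loss:.0%} below baseline){flag}")
```

The same comparison can be repeated per workload type (data plane, control plane, signaling), since tuning that helps one workload may hurt another.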
- Hypervisors enable consolidation of physical servers onto a virtual stack on a single server, and allow virtual resources (memory, CPU) to be strictly provisioned for each VM, enabling features like fast start/stop, snapshot, and VM migration. In choosing between commercial products with advanced features and open-source alternatives, operators should weigh the overall performance of each candidate hypervisor, its requirements, and the impact of its unique feature set. The ability of the underlying hardware layer (L1) to communicate with upper layers should also be evaluated.
- Management and Orchestration (M&O) is undergoing a fundamental shift from managing physical boxes to managing virtualized functionality. The shift requires vastly increased automation that must be thoroughly tested to avert bottlenecks.
- VM and VNF performance should be verified against individual hypervisors, and the ability of the host OS to talk to both virtual I/O and the physical layer assessed.
- “Portability” of a VM from one server to another should be validated to ensure no performance degradation occurs.
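A portability check like the one described above can be sketched as a before/after comparison of key performance indicators, flagging any metric that degrades beyond a tolerance. Metric names, values, and the 5% tolerance below are all hypothetical:

```python
# Sketch of a VM portability check: KPIs measured on the source server
# are compared with the same KPIs re-measured after the VM is moved to
# another server, and any metric degrading past a tolerance is flagged.
# Metric names, values, and the 5% tolerance are hypothetical.

def portability_report(before, after, tolerance=0.05):
    """Return metrics that degraded more than `tolerance` after a move.

    `before` and `after` map metric name -> value, higher = better.
    """
    failures = {}
    for metric, old in before.items():
        new = after[metric]
        if (old - new) / old > tolerance:
            failures[metric] = (old, new)
    return failures

before = {"throughput_gbps": 8.0, "sessions_per_sec": 50_000}
after  = {"throughput_gbps": 7.9, "sessions_per_sec": 41_000}

print(portability_report(before, after))
# → {'sessions_per_sec': (50000, 41000)}  (an ~18% drop, past tolerance)
```

An empty report is the pass condition: the VM moved with no meaningful performance degradation.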
Measuring Virtual Performance in Real-world Networks
New strategies and capabilities are needed to validate performance and measure success. Chief among these is the addition of virtualized test capabilities.
Along with traditional physical testers, virtualized testing provides valuable insight throughout the deployment life-cycle. After baselining the current performance and defining target benefits, operators need to apply the right approach at each stage:
- During development and quality assurance (QA), virtual testing speeds and streamlines the process with rapid setup and low physical resource requirements. VMs can be instantiated on demand, and multiple virtual assessments and regressions conducted simultaneously, without requiring engineers to share physical testers.
- For scalability, traditional physical testers deliver powerful advantages in simulating high scale and session rates, and testing real-world capacity. High-precision testing helps optimize elasticity as well as performance.
- To guarantee real-world performance, testing to real-world conditions must be conducted prior to deployment. A mix of physical and virtual testing combines to efficiently replicate the complexities of a hybrid production environment. Intended configurations can be modeled prior to taking on real user traffic, and the performance of a newly virtualized function demonstrated alone, and in the context of end-to-end services.
- To replicate and resolve field issues, physical and virtual test and monitoring solutions can both be used to speed information back to the lab.
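The scalability testing above amounts to ramping offered load until the system under test stops keeping up. A sketch of that search; `measure_success_ratio` stands in for a real tester and models a hypothetical device that saturates at 80,000 sessions/s:

```python
# Sketch of a capacity ramp: step the offered session rate upward until
# the system-under-test's success ratio drops below a target, giving
# the maximum sustainable rate. The device model is hypothetical.

def measure_success_ratio(offered_rate):
    # Hypothetical device: flawless up to 80k sessions/s, then success
    # falls off linearly as the device saturates.
    capacity = 80_000
    if offered_rate <= capacity:
        return 1.0
    return max(0.0, 1.0 - (offered_rate - capacity) / capacity)

def find_max_rate(start, step, target=0.999):
    """Step the offered load upward; return the last rate meeting target."""
    rate, last_good = start, None
    while measure_success_ratio(rate) >= target:
        last_good = rate
        rate += step
    return last_good

print(find_max_rate(start=10_000, step=10_000))  # → 80000
```

In practice the linear ramp would be replaced by a binary search (as in RFC 2544-style throughput tests) to converge faster, but the pass/fail criterion is the same.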
Thorough assessment addresses the new challenges inherent in moving to a more distributed infrastructure, and dealing with new dynamics like multi-tenancy where multiple functions share commercial off-the-shelf (COTS) servers. From there, it’s a matter of maintaining performance and security, which in turn means regaining visibility lost during migration.
Maintaining Visibility: The Key to Ongoing Success
In a virtual environment, real-time visibility into the end-to-end architecture becomes even more critical to ensuring service availability, QoE, and security. Real-time monitoring verifies performance and security as NFV deployments scale and evolve, and provides a valuable feedback loop that informs ongoing development.
A sound visibility architecture allows providers to quickly find and alleviate bottlenecks, pinpoint performance issues, and test varying configurations. But as is the case with testing, NFV and SDN give rise to new monitoring challenges requiring new capabilities.
Newly emerging virtual monitoring taps (vTaps), for example, eliminate blind spots inherent in the new environment. vTaps introduce visibility into the “east-west” traffic flowing between VMs sharing a server which cannot be adequately monitored by traditional taps. Physical and virtual taps work in tandem with Network Packet Brokers (NPBs) that provide the intelligence needed to deliver the right data to the right monitoring tools.
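The blind spot vTaps address can be made concrete: a flow between two VMs co-resident on one server never crosses a physical link, so a physical tap never sees it. A sketch, with a hypothetical VM placement map and illustrative VNF names:

```python
# Sketch of why vTaps matter: traffic between VMs on the same physical
# server ("east-west") never hits a physical link, so only a vTap on
# that host can capture it. Placement map and flows are illustrative.

vm_host = {             # VM -> physical server hosting it (hypothetical)
    "vFW-1":  "serverA",
    "vNAT-1": "serverA",
    "vDPI-1": "serverB",
}

def classify(flow):
    """Label a VM-to-VM flow east-west (same host) or north-south."""
    src, dst = flow
    return "east-west" if vm_host[src] == vm_host[dst] else "north-south"

for flow in [("vFW-1", "vNAT-1"), ("vNAT-1", "vDPI-1")]:
    print(flow, classify(flow))
# ('vFW-1', 'vNAT-1') stays inside serverA: visible only to a vTap
```

Note that the classification changes whenever orchestration moves a VM, which is why the visibility architecture has to track placement dynamically rather than relying on fixed tap points.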
Embracing the Unknown
Moving faster than with many past technology shifts, carriers worldwide are expected to embark on virtualization sooner rather than later. “How much, how soon” remains a moving target, but the goal appears to be firmly in focus.
More importantly, operators seem committed to mounting the effort needed to demystify the process and deliver on the promise: better, faster, perhaps even cheaper services that meet ever-rising demand.
Joe Zeto serves as a market development manager within Ixia’s marketing organization. He has over 17 years of experience in wireless and IP networking, on both the engineering and marketing sides. Joe has extensive knowledge and a global perspective of the networking market and the test and measurement industry. Prior to joining Ixia, Joe was Director of Product Marketing at Spirent Communications, running Enterprise Switching, Storage Networking, and Wireless Infrastructure product lines. Joe holds a Juris Doctorate from Loyola Law School, Los Angeles, CA.