Why There Is a Buzz Over Virtual CPE

As operators embrace NFV, vCPE for business services promises to be a prime use case.

Yuri Gittik, RAD

Networks that are agile, efficient, and well orchestrated are the first priority if service providers are to meet the challenges of automated, speedy, and ultimately profitable service delivery. The holy grail of programmable networks is driving the acceleration in network function virtualization (NFV) and software-defined networking (SDN).

As providers embrace NFV in their networks, virtual customer premises equipment (vCPE) for business services promises to be a prime use case. In a vCPE environment, at least some of the networking functionality of conventional CPE is virtualized and often relocated. But that relocation can operate in two directions; even as some functionalities may be moved from the customer site to a data center, it is the computational power in the vCPE that allows moving other functionalities from deep in the network to the customer premises.

Business vCPE amounts to a virtualized networking appliance at the customer edge. It transforms a collection of single-purpose, hardware-based devices – such as a router, load balancer, or firewall – at each customer location into virtualized appliances that can be dynamically added or dropped as needed.

Physical and virtualized vCPE functions are divided between the customer site and the data center to ensure maximum flexibility and performance, with network-located functionality sharable among multiple tenants.

So it is no wonder that vCPE is perceived as the ideal candidate for proving NFV and programmable network concepts. Enterprise equipment tends to be costly from both capital and operational standpoints, and that cost can limit a service provider’s ability to roll out new services quickly or to make timely service modifications and upgrades.

Conventional, appliance-based CPE involves slow and expensive deployment processes, making NFV-enabled vCPE a logical choice for service providers. It carries the promise of increased revenues and lower total cost of ownership.

Implementation Issues

A vCPE architecture combines physical and virtualized entities at the customer premises and elsewhere in the network, raising several implementation concerns for providers. They include:

  • Minimum functionality requirements. While virtualized CPE functions may run on virtual machines in the cloud, certain basic data forwarding and service demarcation and termination capabilities are still needed at the customer site. At a minimum, the physical CPE there must provide a simple switch with essential forwarding functionality; xDSL or cable modem functionality will most likely be needed as well.
  • Virtualization decisions. Yes, some network functions may be virtualized and relocated to the cloud. But others should remain embedded in the CPE. Among them are data-plane functionalities, such as packet forwarding, traffic queueing, and prioritization, all potentially affecting the physical CPE architecture.
  • Where to locate. There can be speed and performance advantages when some functions stay within the same location or device, even though these functions can be service chained regardless of location. Implementations might be hybrid, using both physical and virtualized resources. For example, an application awareness functionality might use a deep packet inspection engine as a virtualized control plane located in the network, while at the customer site, hardware-based forwarding and flow marking functionalities ensure wire-speed operation.
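The hybrid split described above can be illustrated with a small model. This is a hypothetical sketch, not an implementation of any particular orchestrator: the function names and the premises/cloud tags are invented for the example, and the only point shown is that functions can be chained in order regardless of where each one runs.

```python
from dataclasses import dataclass

# Hypothetical sketch: each network function in a service chain is tagged
# with where it runs. The chain is traversed in order regardless of
# location, reflecting the idea that functions can be service chained
# across the customer site and the network.

@dataclass
class Function:
    name: str
    location: str  # "premises" (physical/hardware) or "cloud" (virtualized)

def chain_order(functions):
    """Return the ordered hops traffic traverses through the chain."""
    return [f"{f.name}@{f.location}" for f in functions]

# Hybrid example from the text: hardware-based flow marking and forwarding
# stay on-site for wire-speed operation, while the DPI-based control plane
# for application awareness runs in the network.
chain = [
    Function("flow-marking", "premises"),
    Function("forwarding", "premises"),
    Function("dpi-control", "cloud"),
]

print(chain_order(chain))
```

In a real deployment the location tag would be set per function by the orchestrator, using the placement criteria discussed in the scenarios below.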

Three Location Scenarios

Providers could use only a basic switch/router as the physical device at the customer premises, with all virtualized functions residing at the data center. The most advantageous use of this approach would be in the smaller enterprise services market, where speeds and performance requirements can be fully supported by cloud vCPE.

Alternatively, they could place both physical and virtualized vCPE functions at customer sites, with no central virtual functions. A network interface device with an integrated computing platform could serve as the NFV infrastructure. Or a standalone server could be paired with the NID, although this would be of little value in per-application traffic handling or traffic offload for hardware-based processing.

Another option is to place virtualized functions wherever their performance, cost, and policy compliance are optimal. That could be at the customer edge or in the network. This would allow functions residing in different locations to be dynamically ordered, configured, and chained to meet customer business needs.

The latter two scenarios reflect a distributed approach to NFV, placing virtualized functions where they make the most functional and economic sense. This approach could be optimal for value-added services that involve high networking costs and stringent performance requirements. It could also help avoid bandwidth inefficiency and application performance degradation.

Five Relocation Issues

Relocating functions could affect service quality, which raises these issues:

  1. The bandwidth cost of moving functionality deeper into the network, which could impact service delivery in areas served by lower-speed connections such as DSL.
  2. Could moving a function to the network expose sensitive end user data? An encryption application that isn’t at the customer premises may not provide adequate protection, since interception could occur in an insecure access segment.
  3. The need for critical functions to remain operational even when an access link is down. Hosting IP PBX or router functions at a data center, for example, could result in an inability to locally handle calls or deliver traffic in the event of a network failure.
  4. How much network-added delay is acceptable when the workflows are delay-sensitive? Care must be taken as traffic travels farther so that function relocation does not hamper performance due to inadequate access link bandwidth or excessive delay.
  5. Testing and troubleshooting applications must accurately measure link and end-to-end service quality, as well as localize faults, starting from service handoff. Doing this at a data center could impede the determination of causes for performance and traffic handling issues.
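One way to reason about the five issues together is as a weighted placement decision. The sketch below is purely illustrative: the weights, scores, and the two candidate locations are invented assumptions, standing in for policy inputs and measured network state that a real orchestrator would supply.

```python
# Hypothetical sketch: scoring candidate locations for a virtualized
# function against the five relocation issues. All numbers are invented
# for illustration.

ISSUES = ["bandwidth", "privacy", "survivability", "latency", "testability"]

def placement_score(scores, weights):
    """Weighted sum: higher means the location suits the function better."""
    return sum(scores[i] * weights[i] for i in ISSUES)

# Assumed policy weights for an IP PBX function, where local survivability
# (issue 3) and delay (issue 4) matter most.
weights = {"bandwidth": 2, "privacy": 3, "survivability": 3,
           "latency": 2, "testability": 1}

# Assumed per-location scores (1 = poor fit, 5 = good fit). The data center
# scores low on survivability: calls cannot be handled locally if the
# access link fails.
candidates = {
    "premises":    {"bandwidth": 5, "privacy": 5, "survivability": 5,
                    "latency": 5, "testability": 3},
    "data_center": {"bandwidth": 3, "privacy": 3, "survivability": 1,
                    "latency": 3, "testability": 5},
}

best = max(candidates, key=lambda loc: placement_score(candidates[loc], weights))
print(best)
```

With these assumed inputs the customer premises wins for the IP PBX; a function with different weightings, such as a shared analytics engine, could just as easily score higher in the data center.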

vCPE Options

Many vCPE options are being discussed for various use cases at the service and operations levels as attention increasingly focuses on management and orchestration. Such orchestration may permit dynamic relocation of functions, based on factors such as changing network loads, to provide optimal quality of experience. That will also affect the way providers need to think about the five issues noted above.

Right now, the vCPE options being discussed typically presume manual provisioning of functions alongside various proprietary virtual networking mechanisms. The next step would be automation of connectivity and optimization of function selection and placement.

Across the business services landscape, vCPE is understandably an excellent testing ground for the commercial deployment of NFV. What vCPE offers is hardware abstraction along with the promise of shorter, more flexible deployment cycles for new services. As programmable networks become a reality, the industry’s focus moves to automation and control, and vCPE transitions from loosely coupled components to integrated entities, with management functionalities increasingly part of a dynamic control plane.

Dr. Yuri Gittik heads Strategic Marketing for RAD Data Communications, a leading global provider of Ethernet systems and other network access equipment. Contact him at yuri_g@rad.com.