How C-RAN architecture can reduce costs for mobile backhaul deployments

By Nir Halachmi, Product Line Manager, Telco Systems

With the rise of mobility and high-bandwidth content, mobile operators face a challenge: they cannot convert the increased demand for broadband into increased profit.

According to a June 2011 report published by Juniper Research (Hampshire, England), global operator revenues will total $1 trillion a year by 2016, but unless telcos take action, spiraling costs will offset these revenues somewhere between 2014 and 2015. According to Juniper, “Margins will be squeezed quite dramatically if remedial action is not taken to address data traffic costs; if there is a lack of planning prior to network deployments, resulting in inefficient networks.”

An April 2011 report published by Infonetics Research (Campbell, Calif., U.S.A.) claimed 89% of the money spent on mobile backhaul equipment in 2010 was for IP/Ethernet platforms, with Ethernet cell site routers and gateways and Ethernet packet microwave equipment seeing the most growth.

With that in mind, to address the rising costs of meeting these demands, mobile operators have to re-examine the overall cost structure of their mobile deployments, including the network architecture, in order to dramatically reduce their operational expenses (OPEX). A solution can be found in a new distributed architecture called Cloud Radio Access Network (C-RAN), which offers a new paradigm in base station architecture that aims to reduce the number of cell sites while increasing base station deployment density.

In a typical mobile deployment, each base station serves all the mobile devices within its reach. Each base station has a digital component that manages its radio resources, handoff, and data encryption and decryption, and an RF component that transforms the digital information into analog RF. The RF elements are connected to a passive antenna that transmits the signals over the air.

Each base station should be placed at the geographical center of its coverage area. But even when such locations are selected, mobile operators may have difficulty renting the real estate, finding proper powering options, securing the location and protecting the equipment from weather conditions. These cell sites carry a continuous stream of OPEX: high real-estate rental rates, electrical expenses, the cost of backhaul for the cell site and security measures to protect the location from intruders.

Cloud Radio Access Network (C-RAN)

The C-RAN base station architecture breaks the base station down into a Base Unit (BU) — a digital unit that implements the MAC, PHY and Antenna Array System (AAS) functionality — and the Remote Radio Head (RRH), which receives the digital (optical) signals, converts them to analog, amplifies the power and sends the actual transmission. By making the RRH an active unit capable of converting between analog and digital, operators can now place numerous BUs at a single geographical point while distributing the RRHs according to the Radio Frequency (RF) plans. The RRH becomes an intelligent antenna array that not only transmits RF signals but also handles the conversion between digital and analog data. New RRHs can also support multiple cellular generations (2G, 3G and LTE), eliminating the need for multiple antennas.

The C-RAN concept lowers operating expenses and simplifies the deployment process. By centralizing all the active electronics of multiple cell sites at one location (aka the “Base Station Hotel”), energy, real-estate and security costs are minimized. The RRH can be mounted outdoors or indoors – on poles, on the sides of buildings or anywhere power and a broadband connection exist – making installation less costly and easier. The RRH is typically connected to the BU using fiber, creating a cloud-like radio access network topology. This topology saves costs both during installation and later in ongoing operation.

The C-RAN architecture poses unique backhaul requirements: a “base station hotel” containing multiple base stations must be connected to the different controller sites.

The requirements:

  • Multiple 1Gig connections
  • Multiple 10Gig uplinks from the “base station hotel”
  • Resiliency and fast recovery
  • Synchronization schemes
  • Strong QoS
  • Strong OAM and service management

Today’s required bandwidth in a 4G deployment is about 100Mbps per sector and is expected to grow during 2011-2012 to about 400Mbps, and higher in the future with LTE-Advanced. Therefore at least one 1Gig connection per base station will be required. In many cases two will be installed (depending on the size of the cell) for redundancy purposes. Moreover, if higher bandwidth is required in the future, these two 1Gig connections could be combined using link aggregation (LAG).
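The sizing arithmetic can be sketched as follows. The three-sector assumption is illustrative, not from any specific deployment:

```python
import math

# Assumption (illustrative): a typical macro base station has 3 sectors.
SECTORS_PER_BASE_STATION = 3

def required_capacity_mbps(per_sector_mbps: float) -> float:
    """Total backhaul bandwidth a base station needs, summed over its sectors."""
    return per_sector_mbps * SECTORS_PER_BASE_STATION

def gig_links_needed(per_sector_mbps: float, link_mbps: float = 1000.0) -> int:
    """How many 1Gig links (possibly bundled via LAG) cover that demand."""
    return max(1, math.ceil(required_capacity_mbps(per_sector_mbps) / link_mbps))

print(gig_links_needed(100))  # today's ~100Mbps/sector -> 1 link
print(gig_links_needed(400))  # ~400Mbps/sector -> 2 links (a 2x1Gig LAG)
```

At 400Mbps per sector, a three-sector site already exceeds a single 1Gig link, which is where the LAG of two 1Gig connections mentioned above comes in.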

As multiple 1Gig connections will be aggregated in the “Base Station Hotel,” the mobile provider will need multiple 10Gig backhaul links from the “Base Station Hotel” toward the core mobile provider infrastructure in order to ensure a non-blocking architecture.
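The non-blocking requirement reduces to simple division: the aggregate access bandwidth must not exceed the uplink bandwidth. A minimal sketch (the 24-station hotel is an assumed example):

```python
import math

def uplinks_needed(base_stations: int, access_gbps: float = 1.0,
                   uplink_gbps: float = 10.0) -> int:
    """Minimum number of 10Gig uplinks so that the aggregate access
    bandwidth can be carried without blocking."""
    return math.ceil(base_stations * access_gbps / uplink_gbps)

# Assumption (illustrative): a "Base Station Hotel" aggregating 24 base
# stations, each with a 1Gig connection, needs at least three 10Gig uplinks.
print(uplinks_needed(24))  # -> 3
```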

As opposed to residential or commercial service, where each link serves only a small number of end users, a single cell site can serve a very large number of end users. Network failure is not acceptable; moreover, even small errors in the backhaul provoke a service call from the mobile provider to the backhaul provider. Therefore service resiliency is an important requirement.

At the BU, the mobile provider may choose to map into a Multiprotocol Label Switching (MPLS) network or keep its network over Carrier Ethernet. MPLS will enable the provider to benefit from the scalability that MPLS offers. Both MPLS and Carrier Ethernet can provide sub-50ms recovery times using the appropriate mechanisms (Carrier Ethernet using G.8031 and G.8032, MPLS using FRR).

Cellular technology requires an inter-base-station timing reference, which guarantees transport channel alignment for handoff and guard band protection. While providing this reference timing is relatively easy when cell sites have Time Division Multiplexed (TDM) backhaul (mostly E1/T1s), it is more difficult for packet-based backhaul.

In the past, TDM (E1/T1 backhaul links) and GPS were used to synchronize base stations. As mobile operators migrate to newer generations of mobile technology (e.g. HSPA+, 4G, WiMAX) with Carrier Ethernet-based backhaul, synchronization between base stations is also required from the mobile backhaul. Time Division Duplexing-based (TDD) technologies like WiMAX and LTE-TDD also require phase synchronization for the RAN. New synchronization technologies have been developed and deployed to deliver frequency and phase synchronization over Carrier Ethernet.

Among the synchronization schemes which are commonly found in this market are:

  • IEEE 1588v2 – 1588v2 is a packet-based protocol carried in-band with user traffic. It can support highly accurate frequency and phase synchronization (which is required by 4G technologies). In addition, devices in the path that do not support 1588v2 will transparently pass the protocol, eliminating the need to replace these devices while still allowing synchronization requirements to be met. However, 1588v2 can be affected by network congestion if it is not properly prioritized across the path.
  • SyncE (AKA G.8261) – SyncE is a physical layer technology and is not affected by network congestion. However, it requires that every node in the path have hardware support for SyncE. Unfortunately, SyncE does not support phase synchronization; therefore, in order to support 4G technologies, SyncE will need to be supplemented by 1588v2 or some other mechanism that provides phase synchronization.
  • GPS – GPS, used by many mobile providers, meets both phase and frequency synchronization requirements. GPS synchronization has a large effect on CAPEX but less of an effect on OPEX, since once the device is installed there are no additional costs. However, GPS must be installed in a location with a clear line of sight to the sky and is subject to reception issues. GPS is commonly used in North America but much less so in EMEA, mostly due to security concerns.
  • Legacy T1/E1 clock – TDM-based synchronization, used for 2G and 3G cell sites, provides accurate frequency synchronization. However, TDM, if used only for synchronization, tends to be a more expensive solution than other alternatives. Since phase synchronization is mandatory for 4G, TDM is not an applicable option.

In many cases, multiple synchronization schemes will need to be supported; therefore inter-working and the ability to convert between the different schemes is critical. In situations where the mobile provider and the backhaul provider use different equipment supporting different schemes, timing can still be transferred over the backhaul without losing synchronization or accuracy, and without requiring an equipment change for either the mobile operator or the backhaul provider.
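The capabilities described above can be summarized in a small lookup table, which makes the 4G constraint explicit: a scheme (or combination of schemes) must deliver both frequency and phase synchronization. A sketch:

```python
# Capabilities of each synchronization scheme, as described above.
SCHEMES = {
    "1588v2": {"frequency": True, "phase": True},
    "SyncE":  {"frequency": True, "phase": False},
    "GPS":    {"frequency": True, "phase": True},
    "TDM":    {"frequency": True, "phase": False},
}

def suitable_for_4g(scheme: str) -> bool:
    """4G (and TDD technologies) require both frequency and phase sync."""
    caps = SCHEMES[scheme]
    return caps["frequency"] and caps["phase"]

# SyncE and TDM alone cannot serve 4G; they must be supplemented,
# e.g. SyncE for frequency plus 1588v2 for phase.
print([s for s in SCHEMES if suitable_for_4g(s)])  # -> ['1588v2', 'GPS']
```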

C-RAN architecture may also need to support multiple services for customers with differing quality of service priorities. The ability to use Hierarchical QoS (HQoS) allows finer quality of service granularity, enabling the backhaul provider to ensure availability and quality of service parameters (latency, delay, amount of bandwidth) per user, per service.
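The per-user, per-service idea can be illustrated with a two-level token-bucket sketch: a frame must conform both to its service's shaper and to the user's aggregate shaper. This is a simplified illustration of the hierarchy, not any vendor's HQoS implementation (a production shaper would, among other things, refund tokens when a frame is dropped at the higher level):

```python
import time

class TokenBucket:
    """Simple token-bucket rate limiter (rate and burst in bytes/sec and bytes)."""
    def __init__(self, rate: float, burst: float):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = burst, time.monotonic()

    def allow(self, size: float) -> bool:
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False

# Hypothetical two-level hierarchy: per-service shapers nested under
# one per-user aggregate shaper (all numbers are illustrative).
user_bucket = TokenBucket(rate=10e6, burst=1e6)        # 10 MB/s per user
service_buckets = {"voice": TokenBucket(2e6, 2e5),     # 2 MB/s for voice
                   "data":  TokenBucket(8e6, 8e5)}     # 8 MB/s for data

def admit(service: str, frame_size: int) -> bool:
    """A frame passes only if BOTH its service and its user conform."""
    return (service_buckets[service].allow(frame_size)
            and user_bucket.allow(frame_size))
```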

Managing multiple hotels – and being proactive in troubleshooting issues – is a key element in deploying a reliable, efficient C-RAN architecture. As Ethernet evolved from an enterprise technology into a carrier-class one, many standards have been put in place to allow the service to be fully managed, controlled and tested.

  • IETF RFC 2544 provides the tools to test an Ethernet service against service level agreements (SLAs). The standard provides a methodology to evaluate network device performance using throughput, frame loss and latency.
  • Y.1731 specifies OAM functions and mechanisms, as well as performance metrics that test throughput, measure bit errors and detect frames delivered out of sequence.
  • IEEE 802.1ag CFM (Connectivity Fault Management) helps administrators debug Ethernet networks using continuity check, link trace and loopback.
  • Service OAM (MEF) provides tools for operator-level, provider-level and customer-level OAM, including test suites for Ethernet services and traffic management.
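As an example of what these tools do, the RFC 2544 throughput test searches for the highest offered rate at which the device forwards all frames with zero loss. A minimal sketch of that search, with the device under test stubbed out by a hypothetical loss function:

```python
def rfc2544_throughput(send_at_rate, line_rate_mbps: float,
                       resolution_mbps: float = 1.0) -> float:
    """Binary search for the highest rate (Mbps) with zero frame loss,
    in the spirit of the RFC 2544 throughput methodology.
    `send_at_rate(rate)` runs one trial and returns frames lost."""
    lo, hi = 0.0, line_rate_mbps
    while hi - lo > resolution_mbps:
        mid = (lo + hi) / 2
        if send_at_rate(mid) == 0:
            lo = mid          # no loss: try a higher rate
        else:
            hi = mid          # loss seen: back off
    return lo

# Hypothetical device that starts dropping frames above 940 Mbps.
stub_dut = lambda rate: 0 if rate <= 940 else 10
print(rfc2544_throughput(stub_dut, 1000))  # converges to just under 940
```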

In closing, these capabilities not only help operators manage and protect the service, but also allow them to support guaranteed service level agreements to other operators. A robust service management system will enable operators to configure end-to-end services, remotely manage and maintain the equipment, monitor and report on service assurance, and optimize the service.

About the Author:

As Product Line Manager, Nir Halachmi is responsible for the design and development of Telco Systems’ mobile backhaul solutions, focusing on cellular and wireless technology as well as QoS, data security and communications. He has spent the past 12 years developing and managing telecommunication products in both the wired and wireless industries and has worked across various technologies, including Carrier Ethernet, circuit emulation, MPLS, IP, Wi-Fi and WiMAX, gaining expertise while working with partners and customers to implement various solutions. Nir holds a Master’s degree in Computer Science from the Inter-Disciplinary Center (IDC) in Israel.
