INFORMATION
UMT/IRC/APP/041734
INTERNAL
09/DEC/2014
Standard 02.07/EN
9771 WCE RNC Product Engineering Information
About Alcatel-Lucent
Alcatel-Lucent (Euronext Paris and NYSE: ALU) provides solutions that enable service providers,
enterprises and governments worldwide to deliver voice, data and video communication services to
end-users. As a leader in fixed, mobile and converged broadband networking, IP technologies,
applications, and services, Alcatel-Lucent offers the end-to-end solutions that enable compelling
communications services for people at home, at work and on the move. For more information, visit
Alcatel-Lucent on the Internet:
http://www.alcatel-lucent.com
Notice
The information contained in this document is subject to change without notice. At the time of
publication, it reflects the latest information on Alcatel-Lucent's offer; however, our policy of continuing
development may result in improvements or changes to the specifications described.
Trademarks
Alcatel, Lucent Technologies, Alcatel-Lucent and the Alcatel-Lucent logo are trademarks of Alcatel-
Lucent. All other trademarks are the property of their respective owners. Alcatel-Lucent assumes no
responsibility for inaccuracies contained herein.
PUBLICATION HISTORY
June 2013
Issue V01/EN, Preliminary
Document creation started from Workshop and updated with R&D
March 2014
Updates on:
Dimensioning view
VMware feature description
Capacity evaluation
Connectivity Model
VM reference Configuration
IP Engineering Architecture
June 2014
Issue V02.02/EN Preliminary
Correction on WCE RNC3G capacity licensing
August 2014
November 2014
December 2014
Issue V02.05/EN Approved Standard
December 2014
Issue V02.06/EN
Update for new hardware configurations (LR14.2W) due to some HP materials being designed out
December 2014
Issue V02.07/EN
Corrections of some typos and editorial changes
TABLE OF CONTENTS
1 INTRODUCTION .................................................................................................................................. 11
1.1 OBJECT ......................................................................................................................................... 11
1.2 HOW THIS DOCUMENT IS ORGANIZED ..................................................................................................... 11
1.3 AUDIENCE FOR THIS DOCUMENT .......................................................................................................... 11
1.4 SCOPE OF THIS DOCUMENT ................................................................................................................. 11
1.5 RULES AND RECOMMENDATIONS .......................................................................................................... 12
1.6 RELATED DOCUMENTS........................................................................................................................ 13
1.7 STANDARDS ..................................................................................................................................... 13
1.7.1 3GPP .............................................................................................................. 13
1.8 PROCESS ........................................................................................................................................ 13
1.8.1 external .......................................................................................................... 13
1.9 PLM ............................................................................................................................................... 13
1.9.1 external .......................................................................................................... 13
1.10 TECHNICAL PUBLICATIONS ................................................................................................................ 13
1.10.1 external ........................................................................................................ 13
1.11 TECHNICAL PUBLICATIONS OPERATIONS ............................................................................................... 14
1.11.1 external ........................................................................................................ 14
1.12 R&D DOCUMENTS (INTERNAL) ........................................................................................................... 14
1.13 ENGINEERING ................................................................................................................................ 14
1.13.1 Global Document .............................................................................................. 14
1.13.2 external ........................................................................................................ 14
1.13.3 Internal ......................................................................................................... 14
1.14 I&C ............................................................................................................................................ 15
1.14.1 Customer Documentation .................................................................................... 15
LIST OF FIGURES
Figure 1: Wireless Cloud Element Structure....................................................................................................... 20
Figure 2: Virtual Platform ............................................................................................................................................ 21
Figure 3: Virtualization Operation ............................................................................................................................. 22
Figure 4: WIRELESS CLOUD ELEMENT RNC ARCHITECTURE .................................................................................. 24
Figure 5: HP Blade System c7000 Enclosure Front View ................................................................................... 27
Figure 6: HP Blade System c7000 Enclosure Rear View .................................................................................. 28
Figure 7: WCE Primary Cabinet Single Enclosure with OAM switches ............................................................... 30
Figure 8: WCE Primary Cabinet Dual C7000 Enclosure ......................................................................................... 31
Figure 9: WCE Expansion Cabinet Single C7000 Enclosure .................................................................................. 31
Figure 10: WCE Expansion Cabinet Dual C7000 Enclosure ................................................................................... 32
Figure 11: HP Blade Server BL460 G8 ..................................................................................................................... 33
Figure 12: HP Blade Server BL460 G8 Layout ....................................................................................................... 34
Figure 13: HP 6125XLG Faceplate Port Assignment ............................................................................................ 36
Figure 14: HP 6125G OAM Switch Ports Allocation ............................................................................................. 37
Figure 15: E5424 Controller Drive Tray Front view ............................................................................................... 38
Figure 16: NetApp E5424 Controller Drive .............................................................................................................. 39
Figure 17: HP DL380p WCE Management Servers ............................................................................................... 40
Figure 18: Management Server Disk Allocation .................................................................................................... 42
Figure 19: WCE UMTS Architecture .......................................................................................................................... 49
Figure 20: WCE Internal LAN ...................................................................................................................................... 50
Figure 21: WCE 4 Telecom Internal Configuration with link redundancy ....................................................... 51
Figure 22: WCE 1 Telecom Connectivity .............................................................................................................. 52
Figure 23: WCE 2 Telecom Connectivity .............................................................................................................. 53
Figure 24: Standard WCE4 Uplink Configuration (Single Network, one pair of routers) ............................... 54
Figure 25: Standard WCE4 Uplink Configuration (Dual Network, two pairs of routers)................................. 55
Figure 26: Packets Flow for two tenants ................................................................................................................. 56
Figure 27: Packets Flow for a single tenant ........................................................................................................... 56
Figure 28: OAM Network Context for a WCE 2 Configuration ............................................................................ 57
Figure 29: WCE4 OAM Configuration with in-rack aggregation switch (6125G) ........................................... 58
Figure 30: VMs Distribution ........................................................................................................................................ 63
Figure 31: CMU Role ..................................................................................................................................................... 65
Figure 32: VM CMU Structure ..................................................................................................................................... 66
Figure 33: UMU Role ..................................................................................................................................................... 67
Figure 34: VM UMU Structure ..................................................................................................................................... 68
Figure 35: PC Role......................................................................................................................................................... 69
Figure 36: VM PC Structure ........................................................................................................................................ 70
Figure 37: Carrier Grade Description ....................................................................................................................... 75
Figure 38: Tenant Description .................................................................................................................................... 77
Figure 39: IP Tunnelling ............................................................................................................................................... 81
Figure 40: Tunnelling Deployment ............................................................................................................................ 81
Figure 41: Tunnelling Addressing ............................................................................................................................. 82
Figure 42: RNC tenant VLAN Configuration ........................................................................................................... 83
Figure 43: IP@ requirements for UTRAN Telecom ............................................................................................... 84
Figure 44: IP@ requirements for UTRAN OAM ...................................................................................................... 85
Figure 45: VCenter Server ........................................................................................................................................... 87
Figure 46: Disk Access Tenant Configuration ....................................................................................................... 90
Figure 47: RNC Cplane and Uplane DataPaths ......................................................................................................... 92
Figure 48: WCE VLAN Non Telecom Strategy........................................................................................................... 92
Figure 49: Maximum set of telecom VLANs for the RNC ....................................................................................... 94
Figure 50: Maximal RNC Telecom VLAN Separation ............................................................................................... 95
Figure 51: RNC vNIC with IuPS terminates on UMU ................................................................................................ 96
Figure 52: WCE Transport Reference Architecture .............................................................................................. 96
Figure 53: WCE Transport Component .................................................................................................................... 98
Figure 54: NIC Teaming and Data Flow.................................................................................................................... 99
Figure 55: Shadow Upgrade ..................................................................................................................................... 112
LIST OF TABLES
Table 1: WCE Elements Physical Specifications ...................................................................................................... 43
Table 2: WCE Cabinet Single Enclosure Weight ...................................................................................................... 43
Table 3: WCE Cabinet Dual Enclosure Weight ......................................................................................................... 43
Table 4: WCE external power requirements ........................................................................................................... 44
Table 5: WCE internal power requirements ............................................................................................................ 44
Table 6: C7000 enclosure power dissipation ........................................................................................................... 44
Table 7: BL460G8 power dissipation ......................................................................................................................... 44
Table 8: WCE Primary Cabinet Power Level ............................................................................................................ 45
Table 9: WCE Primary Cabinet Dual Enclosure Power Level................................................................................ 45
Table 10: WCE Expansion Cabinet Single Enclosure Power Level ...................................................................... 46
Table 11: WCE Expansion Cabinet Dual Enclosure Power Level ......................................................................... 46
Table 12: LRC MGR Interface ....................................................................................................................................... 60
Table 13: WCE VM description .................................................................................................................................... 63
Table 14: WCE RNC Tenant Rules ............................................................................................................................... 64
Table 15: Multiple RNCs within a single data center ............................................................................................. 65
Table 16: NodeB Id Mapping ........................................................................................................................................ 71
Table 17: Carrier Grade Requirement....................................................................................................................... 77
Table 18: WCE IP Address Mapping ............................................................................................................................ 78
Table 19: WCE VMs Internal Address Usage ............................................................................................................. 79
Table 20: WCE Internal Reserved IP Ranges ............................................................................................................ 80
Table 21: WCE External IP Address Dimensioning .................................................................................................. 80
Table 22: Overload level actions .............................................................................................................................. 102
Table 23: WCE vRNC Dimensioning Rules ............................................................................................................... 103
Table 24: RNC Zones Traffic Profile......................................................................................................................... 107
Table 25: WCE Capacity evaluation zone1 ............................................................................................................. 107
Table 26: WCE Capacity evaluation zone 2 ............................................................................................................ 108
1 INTRODUCTION
1.1 OBJECT
This document aims at:
Providing the reader with a list of pointers to reference documents on some WCE-specific aspects where
Engineering inputs, margins and actions are limited, or which are a good source for understanding the WCE
background.
Providing the platform description for WCE and the description of the tenant modules.
Providing the global architecture and network interfaces.
Providing the mandatory guidelines for usual capacity and dimensioning.
Consolidating information from miscellaneous sources and across several releases where needed, to provide
a good overview of the WCE RNC context and/or requirements.
Being a repository of Engineering Guidelines for the WCE RNC in relation with platform, system and some
functional aspects, in order to help make the best of its capabilities within customer contexts, to ease
avoidance of costly re-engineering and to capture best practices.
Important Note: This document mainly focuses on topics related to the WCE platform itself, as well as the
options it offers from the perspective of integration into a Network Architecture; please refer to the Iu
LR14.2 TEG for more information on a given interface from an end-to-end and detailed perspective.
Features related to Cell Selection/Re-selection, Call Management, Power Management
and associated algorithms are not covered here; please refer to the UPUG for those.
Please note that, as the range of applicable engineering rules is quite large, the content of this document is
subject to change without notification.
Alcatel-Lucent Network Engineering, Presales, Tendering, Sales, Product Marketing and Account teams for
the internal version (for information & alignment).
This version is an internal document. All the technical rules shall be treated as confidential.
Rule:
Restriction:
Engineering recommendations (Alcatel-Lucent recommendations for optimal system behavior) are presented
as follows:
Engineering Recommendation:
1.7 STANDARDS
1.7.1 3GPP
1.8 PROCESS
1.8.1 EXTERNAL
1.9 PLM
1.9.1 EXTERNAL
NOTA: For any external communication on this content, it is the responsibility of each customer account
team to check the information against any previous communication made to the customer or any specific
customer strategy.
1.10.1 EXTERNAL
1.13 ENGINEERING
1.13.2 EXTERNAL
1.13.3 INTERNAL
1.14 I&C
1.14.1 CUSTOMER DOCUMENTATION
[Ext_I&C_001] 8BL 00704 0087 DRZZA SPP-70 Specification for 9771 WCE site preparation
https://all1.eu.alcatel-lucent.com/sites/Notesmigrati_2/Operation%20Support/Environmental%20Product%20Support/Site%20Preparation%20for%20Product/Site%20preparation%20documents%20for%20BSS%20equipments/Forms/AllItems.aspx
[Int_I&C_004] IEH 522 Section 301 ed8 WCE RNC Commissioning and Integration Manual
9YZ-04157-0143-RJZZA https://wcdma-ll.app.alcatel-lucent.com/livelink/livelink.exe?func=ll&objId=67367614&objAction=browse&viewType=1
2.1 ABBREVIATIONS
3GPP 3rd Generation Partnership Project
DA Disk Access
NBAP NodeB Application Part -- signalling protocol responsible for the control
of the Node B by the RNC
NI Network Interface
OA On Board Administrator
PC Protocol Converter
2.2 DEFINITIONS
NEBS (Network Equipment Building System): Telcordia standards for power cabling, grounding,
and environmental safety, power and operation interfaces for telecommunications equipment. The NEBS
frame is used to house telecommunications equipment.
The tenants offered within the Wireless Cloud Element can be configured as individual
systems, or they can be combined to share hardware: computer processing units (CPUs), disk storage, and
transport networking equipment.
This consolidated product reduces the system hardware required for each individual product
while increasing the capacity and network flexibility demanded by Radio Access Networks.
The Wireless Cloud Element needs to span a wide variety of deployment models:
Single or multi-technology (WCDMA/LTE) solutions
Large configurations built by adding additional commoditized hardware
Software deployment on top of existing cloud
Virtualization of the controller applications and independence from the computing platform allows
us to address the entire market while controlling the verification costs.
Wireless Cloud Element applications need to be modified to limit the volume of inter-VM messaging
and to mitigate the impact of extra latency and jitter, in order to allow for horizontal scalability. As a result,
the architecture of the controller applications differs from previous versions, as described in the following
sections:
Virtualization technology effectively provides an environment where each application sees itself
operating on a simple PC interconnected to other VMs by a simple LAN.
Each VM essentially has private:
CPU & Memory Space,
Network Interface
Storage
A centralized management system (vCenter Server in this case) manages the hardware to
provide the virtual environment. Applications are unaware of the existence of the vCenter Server.
When a Guest attempts to execute kernel instructions (e.g. a SYSCALL to enter a driver), the CPU
causes a VM exit and allows the VMM to run. The VMM implements the restricted function on behalf
of the Guest, then uses a VM entry to allow the Guest to continue from where it was stopped.
The VMM can be invoked for interrupts (e.g. TTi) and network I/O which can significantly slow the
application vs. a non-virtualized implementation.
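To make the VM exit / VM entry cycle concrete, here is a minimal Python sketch of a VMM dispatch
loop modelling the behaviour described above (all names are illustrative; real VMMs such as ESXi
implement this with hardware-assisted virtualization, not in software like this):

    # Minimal model of the trap-and-emulate cycle described above.
    class Guest:
        def __init__(self, pending_exits):
            self.pending_exits = pending_exits   # simulated stream of VM exits

        def vm_entry(self):
            # The Guest runs until the CPU forces a VM exit
            # (privileged instruction, interrupt or mediated I/O).
            return self.pending_exits.pop(0) if self.pending_exits else None

    def vmm_run(guest):
        while True:
            exit_reason = guest.vm_entry()
            if exit_reason is None:
                break                                # nothing left to simulate
            kind, detail = exit_reason
            if kind == "SYSCALL":                    # kernel instruction trapped
                print("VMM emulates restricted function:", detail)
            elif kind == "INTERRUPT":                # e.g. a TTI timer interrupt
                print("VMM injects interrupt:", detail)
            elif kind == "NET_IO":                   # network I/O mediated by the VMM
                print("VMM services network I/O:", detail)
            # looping back performs the VM entry, resuming the Guest

    vmm_run(Guest([("SYSCALL", "disk driver"), ("INTERRUPT", "TTI"), ("NET_IO", "tx ring")]))

Each pass through the loop is one VM exit / VM entry round trip; this per-event overhead is the
slowdown referred to above.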
Virtual machines are created with a specific number of virtual cores thus emulating any size of
physical computer
VMs with a large number of virtual cores show vertical scalability while large numbers of smaller
VMs show horizontal scalability
Applications that scale horizontally will generally provide higher ultimate capacity
RNC software changes are still underway to achieve maximum scalability
The attributes of the RNC Architecture for the Wireless Cloud Element are as follows:
There are four roles within the RNC application and three roles provided by the Wireless Cloud Element
Platform. The RNC application roles are:
3gOAM (1+1 sparing) - Provides the wireless (OMU) and OAM interfaces.
CMU or Cell Management Unit (N+M sparing) - Maintains UMTS cells, acts as the SS7 termination
point and provides a proxy to the platform for transport resource management.
UMU or UE Management Unit (Unspared role) - Provides the entire control and user plane processing
for individual UEs.
PC or Protocol Converter (N+1 sparing) - Provides both UDP and GTP-U NAT points and transport
bandwidth management for Wireless Cloud Element applications.
The RNC architecture for the Wireless Cloud Element was changed from that of the 9370 primarily to
enable scalability, by moving the bulk of the computing load to an unspared, replicatable entity. The
number of VMs required to achieve the capacity targets is described in the RNC Capacity Section.
LRC Mgr (no sparing) - Manages the configuration of applications or tenants and the VMs used to
create them within the Wireless Cloud Element.
vCS or vCenter Server (no sparing) - The VMware provided VM manager.
Disk Access - NAS front end to SAN
These roles allow for one or more of each of the Wireless Cloud Element applications to be configured and
brought into operation on a wide variety of hardware systems.
Each of the roles in the RNC application and Wireless Cloud Element platform are described in the following
sections.
3.4.1 3GOAM
The 3gOAM role acts as the termination point for Operation and Maintenance of the RNC and consists of two
primary sub-roles:
3G application management as the termination point, as implemented by the RNC 9370 OMU, largely
unchanged, and
Platform management via a Netconf interface (interface to the WMS)
In addition the 3gOAM acts as a host for monitoring and control of the internal components of the
virtual RNC. The 3gOAM node is 1+1 spared.
More information is provided on the UMTS RNC VM Description.
The CMU is the role which is responsible for the creation and management of all of the UMTS
cells in the RAN. It consists primarily of the following sub-roles:
the C-Plane for cells which consists of the NodeB Core Process from the RNC 9370 architecture,
the U-Plane for cells which is an instance of the 9370 RAB processes but specially targeted to handle
common channel traffic on the cells,
the lower two layers of the SS7 networking stack (for IP only), specifically the SCTP and M3UA
protocols, and
A proxy for a distributed version of TBM which will handle management of transport resources (UDP
port numbers and link bandwidth).
CMUs are N+M spared where M is defined as the number of instances of the emulated VxWorks
running within a virtual machine
The UMU role is responsible for all aspects of UE management and consists of the following sub-roles:
the C-Plane for UEs, which consists of the UE Core Process from the RNC 9370 architecture,
the U-Plane for UE traffic, which is an instance of the 9370 RAB processes but specially targeted to
handle UE traffic, and
the upper layer of the SS7 networking stack, specifically SCCP.
All of the context and processing for a single UE happens within a single UMU role and does not
depend on the presence of any other UMU role. In fact, the UMU roles are not aware of the existence of
any other UMU role. As UMUs do not support any form of sparing at the control, user or signalling
plane level, a failure of a UMU will cause the calls that were hosted on that UMU to be lost. Notification
of UMU failure is provided by the new MTF messaging system and will result in each of the
CMUs independently recovering the resources that the failed UMU was using.
One of the primary functions of the PC is UDP NAT, which allows private IP addresses unique
to the controllers to be hidden from external nodes, bringing network advantages such as a
reduction in the consumption of externally visible addresses and enhanced security. As a central traffic
handling point, a failure of the PC will impact the RAN, but the impact is not uniform across all connections:
loss of a particular UE's traffic has very limited scope, while loss of the common channel traffic for a cell
will result in that cell becoming unavailable (which potentially has wide-ranging impact). The Wireless
Cloud Element architecture recognizes these differences and allows common channel connections to be
handled differently than other connections; specifically, common channel connections terminate directly on
the CMU that is associated with the cells, while other traffic is terminated on the PCs but treated as fully
dynamic and not shared between PCs. A PC failure will result in dropped connections, thus the Wireless
Cloud Element will inform other nodes so that appropriate actions can be taken to clean up the failed
calls. Note that this behaviour is similar to the impact of a failed CMU, where current calls will be dropped
but the cells will stay active.
The transport bandwidth management function statically reserves bandwidth for common
channels, or other traffic, that is not accounted for by direct measurement within the PC.
The protocols as defined by the 3GPP standards require the NAT function to be stateful, as UDP
port numbers are exchanged via an NBAP protocol side channel; therefore, the PC comprises two parts:
a component (TRM) that is responsible for the allocation of UDP port numbers and the setting up
of connections (this is a much simplified version of the 9370's TBM), and
the PC NAT component that implements the stateful NAT, again a simplified version of
the PC component from the 9370.
The simplifications result from the absence of a need to support ATM transport networks and the
actual conversion of traffic from AAL2/ATM to UDP/IP.
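As a rough illustration of this two-part split, the sketch below (hypothetical names and API, not the
actual TRM code) shows a TRM-like allocator that hands out UDP port numbers per connection; the
mapping must be retained because the ports are exchanged over the NBAP side channel, which is
what makes the NAT stateful:

    # Hedged sketch of TRM-style stateful UDP port allocation (illustrative only).
    class TrmPortAllocator:
        def __init__(self, low=49152, high=65535):
            self.free_ports = list(range(low, high + 1))   # ports not yet in use
            self.by_connection = {}                        # connection id -> port

        def allocate(self, conn_id):
            # Remember the mapping for the life of the connection (stateful).
            port = self.free_ports.pop()
            self.by_connection[conn_id] = port
            return port

        def release(self, conn_id):
            self.free_ports.append(self.by_connection.pop(conn_id))

    trm = TrmPortAllocator()
    port = trm.allocate("iub-connection-42")
    print("allocated UDP port", port)
    trm.release("iub-connection-42")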
Up to 16 half-height BL460c G8 Blade Servers
The heart of the c7000 enclosure management is the Onboard Administrator module. It performs
several management and maintenance functions for the entire enclosure:
Detecting component insertion and removal
Identifying components and required connectivity
Managing power and cooling
Controlling components
Managing component firmware upgrades
Each C7000 enclosure is equipped with an active and standby Onboard Administrator (OA) for
shelf management. The OAs appear in a tray at the bottom of the enclosure (below the interconnect
bays). At the center of the tray is a pair of 1Gbps ports which are used to connect multiple c7000 shelves
in an open daisy chain. This allows an OAM user to log into a single OA and have log-in access to all
OAs in the chain. The tray hosting the OAs also contains a 1G switch which connects the OAs of a given
shelf to all its integrated Lights Out (iLO) controllers. These iLO controllers are a management
microcontroller integrated onto each of the blades and provide blade hardware and firmware
management functions. The OAs also provide some limited management of the blade switches used in
the interconnect bays of each shelf. Each OA has an external 1Gbps management port to provide
connectivity to the HP Systems Insight Manager (HP SIM) software running on a remote server.
The WCE c7000 blade line-up consists of up to 16 half-height BL460c Gen8 server blades. The
WCE design for these blades consists of a pair of Intel Xeon e5-2680v2 10-core (Ivy Bridge) CPUs
running at 2.8GHz. Each blade is equipped with 64GB of DRAM, and the NIC (Network Interface Chip)
which connects to the interconnect fabric is an Emulex BE3 Converged Network Adapter (CNA). This
Emulex component offers two 10Gbps Ethernet ports with hardware iSCSI acceleration. Each 10Gbps
port is connected to a different blade switch for path redundancy. The blades are diskless and take
advantage of the hardware iSCSI capability of the BE3 to boot directly from the SAN used in the WCE
architecture. (See Hardware Description below)
Each c7000 enclosure is equipped with a pair of 6125XLG 10/40G Ethernet switches in the top
two interconnect bays. Each 6125XLG offers 16 x 10 Gbps ports for fabric interconnect (downlinks to
blades), 8 x 10 Gbps and 4 x 40 Gbps faceplate ports (uplinks), and 4 x 10 Gbps cross-connect
backplane ports between the 6125XLG switches in adjacent slots. The uplink ports provide for
connectivity to the external network (Next Hop Router) NHR ports as well as for inter-c7000 connectivity
in a multi-shelf WCE system. Connections to the storage array (SAN) are also made through the
6125XLG uplink ports.
On the primary cabinet WCE, the first c7000 enclosure (lower one) contains two redundant
switches (HP6125G) for OAM aggregation purpose.
The remaining element of WCE hardware is the storage array. This consists of an iSCSI SAN to
provide centralized storage for all the blades in the system. WCE uses a NetApp e5400 SAN as the
storage element. The NetApp e5400 has dual controllers with 12GB of replicated battery-backed write
cache. Each controller is equipped with a pair of 10Gbps iSCSI ports which are cross-connected to the
6125XLG switches in the primary (first) c7000 enclosure. The SAN is equipped with 24 SFF (small-form
factor) hard drives. An important factor in the selection of the NetApp e5400 for WCE is that it is available
in a DC powered, NEBS certified variant to meet Telecom customer needs.
WCE plans to make two cabinet/power configurations available, one suitable for deployment in a
central office (Telecom) environment (e.g. DC power, seismic rated cabinet, 50C operation), and the
other intended for datacenter deployment (AC power, no earthquake rating, 40C operation). The initial
offering to customers is the DC powered seismic system. A maximum of two c7000 enclosures are
supported in the Telecom rack. This is due to:
Available rack space in the seismic cabinet
Overall weight of the c7000 shelves and the seismic cabinet (e.g. one c7000 loaded with
blades weighs 227 kg)
Overall power consumption (one c7000 can use almost 6.4 kW) and heat release
The datacenter variant can support more c7000 enclosures in the physical cabinet racking space,
but for consistency with the seismic rack configuration, and due to weight and powering concerns of
multiple c7000 enclosures in a single cabinet, WCE has set the limit to 2 c7000 enclosures regardless of
cabinet type.
A large WCE system supports two cabinets for up to four c7000 enclosures and 64 blades as a
single large network node. The two cabinets in the large system configuration must be located adjacent
to each other due to length limitations of the 40G networking cables used to interconnect the two cabinets
(Max distance 100 m).
The following figures show the different available shelf configurations, from one cabinet with one
c7000 enclosure through two fully populated cabinets (primary and expansion, four c7000 enclosures).
This is the actual HP Blade Server model provided with the LR14.2W release. This model could
change with blade evolution.
1. Two (2) PCIe 3.0 mezzanine I/O expansion slots
2. FlexibleLOM adapter
3. MicroSDHC card connector
4. FlexibleLOM connectors (supporting one (1) FlexibleLOM)
5. Sixteen (16) DDR3 DIMM memory slots (8 per processor)
6. HP Smart Array P220i Controller connector
7. Up to two (2) Intel Xeon E5-2600 family processors
8. Internal USB 2.0 and Trusted Platform Module (TPM) connectors
9. Two (2) small form factor (SFF) hot-plug drive bays
10. HP c-Class Blade SUV (Serial, USB, VGA) connector
11. HP Smart Array P220i Controller with 512MB FBWC
12. Access panel
NB: For OAM aggregation purposes, an in-rack switch pair (HP 6125G) is installed on the primary
WCE1 shelf; see the OAM switch description below.
Depending on the WCE configuration, the connectivity and the port assignment will differ.
Please refer to the chapter on internal and external connectivity description.
Switch        Throughputs / Ports
HP 6125XLG    8 x 10 Gb/s and 4 x 40 Gb/s uplink ports (towards the network)
HP 6125XLG    16 x 10 Gb/s downlink ports (towards the blades)
HP 6125XLG    4 x 10 Gb/s cross-connect ports
For OAM aggregation purposes, an in-rack switch is installed on the primary WCE1 shelf.
Two HP 6125G switches, which support only 8 x 1 Gbps uplinks, are installed in the bays just
below the primary shelf's 6125XLGs. The tenant OAM uplinks from the 6125XLGs connect to the
6125G faceplates. All the OA RJ-45 links and the two RJ-45 links from the SAN controllers also
connect to the 6125G faceplates. Two 1 Gbps links (one from each 6125G) connect to the customer's
OAM network edge equipment. This reduces the overall OAM link count from 12 to 2 and allows for 1
Gbps to 100 Mbps down-rating for all WCE OAM flows.
WCE provides an integrated pair of OAM switches to consolidate all the internal OAM interfaces
onto a single pair of OAM ports facing the customer OAM network.
WCE OAM ports are tri-speed and can be configured for operation as follows:
o 10Base-T
o 100Base-T (Fast Ethernet)
o 1000Base-T (Gigabit Ethernet)
The following WCE internal OAM ports are aggregated onto this pair of customer OAM ports:
o c7000 Onboard Administrator ports: 2 per c7000 enclosure in the WCE system
o NetApp e5400 SAN controller management ports: 2 per WCE system
o RNC/VMware OAM ports: 2 per WCE system
For a WCE1 system, 6 internal OAM ports are aggregated onto a single pair of external
OAM ports
For a WCE4 system, 12 internal OAM ports are aggregated onto a single pair of external
OAM ports
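These aggregated counts follow directly from the per-element figures listed above; as a quick check
(assuming 2 OA ports per enclosure plus the 2 SAN and 2 RNC/VMware ports per system):

    # OA ports (2 per c7000 enclosure) + SAN management ports (2)
    # + RNC/VMware OAM ports (2), all per WCE system
    def internal_oam_ports(enclosures):
        return 2 * enclosures + 2 + 2

    assert internal_oam_ports(1) == 6    # WCE1: 6 internal OAM ports
    assert internal_oam_ports(4) == 12   # WCE4: 12 internal OAM ports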
Switch      Throughputs / Ports
HP 6125G    16 x 1 Gb/s downlink ports (towards the blades)
HP 6125G    Up to 8 x 1 Gb/s uplink ports (towards the network)
HP 6125G    Up to 2 x 10 Gb/s IRF stacking ports
HP 6125G    One 10 Gb/s cross-link port
For the connectivity configurations, please see the chapter on internal and external connectivity
description.
SAN STORAGE
Fully redundant path from host ports to drives
Each controller can access all drive ports
Each drive chip can access every drive
Top down, bottom up cabling ensures continuous access
Expandable via an expansion unit from 24 to 48 drives
DC powered, NEBS certified
The NetApp E5400 SAN has active components on the back panel. A failure of one of
these components does not cause any SAN outage, since the SAN is configured using DDP, which is
similar to RAID 6 with two logical spare drives. The whole WCE cannot be brought down while the back
panel is replaced.
These servers are capable of managing a large number of Wireless Cloud Element systems and
therefore are intended to be installed as a central resource by the customer much like the OAM system.
Note that HP SIM, vCenter and SANtricity are all Windows applications, so these servers
are configured as Windows machines.
It is physically connected to the first c7000 enclosure via the 6125G switch module. Additional
c7000s are linked from the 6125G switch to their Onboard Administrator modules by Cat5 Ethernet cables
included in the connectivity kits.
The vCenter is a centralized management system. It is connected to all the ESXi hosts and to
each LRCE Mgr for VM creation and management.
The WCE Platform Management Server is the maintenance platform for HP hardware, VM
management, and NetApp SAN management
This server is not integrated into the WCE cabinets.
These are installed into racks provided by the customer in an operations center.
4.5.1 CHARACTERISTICS
Optical drive
Redundant 750W AC or DC power supplies
Dimensions (HxDxW):
3.44 x 27.5 x 17.54 in
8.74 x 69.85 x 44.55 cm
Racking:
2 RU
Weight (approx.):
50 lb / 22.7 kg
Disks 1 & 2: fast 300 GB 15K RPM SAS drives configured as RAID 1 for fault tolerance;
contain the Windows Server 2012 OS, SQL, and the WCE management applications
Disks 3 & 4: fast 300 GB 15K RPM SAS drives configured as RAID 1 for fault tolerance;
contain the SQL databases for vCenter and HP System Insight Manager
Disk 5: 1 TB Enterprise SATA drive; contains the SQL event/error logs and the WCE load image
repository
Disks 6, 7 & 8: 1 TB Enterprise SATA drives configured as RAID 5 for fault tolerance;
system backup drives for OS and SQL database emergency recovery
Redundant Power
Each c7000 has its own independent breaker panel with A and B feeds
Each of the six power supplies in the c7000 has a dedicated circuit breaker and -48V
feeder/return pair
The e5400 SAN has its own independent source of power, also with A and B feeds
c7000 BladeSystem
Redundant 6125G Blade Switches
Redundant 6125XLG Blade Switches
Redundant OnBoard Administrator OAM modules
N+N spared fan modules, 10 total.
N+N spared power supplies, 6 total.
DC Breaker Panel
Input Voltage: -36 to -72 VDC
Number of Inputs: 6 (3 A and 3 B power feeds)
Max Current per Input: 100 A
Max Output Load per circuit: 80 A
Power dissipation values include all elements of the blade active at this level of utilization. These
numbers are valid only for a BL460c G8 blade equipped as shown below.
2x Xeon e5-2680v2 CPUs
8x 8GB LV-DDR3 DIMMs
Dual Port 10GbE Network Interface
Different CPUs and more memory will change these power numbers.
The power dissipation value for the c7000 Enclosure and Blades at a given utilization level is
determined by looking up the utilization level (in %) for the Enclosure and adding the power of the
Blade at that level (multiplied by the number of blades) to the Enclosure power.
For a system with 16 blades at a utilization level of 40%, the power dissipation
would be:
Enclosure power (602 W) + BL460 G8 power (157 W) x 16 blades
602 + (157 x 16) = 3114 W
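The worked example above can be expressed as a small helper; this is a sketch using the 40%
utilization figures quoted for the c7000 enclosure (602 W) and the BL460c G8 blade (157 W), and
other utilization levels require the corresponding lookup values:

    # Total dissipation = enclosure power at the utilization level
    #                     + per-blade power at that level x number of blades
    def c7000_dissipation_w(enclosure_w, blade_w, num_blades):
        return enclosure_w + blade_w * num_blades

    # 16 blades at 40% utilization, per the worked example above
    assert c7000_dissipation_w(602, 157, 16) == 3114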
The site power infrastructure must be sized appropriately to accommodate a power draw of
6300 W at -40 VDC per c7000 Enclosure, to allow for future-proofing and system capacity growth.
Warning: Site power and ground cabling is not provided by Alcatel-Lucent. Electrical
codes may vary by country, region and locality, thus the site power and ground cables and lugs
must be determined by the customer or installation provider. Power and ground cables should be
assembled by a certified electrician only and must be compliant with all local regulations. All
cabling must be compatible with operation in a 55°C environment.
NOTES:
Auxiliary equipment is only required in the primary cabinet
Field wiring to the WCE cabinet must be designed to meet local electrical codes, and must be
installed and approved by a licensed electrician.
For Site preparation information and complementary knowledge about power supply requirements,
please refer to [Ext_I&C_001]
[Figure: WCE OAM connectivity overview. Recoverable labels: WMS and NTP server on the OAM
network; redundant OAM edge switches; two 1 Gbps RJ-45 OA links per c7000; 6125G L/R OAM
aggregation pair (6125 IRF domain, 10G -> 1G out-of-band OAM links); BL460c blades; HPI,
SANtricity and vCenter application & management flows; telecom uplinks towards the RAN core
network.]
In the case of a WCE1 platform with one shelf and one pair of routers, there is one LAG
instance on the WCE side over the two blade switches. The LAG instance is composed of four
links, including two standby links. On the backbone side, an MC-LAG is created over the two
adjacent routers.
With two pairs of routers, there are two LAG instances on the WCE side over the two
blade switches:
One LAG composed of four links, including two standby links, on the interface to
the Iub backbone, and
One LAG composed of four links, including two standby links, on the interface to
the Core backbone.
On the backbone side, there is an MC-LAG over the two adjacent routers.
Engineering Recommendation:
Because of the ring architecture (IRF) connecting all the blade switches
within a WCE platform on an internal L2 LAN, then from the LAG point of
view, the Ethernet ports from distinct blade switches on same or distinct
shelves are considering belonging to the same Ethernet switch (same
System Identifier value for all the blade switches within a WCE platform).
In the case of one pair of routers (single network), the WCE side has one LAG instance
over the four blade switches: four links, with two standby links.
On the backbone side, an MC-LAG is created over the two adjacent routers.
In the case of two pairs of routers (dual network), the WCE side has two LAG instances over
the four blade switches:
- One LAG instance composed of 4 links (2 active and 2 standby) on the interface
to the Iub backbone, and
- One LAG instance composed of 4 links (2 active and 2 standby) on the interface
to the Core backbone.
On the backbone side, an MC-LAG is created over the two adjacent routers on each
backbone.
On the WCE side, there is one LAG instance over the eight blade switches. The LAG instance is
composed of 4 active (or selected) 10 Gbps links from the IRF domain and 4 standby (or
unselected) links.
All 8 links are in the same MC-LAG interface. The solid lines indicate links that have been
selected from an LACP perspective. The dotted lines indicate links that are in-service but
unselected by the LAG distributor and are therefore not forwarding frames.
Figure 24: Standard WCE4 Uplink Configuration (Single Network, one pair of routers)
Figure 25 below shows the recommended uplink configuration for a WCE4 system
connected to dual external networks in which the network edges implement MC-LAG in an
active/standby manner.
On the WCE side, two LAG instances are created in the WCE over the four blade
switches.
One LAG instance composed of 8 links including 4 standby links, on the interface
to the Iub backbone and
One LAG instance composed of 8 links including 4 standby links, on the interface
to the Core backbone
On the backbone side a multi chassis LAG is created over the two adjacent routers on
each backbone.
Figure 25: Standard WCE4 Uplink Configuration (Dual Network, two pairs of routers)
To get an even distribution of traffic over the LAG, the LAG hashing should be based on a
combination of source and destination IP address and source and destination L4 port, both in the
uplink (IRF perspective) and the downlink (NHR perspective). Within the IRF domain, this is
accomplished by:
link-aggregation global load-sharing mode destination-ip source-ip destination-port source-port
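For illustration only, the sketch below mimics this 4-tuple hashing to show how individual flows
spread across the LAG member links; the actual distribution function is internal to the switch, so
this is a simplified model:

    # Simplified model of 4-tuple LAG hashing: each flow's
    # (src ip, dst ip, src port, dst port) tuple selects one member link.
    def lag_member(src_ip, dst_ip, src_port, dst_port, num_links):
        return hash((src_ip, dst_ip, src_port, dst_port)) % num_links

    # Different flows generally land on different links, while all packets
    # of a given flow stay on the same link (no packet reordering).
    print(lag_member("10.0.0.1", "10.0.1.1", 5000, 2152, 4))
    print(lag_member("10.0.0.2", "10.0.1.1", 5001, 2152, 4))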
Once the VMware restriction of 32 blades per cluster has been lifted, a single tenant may
occupy all 64 blades of a WCE4. As in the previous example, different traffic types within the
single routing domain may be separated by VLANs on the single LAG.
Note: For all other external link configurations (active/active network edge, limited tenant
bandwidth) and the complete link connectivity, please refer to the Telecom Link Topology
[Int_R&D_004]
The SAN's payload path is connected to the WCE via 10Gbps links on the 6125
faceplates. The SAN has two controllers and each controller has a 1 Gbps RJ-45 connection to
the SANtricity server. Tenant OAM traffic and VMware ESXi OAM traffic go through a pair of 10
Gbps faceplate ports. The configuration above assumes a pair of customer OAM network L2 edge
switches with 1 Gbps ports (hence the down-rating of the 10 Gbps tenant OAM links). However,
this interface to the OAM network may be serviced by a single L2 switch or an L3 router, and
these may have 100 Mbps ports. The WCE Management server is assumed to be somewhere in this
OAM network and may be serving more than one WCE instance. It has a dedicated iLO port
which may or may not be connected to the network.
The WCE offers a standard configuration in which a second pair of switches is
included in the primary shelf only. These switches are HP 6125Gs, which support only 8 x 1
Gbps uplinks. They are installed in the bays just below the primary shelf's 6125XLGs.
The tenant OAM uplinks from the 6125XLGs will connect to the 6125G faceplates. All the OA RJ-
45 links and the two RJ-45 links from the SAN controllers will also connect to the 6125G
faceplates. Two 1 Gbps links (one from each 6125G) will connect to the customer's OAM network
edge equipment. This reduces the overall OAM link count from 12 to 2 and allows for 1 Gbps to
100 Mbps down-rating for all WCE OAM flows.
Figure 29: WCE4 OAM Configuration with in-rack aggregation switch (6125G)
Since the standby OAs must also communicate with the HPI, the 6125G pair cannot operate in
an active/standby mode. Therefore, an IRF domain separate from the one formed by the 6125XLGs
is configured for the 6125Gs. The 6125G crosslinks provide the IRF link between the two
switches, and one faceplate port on either switch provides a MAD path.
Reminder: Port 11 of each 6125XLG switch of the primary shelf is assigned to the
OAM. The throughput is downgraded to 1 Gb/s by rate adaptation. It is used to
communicate with the OMC. The OAM flow transmitted over port 11 is an aggregation of
the following flows:
RNC Out-of-Band OAM, Call Traces, ESXi to vCenter, LRCE Mgr to vCenter & WMS,
WNode and SEPE.
There are two OAs (Onboard Administrators) in each WCE shelf. Each OA has one
1 Gb/s Ethernet management port connected to the OAM platform. HP SIM can remotely
manage all aspects of the WCE shelf hardware, including remote upgrades of firmware on
the blades.
There are two NetApp controllers:
Per SAN controller, one 1 Gb/s management port is connected to the OAM platform for
remote SAN management and configuration.
Nota Bene: For one WCE shelf, four 1 Gb/s management ports (2 OA + 2 SAN);
for four WCE shelves, ten 1 Gb/s management ports (8 OA + 2 SAN).
All these 1 Gb/s management ports may be aggregated by an Ethernet switch to
reduce the number of physical links up to the OAM platform.
OAM network context details can be found in the R&D document located on the
Wiki: [Int_R&D_005]
6.2 FUNCTIONALITY
The LRCE Mgr functionality includes:
Interface to the WMS for the purpose of initiating Tenant level actions (e.g. initial
deployment, resource allocation, removal)
Interface to the VMware vCenter server as a virtualization platform
Common mechanisms to handle networking and storage needs for LightRadio Cloud
Elements
Manage cloud-internal network resources: this includes ensuring a unique VLAN-tagged
interface per Tenant Instance, and a unique IPv4 address to be used for communication
between the LRCE Mgr and the root VM of a Tenant Instance.
The LRCE Mgr unifies usage of the VMware vSphere features in a consistent manner for all
types of Elements/Tenants.
The LRCE Mgr provides functionality equivalent to that of hardware commissioning in the
non-virtualized environment. It provides a mechanism to supply a minimum set of critical
configuration data into the guest OS space of the Tenant's VMs (similar to I&C parameters in a
non-virtualized environment).
The LRCE Mgr provides APIs that can be used either by an external configuration interface (for
the initial Tenant creation) or by the Tenants themselves for various actions to be executed on
the Tenant's VMs.
Managing the Wireless Cloud Elements: deploying the Element/Tenant into the cloud
while respecting the Tenant deployment rules.
DiskAccess software management will use the local VMDKs of the two NAS-FE VMs.
RNC software management requires the DiskAccess tenant as a prerequisite. Once the
DiskAccess tenant exists, software management is done by the LRC Mgr using the Software Disk
mounted via the DiskAccess VM. The 3gOAM VMs simply use the software available on the Software
disk (read-only mount).
Hardware Related Software Management:
For the first release, HP provides the "cloud in a box" hardware. Software to be managed
from HP includes:
For the first release, VMware provides our virtualization "cloud" layer. Software to be managed from
VMware includes:
Each server needs its own 5.2 GB LUN on the SAN. If this has not already been created
as part of I&C for the whole LRC, it needs to be done when commissioning the new server.
The c7000 has an Onboard Administrator (OA) which handles management of the
hardware. A web-based OA-client GUI is run to commission the new server. This GUI only needs
IP connectivity to the OA, so it can be run locally at I&C time, or from the WMS if the server is being
added later for system growth.
The RNC tenant consists of four different VM types (3gOAM, CMU, UMU and PC) and
requires the Disk Access (DA) tenant (which consists of a pair of VMs) and the LRCE Mgr VM to
create a complete system. Only one DA and one LRC Mgr are required per WCE, independent of the
number of other tenants in the system. Within the RNC, a VxWorks emulator called VXell is used;
it is essentially a single thread locked to an individual processor core with Linux core affinity, so
it is important to also consider the number and location of the VXell instances. The size of each of
these VMs and the number of VXells is as follows:
UMU 65 4 3 0 6
LRC Mgr 1 1 0 4
Note: The DA VM configures (on the SAN) and hosts a 50 GB data volume for each RNC and a 50 GB
software volume to be shared by all RNCs in a cluster.
The VMware DRS system (in semi-automatic mode) will distribute these VMs to physical
servers following the VM-to-VM anti-affinity rules that we define. These rules are simple: no more
than one 3gOAM, CMU or PC can reside on a single server. The hardware that we are using for
our first commercial release is the HP 460G8+ board with 20 cores, which is generally sufficient
to host 4 VMs, each with 4 vCPUs, without over-subscription.
Please see the examples below:
WCE uses fully dynamic VM allocation; there is no static association between VMs and
blades
VMs are created on any blade assigned to the cluster, even if it exists in another physical
frame
Multiple tenants can share a single data-center
To ensure services are highly available, anti-affinity rules separate redundant VMs onto
different physical blades
To avoid dual-failure scenarios that are not protected by the 1+1 or N+1 sparing mechanisms,
the WCE's LRC Mgr has the ability to configure virtual machines with anti-affinity rules.
These simple rules ensure that a spare virtual machine is not allocated to the same server
as the active virtual machine during the dynamic virtual machine allocation phase (see the
sketch below).
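The effect of these rules can be pictured with a small placement check; this is only a sketch of the
rule "no more than one 3gOAM, CMU or PC per server" (the names and structures are illustrative,
not the LRC Mgr or DRS API):

    # Hedged sketch: reject a placement that would put two VMs of the same
    # spared role (3gOAM, CMU or PC) on one physical blade.
    ANTI_AFFINITY_ROLES = {"3gOAM", "CMU", "PC"}

    def can_place(vm_role, blade_vms):
        if vm_role not in ANTI_AFFINITY_ROLES:
            return True                      # e.g. UMUs are unspared, no rule
        return vm_role not in blade_vms      # at most one such VM per blade

    blades = {"blade1": ["3gOAM", "UMU"], "blade2": ["CMU"]}
    assert not can_place("3gOAM", blades["blade1"])   # spare must go elsewhere
    assert can_place("3gOAM", blades["blade2"])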
CallP consists of two core processes. The first is the UE core process, or UeCp,
which handles User Equipment oriented processing and resides in the UMU; the other is the
NodeB core process, or NobCp, which handles NodeB oriented processing and resides in the CMU.
The UeCp contains several components, such as UeCall, UeRrc and IuCall amongst others. The
components within NobCp are NobRrc, which handles common channels; NobCall, which
handles NBAP procedures (like cell setup); and NobCch, which handles common channels.
7.4 PC VM STRUCTURE
The load-shared protocol converter acts as a NAT point and traffic-shaper for the
RNC.
The PC as implemented for the Wireless Cloud Element platform differs from the
implementation of the PC in the 9370 RNC in several significant ways, such as the use of
internal IP addresses to represent external NodeBs and the use of the source IP address in the
UDP NAT translation. Please keep this in mind while reading further.
7.4.1 OVERVIEW
The PC role consists of two component parts: TRM, a native Linux application
which manages the connections in the system, and the "fast-path", a highly optimized
packet processor application based on the 6Windgate product from 6Wind, which leverages
Intel's DPDK technology. In order to better balance load across the PCs and to allow new
features like dynamic load balancing across the PCs, the UMUs need to be able to address the
PCs with finer granularity than is possible with a single IP address per PC; therefore,
the concept of an internal IP address representing a number of NodeBs (referred to as n@ below)
is introduced to the system. In many respects this n@ can directly replace the IP address of
the PC as used within the 9370 RNC. UMUs, for example, will use this internal IP address when
sending traffic, without specific knowledge of which PC terminates this traffic. Note that in the
following figure internal IP addresses are designated with lowercase letters, so n@1 is the
internal IP address for the NodeB with the external IP address N@1. The internal IP address n@1
is used for both the control part of the PC (TRM) and the data-path part of the PC (Fast
Path).
To further illustrate the use of this address scheme, let's consider the case of traffic flowing in the downlink direction, that is, from the UMU to the NodeB. Once a connection is operational, the UMU will communicate with the PC that is handling this connection using these internal addresses (for example n@1). The IP infrastructure of the Wireless Cloud Element ensures that this IP packet gets transferred to the PC that has bound n@1, but the UMU does not have any specific knowledge of this binding. When this packet arrives at one of the PCs, the PC uses the same NAT tables found in the 9370 implementation to create a new IP packet header with the source IP address set to that of the PC and the destination IP address set to N@1. By using the same NAT table technology as the 9370 RNC, each connection uses independent NAT tables, which supports flexible address assignment at the NodeB, including multiple addresses for a single interface.
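As an illustration of this downlink translation, the following minimal Python sketch models the per-connection NAT lookup and header rewrite. All names (NatEntry, nat_table, rewrite_downlink) and the address strings are hypothetical and do not reflect the actual fast-path implementation.

    from dataclasses import dataclass

    @dataclass
    class NatEntry:
        external_nodeb_ip: str  # N@x: the NodeB's real, external address
        pc_source_ip: str       # address the PC uses as source toward the NodeB

    # Independent per-connection NAT entries, keyed by internal address (n@x);
    # this independence is what allows flexible address assignment at the NodeB.
    nat_table = {
        "n@1": NatEntry(external_nodeb_ip="N@1", pc_source_ip="PC@own"),
    }

    def rewrite_downlink(dst_internal, payload):
        # Build the new IP header: source = the PC itself, destination = N@x.
        entry = nat_table[dst_internal]
        return (entry.pc_source_ip, entry.external_nodeb_ip, payload)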
In addition to the IP address translation discussed above, the PC continues to operate the same Bandwidth Pool functionality found on previous versions of the RNC, so there is considerably more processing involved than discussed here. A significant difference between the 9370 RNC Bandwidth Pool service and that of the Wireless Cloud Element is that all of the bandwidth pool data is replicated across all of the PCs. This is possible because all of this data amounts to less than 20 MB and therefore does not contribute significantly to the overall memory requirements of the PC.
In order to use the n@ addresses within the Wireless Cloud Element RNC application,
they must be mapped to physical NodeBs each identified by a unique NodeB Id. This mapping is
done once for any given configuration of NodeBs based entirely on configuration data and can be
considered pseudo static data afterwards. As NodeBs are added and deleted from the RNC this
mapping will have to be re-done. The goal of the mapping is to evenly distribute the load of the
Iub network across the entirety of the n@ addresses; however, as there are many n@ addresses
(most likely in the range of 100 to 1000), precise balancing is not required. The ability of the
system to dynamically balance the work represented by the n@ address further lessens the
requirement for precise balancing. The following table describes a simple assignment algorithm
that should be sufficient:
NBid   n@
237    0
99     1
37     2
126    3
...
33     511
12     510
174    509
...
Note: the NBids are sorted by decreasing bandwidth usage.
Table 16: NodeB Id Mapping
It is the role of the 3gOAM to do this initial NBid to n@ assignment and distribute this
information to all of the CMUs.
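A minimal Python sketch of such an assignment follows, assuming the serpentine (zigzag) pattern suggested by Table 16 (n@ indices ascending 0..511, then descending). The function name and the serpentine interpretation are assumptions, not the actual 3gOAM algorithm.

    def assign_n_addresses(nbids_by_decreasing_bandwidth, n_count=512):
        # Deal NodeBs (heaviest first) across the n@ range in a zigzag so
        # that heavy and light NodeBs end up mixed on each n@ address.
        mapping = {}
        idx, step = 0, 1
        for nbid in nbids_by_decreasing_bandwidth:
            mapping[nbid] = idx
            if (step == 1 and idx == n_count - 1) or (step == -1 and idx == 0):
                step = -step        # turn around at either end of the range
            else:
                idx += step
        return mapping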
Once the NBid to n@ mapping is complete the system is ready for the NodeBs to attach
to the RNC.
Once the n@s have been mapped and distributed to all of the ARP tables in the system,
the PCs are ready for NodeB traffic.
N2 Communication Mesh: By integrating both the control plane and user plane functions into a single VM there is no need for the 9370's pooled architecture, and therefore no need to establish a mesh of connections between all of the TMUs and all of the RABs. There is a need to send messages between the UMUs and CMUs, but the number of CMUs in a typical RNC will be under ten and the UMUs do not require frequent communication with each other.
Scope of Failure: As both the control-plane and user-plane components for a single call are hosted on a UMU, when that component fails it has minimal impact on the other nodes in the system. Specifically, the CMUs and PCs need to clean up any resources consumed by the failed UMU, but these actions do not need to be tightly coordinated between CMU and PC instances.
Another benefit of integrating the control-plane and user-plane components into a single VM is automatic and fully dynamic balancing of the amount of computing allocated to each function. The ratio between control and user plane has shifted significantly, and occasionally rapidly, with the introduction of new features and handsets. Automatic balancing reduces operational expenses for the operator.
The 9370 RNC contains two central resource allocation components, RMAN and TBM, and a central SS7 protocol termination point, the NI. RMAN is a resource manager that allocates user-plane resources (RABs) as part of the connection setup and release procedures, and is also involved in recovery from failed nodes. TBM is a transport bearer manager that allocates UDP port numbers for the external links used within the RAN network. These ports change approximately once every ten seconds per active user, due both to link allocation for multiple services on the mobile device and to link optimization for minimizing mobile device power consumption. Given the central role of both RMAN and TBM in the operation of the RNC and their use in each connection setup, takedown or modification, these nodes rapidly become bottlenecks as the scale of the RNC is increased.
The virtual RNC eliminates the RMAN bottleneck by integrating the control-plane and user-plane components into the new UMU virtual machine. By eliminating the pool of user-plane resources and enforcing a one-to-one relationship between the two components, RMAN becomes a trivial selection algorithm with only one choice. Importantly, no major changes to either the control-plane or the user-plane component were required to eliminate RMAN.
TBM was not eliminated but instead distributed to all of the PC virtual machines within the virtual RNC. Each of the PCs connects to a predetermined set of cell sites, each with its own set of UDP/IP links. A new component called TRM, composed of software from the 9370's TBM function, is used to allocate all of the ports on the links assigned to a given PC. The overall message sequence for resource allocation is largely the same between the two versions of the RNC, thus maintaining maximum commonality between the two software streams.
The 3GPP standards that define the function of the 3G RNC specify that it must be associated with a single SS7 signalling point code. The 9370 RNC satisfies this requirement by terminating the higher layers of the SS7 protocol stack on a single, redundant node referred to as the Network Interface (NI). Since the rate of SS7 signalling is directly proportional to the size of the RNC, the centralized NI rapidly becomes a bottleneck.
The virtual RNC distributes the SS7 signalling associated with user equipment (SS7 connection-oriented messages) to the same UMU VMs handling the control and user plane for the particular user. Not only does this change allow near-linear scalability of overall capacity, but it also follows the previous pattern of limiting the scope of failure by co-locating all of the functions required for a particular user in a single location. Connectionless messages (e.g. paging requests for mobile-terminated calls) are terminated on the CMU, as there may not be a UMU currently handling that user's mobile device. More specifically, the UMU hosts the SCCP protocol and the CMU hosts the SCTP and M3UA protocols, although the implementation is a little more complex than this in order to handle procedures like RANAP relocation.
The WCE recognizes that one or more of these mechanisms are likely to be active within a virtualized network function and provides a platform fully compatible with such sparing mechanisms. Primarily this is achieved by ensuring that when the platform components themselves fail, these failures result in only a single fault in the application space. All components within the WCE mini data centre are redundant as follows:
The HP c7000 chassis provides slots for 16 dual socket Intel Xeon servers, a set of load
sharing power supplies and fan units, and a pair of 6125XLG Blade Switches.
Each of the servers has a pair of 10 GE NICs which are connected to the 6125 switches
in an active-active configuration such that either a link or switch failure does not reduce bandwidth
below 10 Gbps per blade.
The NetApp e5424 Storage Area Network (SAN) is equipped with 24 600 GB SAS drives with DDP RAID 6 support. This isolation ensures that RAID operations, such as rebuilding a RAID set after a failed drive is replaced, do not consume link bandwidth from the network functions, as RAID operations are unpredictable and consume bandwidth for long periods (rebuild times can exceed 24 hours).
As iSCSI SAN technology only allows a single node to mount a unit, a fully redundant Network Attached Storage (NAS) function is provided by the WCE in the form of a pair of virtual machines running a Symantec cluster file system. This implementation of a NAS is more cost-effective than dedicated hardware while still providing highly available access to storage volumes from virtual machines independent of where these virtual machines are physically allocated. The Disk Access virtual machines are redundant such that the only impact to applications with mounted volumes is a 20 to 30 second pause in connectivity; all mounts remain valid.
2 General: In a nominal system configuration, a single HW component failure (e.g. single server failure, single link/port failure, single disk drive failure) should not cause any reportable outage which is equal to or greater than 10% of capacity and lasts longer than 30 s.
Capacity impact: a single active CMU role failure should not cause more capacity loss than the capacity supported by the failed CMU role.
Capacity impact: a single active PC role failure should not cause more capacity loss than the capacity supported by the failed PC role.
Return to capacity: new calls coming in from the impacted NodeB after a PC failure should be handled by the spare PC in less than 30 s. Iu/Iur connections for any new calls are handled by the remaining PCs in the system, including the spare PC.
Load balancing: any PC load balancing shall not impact the existing calls.
Switchover: 3gOAM role switchover/failover should be completed in less than 30 s, i.e. from the time the active role goes down to the time the standby role becomes fully active.
9 UMU: Capacity: a single UMU role failure should not cause more capacity loss than the capacity supported by the failed UMU role.
Return to capacity: a failed UMU role needs to be recovered in 4 minutes (excluding server failure).
10 RNC: Return to service: RNC reset* should complete within 7 minutes, i.e. from the time the last cell is down to the time the first call is up.
Return to service: the RNC northbound interface is available within 7 minutes after RNC reset.
12 RNC: After successful initialization, the RNC should continue to provide call processing service (i.e. originating and terminating calls) when disk access is lost and recovered, i.e. no partial or total service outage.
13 RNC: After successful creation or upgrade of the RNC, any failure of LrcMgr should not impact normal operation of the RNC and should cause no impact on services provided by the RNC, e.g. no partial or total service outage.
14 RNC: The RNC should recover automatically after a total power outage within 12 minutes (dead office recovery).
Note: All the information regarding the carrier grade architecture, sparing model and VM failure handling is described in R&D document [Int_R&D_006].
IP addresses used within the tenants are typically statically allocated by the application software and are in the 169.254.0.0 range, which is a link-local address (LLA) subnet: an IP address that is intended only for communications within a local network. Routers do not forward packets with link-local addresses, so all internal communication is kept within the Wireless Cloud Element.
OA: 2
iLO: 16
Fabric Switch: 2
ESXi: 16
Others:
6125G Switch: 2
Note: IPv6 addresses can be used for everything except ports and SAN disk access addresses. For these we can use addresses from the 169.254.0.0 subnet, since they do not need to be routable. We are also investigating the use of IPv6 addresses on the RNC OAM interface. Also note that either GRE or VPN type tunneling can be used to reduce the number of public IP addresses required; see the WceIpTunneling section for more information.
Fixed IP
Fixed IP = <internal_subnet> + <hw_id> + 1
NOTE - although the internal encoding is described here for information, it could change at any time and assumptions must not be made about the IP address of a given RTE instance. Use the HwId APIs to derive an IP address, since these will always be up to date.
Examples:
3GOAM-0-1 = 169.254.0.0 + 0x0001 + 1 = 169.254.0.2
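The arithmetic of the example can be reproduced with the small Python sketch below; it is illustrative only since, per the note above, production code must derive addresses through the HwId APIs.

    import ipaddress

    def fixed_ip(internal_subnet, hw_id):
        # Fixed IP = <internal_subnet> + <hw_id> + 1
        return ipaddress.IPv4Address(internal_subnet) + hw_id + 1

    print(fixed_ip("169.254.0.0", 0x0001))  # 169.254.0.2, as in the example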
Reserved IP Ranges:
The number of external IP @s used depends on network size and feature activation. The following table describes the number of IP @s in a single-shelf RNC:
In order to limit the use of external, routable IP addresses in operators' networks it is possible to use GRE (IPv4 / IPv4) tunnels, and possibly VPN tunnels, to allow local non-routable addresses to be used for the infrastructure components. The following diagram depicts how these tunnels could be used:
The most significant benefit of the use of GRE (Generic Routing Encapsulation) arises where many Wireless Cloud Elements are configured within a single network, as the number of IP addresses required for the infrastructure can be very significant. Such an operator's network is depicted in the following diagram:
When using GRE the size of the IP packet increases, as an extra IP header is required; this can be a problem if the original packet was already at the maximum MTU size. To avoid this problem, when the original packet has the DF (Don't Fragment) bit set, the router performing the encapsulation must send an ICMP "Fragmentation Needed" control message back to the Wireless Cloud Element.
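The sketch below illustrates the encapsulation overhead problem, assuming a standard 20-byte outer IPv4 header and a basic 4-byte GRE header; the constants are generic figures, not WCE-specific values.

    MTU = 1500                      # example path MTU
    GRE_OVERHEAD = 20 + 4           # outer IPv4 header + basic GRE header

    def fits_after_encapsulation(packet_len):
        # A packet already at the MTU no longer fits once encapsulated.
        return packet_len + GRE_OVERHEAD <= MTU

    print(fits_after_encapsulation(1500))  # False: triggers the ICMP message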
For a configuration with 64 blade servers (WCE4), there will be 2 RNC tenants. The subnetting will be separated between the two tenants.
In that respect, the IP addressing plan is as follows:
- One UTRAN Telecom and UTRAN OAM traffic IP requirement for Tenant#1
- One UTRAN Telecom and UTRAN OAM traffic IP requirement for Tenant#2
Additionally, for each tenant it is assumed that there is full traffic flow separation per interface and traffic type (UP and CP). Each flow has its own VLAN and its own subnet. This multi-VLAN solution is compliant with the legacy RNC configuration. In that respect, each RNC has its own set of VLANs.
[Figure: IP subnetting requirements for the UTRAN Telecom CP and UP part of a WCE tenant (IuCS, IuPS and IuR CP and UP flows between the gateway router and the PC and CMU roles, with per-flow subnets/VLANs such as /26 and /28), applicable to both Tenant#1 and Tenant#2.]
[Figure: IP subnetting requirements for the UTRAN OAM for Tenant#1 and Tenant#2.]
A subset of Ethernet OAM will be supported by the WCE's switches. The WCE will support IEEE 802.3ah/IEEE 802.1ag in order to verify and troubleshoot connectivity with adjacent L2 switches or NHRs.
The WCE's LRC Mgr is an application, residing in its own virtual machine, that acts as a bridge between a virtualized network function and a Cloud O/S. Given that there are many Cloud O/Ss currently in existence, often with incompatible APIs, the LRC Mgr isolates network elements from this complexity. LRC Mgr is not a carrier grade component in itself and is not required for the normal operation of any network element. LRC Mgr extends the functionality of standard Cloud O/S systems in order to lessen the overhead when virtualizing standard network functions. Specifically, the LRC Mgr introduces or refines the concept of a Tenant.
The OpenStack definition of a tenant is as follows: A container used to group or isolate
resources and/or identity objects. Depending on the service operator, a tenant may map to a
customer, account, organization, or project.
LRC Mgr explicitly extends this definition to a network function as virtualized network
functions typically consist of multiple sets of VMs with each set composed of multiple instances of
a specific VM (or servers in OpenStack parlance). Each of the sets of VMs may implement a
specific sub-function of the overall network element. For example, the 3G RNC tenant is
composed of the following sets of VMs: 3gOAM, CMU, UMU and PC. Each set of VMs is
described by its own VM template which describes requirements for virtual CPUs, memory, etc.
To ensure there are sufficient physical resources to create a tenant, the LRC Mgr provides the ability to specify a resource reservation for the entire tenant, for a single VM type, or both. These reservations take the form of MHz of compute and MB of dynamic memory.
LRC Mgr provides mechanisms to act on an entire tenant with a single operation. For example, it is possible to reset an entire tenant, power an entire tenant off or on, or add a single VM to an existing tenant with the LRC Mgr.
In recognition that network functions require, and will have implemented, application-specific sparing mechanisms to achieve high levels of availability, LRC Mgr supports and facilitates these mechanisms with the ability to create affinity and anti-affinity rules. Such rules ensure that VMs are appropriately distributed across the physical compute infrastructure such that the failure of one of the hosts does not result in a VM-level failure beyond what the application is designed to handle. For example, within the 3G tenant there exists a set of PC VMs that implement an N+1 sparing mechanism within the set. To ensure the N+1 mechanism continues to function correctly in a cloud environment where the allocation of VMs to physical hardware is dynamic, no more than one of the VMs within this set may be allocated to any one physical host.
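A minimal sketch, assuming a simple VM-to-host placement map, of the anti-affinity property described above for an N+1-spared set such as the PC VMs; the names are illustrative, not the LRC Mgr API.

    def satisfies_anti_affinity(placement, vm_set):
        # placement: dict mapping VM name -> physical host name.
        hosts = [placement[vm] for vm in vm_set if vm in placement]
        return len(hosts) == len(set(hosts))  # every VM on a distinct host

    placement = {"PC-0": "blade-3", "PC-1": "blade-7", "PC-2": "blade-3"}
    print(satisfies_anti_affinity(placement, {"PC-0", "PC-1", "PC-2"}))  # False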
The vCenter Server is the central point for configuring, provisioning, and managing virtualized IT
environments or datacenters.
vCenter Server aggregates physical resources from multiple ESX/ESXi hosts and
presents a central collection of flexible resources for the system administrator to provision to
virtual machines in the virtual environment. vCenter Server components are user access control,
core services, distributed services, plug-ins, and various interfaces.
The User Access Control component allows the system administrator to create and
manage different levels of access to vCenter Server for different classes of users.
For example, a user class might manage and configure the physical virtualization server
hardware in the datacenter. Another user class might manage virtual resources within a particular
resource pool in the virtual machine cluster.
ESXi: hypervisor for a single server, with APIs for starting, restarting, killing and querying VMs, etc.
vCenter: centralized entity for managing many servers together; uses the ESXi APIs.
Cluster: set of servers managed by the same vCenter. A server can belong to only one cluster.
Resource Pool: logical representation of the computing capacity (CPU, memory).
Resource reservation: represents resources (CPU, memory) that are guaranteed.
vMotion: ability to move a VM from one server to another when both servers are managed by the same vCenter (not supported in the LR14.2W release).
vApp: set of VMs that logically comprise a single application. A vApp can be defined and deployed on a server or in a DRS-enabled cluster. DRS allows anti-affinity rules to be defined to enforce rules for individual VM placement.
HA: feature that runs in a cluster and sets up the ESXi hosts to monitor each other. If a server in the cluster fails, the VMs on that server are restarted on other server(s).
Failover host: new HA feature available with vSphere 5.0 that allows one or more hosts to be specified as failover hosts. No VMs are placed on a failover host. If a server fails, HA will restart all of its VMs on a failover host.
DRS: feature that runs in a cluster and is based on vMotion. DRS has three modes:
Manual: no automatic vMotion and no automatic startup of VMs on the "best" server; vSphere API indications about what DRS suggests are still available.
Semi-automatic: no automatic vMotion for load balancing, but automatic startup of VMs on the "best" server.
Automatic: load balancing via vMotion (at various levels, depending on how closely balanced you want the servers to be) and automatic startup of VMs on the "best" server. The decisions are based on past performance data collected from the hosts in the DRS cluster and stored in the vCenter server database, so for automatic DRS to function properly the vCenter server must be up and running.
Anti-affinity: DRS feature that allows specification of VMs such that they cannot run on the same server.
DPM: power management feature for DRS-based clusters. Under-utilized servers can be shut down and powered back up when needed; uses vMotion to move VMs to other servers.
Engineering Recommendation:
There is a VMware white paper which addresses the WCE Carrier Cloud solution:
https://umtsweb.ca.alcatel-lucent.com/wiki/pub/WcdmaRNC/LightRadioReferenceArchitecture/Alcatel_Lucent_WCE_Success_Story_FINAL_20131210-r1v70.pdf
The WCE platform will include an optional Disk Access tenant. For release 1, the Disk
Access tenant must be deployed before any 3G tenant is deployed as the 3G tenant will make
use of this functionality.
The Disk Access tenant provides:
- A single copy of the 3G software on the SAN for all VMs to use (software download of a given version should not need to occur more than once)
- Reliable spooling of counter data and call trace data, all written from the active 3gOAM VM
- Reliable configuration data (specifically the MIB), read/written from the active 3gOAM VM
- No single point of failure
In general we need, at minimum, both 1:1-spared 3gOAM VMs to access the same disk partition for software download, configuration data management, and counter/CT spooling. The reliability comes from RAID, so we should not need software-based shadowing/mirroring. We are trying to get out of the middleware business.
In order to be able to safely share the same disk across more than one VM, we will make use of Disk Access software. The Disk Access VMs will provide NFS and SFTP servers for shared disk access:
- A separate disk per WCE -- initially only used for 3G software, but possibly extended in a later release to include LRC Platform software and other tenant software as well.
- All 3gOAM VMs (across multiple RNC tenants in the same WCE) will have the ability to NFS-mount this disk via the Disk Access VMs.
- The file system is provided by the Disk Access VMs.
- Partitions and a directory structure are used to separate software versions.
- The disk is populated with RNC software as part of I&C.
- The LrcMgr northbound interface handles software version management on the disk after initial deployment.
- 3gOAM creates a RAM disk with that RNC's current software version.
- Other VMs PXE-boot from 3gOAM using the RAM disk.
The RNC Cplane is only visible to the external network through the CMUs and the Uplane is only
visible to the external network through the PCs. This means that externally routable subnets for
all traffic types will be provisioned against those two roles. Since the Iu Cplane, Iub Cplane and
Iub common channel Uplane will probably be implemented on different OS/core instances, three
subnets will be needed for the CMUs. The customer will have the option to provision one subnet
for all Uplane traffic on the PCs or provision individual subnets for each of the four Uplane traffic
types.
As further simplifications to the existing RNC architecture, there will no longer be
separate PCs for the Iu and Iub legs of a given call. Instead, both legs of a call will be handled by
the same PC. As well, PCs will no longer be warm-spared but will be cold-spared instead. When
a PC fails, all the calls associated with that PC will be dropped and the PC will be re-instantiated
elsewhere in the cloud.
The colored lines in this figure imply a separate VLAN and VIDs. The point at which each of these colored lines meets a VM implies a separate vNIC in a VM's template. The VID for each VLAN is configured by the customer in the port group VLAN attribute of the vNIC. Frames leaving the VM via the vNIC will be tagged and assigned the appropriate VID by the virtual Distributed Switch (vDS). It is important that each tenant maintain its own unique internal VLAN (shown as dark blue and yellow lines) for communication between tenant components so that the 169.254.0.0/16 IP subnet may be reused between tenants.
VMware also allows the assignment of a single p-bit value to a port group. However, this is insufficient for most application flows, since any given flow (as identified by a VLAN or vNIC) will carry frames of different priorities within the flow. An example from the telecom traffic domain is the Iub Uplane flow, whose frames require different priorities depending on the type of call they represent (e.g. voice, streaming, interactive/background, etc.). Therefore, a more flexible p-bit marking mechanism is required. As frames leave the blade and enter the Blade Switch, the DSCP value from a frame's IP header will be translated to a configured p-bit value and inserted into the Ethernet header (i.e. the VLAN tag) by the switch.
This tagging and p-bit marking strategy applies to all the VLANs in Figure above with the
exception of the ESXi/SAN and ESXi/vCenter paths which are depicted by the solid black and red
lines emanating from the ESXi elements.
Non Telecom VLAN Recommendations:
- The WCE will support at least 3 non-telecom VLANs: OAM (red), LRCEMgr internal (grey) and iSCSI (black).
- The 3G tenant introduces 2 more non-telecom VLANs: 3G tenant internal (blue) and DA internal (purple).
- The iSCSI VLAN's VID will be 4094 and the p-bit will be 0. These values are hardcoded in the blades' BIOS by System House but may be changed by I&C.
- The iSCSI vNIC must have the same VID (4094) configured in its port group.
- The recommendation for the OAM VID is 4093. This value must be configured manually in each ESXi by I&C, followed by a reset of the blade.
- The same VID must be configured in the OAM vNIC port group.
- All other VLANs (LRCEMgr internal, DA internal, 3G tenant internal) will have a customer-selected VID configured in the appropriate port group.
- P-bit values for all frames leaving VMs will be set by the Blade Switch downlink ports based on a configurable DSCP to p-bit mapping for DSCP values other than 0 (see the sketch after this list).
- The recommendation for call trace frames coming from the CMUs and UMUs is a DSCP value of CS1, the lowest-priority class.
- Another rule on the Blade Switch will identify OAM VLAN frames from the ESXi (DSCP = 0) based on VID = 4093 and set the p-bit to 2, according to recommendations. I&C or the customer may configure any other p-bit value in this rule.
- I&C or the customer may also configure a rule for the iSCSI VLAN (VID = 4094) to override the 0 p-bit set by the BIOS.
- Uplink ports to the SAN and OAM devices (WMS, vCenter) are configured as access ports with PVIDs of 4094 and 4093 respectively. Rules to set the p-bits in ingress frames must be defined on these ports.
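The following Python sketch summarizes the marking rules above. The dscp_to_pbit table contents are hypothetical examples of what I&C might configure, while the VID 4093 / p-bit 2 default follows the recommendation.

    dscp_to_pbit = {46: 5, 34: 4, 8: 1}   # example I&C-configured mapping

    def mark_pbit(dscp, vid):
        if dscp != 0:
            # Downlink ports translate non-zero DSCP to a configured p-bit.
            return dscp_to_pbit.get(dscp, 0)
        if vid == 4093:
            # OAM VLAN frames from the ESXi (DSCP = 0) default to p-bit 2.
            return 2
        return 0                           # e.g. iSCSI unless overridden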
In addition to the non-telecom VLANs described earlier, customers will want to separate
telecom traffic types onto different VLANs. Figure below depicts a typical way of separating
external/telecom traffic types for the RNC. Most customers want to separate Cplane and Uplane
on all major interfaces as well as separating IuPC traffic. As explained later, IuBC and iSMLC
traffic cannot be put on different VLANs due to the way these functions are implemented in the
RNC. The telecom VLANs are depicted terminating on the gateway router since this is likely the
scope of their relevance. As packets for the various RNC traffic types are forwarded to the
external network (i.e. to the left of the gateway router) they may be put onto a number of different
technologies as described in the Transport Overview.
The figure below shows which VLANs (both telecom and non-telecom) are associated with each VM type for the WCE platform and RNC VMs. Each VLAN requires a separate vNIC on a VM. Each colour indicates a different VID configured in the vNIC's port group. Although the WCE/RNC may be required to support as many as 15 telecom and non-telecom VLANs, each individual VM type (i.e. RNC role) will only have to support a subset of those VLANs based on the type of external traffic a given role generates. Note that the largest number of VLANs any role will have to support is 8 (on the CMU).
The red star under the OAM vNIC highlights the fact that this VLAN will carry several different types of application traffic, including 3G OAM, RNC CallTrace traffic from the 3G OAM, CMU & UMU, LRCE Manager traffic to the vCenter & WMS, ESXi traffic to the vCenter and any other WCE tenant's OAM traffic. In addition, this VLAN will carry IuBC or iSMLC traffic if these functions are configured on the RNC. These functions must use the OAM VLAN because they use the OAM IP address of the OMU, which is part of the 3G OAM VM.
The Iub CP vNIC on the 3G OAM VM is there to allow the OMU to receive Attach
messages from the NodeBs. The first green star brings attention to the fact that this vNIC is on
the same VLAN as the Iub Cplane. The Attach functionality is assigned an IP address from the
Iub Cplane subnet and therefore must be on the same VLAN as the Iub Cplane.
The second green star in Figure above highlights the fact that this VLAN may carry a new
traffic type for the WCE version of the RNC. This new traffic type is the Iub Common Channel
(CC) Uplane. Common channels reside on the PC in the 9370 RNC but have been moved to the
CMU for the WCE. A few options were contemplated for this VLAN depending on the IP address
strategy adopted. The first option involves using the same IP address as the Iub Cplane for that
CMU. This means that the Iub CC Uplane frames must be carried on the Iub Cplane VLAN. This
is the preferred option for the final product due to IP address consumption concerns.
A second option involves defining a separate IP subnet for the Iub CC Uplane traffic. This
option would allow the Iub CC Uplane traffic to be carried on the same VLAN as the Iub Cplane or
on its own VLAN. The latter VLAN option is highlighted by the turquoise star in Figure above. This
IP address strategy is what is currently used in the labs because at the moment the software
cannot be configured to separate traffic for two different functions (Cplane and Uplane) on the
same IP address. This is a temporary arrangement until the software can make such a distinction.
Note that the CMU template will include a vNIC for the Iub CC Uplane. This will allow a customer
to segregate the Iub CC Uplane from other CMU traffic if IP addresses are not a concern.
Please note that the figure above shows the maximum VLAN configuration. If not all RNC functionalities are available, in particular the IuPS NAT function, the IuPS Uplane terminates on the UMUs instead; see the figure below:
The WCE platform transport is built upon a single L2 Ethernet LAN. This is important for several reasons, the principal one being to alleviate customer concerns regarding IPv4 address usage by allowing internal WCE components to use link-local (169.254.0.0) addresses, and to facilitate failover of software components within the WCE. The platform will provide a basic set of transport capabilities as shown in the figure above. This set includes 10 Gbps interfaces that may be down-rated to 1 Gbps if that is what is supported on the NHRs.
The WCE supports the segregation of different traffic flow types onto separate VLANs. A number of infrastructure VLANs are defined within the WCE to separate non-telecom flows, such as OAM, disk access, and LRCE Manager flows, from tenant payload flows. In addition, each tenant will have its own internal VLAN for inter-component communication. Since each tenant's VLAN will be unique within the WCE, the 169.254.0.0 component IP addresses may be reused from one tenant to the next. In the case of the RNC tenant, VLANs are used to create distinct telecom flows such as IuCS, IuPS, Iur and Iub Uplane and Cplane. The overall WCE VLAN strategy is described in VLAN Tagging and Configuration.
Engineering Recommendation:
For the Transport Engineering Guidelines for the WCE RNC tenant, please refer to the Iu TEG LR14.2 [Gl_ENG_001].
The vDS will filter frames based on VLAN IDs. The vDS has a presence on all blades of a given VMware cluster.
Note the paths the various frames will take through the IRF domain. Since, in this
example, the NHR routers have implemented MC-LAG in an active/standby manner, there is only
one uplink path available in the domain. IRF will redirect frames over the IRF links to reach a
viable uplink.
PSE has three overload levels: minor, major and critical. CNP has five overload levels: minor, major, critical, critical platform 1 and critical platform 2.
In 9370 a mapping is used between them:
PSE minor overload level <-> CNP minor overload level
PSE major overload level <-> CNP critical overload level
PSE critical overload level <-> CNP critical platform 2 overload level
The mapping will be kept for LRC for computing the local overload for CMU and
UMU:
Local PSE Overload level = max of {PSE overload levels}
Local CNP Overload level = max of {CNP overload levels}
Local Overload Level = max of {PSE overload levels, CNP overload levels}.
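A small sketch of this computation; the level ordering mirrors the list above and the numeric encoding is illustrative.

    LEVELS = ["NO_OVERLOAD", "MINOR", "MAJOR", "CRITICAL",
              "CRITICAL_PLATFORM_1", "CRITICAL_PLATFORM_2"]
    RANK = {name: i for i, name in enumerate(LEVELS)}

    def local_overload(pse_levels, cnp_levels):
        # "max" here means the most severe level reported by any PSE or CNP.
        return max(pse_levels + cnp_levels, key=RANK.__getitem__)

    print(local_overload(["MINOR"], ["NO_OVERLOAD", "CRITICAL"]))  # CRITICAL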
The strategy is to keep 9370 behavior unless changes are necessary. The changes from
9370 are color-coded in the table:
1) RRC Connection Request filtering in CMU U-Plane due to TMU System Overload is
changed to Local CNP overload.
2) Paging filtering in NI due to RAB System Overload is removed.
3) U-Plane CAC functions due to overload are removed. Should there be a need for CAC, it will be added in CallP and made OAM-configurable, as in the PCH state transition case.
Overload control functions performed by NI:
1) For the ingress direction (from CN to RNC), since SCCP Access runs in the C-Plane EE, it makes sense for sccpRouting to check the C-Plane EE queue overload level before sending messages to it.
2) For the egress direction, it does not make sense for the NI to drop messages that have already been received, but this is what is implemented in the 9370.
Message type         Component            Overload level                     Action
RRC Connection       CMU U-Plane          CRITICAL PLATFORM 1                Reject all except emergency calls (1)
Request (cont.)                           CRITICAL PLATFORM 2                Filter all
Paging Request [3]   CMU U-Plane          No overload                        Nominal behaviour
                                          MINOR                              Filter 1 out of 2
                                          MAJOR, CRITICAL                    Filter all
                     UMU Callp (RncCom)   No overload, MINOR                 Nominal behaviour
                                          MAJOR                              Filter 1 out of Y
                                          CRITICAL, CRITICAL PLATFORM 1 & 2  Filter all
                     NI (SccpRouting)     No overload                        Nominal behaviour
                                          MINOR                              Filter 1 out of 2
                                          MAJOR, CRITICAL,                   Filter all
                                          CRITICAL PLATFORM 2
SCCP messages        NI (SccpRouting)     No overload                        Nominal behaviour
except Paging                             MINOR                              Filter 1 out of 10
                                          MAJOR                              Filter 1 out of 5
                                          CRITICAL and above                 Filter all
PCH upsizing or      UMU Callp            No overload, MINOR                 Nominal behaviour
downsizing                                MAJOR                              Filter 1 out of Z
                                          CRITICAL
At MAJOR_OVERLOAD and CRITICAL_OVERLOAD, CTa, CTb, OT-RNC and CTn sessions are suspended.
Engineering Recommendation:
For the LR14.2W release, one cluster can handle 32 blades; this is due to a VMware restriction. Configurations range from WCE1, which handles 16 blade servers, through WCE4, which handles 64 blade servers. As already described in the VM configuration section, the table below gives the rules applicable to VM placement for LR14.2W.
On a WCE system, blades/hosts can be added one or more at a time, in service with no impact, and roles can then be dynamically added for growth. The minimum configuration supported is 6 blades/hosts, but this does not give full carrier grade availability.
One vRNC (virtual or logical RNC) can span one cluster of up to 32 hosts. The table below shows the maximum composition of one vRNC:
10.2.1 TRAFFIC
One UMU VM is composed of 3 UMU-Vxells, each handling 4K total users (3K PCH, 1K DCH/FACH):
Up to 780 000 users per vRNC
3K PCH users, with a combined maximum of 3K URA_PCH and 1K CELL_PCH
1K FACH/DCH users, with a combined maximum of 850 DCH and 700 FACH
This gives a total of 585 000 PCH and 195 000 DCH/FACH users per vRNC.
Reminder: the HP blades used in the WCE shelf are built with 20 cores each.
Rule:
WCE1 Conf: 16 blades x 20 = 320 cores
WCE2 Conf: 32 blades x 20 = 640 cores
WCE3 Conf: 48 blades x 20 = 960 cores
WCE4 Conf: 64 blades x 20 = 1280 cores
Virtual Machine Configuration:
1 x UMU VM uses 4 cores
1 x CMU VM uses 4 cores
1 x PC VM uses 4 cores
1 x 3gOAM VM uses 2 cores
1 x DA VM uses 2 cores
1 x LRC Mgr VM uses 2 cores
The maximum size of one vRNC is up to 65 UMU, 16 CMU, 30 PC, 2 3gOAM, 2 DA and 2 LRC Mgr VMs, located on a 32-blade (WCE2) system configuration.
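As a worked check of these rules (Python), the maximum vRNC fits comfortably within the WCE2 core budget. The per-VM core counts and VM counts are taken from the rules above; the helper itself is illustrative.

    CORES_PER_VM = {"UMU": 4, "CMU": 4, "PC": 4, "3gOAM": 2, "DA": 2, "LRC Mgr": 2}
    MAX_VRNC     = {"UMU": 65, "CMU": 16, "PC": 30, "3gOAM": 2, "DA": 2, "LRC Mgr": 2}

    cores = sum(CORES_PER_VM[role] * count for role, count in MAX_VRNC.items())
    print(cores)   # 456 cores, within the 640 cores of a WCE2

    users = MAX_VRNC["UMU"] * 3 * 4000   # 3 UMU-Vxells per UMU, 4K users each
    print(users)   # 780000, matching "up to 780 000 users per vRNC"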
Rule:
WCE in LR14 (Release 1) can manage up to 12000 cells with 2400 cells per
vRNC (up to five vRNCs per WCE).
Capacity licensing is the ability to issue a license tied to capacity. The following RNC capacity metrics will be licensed:
Erlangs
Throughput (Mbps)
Cells
Ref Subscribers (future proof for anticipated M2M)
Using the customer's call profile, the network configuration/engineering team will translate the required Erlangs and Mbps into a specific number of UMUs and CMUs. The RNC will not limit the throughput of a cell, but rather the overall throughput of the RNC, by limiting the number of UMUs and CMUs.
Each UMU is rated for the maximum capacity with each release. The number of UMUs licensed to the
customer will depend on the contract negotiations for the system. The number of UMUs licensed will
be applied to the License Key Delivery Infrastructure (LKDI). During provisioning of the Radio Network
Controller (RNC) tenant, the Management System (WMS) verifies the UMU licenses and provisions
the correct number of UMUs on the RNC.
The WCE RNC combines the control plane and user plane functions within a single virtual
machine, on a per user basis.
This User Management Unit (called the UMU) is the single element responsible for all UE management aspects, including both data traffic and signalling.
UMU is under licensing control. 1 License per UMU
Please refer to the WCE MOPG for UMU licensing Order Code [Int_Eng_001]
Engineering Recommendation:
For LR14.2W, the current status indicates that one blade server is able to provide about 100 Mbps. So 64 blade servers with Ivy Bridge processors (scaling factor > 1.33 vs Sandy Bridge) are able to provide 6.4 Gbps of Iu throughput. The result includes normal mode provisioning but excludes the ciphering, multi-VLAN and GTPU-NAT features.
Traffic Profile Characteristics influence the Iu throughput a 9771 WCE can achieve.
Virtual Machines CPU Utilization: represents the CPU utilization of each Virtual
Machine while processing calls and handling traffic.
For estimating the number of blades and VMs for a given customer traffic profile, the RNC RCT / Companion Tool can provide the evaluation.
1. Two different regions in the customer's network were identified as having distinctly different call profiles, particularly in terms of User Plane throughput:
Zone 1 Profile: (High smart phone network) Contains (8) 9370 RNCs represented by
RNC-A through H
Zone 2 Profile: (Data at home network) Contains (5) 9370 RNCs represented by RNC-I
through M
Zone 1
RNC    Throughput in Mbps (UL+DL)
RNC A 362
RNC B 428
RNC C 359
RNC D 381
RNC E 375
RNC F 138
RNC G 329
RNC H 341
Aggregated 2713
Zone 2
RNC    Throughput in Mbps (UL+DL)
RNC-I 625
RNC-J 634
RNC-K 753
RNC-L 667
RNC-M 542
Aggregated 3221
According to simulations for Zone 1, a 9771WCE could off-load the following capacity
given Zone 1 Traffic Profile:
Note: Capacities shown are from an application layer perspective, not at the L1 level
In this case, the (32) blade server configuration, given the Zone 1 traffic profile, achieves ~2750 Mbps.
Recall from previous zone 1 table: Aggregated 2713
According to simulations for Zone 2, a 9771 WCE could off-load the following capacity
given Zone 2 Traffic Profile:
Note: Capacities shown are from an application layer perspective, not at the L1 level
In this case, the (22) blade server configuration, given the Zone 2 traffic profile, achieves ~3300 Mbps.
Recall from previous zone 2 table: Aggregated 3221
To obtain an approximate value of the maximum IuPS throughput that can be reached when the vRNC has its maximum number of VMs (65 active UMUs in LR14.2), the following formula is used for the projection:
IuPS Throughput of vRNC = Max total throughput per UMU * (1 + 0.024) * 65
The 2.4 % is the coefficient that scales the throughput from computing (1024-based) units to metric (1000-based) units.
65 is the maximum number of UMUs in a vRNC in LR14.2.
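A one-line helper making the projection concrete; the per-UMU throughput figure is a placeholder, as the measured value depends on the release and traffic profile.

    def iups_throughput_mbps(max_throughput_per_umu_mbps, num_umus=65):
        # 1.024 scales from computing (1024-based) to metric (1000-based) units.
        return max_throughput_per_umu_mbps * (1 + 0.024) * num_umus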
Restriction:
Note: For the WCE there is not yet a clearly defined capacity engineering limit; one cannot simply assume a maximum CPU engineering limit of 80%. Engineering limits are specific to the metric and system they are applied against.
The Shadow Tenant may differ from the Service Providing Tenant in any aspect of its software, even including a different or patched version of Linux. This means we have complete flexibility to change anything we want on a software upgrade.
This is something we could not do in the physical world. Then, when the Shadow Tenant becomes the Service Providing Tenant, we will have minimal outage. We won't have a scalability problem with needing to boot up, load, and deliver configuration data from a centralized VM to many VMs: this will all have been done in advance. It is a true improvement. It also means that after the upgrade happens, the n-1 VMs still exist and become the new Shadow Tenant, which makes rollback very fast as well.
Another 3G-specific use of the Shadow concept in release 1 is MIB activation. Today, for some critical reconfigurations, the cnode MIB must be rebuilt and the RNC reset to come up with the new MIB. If we have 1000 VMs, an RNC reset will take much longer than the 5 minutes the customer gets today. For the WCE we can deploy a Shadow RNC with the new MIB and have minimal outage.
Additionally, now that our machines are virtual instead of physical, we can actually modify
the machines that we run on by deploying new VM templates. So as part of deploying the
Shadow Tenant, we can modify the number of cores, memory and cpu reservation values, and
number and types of machines that make up the Tenant.
The following phases or states exist for the Shadow Management activity:
phase 1: create Shadow Tenant instance
phase 2: start Shadow
phase 3: switchover
phase 3r: rollback
phase 4: remove old Service Providing Tenant instance
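These phases can be pictured as a small state machine; the state names below are illustrative glosses of the phases, not LrcMgr identifiers.

    TRANSITIONS = {
        "NO_SHADOW":      {"phase1": "SHADOW_CREATED"},
        "SHADOW_CREATED": {"phase2": "SHADOW_RUNNING"},
        "SHADOW_RUNNING": {"phase3": "SWITCHED_OVER"},
        "SWITCHED_OVER":  {"phase3r": "SHADOW_RUNNING",   # rollback
                           "phase4": "NO_SHADOW"},        # delete old tenant
    }

    def next_state(state, phase):
        return TRANSITIONS[state][phase]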
When the customer is convinced they will not want to roll back, the shadow Tenant instance is deleted as a configuration change at the LrcMgr northbound interface; LrcMgr will then power down and remove the shadow VMs.
To ensure the shadow tenant does not interfere with operation of the service providing
tenant, the shadow tenant is isolated from the network until the switch of activity between the two
instances. This switch of activity is accomplished by selectively disabling and enabling virtual
network interfaces on the service providing and shadow tenant virtual machines.
A powerful by-product of the shadow upgrade is the ability to perform a roll-back of an entire network function to its prior state should the upgrade be unsuccessful. The freedom to almost seamlessly switch between versions of a network element has the potential to radically change traditional maintenance procedures.
End of the document