
UA06.0 9370 RNC Product Specification


Document number: UMT/RNC/DD/010600
Document issue: 05.01 / EN
Document status: Preliminary
Date: 27/May/2008
Passing on or copying of this document, use and communication of its contents not permitted without
Alcatel-Lucent written authorization
Copyright © 2008 Alcatel-Lucent, All Rights Reserved


Printed in Canada



UNCONTROLLED COPY: The master of this document is stored in an electronic database and is
write-protected; it may be altered only by authorized persons. While copies may be printed, printing is not
recommended. Viewing the master electronically ensures access to the current issue. Any hardcopies taken
must be regarded as uncontrolled copies.
ALCATEL-LUCENT CONFIDENTIAL: The information contained in this document is the property of Alcatel-
Lucent. Except as expressly authorized in writing by Alcatel-Lucent, the holder shall keep all information
contained herein confidential, shall disclose the information only to its employees with a need to know, and shall
protect the information from disclosure and dissemination to third parties. Except as expressly authorized in
writing by Alcatel-Lucent, the holder is granted no rights to use the information contained herein. If you have
received this document in error, please notify the sender and destroy it immediately.
UA06.0 RNC Product Specification
Passing on or copying of this document, use and communication of its contents not permitted without Alcatel-Lucent written authorization
05.01 / EN Preliminary Page 2/160
Copyright 2008 by Alcatel-Lucent Technologies. All Rights Reserved.
About Alcatel-Lucent
Alcatel-Lucent (Euronext Paris and NYSE: ALU) provides solutions that enable service
providers, enterprises and governments worldwide to deliver voice, data and video
communication services to end users. As a leader in fixed, mobile and converged broadband
networking, IP technologies, applications, and services, Alcatel-Lucent offers the end-to-end
solutions that enable compelling communications services for people at home, at work and on
the move. For more information, visit Alcatel-Lucent on the Internet: http://www.alcatel-lucent.com
Notice
The information contained in this document is subject to change without notice. At the time of
publication, it reflects the latest information on Alcatel-Lucent's offer; however, our policy of
continuing development may result in improvements or changes to the specifications described.
Trademarks
Alcatel-Lucent and the Alcatel-Lucent logo are trademarks and service marks of Alcatel-Lucent.





CONTENTS
1 UMTS RNC OVERVIEW..............................................................................................................13
1.1 UMTS NETWORK ARCHITECTURE ...........................................................................................13
1.2 RNC WITHIN THE UTRAN.......................................................................................................14
1.3 RNC GENERAL DESCRIPTION..................................................................................................18
1.3.1 Connectivity...................................................................................................................18
1.3.2 Packaging......................................................................................................................18
1.3.3 3GPP Release Compliance..........................................................................................18
1.4 RNC FUNCTIONALITY..............................................................................................................19
1.4.1 Control Plane ................................................................................................................19
1.4.2 User Plane.....................................................................................................................20
2 RNC IMPLEMENTATION............................................................................................................22
2.1 FUNCTION MAPPING................................................................................................................22
2.2 RNC CONFIGURATIONS ..........................................................................................................23
2.3 RNC PROCESSOR ROLES.......................................................................................................25
3 RNC HARDWARE DESCRIPTION .............................................................................................29
3.1 RNC SYSTEM-LEVEL HARDWARE DETAILS ..............................................................................29
3.2 MSS15K PLATFORM COMMON HARDWARE DESCRIPTION........................................................30
3.2.1 MSS15K Equipment Frame..........................................................................................30
3.2.2 Multi-service Switch 15000 Fabric................................................................................36
3.2.3 Power Distribution.........................................................................................................38
3.2.4 Cooling..........................................................................................................................47
3.2.5 MSS15K Control Processors ........................................................................................49
3.2.6 MSS15K Function Processors......................................................................................57
3.3 9370 RNC SPECIFIC HARDWARE DESCRIPTION.......................................................................72
3.3.1 Packet Server Function Processor ...............................................................................72
3.3.2 Dual-Core Packet Server Overview..............................................................................79
4 RNC SYSTEMS & ENVIRONMENT............................................................................................87
4.1 9370 RNC SINGLE MARKET PACKAGES ...................................................................................87
4.2 9370 RNC EXPANSION AND DUAL MARKET PACKAGES ............................................................90
4.3 PHYSICAL, ENVIRONMENTAL, ELECTRICAL CHARACTERISTICS ...................................................92
5 RNC SOFTWARE OVERVIEW...................................................................................................93
5.1 RNC SOFTWARE ARCHITECTURE OBJECTIVES.........................................................................93
5.2 RNC SOFTWARE DOMAINS .....................................................................................................94
5.3 RNC PLATFORM SOFTWARE ...................................................................................................96
5.4 RNC TRANSPORT INTERFACES ...............................................................................................99
5.5 PATH MANAGEMENT ............................................................................................................ 103
5.6 CONTROL PLANE SOFTWARE................................................................................................ 105
5.6.1 TMU Functions........................................................................................................... 106
5.6.2 Call Processing.......................................................................................................... 107
5.6.3 Cell Processing.......................................................................................................... 110
5.6.4 Iu Protocol Stacks...................................................................................... 112
5.6.5 Configuration Data..................................................................................................... 112
5.7 USER PLANE SOFTWARE...................................................................................................... 113
6 RNC CARRIER GRADE........................................................................................................... 114
6.1 SPARING MODEL ................................................................................................................. 114
6.1.1 PMC-M sparing.......................................................................................................... 114
6.1.2 PC and path sparing.................................................................................................. 115
6.1.3 RAB and cell logical group sparing............................................................................ 118
6.1.4 TMU sparing for CELL C-plane.................................................................................. 118
6.1.5 NI and PDC sparing for SS7 stack............................................................................. 120
6.1.6 OMU sparing.............................................................................................................. 121
6.1.7 OC3 sparing............................................................................................................... 121
6.1.8 CP Sparing................................................................................................................. 122
6.2 IP UTRAN CARRIER GRADE STRATEGY AND SPARING MODEL .............................................. 122
6.2.1 Protected IP routes on 4pGE card for IP transport sparing........................................ 123
6.2.2 NI and PDC sparing for C-plane traffic on Iu/Iur......................................... 124
6.2.3 PC sparing for U-plane traffic on Iub.......................................................... 125
6.2.4 PMC-M sparing ........................................................................................... 126
7 OAM&P..................................................................................................................................... 127
7.1 RNC OAM STRATEGY ......................................................................................................... 127
7.2 OMC-R / OMC-B................................................................................................................ 127
7.3 MDM AND MDP .................................................................................................................. 127
7.4 OAM FRAMEWORK .............................................................................................................. 128
7.5 OFF-LINE CONFIGURATION................................................................................................... 128
7.6 ROC/NOC.......................................................................................................................... 128
7.7 RNC OAM.......................................................................................................................... 129
7.7.1 Platform...................................................................................................................... 129
7.7.2 Control Plane/User Plane OAM Applications............................................................. 130
7.8 FAULT MANAGEMENT........................................................................................................... 131
7.8.1 Alarms........................................................................................................................ 131
7.9 PERFORMANCE MANAGEMENT ............................................................................................. 132
7.9.1 Statistics & Counters.................................................................................................. 132
7.9.2 UMTS Application Counters....................................................................................... 133
7.9.3 Trace.......................................................................................................................... 133
7.10 CONFIGURATION MANAGEMENT............................................................................................ 134
7.10.1 Overview.................................................................................................................... 134
7.10.2 Configuration Management........................................................................................ 135
7.10.3 Off-line Configuration................................................................................................. 136
7.11 IN-BAND / OUT-OF-BAND CONNECTIVITY............................................................................... 136
7.11.1 Out-of-Band OAM&P.................................................................................................. 136
7.11.2 In-Band OAM&P ........................................................................................ 137
7.12 UPGRADE............................................................................................................................ 138
7.12.1 Hardware Upgrade..................................................................................................... 138
7.12.2 Software Upgrade...................................................................................................... 139
7.12.3 MIB Upgrade.............................................................................................................. 139
7.12.4 RNC Patching ............................................................................................................ 140
8 TECHNICAL SPECIFICATIONS.............................................................................................. 141
8.1 DEPENDABILITY ................................................................................................................... 141
8.1.1 Definitions................................................................................................................... 141
8.1.2 System Outage Measurements and Objectives......................................................... 143
8.1.3 Reliability Block Diagram........................................................................................... 144
8.1.4 Hardware Reliability Objectives ................................................................................. 145
8.1.5 Software Reliability Objectives................................................................................... 145
8.2 DIMENSIONING..................................................................................................................... 147
8.3 CAPACITY............................................................................................................................ 148
8.4 KEY PERFORMANCE INDICATORS .......................................................................................... 149
8.5 SECURITY............................................................................................................................ 149
8.5.1 IPSEC With Radius Authentication............................................................................ 149
8.5.2 IPSEC With IKE ......................................................................................................... 149
8.5.3 Secure Shell............................................................................................................... 149
8.5.4 Security Warning Banner........................................................................................... 149
8.5.5 Security Logs ............................................................................................................. 149
8.6 POWER ............................................................................................................................... 150
8.6.1 Power consumption.................................................................................................... 153
8.7 REGULATORY COMPLIANCE .................................................................................................. 154
8.7.1 Grounding................................................................................................................... 154
8.7.2 Power and Grounding Safety Standards ................................................................... 154
8.7.3 General....................................................................................................................... 155
8.7.4 Customer Requirements Summary............................................................................ 156
9 ABBREVIATIONS AND DEFINITIONS.................................................................................... 158
9.1 ABBREVIATIONS................................................................................................................... 158



FIGURES

Figure 1-1 : UMTS network architecture......................................................................................................... 13
Figure 1-2 UTRAN Architecture....................................................................................................................... 14
Figure 1-3: Role of Drift and Serving RNC..................................................................................................... 17
Figure 1-4: Iub Control Plane Protocol Stacks .............................................................................. 19
Figure 1-5: Iur and Iu Control Plane Stacks................................................................................... 20
Figure 1-6: Iub User Plane Protocol Stacks .................................................................................... 20
Figure 1-7 Iur and IuPS User Plane Stacks ................................................................................................... 20
Figure 1-8: RNC functions and data flows...................................................................................................... 21
Figure 3-1: 9370 RNC Single Equipment Configuration in MSS15K Frame .......................................... 30
Figure 3-2: RNC Frames in a Bolt Together Line-up.................................................................................... 31
Figure 3-3: RNC Frame Footprint with Aisle Allowances............................................................................. 32
Figure 3-4: RNC Standalone Frame, View with Side Panels and Front Door........................................... 33
Figure 3-5: Frame Showing Side Panels and Cable Covers....................................................................... 34
Figure 3-6: RNC Brandline Top Cover............................................................................................................ 35
Figure 3-7: RNC Brandline Top Cover with Alarm LED................................................................................ 35
Figure 3-8: Breaker Interface Panel Front View............................................................................................ 39
Figure 3-9: Power Distribution Interconnect Schematic............................................................................... 40
Figure 3-10: BIM Module................................................................................................................................... 41
Figure 3-11: BIM / Shelf Assignment............................................................................................................... 41
Figure 3-12: Power Interface Module.............................................................................................................. 42
Figure 3-13: PIM Placement............................................................................................................................. 43
Figure 3-14: BIP Alarm Module........................................................................................................................ 44
Figure 3-15: Alarm Module Interconnect Schematic..................................................................................... 44
Figure 3-16: BITS / Alarm Module Location................................................................................................... 46
Figure 3-17: MAC Address Module Location................................................................................................. 47
Figure 3-18: Lower Shelf Cooling Unit............................................................................................................ 48
Figure 3-19: Frame/Shelf Level Airflow........................................................................................................... 49
Figure 3-20: CP3 Block Diagram..................................................................................................................... 51
Figure 3-21: CP4 block diagram...................................................................................................................... 53
Figure 3-22: PDB4 block diagram.................................................................................................................... 54
Figure 3-23: Control Processor Faceplate...................................................................................................... 56
Figure 3-24: Datapath between the CP and PSFP CPUs............................................................................ 57
Figure 3-25: MSHS 16-Port OC3/STM1 FP Faceplate................................................................................. 60
Figure 3-26: MSHS 16-Port OC3/STM1 FP Block Diagram........................................................................ 61
Figure 3-27: MS3 16-Port OC3/STM1 FP Faceplate.................................................................................... 63
Figure 3-28: MS3 16-Port OC3/STM1 FP Block Diagram............................................................................ 64
Figure 3-29: 4 port GE FP Block Diagram...................................................................................................... 68
Figure 3-30: FQM Traffic Manager Daughtercard Block Diagram.............................................................. 69
Figure 3-31: 4 Port GbE PHY Daughtercard Block Diagram....................................................................... 71
Figure 3-32: Faceplate view of 4pGE FP........................................................................................................ 72
Figure 3-33: Packet Server FP Block Diagram.............................................................................................. 73
Figure 3-34: Packet Server Faceplate............................................................................................................ 75
Figure 3-35: PSFP PrPMC Module Block Diagram....................................................................................... 77
Figure 3-36: Dual-Core Packet Server Block Diagram................................................................................. 81
Figure 3-37: MPC8641D CPU Tile Block Diagram....................................................................................... 83
Figure 4-1: RNC Single System Level Packaging......................................................................................... 89
Figure 4-2: RNC Expansion Shelf.................................................................................................................... 91
Figure 4-3: RNC Single with Expansion Shelf (RNC Dual).......................................................................... 91
Figure 5-1 RNC Software Domains................................................................................................................. 94
Figure 5-2 9370 RNC Processor Role for a maximum configuration......................................................... 95
Figure 5-3 RNC Platform Functions ................................................................................................................ 96
Figure 5-4 Packet Server Base........................................................................................................................ 97
Figure 5-5 Control Plane Services................................................................................................................... 98
Figure 5-6 OAM Functions................................................................................................................................ 99
Figure 5-7 RNC Iu Interfaces.......................................................................................................................... 100
Figure 5-8 RNC Iub Interface......................................................................................................... 101
Figure 5-9 Control Plane Transport Functions............................................................................................. 102
Figure 5-10 Path Management....................................................................................................................... 103
Figure 5-11 TBM............................................................................................................................................... 104
Figure 5-12 Control Plane Components ....................................................................................................... 105
Figure 5-13 RNC Datapath............................................................................................................................. 113
Figure 6-1 AAL2 Path Binding........................................................................................................................ 115
Figure 6-2 Path Sparing on PC...................................................................................................................... 116
Figure 6-3 PC Swact........................................................................................................................................ 117
Figure 6-4 ATM Card Swact........................................................................................................................... 117
Figure 6-5 RAB and Path Sparing................................................................................................................. 118
Figure 6-6 NBAP Signaling and ALCAP path binding................................................................................ 119
Figure 6-7 SS7 Stack Distribution.................................................................................................................. 120
Figure 6-8 Sparing Model for SS7 Stack...................................................................................................... 121
Figure 6-9: Protected Static IP Routes.......................................................................................................... 123
Figure 6-10 IP IuPS C-Plane Path Redundancy between the RNC and a Core Network.................... 125
Figure 6-11 PC NAT Functionality for N:1 Sparing..................................................................................... 126
Figure 7-1: OAM Logical View........................................................................................................................ 130
Figure 7-2: CM Functional Architecture........................................................................................................ 135
Figure 7-3: Out-of-band OAM&P Solution.................................................................................................... 137
Figure 7-4: In-Band OAM&P Solution........................................................................................................... 138
Figure 8-1: RNC Reliability Block Diagram.................................................................................................. 144
Figure 8-2: BIP Location in NEBS 2000 Frame........................................................................................... 150
Figure 8-3: BIP Rear View with Power Entry Points ................................................................................... 151
Figure 8-4: Power Distribution Interconnect Diagram................................................................................. 152
Figure 8-5: BIM and Breaker Assignments for Lower Shelf....................................................................... 153
Figure 8-6: RNC Regulatory Label ................................................................................................................ 155

ABOUT THIS DOCUMENT
This Product Specification describes the hardware, software, and technical specifications
for the UA06.0 release of the Alcatel-Lucent UMTS 9370 Radio Network Controller (9370
RNC).

AUDIENCE
This document is intended for internal distribution within Alcatel-Lucent.
SCOPE
The scope of the document is the 9370 RNC product for the UA06.0 release.

This document does not provide a full feature description or a full 3GPP standards
compliance statement for the RNC product. That information is provided in separate,
dedicated documents.
PUBLICATION HISTORY
Refer to the following document for the RNC product specification applicable to UA05:
[R65] RNC1500 Product Specification, UMT/RNC/DD/010600, Standard, April 2007

22/February/2008
Issue 05.01 / EN, Draft
Document creation for UA06.0

REFERENCE DOCUMENTS
The documents in the following list are relevant to the understanding of the RNC. Some of the
listed references provide background material, while other references provide more in-depth
information.

[R1] 3GPP TS 23.101 General UMTS Architecture
[R2] 3GPP TS 23.110 UMTS Access Stratum Services and Functions
[R3] 3GPP TS 23.221 Architectural requirements
[R4] 3GPP TS 25.201 Physical layer - general description
[R5] 3GPP TS 25.301 Radio interface protocol architecture
[R6] 3GPP TS 25.308 High Speed Downlink Packet Access (HSDPA); Overall
description; Stage 2
[R7] 3GPP TS 25.309 FDD enhanced uplink; Overall description; Stage 2
[R8] 3GPP TS 25.401 UTRAN overall description
[R9] 3GPP TS 25.410 UTRAN Iu interface: General aspects and principles
[R10] 3GPP TS 25.411 UTRAN Iu interface layer 1
[R11] 3GPP TS 25.412 UTRAN Iu interface signalling transport
[R12] 3GPP TS 25.413 UTRAN Iu interface Radio Access Network Application Part
(RANAP) signalling
[R13] 3GPP TS 25.414 UTRAN Iu interface data transport & transport signalling
[R14] 3GPP TS 25.415 UTRAN Iu interface user plane protocols
[R15] 3GPP TS 25.419 UTRAN Iu-BC interface: Service Area Broadcast Protocol (SABP)
[R16] 3GPP TS 25.430 UTRAN Iub Interface: general aspects and principles
[R17] 3GPP TS 25.431 UTRAN Iub interface Layer 1
[R18] 3GPP TS 25.432 UTRAN Iub interface: signalling transport
[R19] 3GPP TS 25.433 UTRAN Iub interface Node B Application Part (NBAP) signalling
[R20] 3GPP TS 25.434 UTRAN Iub interface data transport and transport signalling for
Common Transport Channel data streams
[R21] 3GPP TS 25.435 UTRAN Iub interface user plane protocols for Common Transport
Channel data streams
[R22] 3GPP TS 25.420 UTRAN Iur interface general aspects and principles
[R23] 3GPP TS 25.421 UTRAN Iur interface layer 1
[R24] 3GPP TS 25.422 UTRAN Iur interface signaling transport
[R25] 3GPP TS 25.423 UTRAN Iur interface Radio Network Subsystem Application Part
(RNSAP) signaling
[R26] 3GPP TS 25.424 UTRAN Iur interface data transport & transport signaling for
Common Transport Channel data streams
[R27] 3GPP TS 25.425 UTRAN Iur interface user plane protocols for Common Transport
Channel data streams
[R28] 3GPP TS 25.426 UTRAN Iur and Iub interface data transport & transport for DCH
data streams
[R29] 3GPP TS 25.427 UTRAN Iur/Iub interface user plane protocol for DCH data streams
[R30] 3GPP TS 25.321 Medium Access Control (MAC) protocol specification
[R31] 3GPP TS 25.322 Radio Link Control (RLC) protocol specification
[R32] 3GPP TS 25.323 Packet Data Convergence Protocol (PDCP) specification
[R33] 3GPP TS 25.324 Broadcast/Multicast Control (BMC)
[R34] 3GPP TS 25.331 Radio Resource Control (RRC); Protocol specification
[R35] 3GPP TS 25.450 UTRAN Iupc interface general aspects and principles
[R36] 3GPP TS 25.451 UTRAN Iupc interface layer 1
[R37] 3GPP TS 25.452 UTRAN Iupc interface: signaling transport
[R38] 3GPP TS 25.453 UTRAN Iupc interface Positioning Calculation Application Part
(PCAP) signaling
[R39] 3GPP TS 21.905 Vocabulary for 3GPP Specifications
[R40] Telcordia GR-63 NEBS, General Requirements for Network Equipment Building
Systems
[R41] UMT/SYS/DD/009966 v07.02 UA06.0 High Level Compliance to 3GPP
specifications
[R42] UMT/RNC/DD/000017 V06/EN UMTS OA&M V5.0 MOD - Volume 3
[R43] UMT/RNC/DD/000088 V08.03/EN RNC SYSTEM FUNCTIONAL SPECIFICATION -
OBSERVATION
[R44] UMT/RNC/DD/011155 V03.00/EN UMTS RNC Wireless Operation Measurements
List
[R45] UMT/RNC/DD/010040 V04.03 WCDMA RNC Dependability Report - UTRAN06
[R46] PE/BSC/SS/0427 BSC Control Node CM System Functional Specification
[R47] UMT/RNC/DD/000019 OA&M RNC: SFS Mediation Device
[R48] UMT/RNC/DD/017656 V01/EN UA5.0 Call Trace Enhancements FS
[R49] FRS 29777 UA06.0 RNC Availability
[R50] FRS 30410 RNC Dimensioning Boundaries
[R51] UMT/PLM/INF/004862 RNC Capacity Roadmap
[R52] PE/IRC/INF/0019 RNC Capacity Bulletin
[R53] FRS 30611 UMTS05.1 KPI Requirements
[R54] FRS 33518 UA06 Performance Targets
[R55] UMT/RNC/DD/017284 ALU 9370 RNC Full H/W Compliance Report
[R56] UMT/SYS/DD/4666 SPVCs on the Iu/Iur/Iub Interfaces
[R57] UMT/IRC/APP/0166 Iu Transport, Engineering Guide
[R58] UMT/IRC/APP/000050 Iur Transport, Engineering Guide
[R59] UMT/IRC/APP/023 RNC Product Engineering Information
[R60] UMT/RNC/DD/0044 RNC Tests and Commands
[R61] UMT/RNC/DD/018173 V01.08/EN PM29417 IuFlex Feature SFS
[R62] UMT/RNC/DD/006490 V02.15 PM30804 Multi Sccpch & PM20192 Cell Broadcast
Services SFS
[R63] Solectron KP001144-TR-THM-01-01 RNC1500 VSS with MSS7K Climatic Test
Report
[R64] UMT/RNC/DD/018227 RNC Security Enhancements HLD
[R65] RNC1500 Product Specification UMT/RNC/DD/010600 Standard April 2007
[R66] UMT/RNC/DD/022918 UA06.0 RNC Availability Functional Specification
[R67] UMT/RNC/APP/010274 V01/EN PS J FK User Guide
[R68] MD-2001.0341 UMTS RNC-IN Connection Service PC Sparing
[R69] UMT/RNC/DD/022918 FRS 33366 DCPS Introduction
[R70] UMT/SYS/DD/023092 IP in UTRAN FN
[R71] UMT/RNC/DD/022710 4PGE Introduction and IP Support Functional Specification
[R72] UMT/RNC/DD/022260 Q01645257 Support for IP transport on Iub Functional
Specification
[R73] FRS 23479 Advanced QoS Transport Framework
[R74] FRS 34205 UTRAN Sharing Developments
[R75] Q00981885 PNNI Hairpin Removal FS
[R76] Q01722323 PM 34137 Integrated SMLC RNC Sub-System Functional Specification
[R77] UMT/SYS/DD/023087 UTRAN Transport Architecture UTRAN architecture and
transmission management






1 UMTS RNC OVERVIEW
1.1 UMTS NETWORK ARCHITECTURE
UMTS is fully specified by the 3GPP standards body where all specifications can be
obtained from the following link:
http://www.3gpp.org/ftp/Specs/html-info/SpecReleaseMatrix.htm
The reader may want to refer to the following 3GPP specifications to understand overall
UMTS architecture, its services and general concepts:

[R1] 3GPP TS 23.101 General UMTS Architecture
[R2] 3GPP TS 23.110 UMTS Access Stratum Services and Functions
[R3] 3GPP TS 23.221 Architectural requirements
[R4] 3GPP TS 25.201 Physical layer - general description
[R5] 3GPP TS 25.301 Radio interface protocol architecture
[R6] 3GPP TS 25.308 High Speed Downlink Packet Access (HSDPA); Overall description;
Stage 2
[R7] 3GPP TS 25.309 FDD enhanced uplink; Overall description; Stage 2
[R8] 3GPP TS 25.401 UTRAN overall description

3GPP specifications define open interfaces allowing any vendor combination in a UMTS
network implementation.
UMTS supports compatibility with an evolved GSM network from the point of view of roaming
and handover. This is possible because the UMTS core network is an evolution of the
GSM/GPRS/EDGE core network, built on the same network nodes such as the SGSN,
GGSN and MSC.
Figure 1-1 shows a high level view of UMTS network architecture, capturing the circuit-
switched and packet-switched core network elements that are common between UMTS and
GSM.
Figure 1-1 : UMTS network architecture
[Figure not reproduced: the UTRAN (RNCs and Node-Bs interconnected over Iub and Iur) connects over Iu to the circuit domain (MSC, MGW, PSTN/ISDN) and to the packet domain (SGSN, GGSN, IP backbone), with the HLR in the core network.]

1.2 RNC WITHIN THE UTRAN
The UTRAN is composed of several Radio Network Subsystems (RNS). Each RNS is
composed of one Radio Network Controller (RNC) and many Node-Bs. The different RNSs are
interconnected, through a standard Iur interface between each RNC, to form a network. As
seen in Figure 1-2 UTRAN Architecture, the RNC is the central element in the UTRAN. Its
main function is to control and manage the Radio Access Network (RAN) and the radio
channels. In UMTS Rel 99, the Node-B was essentially analogous to a radio modem, and all the
intelligence to control and manage the network and the User Equipment (UE) attached to it
resided in the RNC. However, with the introduction of HSDPA in Rel 5 and E-DCH in Rel 6,
some of the responsibility to control and manage radio resources has been assigned to the
Node-B.

Figure 1-2 UTRAN Architecture
[Figure not reproduced: Node Bs and a Hybrid Node B connect to RNCs over Iub via last-mile ATM and IP backbones; RNCs interconnect over Iur; the RNCs connect over Iu to the core network (MSC/MGW, SGSN/GGSN) and also to the CBC, SAS and OMC.]

The RNC has three physical interfaces and one logical interface to the UE:
Iu is the interface between the RNC and the Core Network. UMTS defines two distinct
types of traffic, packet and circuit, so two types of Iu interfaces are also defined:
o Iu-PS for packet traffic
o Iu-CS for circuit traffic

The following 3GPP specifications define the Iu interface:
[R9] 3GPP TS 25.410 UTRAN Iu interface: General aspects and principles
[R10] 3GPP TS 25.411 UTRAN Iu interface layer 1
[R11] 3GPP TS 25.412 UTRAN Iu interface signalling transport
[R12] 3GPP TS 25.413 UTRAN Iu interface Radio Access Network Application
Part (RANAP) signaling
[R13] 3GPP TS 25.414 UTRAN Iu interface data transport & transport signalling
[R14] 3GPP TS 25.415 UTRAN Iu interface user plane protocols
[R15] 3GPP TS 25.419 UTRAN Iu-BC interface: Service Area Broadcast Protocol
(SABP)

With the introduction of the UA06 feature 33363 RNC Support for Iu-PS Transport over IP,
the RNC can be connected to the SGSN through the core network IP backbone and,
optionally, to the GGSN using a direct GTP tunnel for the User Plane traffic. This
requires no change on the RNC side, nor to its configuration, because the SGSN is
responsible for providing the User Plane address of the GGSN in Control Plane
signaling [R70].
On Iu-PS, a mix of ATM transport and IP transport is supported, even in the same pool
for configurations making use of Iu Flexibility [R61]. However, the Control Plane and User
Plane stacks must both be either IP or ATM; that is, there is no support for mixing an ATM
Control Plane with an IP User Plane, or vice versa.
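The transport-consistency rule above can be sketched as a simple configuration check. This is our illustration only, not product code; the function name and transport labels are assumptions:

```python
# Illustrative sketch (not product code) of the Iu-PS transport rule:
# a pool may mix ATM and IP nodes, but for any one Core Network node
# the Control Plane and User Plane stacks must use the same transport.

def iups_transport_valid(control_plane: str, user_plane: str) -> bool:
    """Return True if the CP/UP transport pairing is allowed on Iu-PS."""
    allowed = {"ATM", "IP"}
    if control_plane not in allowed or user_plane not in allowed:
        raise ValueError("transport must be 'ATM' or 'IP'")
    # Mixing an ATM Control Plane with an IP User Plane (or vice versa)
    # is not supported.
    return control_plane == user_plane

# An Iu Flex pool may contain both ATM and IP nodes:
pool = [("ATM", "ATM"), ("IP", "IP")]
assert all(iups_transport_valid(cp, up) for cp, up in pool)
assert not iups_transport_valid("ATM", "IP")
```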
Iu-BC is the interface between the RNC and the Cell Broadcast Centre (CBC).
The feature Iu Flex, in conjunction with the feature UTRAN Sharing, enables the RNC to
interface with up to 24 Core Network nodes per domain (PS or CS). Please refer to
[R61] UMT/RNC/DD/018173 V01.08/EN PM29417 IuFlex Feature SFS and [R74] FRS
34205 UTRAN Sharing Developments for more information.
The UA06 feature IuPS over IP enables the RNC to interface with the Core Network using
IP transport [R70].

Iub is the interface between the RNC and the Node-Bs, defined by the following 3GPP
specifications:

[R16] 3GPP TS 25.430 UTRAN Iub Interface: general aspects and principles
[R17] 3GPP TS 25.431 UTRAN Iub interface Layer 1
[R18] 3GPP TS 25.432 UTRAN Iub interface: signalling transport
[R19] 3GPP TS 25.433 UTRAN Iub interface Node B Application Part (NBAP)
signalling
[R20] 3GPP TS 25.434 UTRAN Iub interface data transport and transport
signaling for Common Transport Channel data streams
[R21] 3GPP TS 25.435 UTRAN Iub interface user plane protocols for Common
Transport Channel data streams

The UA06 feature Hybrid IuB introduces the Hybrid NodeB and capabilities on the
RNC to manage the Iub interface with IP transport [R70]. Hybrid Iub refers to the
support of hybrid ATM / IP transport on the Iub interface with:
ATM being used for control plane (NBAP, ALCAP), Node B O&M and R99
user plane. ATM also carries HSDPA streaming and SRB on HSPA.
IP being used for HSDPA and HSUPA user plane traffic with
interactive/background traffic class.
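The Hybrid Iub traffic split above can be expressed as a simple selection rule. The function and its labels are our illustration, not an RNC API:

```python
# Hypothetical helper encoding the Hybrid Iub split: ATM carries the
# control plane (NBAP, ALCAP), Node B O&M, R99 user plane, HSDPA
# streaming and SRB on HSPA; IP carries interactive/background HSDPA
# and HSUPA (E-DCH) user plane traffic.

def hybrid_iub_transport(plane, channel="", traffic_class=""):
    if plane in ("control", "oam"):
        return "ATM"                     # NBAP, ALCAP, Node B O&M
    if channel in ("HSDPA", "HSUPA") and traffic_class in ("interactive", "background"):
        return "IP"
    return "ATM"                         # R99, streaming HSDPA, SRB on HSPA

assert hybrid_iub_transport("control") == "ATM"
assert hybrid_iub_transport("user", "HSDPA", "interactive") == "IP"
assert hybrid_iub_transport("user", "HSDPA", "streaming") == "ATM"
```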

Iur is the interface between the RNCs within a RAN. The following 3GPP specifications define
the Iur interface, including the two specifications common with the Iub interface:
[R22] 3GPP TS 25.420 UTRAN Iur interface general aspects and principles
[R23] 3GPP TS 25.421 UTRAN Iur interface layer 1
[R24] 3GPP TS 25.422 UTRAN Iur interface signaling transport
[R25] 3GPP TS 25.423 UTRAN Iur interface Radio Network Subsystem Application Part
(RNSAP) signaling
[R26] 3GPP TS 25.424 UTRAN Iur interface data transport & transport signaling for
Common Transport Channel data streams
[R27] 3GPP TS 25.425 UTRAN Iur interface user plane protocols for Common Transport
Channel data streams
[R28] 3GPP TS 25.426 UTRAN Iur and Iub interface data transport & transport signaling
for DCH data streams
[R29] 3GPP TS 25.427 UTRAN Iur/Iub interface user plane protocol for DCH data streams

Uu is the logical interface between the RNC and the UE, defined by the following 3GPP
specifications:
[R30] 3GPP TS 25.321 Medium Access Control (MAC) protocol specification
[R31] 3GPP TS 25.322 Radio Link Control (RLC) protocol specification
[R32] 3GPP TS 25.323 Packet Data Convergence Protocol (PDCP) specification
[R33] 3GPP TS 25.324 Broadcast/Multicast Control (BMC)
[R34] 3GPP TS 25.331 Radio Resource Control (RRC); Protocol specification

Uu, Iu, Iub, and Iur are all open, multi-vendor standard interfaces.
Network assisted GPS (A-GPS) requires a connection between the RNC and a Stand-alone
A-GPS SMLC (SAS). Iupc is the interface between the RNC and the SAS, which is covered
by the following 3GPP specifications:
[R35] 3GPP TS 25.450 UTRAN Iupc interface general aspects and principles
[R36] 3GPP TS 25.451 UTRAN Iupc interface layer 1
[R37] 3GPP TS 25.452 UTRAN Iupc interface: signaling transport
[R38] 3GPP TS 25.453 UTRAN Iupc interface Positioning Calculation Application Part
(PCAP) signaling

UE-based AGPS requires a TCP/IP connection and takes advantage of a proprietary interface
between the 9370 RNC and a TeleCommunication Systems, Inc. (TCS) Satellite Reference
System (SRS). This configuration is introduced in UA06 by feature Integrated SMLC RNC
Sub-System [R76].

The RNC also interfaces with external OAM hardware for configuration, performance, fault,
administrative, installation and commissioning purposes. Please refer to chapter on OAM&P
for more information.
1.2.1 RNC Definitions
The terms Serving RNC, Drift RNC, and Controlling RNC are sometimes used to refer to
different functions of the product.
Serving RNC (SRNC) refers to the role of one RNC with respect to one specific UE: it is the
RNC handling the Iu interface for this particular UE, even when the mobile moves into the
controlling area of another RNC. In that case, the two RNCs communicate over the Iur
interface. Figure 1-3: Role of Drift and Serving RNC illustrates this definition. A UE can be
connected to more than one RNC during a session due to macro diversity, but there is only
one link over the Iu interface between the UE and the Core Network. The SRNC provides the
interface to the Core Network (Iu) for a specific UE. Each UE has only one SRNC. This RNC
is responsible for the handover decisions that require signaling to the UE.
Drift RNC (DRNC) refers to a role of an RNC with respect to a specific UE: it is an RNC
controlling a Node-B that the UE is connected to, while the Iu interface for that UE is not
managed by this RNC. Figure 1-3 also illustrates the DRNC role.
Controlling RNC (CRNC) refers to a role that an RNC assumes with respect to a specific
set of UTRAN access points. The Controlling RNC has the overall control of the logical
resources of its UTRAN access points. The RNC is the controlling RNC of all the Node-Bs
parented to this RNC. It manages all the radio resources of the Node-Bs. It executes
operation and maintenance functions under a central OMC-R management control.
Figure 1-3: Role of Drift and Serving RNC

The SRNC provides all the functions required to set up or re-establish circuit-switched or
packet-switched calls, release previously secured resources, as well as specific call
sustaining procedures. It also supervises the radio subsystem as a whole and is responsible
for it. The RNC executes operation and maintenance functions under central OAM
management control consisting of the OMC-R, OMC-B, MDM, and MDP.
Additionally, the RNC is responsible for:
UE outer loop power control
handover decisions that require signaling to the UE
recombination of two or more user data streams originating at multiple Node-Bs -
referred to as macro diversity
Each UE is connected to the Core Network through its SRNC which stores the
communication context and transports the UE data stream over the Iu interface.
Transcoding functions are located in the Core Network and the UE.
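The Serving/Drift role definitions above can be summarised in a minimal model. This is our illustration; the function and RNC names are hypothetical:

```python
# For a given UE, the SRNC is the RNC that terminates its Iu link;
# any other RNC whose Node-Bs carry the UE (via Iur) acts as a DRNC.

def rnc_role(rnc, ue_serving_rnc, ue_active_rncs):
    """Return this RNC's role ('SRNC', 'DRNC' or None) for one UE."""
    if rnc == ue_serving_rnc:
        return "SRNC"            # terminates Iu, makes handover decisions
    if rnc in ue_active_rncs:
        return "DRNC"            # carries traffic over Iur, no Iu for this UE
    return None

# UE in macro diversity across two RNCs, with Iu anchored on RNC-A:
active = {"RNC-A", "RNC-B"}
assert rnc_role("RNC-A", "RNC-A", active) == "SRNC"
assert rnc_role("RNC-B", "RNC-A", active) == "DRNC"
```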
1.3 RNC GENERAL DESCRIPTION
The 9370 RNC is a single-platform, single-shelf product, built on the Multiservice
Switch 15000 (MSS 15K).
The cabinet of the MSS platform can host up to two RNCs, to reduce the footprint and
improve the density of RNC deployment, but the second shelf can also be left empty or
re-used to deploy compatible products.
1.3.1 CONNECTIVITY
There are three connectivity options for the RNC:
OC-3 clear channel (SONET is supported)
STM-1 clear channel (SDH is supported)
Gigabit Ethernet interfaces

Any adaptation required in the transport network (e.g. STM-1 to E1) is independent of the
RNC and outside the scope of this product specification.
1.3.2 PACKAGING
RNC packaging refers to the different number of 9370 RNC shelves that can fit into a single
frame. For more information on the various options, refer to Section 4 RNC Systems &
Environment.
1.3.3 3GPP RELEASE COMPLIANCE
In UA06.0, the reference release is 3GPP Rel 6 for all interfaces applicable to the RNC. The
reference versions of the 3GPP specifications correspond at least to the versions available
after 3GPP #31 (March 2006) for the specifications that are applicable to the RNC. At
minimum, on a per-interface basis, the RNC compliance is as follows:
Uu: Rel 6, March 06
Iu: Rel 6, March 06
Iub: Rel 6, March 06
Iur: Rel 6, March 06
Iubc: Rel 6, March 06
Iupc: Rel 6, March 06

For more details on the RNC's 3GPP release compliance, please refer to [R41]
UMT/SYS/DD/009966 v07.02 UA06.0 High Level Compliance to 3GPP specifications.










1.4 RNC FUNCTIONALITY
The functions of the RNC can be divided into two categories:
Control Plane
User Plane


1.4.1 CONTROL PLANE
This category includes all of the functions required for the set-up, take-down, and
management of connections between the UE and the Core Network. Typically, Control
Plane functions involve signaling channels and consume RNC resources as a result of
external events, such as call origination or handoff between Node-Bs. The Control Plane
handles:
Protocol termination for RANAP, RNSAP, NBAP, ALCAP, PCAP and SABP
Radio resource management for RRC terminations, RRM strategy, QOS
management, and mobility management
Logical management of the Node-Bs connected to the RNC
Admission control, communication maintenance, and release for each user
Setup, maintenance, and release of transport network resources
Optimization of the radio spectrum and terrestrial transport network
Resources to provide a maximum number of simultaneous users according to their
service requirements within the UTRAN
RNC operations, administration, maintenance and provisioning
NodeB logical OAM-P
Mobility management

Figure 1-4: IuB Control Plane Protocol Stacks
[Figure not reproduced: Iub control plane protocol stacks. NBAP and ALCAP (Q.2630 + Q.2150) run over SSCF/SSCOP/AAL5/ATM for ATM transport; NBAP runs over SCTP/IP/Ethernet for IP transport.]

Figure 1-5: Iur and Iu Control Plane Stacks

1.4.2 USER PLANE
The User Plane is responsible for maintaining the flow of traffic over three interfaces. The
User Plane provides:
Data interface to the User Plane of the Iu link to the Core Network
Data interface to the Iub links to the Node-B and the Iur links to other RNCs
Termination for the Frame, MAC and RLC radio protocols for the data links between
the RNC and the UEs

User Plane processing is responsible for providing radio protocol stacks for the radio
bearers and associated transport channels and maintaining User Plane logical contexts for
individual cell context and UE calls.
The User Plane handles:
Physical connectivity with the other UMTS nodes based on ATM over PCM trunks
or higher-speed interfaces (OC-3 and STM-1) or IP transport.
User Plane radio protocol termination
UTRAN packet switching between the Core Network and the user's terminal

Refer to the figure below for the User Plane protocol stacks.

Figure 1-6: IuB User plane protocol stacks

Figure 1-7 Iur and IuPS User Plane Stacks
[Figures not reproduced: the Iub user plane frame protocol is carried over AAL2/ATM or over UDP/IP/Ethernet; the Iur and Iu-CS user planes are carried over AAL2/ATM, with Q.AAL2 (Q.2630/Q.2150) bearer signalling over MTP3B/SSCF/SSCOP/AAL5/ATM, and RANAP/RNSAP over SCCP (MTP3B/ATM or M3UA/SCTP/IP).]

The figure below shows the separation of the Control Plane and User Plane (with a packet
distribution network interconnecting them).
Figure 1-8: RNC functions and data flows

Each of these two planes is further divided into two logical layers:
the radio network layer, which includes all the UMTS-specific interfaces and
protocols
the transport layer, which includes all the non-UMTS specific protocols used for
signaling and data transport

This distinction allows the decoupling of the UMTS application from the transport
network so that they can independently evolve.

For a description of the control plane and user plane stacks from an IP interface see
Section 5.4 RNC Transport Interfaces.
2 RNC IMPLEMENTATION
The 9370 RNC-DCPS was introduced in UA05.2 to double the capacity of the 9370 RNC-
PS1 through the addition of the DCPS hardware upgrade, while still using a single cabinet
[R69] UMT/RNC/DD/022918 FRS 33366 DCPS Introduction. With additional software
updates, the 9370 RNC-DCPS capacity in UA06 is 2.5 times that of the 9370 RNC-PS1.
The 9370 RNC software base continues to support both the Packet Server Functional
Processor (PSFP1) and the new Dual Core Packet Server (DCPS). Hardware-specific
changes are limited to the low-level board support package, component drivers and other
hardware-related changes. Existing UMTS application layer software is not functionally
impacted by this hardware upgrade. The application software has, however, been updated to
make use of the additional capacity and scaling that the DCPS hardware provides (i.e. increases
in calls/sec, Mbps, number of active calls, and number of cells/attached Node-Bs).
UA06 also introduces IP UTRAN using a pair of 4-port GE interface cards to carry IP traffic
[R70] UMT/SYS/DD/023092 IP in UTRAN FN. Feature implementation in IP UTRAN for
UA06 is done in the following areas:
Hybrid Iub support on the RNC: in the hybrid Iub interface, the R99, signaling, GBR
and OAM traffic remain on the ATM transport layer, while interactive/background
HSPA (HSDPA and E-DCH) traffic is supported on IP/Ethernet. This configuration is
used to support the Macro BTSs.
IP Iu-PS: all traffic carried over Iu-PS, including both Control Plane and User Plane
traffic, is on IP/Ethernet. This interface is used to connect to the SGSN through an IP
network.

2.1 FUNCTION MAPPING
The mapping of the RNC functions is as follows:
All of the RNC is contained within a single Multiservice Shelf 15000, including Control
Plane, User Plane, Interfaces and OAM systems
All of the external physical interfaces of the RNC are implemented on the same shelf
(IuCS, IuPS, IuPC, IuR, IuB, IuBC and SMLC/I-SMLC)
The card level interconnect is implemented by the Multiservice Switch switching fabric
Both the Control Plane and User Plane functions are primarily implemented on the
Packet Server (PSFP or DCPS) modules
The ATM Iub physical interface terminates on the RNC 16pOC-3/STM-1 module. The UA06
feature Hairpin Removal allows the RNC to support the Iub with direct ATM PVCs and
SPVCs in Clear Channel OC-3/STM-1 [R75] Q00981885 PNNI Hairpin Removal FS.
Any transport adaptation (e.g. E1 to STM-1) is outside the scope of the RNC and is
addressed by the transport network.
Other RNC interfaces terminate on the ATM 16pOC-3/STM-1 modules or the GE modules
(Hybrid Iub, Iu-PS).

2.2 RNC CONFIGURATIONS
The UA06.0 release RNC supports two configurations using either:
1) Packet Server Functional Processor cards (PSFP) providing capacity of the existing
9370 RNC-PS1 product,
2) Dual-Core Packet Server (DCPS) cards providing a capacity up to 2.5x 9370 RNC-PS1
with the 9370 RNC-DCPS.
Both configurations are supported with ATM 16pOC-3/STM-1 and 4p Gig Ethernet interface
modules as shown in Table 2-1 RNC Shelf Configuration using ATM Interfaces and Table
2-2 RNC Shelf Configuration using ATM and GE Interfaces respectively.


Slot Card Type 0 1 2 3 4 5
0 CP3
1 CP3

2 PS1 | PMC-M TMU RAB RAB PC RAB
3 PS1 | PMC-M TMU RAB RAB PC RAB
4 PS1 | RAB TMU NI RAB PC OMU
5 PS1 | RAB TMU NI RAB PC OMU
6 PS1 | RAB TMU RAB RAB PC RAB
7 PS1 | RAB TMU RAB RAB PC RAB
8 OC3/STM1
9 OC3/STM1

10 PS1 | RAB TMU RAB RAB PC RAB
11 PS1 | RAB TMU RAB RAB PC RAB
12 PS1 | RAB TMU RAB RAB PC TMU
13 PS1 | RAB TMU RAB RAB PC TMU
14 PS1 | RAB TMU RAB RAB PC RAB
15 PS1 | RAB TMU RAB RAB PC RAB
Table 2-1 RNC Shelf Configuration using ATM Interfaces












Slot Card Type 0 1 2 3 4 5
0 CPx
1 CPx

2 DCPS PMC-M TMU RAB RAB PC RAB
3 DCPS PMC-M TMU RAB RAB PC RAB
4 DCPS RAB TMU NI RAB PC OMU
5 DCPS RAB TMU NI RAB PC OMU
6 DCPS RAB TMU RAB RAB PC RAB
7 DCPS RAB TMU RAB RAB PC RAB
8 OC3/STM1
9 OC3/STM1

10 DCPS RAB TMU RAB RAB PC RAB
11 DCPS RAB TMU RAB RAB PC RAB
12 DCPS RAB TMU RAB RAB PC TMU
13 DCPS RAB TMU RAB RAB PC TMU
14 GE
15 GE


Table 2-2 RNC Shelf Configuration using ATM and GE Interfaces



The modules in the UA06.0 RNC consist of:
Two Control Processor modules (CP), with associated disks, of two types:
o CP3 for UA05.0 and UA05.1 compatible dimensioning
o CP4 for larger network dimensioning
Two or four interface modules:
o Dual 16pOC-3/STM-1 ATM interfaces
o Dual 4pGE interfaces (optional)
Packet Server modules, of two types:
o PS1 for UA05.0 and UA05.1 equivalent capacity
o DCPS for up to 2.5 times the RNC-PS1 equivalent capacity
Filler modules are inserted into any slot not occupied by a card.
9370 RNC configurations can be deployed with the following number of Packet Server
cards:
4 Packet Server cards (all PS1 or all DCPS)
6 PS cards (all PS1 or all DCPS)
7 PS cards (all PS1)
8 PS cards (all PS1 or all DCPS)
10 PS cards (all PS1 or all DCPS); this is the maximum number of PS cards when 4
interface cards are used (see Table 2-2)
12 PS cards (all PS1 or all DCPS); this configuration is not possible when 4 interface
cards are used (see Table 2-1 and Table 2-2).
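The deployment options above can be encoded as a validation rule. This is a sketch under the stated constraints; the function name is ours:

```python
# Allowed Packet Server counts are 4, 6, 7, 8, 10 and 12; the 7-card
# option is PS1-only; with 4 interface cards (GE modules occupying
# slots 14 and 15) the maximum is 10 PS cards.

def ps_config_valid(ps_count, card_type, ge_modules):
    if card_type not in ("PS1", "DCPS"):
        return False
    if ps_count not in (4, 6, 7, 8, 10, 12):
        return False
    if ps_count == 7 and card_type != "PS1":
        return False                 # 7-card configuration is PS1 only
    if ge_modules and ps_count > 10:
        return False                 # GE modules take slots 14 and 15
    return True

assert ps_config_valid(12, "DCPS", ge_modules=False)
assert not ps_config_valid(12, "DCPS", ge_modules=True)
assert not ps_config_valid(7, "DCPS", ge_modules=False)
```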
2.3 RNC PROCESSOR ROLES
The role assignment of Application Processors (APs) on the Packet Server modules is
managed by software. A deterministic role-assignment approach is taken to bind the APs
to particular software loads:
Control Processors (either CP3 or CP4) are assigned to slots 0 and 1
16 Port OC3/STM1 ATM Interface modules are assigned to slots 8 and 9, and optionally
4 Port GE Interface modules to slots 14 and 15
Remaining slots are assigned to Packet Server cards; if no GE Interface modules are
present, then slots 14 and 15 may also be used to hold Packet Server cards
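The deterministic slot assignment above can be sketched as follows (our illustration; compare Table 2-1 and Table 2-2):

```python
# Slot map for a 16-slot shelf: CPs in slots 0-1, ATM interface
# modules in slots 8-9, optional GE modules in slots 14-15, and
# Packet Server cards (PS1 or DCPS) in the remaining slots.

def shelf_layout(with_ge, ps_type="DCPS"):
    layout = {}
    for slot in range(16):
        if slot in (0, 1):
            layout[slot] = "CP"
        elif slot in (8, 9):
            layout[slot] = "OC3/STM1"
        elif slot in (14, 15) and with_ge:
            layout[slot] = "GE"
        else:
            layout[slot] = ps_type
    return layout

assert shelf_layout(with_ge=True)[14] == "GE"
assert shelf_layout(with_ge=False)[14] == "DCPS"
# With GE modules fitted, 10 slots remain for Packet Servers:
assert sum(1 for v in shelf_layout(with_ge=True).values() if v == "DCPS") == 10
```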
The Packet Server (PS1 or DCPS) cards contain one PDC processor, used primarily for card
management and the lower layers of the SS7 stack (SSCF-NNI and SSCOP for ATM transport,
or SCTP for IP transport), and six Application Processors (APs) used for UMTS control plane
and user plane processing roles. The 9370 RNC architecture defines these six unique AP roles as:
Two PMC-OMU (Operation and Maintenance Unit) which are 1:1 spared
Multiple PMC-TMU (Traffic Management Unit), 14 maximum, at least one per PS1 or
DCPS
Two PMC-Ms (PMC Manager), which are 1:1 spared
Two PMC-NIs (Network Interface), which are 1:1 spared
Multiple PMC-RABs (Radio Access Bearer), 40 maximum, at least two per PS1 or
DCPS (in large configurations)
PMC-PCs (Protocol Converter), one per PS1 or DCPS module

The mapping of roles to APs is illustrated in Table 2-1 RNC Shelf Configuration using
ATM Interfaces and Table 2-2 RNC Shelf Configuration using ATM and GE Interfaces. The
Packet Server AP roles that are available in the 9370 RNC are described below:
Network Interface (NI): 1 pair per RNC (introduced in UA04)
The Network Interface is responsible for several layers of the SS7 protocol
stack and for relaying A-GPS based information.
SCCP
MTP3B for ATM transport
M3UA for IP transport
TAL relay (PCAP/TAL/TCP/IP/AAL5/ATM)
Traffic Management Unit (TMU): up to 14 per RNC
The Traffic Management Unit terminates the radio network interface protocols
such as:
RANAP
RNSAP
NBAP
The PMC-TMU also implements the radio resource management and the
layer 3 of the radio protocol:
RRC termination
RRM strategy
QoS management
CAC algorithm
Call handling and mobility management
The PMC-TMU also supports L1/L2 Iub SSCOP-UNI.
The capacity of the RNC Control Plane is determined by the number
of PMC-TMUs, which can vary from four to 12.

OAM Management Units (OMU): 1 pair per RNC
The operation and management unit (OMU) is responsible for the following
functions:
Configuration and Performance management of the Control Plane
Call Trace management of the Control Plane
Overload management / load balancing of the Control Plane
processors (with the exception of the PMC-M)
Fault tolerance / management of the Control Plane processors (PMC-
TMU) in conjunction with the CP3 and PMC-M
Radio Network Subsystem (RNS) OAM&P
Management of Cell Broadcast Service (CBS) broadcast
SABP protocol termination
Control Plane Call Processing global management
I-SMLC

PMC Manager (PMC-M): 1 pair per RNC
The Master PMC manages the majority of the User Plane of the RNC. It is
responsible for:
Resource management of the User Plane
Transport Bearer management of the User Plane
Performance management of the User Plane e.g. Call Traces
Configuration management of the User Plane
Overload management / load balancing of the User Plane processors
Fault tolerance / management of the User Plane processors (PMC-
RABs and PMC-PC) in conjunction with the CP
RNC OAM&P in conjunction with the CP
Q.2630.2


Radio Access Bearer (RAB) up to 40 per RNC
The Radio Access Bearer AP implements layer 1 and layer 2 of the
Uu interface, which includes the following functions:
Ciphering
DHO (macro-diversity)
RLC radio protocol
MAC radio protocol
QoS Management
Termination of Frame protocol layers for the following interfaces:
Iu
Iur
Iub
PMC-RAB also supports Broadcast Multicast Center (BMC) for CBS.
Protocol Converter (PC): up to 12 per RNC
The Protocol Converter AP terminates the ATM AAL2 protocol layer up to
and including the SSSAR function. The resulting frames are distributed to the
other computing nodes by standard IP protocols within the RNC. The PMC-
PC is the only AP that interfaces to ATM traffic directly.
Network Address Translation (NAT) functionality is added to the PC in UA06
to support IP Traffic Management.
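The SSSAR termination on the PC can be pictured with a minimal reassembly sketch; the (payload, last) segment format and the function name are simplifying assumptions for illustration, not the actual AAL2 implementation:

```python
# Minimal sketch of SSSAR-style reassembly (ITU-T I.366.1): AAL2
# CPS-packet payloads are concatenated until an end-of-SDU marker,
# after which the completed frame can be forwarded over IP inside
# the RNC. The (payload, last) tuple format is an assumption.
def reassemble(segments):
    """Yield complete SDUs from an iterable of (payload, last) tuples."""
    buf = bytearray()
    for payload, last in segments:
        buf += payload
        if last:
            yield bytes(buf)
            buf = bytearray()

sdus = list(reassemble([(b"voi", False), (b"ce", True), (b"data", True)]))
# sdus == [b"voice", b"data"]
```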

Along with the Packet Server modules the RNC uses the following processing roles:
Control Processor (CP): 1 pair per RNC
Disk Management
Software release management
CDL provisioning
Trace collection (Apptr, UPOS, CT)
TBM
ATM signaling and routing (PNNI)
IP Virtual Routing
10Base-T and 100Base-T Ethernet ports for OAM access

PDC CPU on PSFP cards:
The PDC coordinates AP access to the RNC platform services. Functions performed by
the PDC include:
SS7 layer 2 SSCOP and SCTP protocols
Maintenance to support APs and interface to Platform Base and OAM proxy
OAM alarms, statistics, logs are collected from APs
16-port OC3/STM-1 Interfaces and 4-port GE (4pGE) Interfaces
The ATM Interface modules support ATM connection management functions, PNNI
signaling for SPVCs, 1+1 SONET/SDH APS and 1:1 equipment protection, as well as
ATM traffic management functions. The Ethernet Interface modules support IP static
routing, protected static routes and IP QOS functions.
Switching Fabric
The RNC contains a non-blocking switching fabric that provides point-to-point connectivity
between all cards. All RNC cards perform the necessary queuing, prioritization and routing to
switch cells and frames through the fabric; this includes segmentation and reassembly to
transport variable length frames through the fabric.
See Chapter 6 RNC Carrier Grade for details on the sparing models for the various roles.
3 RNC HARDWARE DESCRIPTION
The RNC implements a high-density, high-performance computing architecture to efficiently
handle UMTS User Plane and Control Plane functions. High density computing is achieved
by the addition of a scalable number (4-12) of "Packet Server Function Processor" (PSFP)
modules in every RNC. Each PSFP module consists of six "Application Processors" and
one maintenance processor.
The RNC has been designed for maximum flexibility in a UMTS access network by
decoupling the network transport aspects from the UMTS protocol processing aspects.
Network transport is provided by the Multiservice Switch 15000 (MSS15K) platform. The
switch provides the ATM and IP transport network capabilities, supporting full Layer 3 IP
routing on all Ethernet interfaces. In fact, most internal communication operates over an IP
network, including IP over ATM traffic on the Iu-PS interface.
RNC-specific hardware components are added into the MSS15K platform to create a packet
server architecture geared to handling the W-CDMA protocols for both the Control Plane
and the User Plane. These protocols are quite compute intensive and in some cases very
high-touch, leading to the PSFP hardware which is highly optimized for packet processing
of Control and User Plane traffic.
The sections that follow describe the RNC hardware and assume the reader is familiar with
the system level material presented in Section 2.
3.1 RNC SYSTEM-LEVEL HARDWARE DETAILS
The RNC is based on the generic MSS15K carrier grade platform which provides the
system level hardware infrastructure for the W-CDMA RNC application hardware. The
generic platform includes the frame/cabinet, equipment shelves, data fabrics, power,
grounding, cooling and infrastructure cabling. Designed for the carrier-grade environment,
the MSS15K system supports Frame Relay, IP, ATM and voice services.
The MSS15K consists of redundant switch fabrics interconnecting the 14 functional
processors (FPs) and two control processors (CPs) supported by the MSS15K shelf. Each
FP and CP is connected to each of the switch fabrics via full duplex serial links based on an
IBM technology called Data Aligned Synchronous Link (DASL).
Two FPs have been developed specifically for the RNC, the Packet Server Function
Processor (PSFP) and the high-capacity Dual-Core Packet Server (DCPS) Function
Processor. Unlike most other MSS15K FPs, these do not require any physical I/O ports.
These modules were designed specifically for high-density, high-performance general-
purpose packet processing of W-CDMA protocols.
The RNC architecture is based on the principle that the interfaces are decoupled from the
packet processing. To achieve this goal, all of the traffic destined for packet processing is
converted to a common IP frame format.
There are two packaging versions of the 9370 RNC in the MSS15K frame:
o a single shelf version (RNC Single)
o a dual shelf version (RNC Dual)
The 9370 RNC Dual configuration provides two complete independent RNCs in a compact
MSS15K frame footprint. The 9370 RNC Single configuration shown in Figure 3-1: 9370
RNC Single Equipment Configuration in MSS15K Frame will be discussed in more detail in
the sections that follow.
Figure 3-1: 9370 RNC Single Equipment Configuration in MSS15K Frame

3.2 MSS15K PLATFORM COMMON HARDWARE DESCRIPTION
The following sections describe the hardware elements that collectively make up the generic
MSS15K platform and modules used in the RNC.
3.2.1 MSS15K EQUIPMENT FRAME
The 9370 RNC is deployed in a common NEBS Zone 4 seismic frame with 21" wide
mounting rails in front and rear. This indoor frame supports 2 shelves, including power/alarm
and cooling units. The frame can be mounted independently, or bolted together as an
ensemble of frames, as shown in Figure 3-2: RNC Frames in a Bolt Together Line-up.
Figure 3-2: RNC Frames in a Bolt Together Line-up

Frame specifications (see Figure 3-3):
o Dimensions: 600mm (23.6 in) W x 600mm (23.6 in) D x 2125mm (83.6 in) H.
o Weight: 125 kg / 275 lbs (empty frame)
There is a Footprint Deviation required when cosmetic options are selected:
o 732mm (28.8 in) D with front and read doors
o Add 66.0mm / 2.625 in. to front and rear for doors
o 660mm (25.7 in) W with cosmetic side panels

Figure 3-3: RNC Frame Footprint with Aisle Allowances
(The figure shows the 600mm x 600mm frame footprint together with front and rear aisle
allowances of 760mm and 600mm, giving a 1960mm overall depth.)

Features of the frame cabinet:
NEBS, Zone 4 Seismic compliance
direct to concrete or raised floor installation
bottom up or top down cable access
bolt together for a line-up or standalone options
side panels for end of line-up or standalone frames
regular and extended cable management brackets
cosmetic cable cover panels
optional front and rear doors (see note below)
brandline top cover with LED alarm annunciator when doors are required
four eye bolt anchor points for hoisting the frame

Note: The door kit is not intended for use when extended cable management brackets are installed.
3.2.1.1 COSMETIC SIDE PANELS AND DOORS
All 9370 RNC Market Packages support optional side panels and doors. These are
identified as follows:
Side panels provide a cosmetic finish to the exposed side of a frame, either at the
end of a bolt together line-up or applied to sides of a standalone frame. Please refer
to Figure 3-4: RNC Standalone Frame, View with Side Panels and Front Door.
Lockable doors are available for front and rear of the frame. The mesh door panels
maintain ventilation for system cooling. Please refer to Figure 3-4: RNC Standalone
Frame, View with Side Panels and Front Door.
Top cover brandline assembly with LED alarm annunciator to signal a BIP alarm is
active. This is part of the door kit to extend the BIP alarm indicators which are
covered by the door.
When doors will be installed on the RNC frame, there needs to be appropriate aisle
allowance to allow for the swing of the doors. The aisle allowances shown in Figure 3-3 are
sufficient to accommodate the door swing.
Figure 3-4: RNC Standalone Frame, View with Side Panels and Front Door


3.2.1.2 CABLE COVERS
The 9370 RNC market package frames are provided with four cosmetic cable covers (one
for each corner) to hide vertically routed cables within the cable management brackets.
Please refer to Figure 3-5: Frame Showing Side Panels and Cable Covers.
Figure 3-5: Frame Showing Side Panels and Cable Covers

3.2.1.3 TOP COVERS
Front and rear top cover kits are provided with all 9370 RNC market package frames. Both
front and rear top cover panels bear the 9370 RNC brandline label and if the frame is
equipped with doors, the front cover also includes a BIP alarm LED. Please refer to Figure
3-6: RNC Brandline Top Cover and Figure 3-7: RNC Brandline Top Cover with Alarm LED.
Figure 3-6: RNC Brandline Top Cover

Figure 3-7: RNC Brandline Top Cover with Alarm LED

3.2.2 MULTI-SERVICE SWITCH 15000 FABRIC
The MSS15K switch fabric consists of two redundant fabric cards, located on the rear of the
shelf backplane, which perform cell switching between the backplane front slots. Each fabric
card is connected to the sixteen front shelf slots using high-speed Data Aligned
Synchronous Links (DASL) for serial full duplex connectivity to each slot. Control or
Function Processors connect to the fabric cards via a pair of Cross Point Access Controller
(CPAC) ASICs. These devices provide rate adaptation and bus arbitration between the
fabric and the backplane slots. The CPAC devices are located on each module (i.e. Control
Processors or Function Processors) that fits in a backplane slot of the MSS15K.
The MSS15K switch fabric is engineered to provide a committed aggregate capacity of
40Gbps to the 16 shelf slots per fabric plane. This ensures each of the 16 slots has a
committed faceplate capacity of 2.5Gbps under all conditions, including a failure of one of
the redundant fabric cards. Under normal operating conditions, a load-sharing algorithm
utilizes both fabric planes to make the most effective use of the fabric I/O bandwidth.
The fabric cards (one per fabric plane) use AMCC's Prizma-EP fabric switch devices, which
are configured into 16x16 switching fabrics (i.e. 16 input ports, 16 output ports).
To support both cell and frame switching, the fabric interface on each of the FPs (or CPs)
does traffic segmentation and reassembly (SAR) into fixed 64 byte cells. Four of these bytes
are for routing and integrity (i.e. overhead), and the remaining 60 bytes carry the payload.
Three of the four overhead bytes are used for routing, and the fourth byte contains a
CRC. This CRC is transparent to the fabric and is checked and stripped by the receiving
CPAC.
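The cell format above can be illustrated with a small segmentation sketch; the checksum below is a placeholder for the real CRC, and the exact byte layout is an assumption for illustration:

```python
# Sketch of the fabric SAR format described above: frames are cut
# into fixed 64-byte cells, each carrying 3 routing bytes, 1 CRC
# byte and 60 payload bytes. The checksum is a stand-in for the
# actual CPAC CRC, and the header layout is illustrative.
ROUTE_LEN, PAYLOAD_LEN, CELL_LEN = 3, 60, 64

def segment(frame, route):
    """Cut a variable-length frame into fixed 64-byte fabric cells."""
    assert len(route) == ROUTE_LEN
    cells = []
    for off in range(0, len(frame), PAYLOAD_LEN):
        chunk = frame[off:off + PAYLOAD_LEN].ljust(PAYLOAD_LEN, b"\x00")
        crc = sum(chunk) & 0xFF  # placeholder for the real CRC
        cells.append(route + bytes([crc]) + chunk)
    return cells

cells = segment(b"\xaa" * 150, b"\x01\x02\x03")
# 150 payload bytes -> 3 cells of exactly 64 bytes each;
# useful payload efficiency is 60/64 = 93.75%
```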
3.2.2.1 SECONDARY SWITCH CONTROLLER (SSC)
A microcontroller supports the Prizma cell switching devices on the fabric card. This
microcontroller initializes the Prizma switches. Following initialization of the fabric card, the
microcontroller monitors the Prizmas for errors and implements configuration changes as
requested by the shelf Control Processor. Because the microcontroller makes minimal
decisions on its own, it is referred to as the Secondary Switch Controller. In controlling the
behavior of the fabric cards, the Control Processor is referred to as the Primary Switch
Controller (PSC). The PSC makes requests to the SSC to enable and disable Prizma ports,
while the SSC reports errors to the PSC. Communications between the PSC and the SSC
take place over redundant HSCX serial buses. Software refers to these buses as SCBx and
SCBy (secondary control bus).
The SSC software is contained in EPROM on the fabric card. There are two copies of the
software driver on the card where one image is programmed at the factory and never
overwritten. The second copy can be overwritten by the CP downloading a new driver image
to the SSC via one of the HSCX buses. This is required only if a new MSS15K platform
software release introduces new switching features or bug fixes.
3.2.2.2 PRIMARY SWITCH CONTROLLER
The active Control Processor communicates with the Secondary Switch Controller on both
the X and Y fabric cards using the fabric control (HSCX) buses. The software component on
the CP which communicates with the Secondary Switch Controller is referred to as the
Primary Switch Controller.
The Primary Switch Controller configures its HSCX ports for use on the fabric control buses
with addressing based upon the CP's card number (which includes the shelf ID). Through
requests to the Secondary Switch Controller, the CP Primary Switch Controller can:
reset the fabric card (both power on reset and reset as a result of diagnostic testing
are possible)
upgrade the microcode in the flash on the fabric card
specify which fabric control bus and which CP are active
modify default fabric card initialization
monitor fabric card operation, statistics, and errors
3.2.2.3 REDUNDANT BACKPLANE
Within an MSS15K shelf, two fabric cards provide backplane redundancy. These cards are
referred to as:
The X fabric
The Y fabric
The X and Y fabric cards operate in load sharing mode. Upon failure of either fabric, all
traffic is carried by the remaining fabric. Physically there is only one type of fabric card. The
designation of X vs. Y fabric is determined by where the fabric card is installed. The fabric
card reads the X or Y designation from the backplane pins.
These fabric cards provide redundant communication between 16 backplane slots.
3.2.2.4 BACKPLANE CLOCKING
The fabric card provides clocking for DASL links connecting Multiservice Switch processor
cards.
3.2.2.5 CLOCKING REDUNDANCY
Hitless backplane clock switchover is achieved by having the two fabric cards operate in a
master-slave relationship. One fabric card is the master and provides clocking for both
cards. Clock mastership is independent of CP switchover. The CP can request master-slave
switchover.
3.2.2.6 FLOW CONTROL
Flow control is provided between the Multiservice Switch processor cards by the use of
grant indications in hardware. The fabric card passes all send grant indications to all
Multiservice Switch cards in IBM DASL cell header bytes. Send grant indications are based
on the size of output queues in the fabric for each priority. The fabric card also provides
memory grant indications on pins connected to all Multiservice Switch processor cards
based on its shared memory usage. Each Multiservice Switch processor card must follow
send grant and memory grant rules before sending packets. The CPAC device is
responsible for following these rules in hardware.
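The grant discipline above can be summarized in a small sketch; the data structures are hypothetical, since in practice the CPAC applies these checks in hardware:

```python
# Sketch of the grant rules described above: a processor card may
# transmit a cell only when the fabric has issued a send grant for
# the cell's priority AND the fabric's memory grant is asserted.
# Illustrative model only; the CPAC enforces this in hardware.
def may_send(send_grants, memory_grant, priority):
    """send_grants: per-priority booleans derived from the fabric's
    output queue depths; memory_grant: shared-memory usage signal."""
    return bool(memory_grant and send_grants.get(priority, False))

assert may_send({0: True, 1: False}, True, 0)      # both grants present
assert not may_send({0: True}, False, 0)           # memory grant withdrawn
assert not may_send({0: True, 1: False}, True, 1)  # no send grant at priority 1
```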
3.2.2.7 ERROR HANDLING
Data integrity is provided by a CRC in data cells transmitted by the CPAC which is verified
by the receiving CPAC. The fabric card is not involved in the process.
A CRC is also carried in idle cells. This CRC covers all data cells since the previous idle cell.
CRC errors give an indication of link quality. For cells moving from the CPAC to the fabric
card, the idle CRC is generated by the CPAC and checked by the fabric card. For cells
moving from the fabric card to the CPAC, the idle CRC is generated by the fabric card and
checked by the CPAC. Cells are not discarded in the fabric because of idle cell CRC errors
since the idle CRC spans multiple data cells (note that the receiving CPAC detects data cell
CRC errors and discards when required).
Parity checking is done on the 3 byte switch routing header. Cells are discarded when:
there is a switch routing header parity error
a cell cannot be stored in shared memory (this is reported as a flow control violation
since it should not happen if the memory grant is being obeyed).
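The per-cell handling above can be condensed into a small decision sketch; the function and its return strings are illustrative only:

```python
# Sketch of the fabric's per-cell handling described above: discard
# on a routing-header parity error or a shared-memory shortfall;
# idle-cell CRC errors are only counted as a link-quality indication
# and never cause a data-cell discard in the fabric itself.
def fabric_action(header_parity_ok, memory_ok, idle_crc_ok):
    if not header_parity_ok:
        return "discard: routing header parity error"
    if not memory_ok:
        return "discard: flow control violation"
    if not idle_crc_ok:
        return "forward (count link-quality error)"
    return "forward"

assert fabric_action(True, True, True) == "forward"
assert fabric_action(False, True, True).startswith("discard")
assert fabric_action(True, True, False).startswith("forward")
```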
3.2.3 POWER DISTRIBUTION
The Breaker Interface Panel (BIP) provides redundant power distribution via Breaker
Interface Modules (BIM). Alarm indicators are provided by the Alarm Module. The BIP is a
shell that holds either 2 BIMs (plus two fillers) and the Alarm Module for a 9370 RNC
Single configuration, or 4 BIMs and the Alarm Module for a 9370 RNC Dual configuration
(see Table 3-1: BIM Quantities for MSS15K Systems).
Table 3-1: BIM Quantities for MSS15K Systems
Single Shelf: 2 BIMs + 2 Fillers
Dual Shelf: 4 BIMs
The major components of BIP power distribution are:
Breaker Interface Modules (each BIM contains 5 breakers)
Power Interface Modules (PIMs are the MSS15K shelf power entry point)
These are shown in Figure 3-8: Breaker Interface Panel Front View.
Figure 3-8: Breaker Interface Panel Front View

The MSS15K power distribution infrastructure features:
Redundant DC voltage power feeds designated A1, B1 and A2, B2 (see Table 3-2:
BIP Power to Shelf Assignment and Figure 3-11: BIM / Shelf Assignment).
Logic grounds referenced to DC voltage feeds to provide isolation from frame
ground
Redundant hot-swappable BIMs
Alarm receiving and reporting functions
Figure 3-9: Power Distribution Interconnect Schematic shows the connections between the
BIP and the various components mounted in the frame.

Figure 3-9: Power Distribution Interconnect Schematic

Table 3-2: BIP Power to Shelf Assignment
BIP Power Designation     Assignment
A1 -, A1 +, B1 -, B1 +    MSS15K Shelf located in the Upper Frame (16 slots and cooling unit)
A2 -, A2 +, B2 -, B2 +    MSS15K Shelf located in the Lower Frame (16 slots and cooling unit)

3.2.3.1 BREAKER INTERFACE MODULES (BIM)
Each MSS15K breaker interface module (see Figure 3-10: BIM Module) combines a group
of 5 individual breakers into a single hot-swappable unit. Redundant power is achieved by
using 2 BIMs for each 9370 RNC shelf. BIM output power is routed to Power Interface
Modules (PIM) located on the back of the shelf.
The Breaker Interface Modules feature:
low frequency filtering
current limiting startup circuit
alarms outputs for: filter fail, power loss, breaker trip
a faceplate status LED
-48VDC / -60VDC, 105 amp input capability (4x 25A plus 1x 5A breakers)
The BIM breaker ratings and assignments are listed in Table 3-3: BIM / Breaker - Rating
and Assignment.
Figure 3-10: BIM Module

Figure 3-11: BIM / Shelf Assignment

(The figure shows the BIP holding, from left to right, BIMs B2, A2, B1 and A1, each with
breakers 1 through 5, together with the Alarm Module. The B2/A2 breakers feed the lower
shelf and the B1/A1 breakers feed the upper shelf.)
Table 3-3: BIM / Breaker - Rating and Assignment
BIMs        Breaker Number   Amperage   Supported 9370 RNC Shelf Hardware
A1 and B1   1                25         Slots 0 to 7, Fabric Y
            2                25         Slots 0 to 7, Fabric Y
            3                5          Cooling Unit
            4                25         Slots 8 to 15, Fabric X
            5                25         Slots 8 to 15, Fabric X

3.2.3.2 POWER INTERFACE MODULES (PIM)
Power interface modules are the shelf connection point for power from the BIMs. PIMs
distribute power to the shelf modules via backplane connectivity. Each shelf is equipped
with 4 PIMs, two connected to the A power feeds and two to the B power feeds.
The PIM features:
power filtering
termination of shelf clocks and secondary control bus
The PIM module is illustrated in Figure 3-12: Power Interface Module and its location (4
places) on the back of the chassis is shown in Figure 3-13: PIM Placement.
Figure 3-12: Power Interface Module




Note: BIMs A1 and B1 support the upper shelf; BIMs A2 and B2 support the lower shelf.
Breaker ratings and assignments are identical.
Figure 3-13: PIM Placement

3.2.3.3 BIP ALARM MODULE
The BIP alarm module (see Figure 3-14: BIP Alarm Module) shares the same chassis as the
BIMs and is distinguished by the alarm LED bank. Refer to Figure 3-8: Breaker Interface
Panel Front View for the alarm module's location. Alarm communication with the shelf is
provided by a cable link to the BITS/Alarm module (refer to section 3.2.3.4).
The Alarm Module provides the following functions:
monitoring and filtering of alarms for hardware indicators and software displays
drive circuit for the alarm LED board
monitoring the state of the BIMs (circuit breakers)
consolidation of frame LED system status indicators
external connectivity for customer-provided alarm reporting
monitoring of two MSS15K shelves.
Figure 3-14: BIP Alarm Module

The Alarm Module interconnect schematic is shown in Figure 3-15: Alarm Module
Interconnect Schematic.

3.2.3.4 BITS / ALARM MODULE
The BITS / Alarm module is a dual function shelf interface unit which provides external
timing inputs as well as alarm reception and forwarding. Each MSS15K shelf is equipped
with one module located at the rear of the shelf (see Figure 3-16: BITS / Alarm Module
Location).
The BITS function of the module has the following features:
E1 120 ohm balanced timing inputs (or 75 ohm unbalanced using a balun cable)
DS1 timing inputs
redundant timing inputs
The Alarm portion of the module provides the following functions:
forwarding of fabric status and cooling unit alarms to the Control Processor
forwarding of any Control Processor alarms to the BIP.

Note: BITS (Building Integrated Timing System) is a customer-specific option.
Figure 3-16: BITS / Alarm Module Location

3.2.3.5 MAC ADDRESS MODULE
The MSS15K Media Access Control (MAC) module is a plug-in unit located at the rear of the
shelf. It contains the base MAC address and the range of MAC addresses available for
assignment. During the software boot sequence, the active control processor (CP) takes the
range stored in the MAC address module, divides this value by the number of functional
processor (FP) cards, and distributes a base value and a range to each FP.
Figure 3-17: MAC Address Module Location below shows the location of the MAC Address
Module on the chassis.
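The boot-time distribution described above amounts to an even split of the module's range across the FPs; the sketch below is illustrative, and the base address and range values are invented for the example:

```python
# Sketch of the boot-time MAC assignment described above: the active
# CP divides the MAC range from the module by the number of FP cards
# and hands each FP a base address and a sub-range. The addresses
# used here are purely illustrative.
def distribute_macs(base, range_size, num_fps):
    """Return a (base_address, sub_range_size) tuple for each FP."""
    per_fp = range_size // num_fps
    return [(base + i * per_fp, per_fp) for i in range(num_fps)]

alloc = distribute_macs(0x02005E000000, 256, 14)
# 14 FPs each receive a block of 256 // 14 = 18 consecutive addresses
```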
Figure 3-17: MAC Address Module Location

3.2.4 COOLING
MSS15K shelf temperature is controlled by means of a cooling unit. One cooling unit is
required for the lower shelf, and another is required for the upper shelf (if the upper
MSS15K shelf is present). The upper cooling unit requires remote temperature sensors
above the shelf while the lower unit contains the sensors in the cooling unit module. A lower
cooling unit is shown in Figure 3-18: Lower Shelf Cooling Unit, and the cooling unit locations
and frame airflow patterns are shown in Figure 3-19: Frame/Shelf Level Airflow.
The major cooling unit components are:
fan module (3 variable speed fans, PCBs, status LEDs)
temperature sensors
fan controllers
air filter
blank fillers for any unpopulated slots
The Cooling Unit features:
variable speed redundant fans
1 to 1.2 m/s air velocity
alarm connectivity
cooling unit status LEDs
field replacement of the whole unit or any of its parts
Figure 3-18: Lower Shelf Cooling Unit

Figure 3-19: Frame/Shelf Level Airflow

3.2.5 MSS15K CONTROL PROCESSORS
The Control Processor is the centralized manager for all of the hardware and transport
aspects of User Plane resources. These functions include supporting the Function
Processors (FPs) and performing memory-intensive, non-real time tasks, such as routing
table maintenance. The CP initializes and maintains the switch fabrics (routing tables) and
provides system management functions, such as monitoring and alarm timing for all the
other processors connected to the backplane.
The CP also contains the synchronization circuitry and Stratum 3 holdover clocks which are
used to distribute synchronization signals to all FP slots to support Layer 1 physical network
synchronization for shelf-level SONET/SDH interfaces. Network timing signals are accepted
either from the BITS/Alarm Module, or from designated I/O Function Processors in the shelf.
The CP also provides Ethernet and serial interfaces to access the OAM&P systems
(including IP routing of Node-B OAM&P traffic to the OMC).
The CP comes in two performance variants, the CP3 and the higher capacity CP4. There
are many similarities between the CP3 and CP4. The CP3 is described first, then the CP4.
This is followed by a discussion of the differences between the CP3 and CP4, and some
coverage of common capabilities. Throughout the remainder of this document, references
to a CP refer to either the CP3 or the CP4, unless otherwise stated.
3.2.5.1 CP3 HARDWARE DESCRIPTION
The Control Processor is the shelf management controller for the MSS15K platform. It
performs all the shelf hardware management, including the various Function Processors
installed in the shelf. It also serves as the centralized OAM&P point for the shelf. This is a
critical module in the MSS15K platform, and is fully redundant.
The Control Processor Generation 3 (CP3) has two variants to support localization in
different regions of the world. The NTHW06 is the North American variant, and its
synchronization circuitry is targeted to DS1 BITS interfaces and specifications. The
NTHW08 variant supports international deployment where network synchronization is based
on E1 BITS interfaces and specifications.
The CP3 offers the following features:
Two Passport Queue Controllers (PQC2) with Context and Queuing Memory (CQM)
4 MByte of queuing memory per PQC2
A custom EIDE controller operating at ATA-3 with cache memory (EMEM)
20 GBytes of formatted and usable disk capacity on a 4200 or 5400 RPM disk drive
Programmable Real-Time Clock (RTC)
V.24 connector for a local operator port
Two 100BaseT Ethernet ports (one for NMS, one for future use)
One 10BaseT Ethernet port for designer debug in labs
Point-of-Load Power Supplies (POL) with 48 Volt inputs and outputs of +5V, +3.3V,
+2.5V, +1.8V, +1.5V and +1.2V
Interface to a CPAC2 based fabric backplane
Stratum-3 clocking circuitry capable of tracking to BITS interfaces
A Processor Daughterboard (PDB) with:
o A PowerPC Processor
o An MPC106 32 bit, 33 MHz PCI bridge/memory controller for the 60X
processor bus, providing a system memory interface and PCI bus master
functions
o 256 MByte SDRAM memory subsystem with Read-Modify-Write (RMW)
parity
o 8 MByte FLASH memory (accessible in x64 data bus architecture) dual bank
architecture with an accompanying bank swapping mechanism
o General I/O register control for alarms, jumpers, slot id, etc.
o Two serial/operator debug ports
One port is routed to the motherboard for MAC address and TIP
access
One TIP port available on the PDB through a header strip
o Error protection, detection and indication
UA06.0 RNC Product Specification
o Module reset circuitry
o Six software timers
o Interrupt controller
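The dual-bank FLASH arrangement noted above allows a new image to be written to the inactive bank and verified before the swap, so a failed update never leaves the module unbootable. A minimal sketch of that style of mechanism (the class, checksum scheme, and method names are illustrative assumptions, not the actual CP3 firmware interface):

```python
# Illustrative sketch of a dual-bank FLASH update with bank swapping.
# Bank names, checksum scheme and API are assumptions, not the CP3 firmware.

class DualBankFlash:
    def __init__(self, image):
        self.banks = {"A": image, "B": None}
        self.active = "A"                      # bank the module boots from

    def _inactive(self):
        return "B" if self.active == "A" else "A"

    def program(self, new_image):
        """Erase and program the inactive bank; the active bank is untouched."""
        self.banks[self._inactive()] = new_image

    def verify_and_swap(self, expected_checksum):
        """Swap banks only if the freshly written image verifies."""
        bank = self._inactive()
        if sum(self.banks[bank]) % 256 != expected_checksum:
            return False                       # keep booting the old image
        self.active = bank
        return True

flash = DualBankFlash(image=[1, 2, 3])
flash.program([4, 5, 6])
assert flash.verify_and_swap(expected_checksum=15)   # 4+5+6 = 15
assert flash.active == "B"
```

The point of the design is that the swap is the last, atomic step: until verification succeeds, the module still boots the old bank.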
Figure 3-20: CP3 Block Diagram shows the major components of the CP3.
Figure 3-20: CP3 Block Diagram
3.2.5.2 CP4 HARDWARE DESCRIPTION
Like the CP3, there are two localization variants of the CP4: the NTPN06 version for North
American deployments, and the NTPN08 version for international deployment. There are
many commonalities between the CP3 and the CP4. The major differences are identified
below:
The two PQC2s are replaced by a single integrated Dual PQC2 device. The
functionality, including that of the CQM memory, remains unchanged from the CP3
implementation
A new custom EIDE controller supporting ATA-4 disk drive connectivity has been
introduced. This improves performance over the older ATA-3 version and adds
support for higher performance drives in addition to the standard 5400 RPM ones
currently in use. Parity checking on the disk cache memory (EMEM) has also been
added.
A higher capacity Processor Daughter Board (PDB4) has been introduced. The
major highlights of the PDB4 are:
o 990 MHz IBM 750GX series PowerPC CPU
o Marvell Discovery 3 Bridge/Memory Controller: this is a 32-bit, 33 MHz PCI
device which also provides a system memory interface, PCI bus master
functions, and DMA controllers capable of mastering burst accesses to
other devices across the PCI bus
o 512 Mbytes SDRAM memory subsystem with ECC protection
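Both the CP3's RMW parity and the CP4's ECC protect SDRAM contents, and in both cases a sub-word write into protected memory costs a read-modify-write cycle: the controller must fetch the whole protected word, patch the byte, and recompute the check bits. A toy illustration with simple even parity (the real controllers' check-bit schemes are not shown here; names are illustrative):

```python
# Toy even-parity model of why sub-word writes need read-modify-write.
def parity(word):
    """Even parity over a 4-byte word."""
    return bin(int.from_bytes(bytes(word), "big")).count("1") % 2

def write_byte(memory, check, addr, value):
    word_idx, offset = divmod(addr, 4)
    word = memory[word_idx][:]          # READ the full protected word
    word[offset] = value                # MODIFY one byte
    memory[word_idx] = word             # WRITE the word back
    check[word_idx] = parity(word)      # ...and recompute its check bit

memory = [[0, 0, 0, 0]]
check = [parity(memory[0])]
write_byte(memory, check, addr=2, value=0x0F)
assert memory[0] == [0, 0, 0x0F, 0]
assert check[0] == parity(memory[0])    # check bit stays consistent
```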
The block diagram of the PDB4 is shown in Figure 3-22: PDB4 block diagram and a block
diagram of the entire CP4 is shown in Figure 3-21: CP4 block diagram.
Figure 3-21: CP4 block diagram
Figure 3-22: PDB4 block diagram
3.2.5.3 9370 RNC CP CARD COMPATIBILITY
The CP3 and CP4 are used only in specific configurations. These configurations are given
in Table 3-4: Control Processor Card Compatibility Matrix and are discussed elsewhere in
this document. An alarm is raised on card insertion when an illegal combination is
detected. Mixing of CP3 and CP4 on the same shelf is not supported, except during a
hardware upgrade from CP3 to CP4.
Table 3-4: Control Processor Card Compatibility Matrix
        CP3   CP4   STM1 PQC   STM1 MS3   PSFP   DCPS   GigE
CP3      Y     N       Y          Y        Y      Y      Y
CP4      N     Y       N          Y        N      Y      Y
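The compatibility check can be pictured as a simple table lookup on card insertion. The table below transcribes Table 3-4; the alarm-raising function itself is an illustrative sketch, not the MSS15K software interface:

```python
# Sketch of the card-insertion compatibility check driven by Table 3-4.
COMPAT = {
    "CP3": {"CP3": True, "CP4": False, "STM1 PQC": True, "STM1 MS3": True,
            "PSFP": True, "DCPS": True, "GigE": True},
    "CP4": {"CP3": False, "CP4": True, "STM1 PQC": False, "STM1 MS3": True,
            "PSFP": False, "DCPS": True, "GigE": True},
}

def check_insertion(cp_type, inserted_card):
    """Return None if the combination is legal, else an alarm string."""
    if COMPAT[cp_type].get(inserted_card, False):
        return None
    return f"ALARM: {inserted_card} is not supported with {cp_type}"

assert check_insertion("CP4", "DCPS") is None
assert check_insertion("CP4", "PSFP") == "ALARM: PSFP is not supported with CP4"
```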
3.2.5.4 CP COMMON CAPABILITIES
The Control Processor shelf management functionality requires operation in hot-spared
redundant mode, thus there are always two CPs in the shelf: one in slot 0 and the other in
slot 1. The two CPs contain duplicate information and are connected to the two fabrics with
redundant links. While one CP operates as the active module, the second module is in
warm stand-by mode to provide high availability. The active CP transfers (journals) critical
information to the spare CP via a dedicated inter-CP port minimizing the impact of a switch
of activity. To minimize inconsistencies in the file system between the active and standby
units, a software-based journaling system (the shadowed file system or SFS) is used to
replicate file writes to the standby CP disk drive.
A separate (from the main fabric) high-speed serial interface (HSCX) on the CP provides a
low-level communication path between the CPs, the Function Processors, and the fabric
cards.
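The journaling of critical state from the active to the standby CP described above can be modelled as a replicated key-value store: every critical update is applied locally and immediately forwarded over the inter-CP port, so a switch of activity finds the standby already up to date. A simplified model (the record format and transport shown are assumptions):

```python
# Simplified model of active->standby journaling over the inter-CP port.
class ControlProcessor:
    def __init__(self):
        self.state = {}          # critical shelf state
        self.peer = None         # the other CP, reached via the inter-CP port

    def journal_write(self, key, value):
        """Apply an update locally and replicate it to the standby."""
        self.state[key] = value
        if self.peer is not None:
            self.peer.state[key] = value      # stands in for the journal link

active, standby = ControlProcessor(), ControlProcessor()
active.peer = standby
active.journal_write("alarm_table_rev", 17)

# On a switch of activity the standby already holds the critical state.
assert standby.state == {"alarm_table_rev": 17}
```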
A summary of the CP features is shown in Table 3-5.
Table 3-5: CP Feature Summary
Feature                         Supported in H/W               Required by S/W
V.24 Port                       Yes                            Yes
CP Ethernet OAM Port            10/100Base-T                   Yes
CP Sparing/Redundancy           Yes                            Yes
EIDE Disk (20 GB Formatted)     Yes                            Yes
Real-Time Clock                 Yes                            Yes
Stratum 3 Network Clock Sync    Yes                            Yes
BITS External Timing            Yes                            Yes
Local Memory                    256 MB (CP3), 512 MB (CP4)     256 MB
Multiservice Switch Shelf       MSS15K                         MSS15K
The common CP faceplate is shown in Figure 3-23.
Figure 3-23: Control Processor Faceplate
Three color LEDs (red, amber, green) are provided as status indicators. Only one LED lights
at any one time. Activity and connectivity LEDs are also provided for the Ethernet port.
3.2.5.5 CP DISK SUBSYSTEM
The CP disk has a formatted capacity of 20 GBytes. There are three partitions: one of 4
GBytes and two of 8 GBytes each.
The 4 GByte partition is used by all standard MSS15K platform software for software loads,
shelf-level provisioning, alarms, log and debug file storage. This partition is shadowed onto
the other CP disk via a Shadowed File System (SFS).
The first 8 GByte partition is also shadowed using SFS and contains the 9370 RNC-specific
provisioning (the MIB). RNC logs and traces are stored on the second 8 GByte partition
which is not shadowed between the Control Processors.
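The partition layout above can be summarized as data: three partitions, two of them shadowed, totalling the 20 GByte formatted capacity. A small transcription of that layout:

```python
# Transcription of the CP disk partition layout described above.
PARTITIONS = [
    {"size_gb": 4, "use": "MSS15K platform software, provisioning, logs", "shadowed": True},
    {"size_gb": 8, "use": "9370 RNC provisioning (the MIB)",              "shadowed": True},
    {"size_gb": 8, "use": "RNC logs and traces",                          "shadowed": False},
]

assert sum(p["size_gb"] for p in PARTITIONS) == 20   # formatted capacity
assert [p["shadowed"] for p in PARTITIONS] == [True, True, False]
```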
In the 9370 RNC, the PSFP CPUs require full I/O access to the CP disk. PSFP-based
applications directly access the CP disk using a network file system (NFS) client/server
architecture and IP datapath to the CP. The RNC OMU active and standby instances have
direct access to the CP disk, as do the TMU instances. This is illustrated in Figure 3-24:
Datapath between the CP and PSFP CPUs.
Figure 3-24: Datapath between the CP and PSFP CPUs
3.2.5.6 CP DISK SYNCHRONIZATION
The shadowed file system (SFS) is resident on the active CP and provides a redundant file
system consisting of two hard disks, one on each of the active and standby CPs. When an
application writes data to the shadowed partition, SFS applies the data to the local disk on
the active CP and then replicates it to the remote disk on the standby CP. The data on the
standby CP is thus synchronized, permitting a hitless CP switchover. This disk
synchronization is transparent to the application which reads and writes to one file system.
On insertion of a new or replacement CP (always the standby CP), SFS co-ordinates the
synchronization phase during which the entire content of the shadowed partitions are copied
to the new Control Processor's disk. During this activity, the active CP continues to support
full disk access and all normal functionality. Full system redundancy is not restored until the
two disks are fully synchronized.
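The SFS behaviour above — replicate every write to the shadowed partition, and bulk-copy the whole partition when a replacement standby CP is inserted — can be sketched as follows (the class and method names are illustrative, not the actual SFS interface):

```python
# Illustrative sketch of the shadowed file system (SFS) behaviour.
class ShadowedFileSystem:
    def __init__(self):
        self.active_disk = {}     # shadowed partition on the active CP
        self.standby_disk = {}    # shadowed partition on the standby CP

    def write(self, path, data):
        """Apply the write locally, then replicate it to the standby disk."""
        self.active_disk[path] = data
        if self.standby_disk is not None:
            self.standby_disk[path] = data

    def insert_standby(self):
        """A new standby CP: copy the entire shadowed partition across."""
        self.standby_disk = dict(self.active_disk)

sfs = ShadowedFileSystem()
sfs.write("/mib/cell0", b"prov")
sfs.standby_disk = None           # standby CP pulled for replacement
sfs.write("/mib/cell1", b"prov")  # active CP keeps full disk access
sfs.insert_standby()              # synchronization phase
assert sfs.standby_disk == sfs.active_disk
```

Note how the application sees one file system throughout: the replication and resynchronization happen behind the `write` call.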
3.2.6 MSS15K FUNCTION PROCESSORS
The 9370 RNC uses standard MSS15K I/O interface FPs for all its traffic interfaces. These
may be OC-3/STM-1 ATM or Gigabit Ethernet interfaces. In the ATM case, clear channel
interfaces are used. The following sections describe the MSS15K Function Processors that
are used in the operation of the 9370 RNC.
3.2.6.1 16 PORT OC-3/STM-1 FUNCTION PROCESSORS
The 16 port OC-3/STM-1 Function Processors provide the traffic ingress/egress paths for
the entire 9370 RNC. These are deployed as a pair of FPs in a fully redundant, spared
topology to meet the requirements of SONET/SDH protection switching.
Depending on the 9370 RNC traffic and capacity requirements, two versions of the 16 port
OC-3/STM-1 FP are supported:
The MSHS 16pOC3SmIrAtm Function Processor (16-Port OC3/STM1 Single-
Mode, Intermediate Reach, ATM). This is the standard MSS15K Multi-Service High
Speed (MSHS) ATM interface module. It has fixed optics using an LC small form
factor E/O module
The MS3 16pOC3PosAtm Function Processor (16-Port OC3/STM1 Packet over
SONET, ATM) Function Processor. This is the higher capacity MSS15K Multi-
Service 3rd Generation (MS3) ATM interface module. This particular FP uses Small
Form Pluggable (SFP) optics to provide a broader range of interfacing options.
Both versions of 16-port OC-3/STM-1 ATM FP support a per-port line rate of 155 Mbps
operating in B-ISDN mode.
The key MS3 enhancements over the MSHS 16pOC3/STM1 FP are:
Complete set of statistics
8 vs. 4 emission priorities:
o 2 absolute priorities, 6 weighted fair queues
o 4 shaped emission priorities
o 4 discard priorities
Late Packet Discard level
Support for common queuing
No connection remapping required, for simpler MS3 configuration
No longer restricted to 16,000 PVCs per FP, i.e. MS3 connection scalability:
o connections/port = 16,000
o connections/FP = 45,000 (32K in per-VC queues, remaining 13K in common queues)
o calls/s without accounting, carrier grade = 300
o calls/s with accounting = 150
Small form factor pluggable (SFP) optics
Line rate IP, i.e. up to 2.5 Gbps vs. OC6 IP over ATM throughput
Network processor technology
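The MS3 emission-priority model — a small number of absolute (strict) priorities served ahead of a set of weighted fair queues — can be sketched with a toy scheduler. This illustrates the general technique only; it is not the MS3 implementation, and the queue contents and weights are invented:

```python
# Toy scheduler: strict priorities drain first, then weighted round-robin
# over the fair queues (an illustration, not the MS3 implementation).
from collections import deque

def schedule(strict, weighted, weights):
    """strict: list of deques, highest priority first.
    weighted: list of deques served in proportion to weights."""
    out = []
    for q in strict:                         # absolute priorities drain first
        while q:
            out.append(q.popleft())
    while any(weighted):
        for q, w in zip(weighted, weights):  # weighted round-robin pass
            for _ in range(w):
                if q:
                    out.append(q.popleft())
    return out

strict = [deque(["s0"]), deque(["s1"])]
weighted = [deque(["a1", "a2", "a3"]), deque(["b1"])]
assert schedule(strict, weighted, weights=[2, 1]) == \
    ["s0", "s1", "a1", "a2", "b1", "a3"]
```

With weights 2:1 the first fair queue gets roughly twice the service of the second, while anything in a strict queue always goes out first.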
3.2.6.1.1 MSHS 16 PORT OC3/STM1 FP
The 16pOC3SmIrAtm FP is a standard MSS15K FP with 16 Single Mode, Intermediate
Reach ports. This FP uses sixteen OC-3/STM-1 duplex fiber optic transceivers and supports
one ATM user-network interface (UNI) or one ATM network-network interface (NNI) per port.
The FP can operate from either side of the user/network boundary.
The main capabilities of the 16pOC3SmIrAtm FP are as follows:
16-ports of unchannelized OC-3/STM-1 payload
SONET and SDH transmission types, on the same card [4]
Intermediate reach, single-mode optics
SONET APS Dual-FP 1+1 type (intra-card protection is not supported)
16K connections per port and 45,000 connections per FP; software handling capacity of
6K SPVC connections per FP
ATM services (rt-VBR, nrt-VBR, UBR, and CBR traffic services)
ATM networking (SPVC, PVC, and connections)
Traffic management (scheduling, queuing, shaping, congestion control, and CAC)
Virtual Path Termination (VPT)
UNI and NNI capabilities
PNNI routing support
SONET equipment protection for PVC connections on FPs in a 1+1 sparing
configuration
Figure 3-25: MSHS 16-Port OC3/STM1 FP Faceplate shows the faceplate of a
16pOC3SmIrAtm FP with the protective hood raised to show the fiber connectors.
[4] Due to the 16pOC3/STM1 PHY's features, only one ingress optical link can be used for line timing.
Figure 3-25: MSHS 16-Port OC3/STM1 FP Faceplate
The block diagram (Figure 3-26: MSHS 16-Port OC3/STM1 FP Block Diagram) shows the
main datapath on the 16pOC3SmIrAtm FP.
Figure 3-26: MSHS 16-Port OC3/STM1 FP Block Diagram
The 16-port FP uses two ASICs, a Queue Relay Device (QRD) and an ATM Port Controller
(APC), between the PQC ASICs and the physical layer devices to offload the PQC from
having to perform ATM traffic management and associated queuing at the full OC-48/STM-4
rate.
The ATM physical layer termination device (Rhine) manages the physical layer convergence
at the OC-3 or STM-1 level of channelization. The ATM Port Controller (APC) performs the
traffic management functions.
The QRD-PQC pair operates in two modes: the ingress datapath (from the ATM physical
link) and the egress datapath (to the ATM physical link).
The PQC performs two distinct forms of data forwarding referred to as autonomous
forwarding and frame processing.
The PQC frame processing function depends upon its autonomous forwarding capabilities
to forward complete reassembled frames to a work queue for frame processing. The frame
processing layer performs its functions in a deferred manner, retrieving frames from the
work queue. The frame processing layer modifies protocol headers and forwards frames
based on IP.
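The split between autonomous forwarding and deferred frame processing can be modelled as a fast path that either forwards a frame directly or parks it on a work queue for a slower software pass. A schematic sketch (the dispatch criterion and field names are assumptions, not the PQC's actual logic):

```python
# Schematic model of PQC autonomous forwarding vs deferred frame processing.
from collections import deque

work_queue = deque()
forwarded = []

def fast_path(frame):
    """Autonomous forwarding: known connections go straight out;
    anything needing header work is deferred to the work queue."""
    if frame.get("needs_ip_processing"):     # criterion is illustrative
        work_queue.append(frame)
    else:
        forwarded.append(frame)

def frame_processing_pass():
    """Deferred pass: modify protocol headers, then forward based on IP."""
    while work_queue:
        frame = work_queue.popleft()
        frame["header_rewritten"] = True
        forwarded.append(frame)

fast_path({"id": 1})
fast_path({"id": 2, "needs_ip_processing": True})
frame_processing_pass()
assert [f["id"] for f in forwarded] == [1, 2]
assert forwarded[1]["header_rewritten"] is True
```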
The CPAC2 performs MSS15K bus arbitration and bus to QRD rate adaptation.
From the datapath perspective, the FP CPU provides and manages the APC, QRD, and
PQC data structures that contain connection-based forwarding information (e.g. for ATM)
and address-based forwarding information (e.g. for IP forwarding). This CPU, using the
PQC, can also originate/terminate ATM OAM F4/F5 flows, system messages,
signalling/routing messages, and assist with frame processing when requirements are
beyond those of the PQC's hardware frame processor.
3.2.6.1.2 MS3 16 PORT OC3/STM1 FP
The features of the 16pOC3PosAtm FP are:
o SFP (small form pluggable) optical transceivers, which may be of mixed types: SR, IR,
or LR (short, intermediate, or long reach)
o One SFP is required for each of the card's 16 ports
o SR requires multi-mode fiber while IR and LR use single-mode fiber
o line rate frame forwarding up to OC-48
o sophisticated Traffic Management capabilities
o multi-service and networking support for:
o ATM (Asynchronous Transfer Mode)
o PNNI (Private Network to Network Interface)
o IP (Internet Protocol)
o Flexible channelization limited only by RCAF FPGA implementation
o 16-ports of unchannelized OC-3/STM-1 payload
o SONET and SDH transmission types, on the same card
o SONET APS and SDH MSP for 1+1 equipment sparing (requires identical FP in
adjacent slot)
o 16K per port and 45000 per FP connection space
Figure 3-27: MS3 16-Port OC3/STM1 FP Faceplate shows the faceplate of the
16pOC3PosAtm FP with the protective hood raised to illustrate the fiber connectors.
Figure 3-27: MS3 16-Port OC3/STM1 FP Faceplate
The 16pOC3PosAtm FP consists of the MS3 motherboard, GQM/ATLAS traffic
management daughtercard, 16pOC3 PHY/Optic daughtercard, a PUPS daughtercard and a
processor daughterboard.
The block diagram in Figure 3-28: MS3 16-Port OC3/STM1 FP Block Diagram shows the
blocks and data flows of this 16 port FP.
Figure 3-28: MS3 16-Port OC3/STM1 FP Block Diagram
The major components of the motherboard consist of the following:
o One CPAC2.2 which allows access to both X and Y switching fabrics via the
backplane
o Re-assembly FPGA
o assembles AAL5 ATM cells back into packets
o 4MBytes of external QDR (Quad Data Rate) SRAM configured in three
banks: 512Kx36 cell storage, 512Kx18 linked list, and 512Kx18 context
memory banks
o two FWD1.1 ASICs (ingress and egress) with internal DRAMs
o each FWD1.1 has an RSP2.5 connected as a Flexible Forwarding Engine
(FFE)
o two RSP2.5 ASICs (ingress and egress) running at 250MHz
o the RSP2.5 provides flexibility as its functionality is derived from software. It
is the programmability of the RSP2.5 that enables this FP to be configured
for multiple services, allowing for current and future requirements
o the FWD1.1 performs data transfer while the RSP2.5 performs the functions
of a frame and cell processor
o CBAAS (Congestion Bus & ATM AAL5 Segmentation) FPGA
o on-board PUPS to convert 3.3VDC input to 1.5VDC @ 12 A
o on-board switching power supplies for 0.75V and 0.90V termination resistors and
Vref
o standard serial ID EEPROM, programmed during final assembly
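The re-assembly FPGA's job — collecting AAL5 cells back into the original packet — is the inverse of the segmentation performed on egress. A minimal sketch of AAL5-style segmentation into 48-byte cell payloads and reassembly (deliberately simplified: a real AAL5 trailer also carries UU/CPI and CRC-32 fields, and here a 2-byte length stands in for it):

```python
# Simplified AAL5-style segmentation and reassembly (48-byte cell payloads).
CELL = 48

def segment(packet):
    """Pad the packet so its length becomes a multiple of 48, recording
    the true length in a simplified 2-byte trailer, then cut into cells."""
    length = len(packet)
    padded = packet + b"\x00" * ((-(length + 2)) % CELL)
    padded += length.to_bytes(2, "big")       # simplified trailer
    return [padded[i:i + CELL] for i in range(0, len(padded), CELL)]

def reassemble(cells):
    data = b"".join(cells)
    length = int.from_bytes(data[-2:], "big")
    return data[:length]

pkt = b"radio frame payload" * 5              # 95 bytes
cells = segment(pkt)
assert all(len(c) == CELL for c in cells)
assert reassemble(cells) == pkt
```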
The main components of the GQM/ATLAS Traffic Management Daughtercard are:
o GQM ASIC
o The GQM uses five external 8Mx16-bit memory chips, for a total of 640
Mbits (80MBytes). The memory chips are configured for an 80-bit data bus
width to the GQM.
o RCAF (Re-assembly, Classification and Formatting) FPGA in a Xilinx Virtex II 6000
device
o Designed by Amirix Microsystems Inc.
o Includes 4MBytes of external QDR (Quad Data Rate) SRAM configured in
three banks: 512Kx36 data buffer, 512Kx36 context memory 1, and
512Kx36 context memory 2.
o two PMC-Sierra ATLAS 3200 chips for ATM operation
o Each ATLAS chip has 16M of external search SRAM configured as
256Kx72.
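The GQM buffer sizing quoted above is straightforward to verify: five 8M x 16-bit devices give 640 Mbit (80 MBytes), and together they present an 80-bit data bus:

```python
# Arithmetic behind the GQM cell-buffer figures quoted above.
chips, depth, width = 5, 8 * 2**20, 16       # five 8M x 16-bit devices
total_bits = chips * depth * width
assert total_bits == 640 * 2**20             # 640 Mbit
assert total_bits // 8 == 80 * 2**20         # 80 MBytes
assert chips * width == 80                   # 80-bit data bus to the GQM
```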
The Processor Daughterboard controls all datapath chips through either a 32-bit 33MHz PCI
or a 25MHz i960 interface (which can be dynamically sized at 8, 16 or 32 bits). Because
they require higher performance, FWDs and FQM/GQM are connected directly to the PCI
bus. The RSP2.5s are connected to the PCI bus through to their BAP (bi-directional access
port) interfaces via the PCI-BAP FPLD. The sized i960 bus (8-bit data bus) is used to control
the CPAC2.2. The regular i960 bus is used to control PHY Daughtercard chips (16-bit data
bus), and the ATLAS chips (32-bit data bus).
The features of the Processor Daughterboard are as follows:
o 512MB SDRAM for FP software load
o MPC755 PowerPC
o MPC106 PCI Bridge
o 3.3V V3 PCI to i960 Bridge
o Rebo and Zooty FPGAs
The 16-port OC-3 PHY Daughtercard uses the PMC-Sierra S/UNI 5382 PHY chip and has
16 customer-pluggable SFP LC optical transceivers, rated for operation up to +85°C,
operating in Single-Mode, Intermediate Reach (1310 nm, 15 km). Note that at least one port
on this FP must be populated with an SFP.
The PUPS daughtercard contains four individual PUPS power supplies sequenced together,
to allow the higher voltages to come up first and go down last. The PUPS daughtercard is
powered from nominal -48VDC backplane power to provide:
o 1.8VDC @ 27.8A
o 2.51VDC @ 9.9A
o 3.31VDC @ 12.3A
o 5.04VDC @ 1.61A
3.2.6.2 4 PORT GIGABIT ETHERNET FUNCTION PROCESSOR
In order to provide direct IP connectivity to the 9370 RNC, Gigabit Ethernet interfaces are
also supported directly on the MSS15K platform. This allows the 9370 RNC to support a
mix of ATM and Ethernet interfaces.
The 4-port GE Function Processor is based on the MS3 (Multi-service 3rd generation)
Function Processor platform. This is the same FP hardware platform used for the MS3 16-
port OC3/STM1 ATM FP. For the 4-port GE FP, some of the daughtercards are populated
differently. This hardware platform consists of the following common boards/cards:
o MS3 Common Motherboard
o MS3 PUPS (Point-of-Use Power Supply) Daughtercard
o Processor Daughtercard with the following features:
o 266MHz MPC755 PowerPC
o 512MB SDRAM suitable for supporting Hitless Software Migration.
o Built-in JTAG master for supporting off-line diagnostic testing of the FP
The differentiation in the functionality of the MS3 FP is achieved through the use of factory-
interchangeable physical interface daughter cards and traffic management daughter cards.
For the 4 port GE FP, the following are used:
o A 4-port Gigabit Ethernet PHY Daughtercard which has customer pluggable SFP LC
optical transceiver modules.
o Two SFP packlet cards which mate with the Ethernet PHY Daughtercard. Each packlet
can hold two SFP modules.
o FQM Traffic Management Daughtercard
The 4-port GE FP supports the following capabilities:
o Support for 1000BASE-SX and 1000BASE-LX on a per-port basis
o Maximum segment length for 1000BASE-LX single mode is 10km
o Ethernet II, 802.3 LLC SNAP encapsulation
o Basic IP capabilities
o Full Gigabit Ethernet bandwidth supported on each of the four GE ports
o Aggregate bandwidth of approximately 2.5 Gbps supported across all four FP ports
(depending on packet size and service) [5]
o Full duplex only
o Ethernet statistics
o IEEE802.3 symmetric flow control only
[5] This implies that not all four ports can run at full capacity (1 Gbit/s) at the same time.
o Internetworking with Cisco GSR, Juniper M160, Juniper M40, and Nortel Passport
8600
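The aggregate-throughput constraint in footnote 5 is easy to express: each port may individually run at 1 Gbit/s, but the sum across the four ports must stay within roughly 2.5 Gbit/s. A toy admission check (the exact ceiling depends on packet size and service, as the footnote notes, so the constant here is an approximation):

```python
# Toy check of the 4-port GE FP aggregate-throughput constraint.
PORT_RATE_GBPS = 1.0
AGGREGATE_GBPS = 2.5          # approximate, packet-size dependent

def load_is_feasible(port_loads_gbps):
    per_port_ok = all(l <= PORT_RATE_GBPS for l in port_loads_gbps)
    aggregate_ok = sum(port_loads_gbps) <= AGGREGATE_GBPS
    return per_port_ok and aggregate_ok

assert load_is_feasible([1.0, 1.0, 0.5, 0.0])        # within the aggregate
assert not load_is_feasible([1.0, 1.0, 1.0, 1.0])    # 4 Gbit/s exceeds it
```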
A block diagram of the 4 port GE FP is shown in Figure 3-29: 4 port GE FP Block
Diagram.
3.2.6.2.1 MS3 MOTHERBOARD
The major components of the MS3 FP motherboard consist of the following:
o One CPAC2.2 which allows access to both X and Y switching fabrics via the
backplane.
o Re-assembly FPGA
o Reassembles AAL5 ATM cells back into packets.
o 4MBytes of external QDR (Quad Data Rate) SRAM configured in three
banks: 512Kx36 cell storage, 512Kx18 linked list, and 512Kx18 context
memory banks
o two FWD1.1 ASICs (ingress and egress) with internal DRAMs
o each FWD1.1 has an RSP2.5 Network Processor ASIC connected as a
Flexible Forwarding Engine (FFE)
o two RSP2.5 ASICs (ingress and egress) running at 250MHz
o The RSP2.5 provides flexibility as its functionality is derived from software. It
is the programmability of the RSP2.5 that enables this FP to be configured
for multiple services, allowing for current and future requirements. The
FWD1.1 performs data transfer while the RSP2.5 performs the functions of
a frame and cell processor.
o CBAAS (Congestion Bus & ATM AAL5 Segmentation) FPGA
o On-board PUPS to convert 3.3VDC input to 1.5VDC @ 12 A
o On-board switching power supplies for 0.75V and 0.90V termination resistors and
Vref.
o A standard serial ID EEPROM programmed during final assembly
Figure 3-29: 4 port GE FP Block Diagram
3.2.6.2.2 FRAME QUEUE MANAGER TRAFFIC MANAGEMENT DAUGHTERCARD
The FQM Traffic Management Daughtercard has the following features:
o Connects into the MS3 motherboard through two high-density, low-profile
connectors
o A standard serial ID EEPROM, programmed during final assembly
o Single FQM (Frame Queue Manager) FPGA
o uses an 80-bit wide external cell buffer with a capacity of 80Mbytes
o performs traffic management for frames only, including ATM AAL5 frames
on the egress path between the CBAAS FPGA and the PHY daughtercard
o supports 4-channel operation
o Single RCAF (Re-assembly, Classification and Formatting) FPGA
o uses 2MBytes of external QDR (Quad Data Rate) SRAM configured in three
banks: 512Kx36 data buffer, 512Kx36 context memory 1, and 512Kx36
context memory 2
The block diagram is shown in Figure 3-30 below.
Figure 3-30: FQM Traffic Manager Daughtercard Block Diagram
3.2.6.2.3 4PGE PHY DAUGHTERCARD
The 4 port GE PHY Daughtercard has the following features:
o SFP metal cage connectors that allow insertion of single-mode or multi-mode optical
transceivers
o These optical modules have hot-swap capability.
o A standard serial ID EEPROM, programmed during final assembly
o 2 Dual Gigabit Ethernet Controllers with the following functionality:
o Two-port full-duplex Gigabit Ethernet Controller with industry standard PL3
system interface.
o Built-in dual SERDES, compatible with IEEE 802.3 1998 PMA physical layer
specification.
o Dual standard IEEE 802.3 Gigabit Ethernet MACs for frame verification.
o Frame filtering on 8 unicast or 64 multicast entries.
o Internal 16KByte egress and 64KByte ingress FIFOs per channel to
accommodate system latencies.
o Line side loopback for system level diagnostic capability.
o Ethernet Header Processor (EHP) FPGA
o Two pipelined ZBT SRAM memory interfaces for accessing egress and
ingress header memories
o BIST capabilities for both external memories.
o Four rate adaptation input channel FIFOs which store data received from the
GE controllers.
o Output rate adaptation (RA) FIFO for storing data to be sent to the RCAF via
the PL3 Rx PHY interface
o Ingress port map capability allowing any port to be mapped to any other port.
o MAC address match capability
o Ethernet Header Classification
o Ethernet Header Compression
o Ethernet Header Error Checking
o CRC removal from all frames processed.
o Flow Control support based on programmable maximum bandwidths
o Four instances of Header insertion logic.
o Four output channel FIFOs which stage data before it is sent to the GE
Controllers.
o Egress port map capabilities allow any port to be mapped to any other port.
o Pass through mode supported on a per port basis.
o Ethernet header expansion.
o Egress Rate Shapers with programmable maximum allowable bandwidths
o PHY EPLD
o Controls the interrupt signals coming from the asynchronous chip set and
Gigabit Ethernet (GE) optics.
o Contains the logic to control the optical transceivers.
o Generates standard two-wire interface access to the GE optics, initiated by
the processor
o Interface for LED faceplate to indicate the traffic/status of each port.
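The GE controller's frame filtering on 8 unicast or 64 multicast entries amounts to a bounded table lookup on the destination MAC address. A sketch of that kind of filter (the class and table-management API are illustrative, not the controller's register interface):

```python
# Sketch of destination-MAC frame filtering with bounded table sizes.
class MacFilter:
    def __init__(self):
        self.unicast, self.multicast = set(), set()

    def add(self, mac):
        """A MAC is multicast if the low bit of its first octet is set."""
        table, limit = ((self.multicast, 64) if mac[0] & 1
                        else (self.unicast, 8))
        if len(table) >= limit:
            raise ValueError("filter table full")
        table.add(mac)

    def accept(self, dest_mac):
        return dest_mac in self.unicast or dest_mac in self.multicast

f = MacFilter()
f.add(bytes.fromhex("001122334455"))          # unicast entry
f.add(bytes.fromhex("01005e000001"))          # multicast entry (bit 0 set)
assert f.accept(bytes.fromhex("001122334455"))
assert not f.accept(bytes.fromhex("0011aabbccdd"))
```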
The block diagram for the Ethernet PHY Daughterboard is shown in Figure 3-31.
Figure 3-31: 4 Port GbE PHY Daughtercard Block Diagram
The faceplate of a 4pGe FP with SFP sockets is shown in the figure below with the
protective hood raised to illustrate the fiber connectors.
Figure 3-32: Faceplate view of 4pGE FP
3.3 9370 RNC SPECIFIC HARDWARE DESCRIPTION
As discussed earlier in this document, the 9370 RNC I/O plane is decoupled from the
application specific processing of the data flowing over those physical I/O interfaces. To that
end, an RNC specific packet processing engine was developed.
The first generation of that FP, referred to as the Packet Server Function Processor (PSFP),
constitutes the principal element in a 9370 RNC and occupies the majority of the slots in the
MSS15K shelf. It handles all the UTRAN control and user plane traffic from the NodeB
through to the UMTS Core Network.
With the increasing need for more subscriber capacity and higher data throughput, a second
generation Packet Server Function Processor, the Dual-Core Packet Server (DCPS), was
developed. This new DCPS doubles the packet processing throughput of the original PSFP.
3.3.1 PACKET SERVER FUNCTION PROCESSOR
The Packet Server Function Processor (PSFP) is designed for the MSS15K platform. It is
based on open architecture PrPMCs (Processor PCI Mezzanine Card) for the compute
modules, allowing for rapid hardware upgrades and product evolution following Moore's Law
as new components are introduced to market. This FP requires no external I/O. The
faceplate is shown in Figure 3-34: Packet Server Faceplate.
The Packet Server FP is designed as a platform for a pool of PrPMCs which can be used in
a number of different RNC processing applications. A block diagram of the PSFP is shown
in Figure 3-33: Packet Server FP Block Diagram.
Figure 3-33: Packet Server FP Block Diagram
The switch fabric side of the PSFP consists of a standard SGAF design and, as such,
provides the following services:
o Traffic management (TM)
o Policing
o Service interworking
o IP Virtual Routing, PVC, SPVC services.
o Operations and maintenance
o Performance management
o Billing / accounting
o Provisioning
o Scalability
o Virtual Path Termination (VPT)
o Fault / traffic management
o Sparing
o 1+1 and N+1 for the PrPMCs, depending on application role
A pair of MXT4400 Traffic Stream Processors (TSP) provides a data dispatch function for
the PrPMCs. For ingress traffic, the TSPs are responsible for AAL5 reassembly and transfer
of the recovered frames to the appropriate queues in the PrPMCs' memory space. In the
egress direction, backplane traffic is DMA'd from the PrPMC's memory directly to outgoing
queues in the TSPs, where it is segmented and sent to the switch fabric. This
architecture ensures that all data transfer on the PCI buses which interconnect the
PrPMCs and the TSPs is in the form of write cycles, which are much more efficient than PCI
read cycles.
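The design point here — keeping all PCI traffic as writes — can be modelled as two DMA directions that each push data toward the consumer rather than pulling it: ingress TSPs write reassembled frames into per-PrPMC queues, and egress PrPMCs write frames into the TSP queues. A schematic sketch (the queue names and frame contents are illustrative):

```python
# Schematic model of the write-only PCI dispatch between TSPs and PrPMCs.
from collections import deque

prpmc_queues = {i: deque() for i in range(6)}   # in PrPMC memory space
tsp_egress_queue = deque()                      # in TSP memory

def tsp_ingress(frame, target_prpmc):
    """Ingress: the TSP reassembles and WRITES the frame to the PrPMC."""
    prpmc_queues[target_prpmc].append(frame)

def prpmc_egress(frame):
    """Egress: the PrPMC DMAs (writes) the frame into the TSP queue,
    where it is segmented and sent to the switch fabric."""
    tsp_egress_queue.append(frame)

tsp_ingress(b"uplink pdu", target_prpmc=2)
prpmc_egress(b"downlink pdu")
assert prpmc_queues[2].popleft() == b"uplink pdu"
assert tsp_egress_queue.popleft() == b"downlink pdu"
```

In both directions the producer writes into memory owned by the consumer, so neither side ever issues a PCI read across the bus to fetch data.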
The Packet Server FP consists of:
o The Packet Server (PS) Motherboard
o A PrPMC Daughtercard
o Six Processor PCI Mezzanine Card (PrPMC) modules
o A Power Supply Module (PSM) Daughtercard
Figure 3-34: Packet Server Faceplate
3.3.1.1 PACKET SERVER MOTHERBOARD
The Packet Server FP motherboard provides the following features and functions:
o One CPAC2 ASIC for interfacing to the X and Y fabric cards.
o Two PQC2 devices with Context and Queuing Memory (CQM).
o 4 MByte of queuing memory provided for each PQC2.
o Two PrPMC (Processor PCI Mezzanine Card) slots for the processing of radio
packets.
o A PowerPC 755 CPU tile which has a 64-bit data bus and L2 cache.
o A 32-bit, 33MHz PCI bus to support peripheral ASICs (PQC2 ASIC, Zooty FPGA)
o An i960 maintenance bus for microprocessor interface to non-PCI devices.
o MPC106 PCI bridge/memory controller providing the system memory interface, and
PCI master functionality
o SDRAM memory subsystem
o 256 MBytes implemented using Multi Chip Package (MCP) BGA devices
o Read-Modify-Write (RMW) parity protection for data
o FLASH memory subsystem.
o 16MBytes of FLASH memory
o 64KByte block architecture, block erasable.
o dual bank architecture with an accompanying bank swapping mechanism
o General purpose I/O register control for alarms, slot-ID, jumpers, etc.
o Two serial debug ports.
o Both ports routed to backplane for MAC address and TIP access.
o Connector on TPC for TIP port access.
o Error protection, detection and indication
o Access exception on i960 bus.
o 60x, PCI and i960 errors.
o Parity protection.
o Watchdog Control.
o System Reset circuitry (designed in an FPGA)
o Five software timers
o Countdown timer chain.
o Current value read register.
o Programmable load value.
o Programmable interrupt at zero count.
o Fixed time base independent of the processor bus clock.
o Interrupt Controller (mimics i960 interrupt architecture).
o On-board Voltage Regulators for PowerPC Processor (MPC755 core voltage is
1.9V).
o Ethernet Port for Lab Debug.
o 10BaseT Ethernet access through an RJ-45 connector on board (depopulated in
production version)
o FPGA-based PCI arbiter used to arbitrate between MPC106, Ethernet port and the
two PCI-PCI Bridges
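The software timer block listed above (programmable load value, current-value read register, programmable interrupt at zero count, fixed time base) can be modelled with a small sketch. This is illustrative only; the class and attribute names are not the actual register map.

```python
class CountdownTimer:
    """Minimal model of one PSFP software timer: a programmable load value,
    a readable current value, and an interrupt callback fired at zero count.
    Names are illustrative, not the hardware register names."""

    def __init__(self, on_zero=None):
        self.load_value = 0      # programmable load value
        self.current = 0         # current value read register
        self.on_zero = on_zero   # programmable interrupt at zero count

    def load(self, value):
        self.load_value = value
        self.current = value

    def tick(self):
        # Fixed time base: one tick per call, independent of the CPU bus clock.
        if self.current > 0:
            self.current -= 1
            if self.current == 0 and self.on_zero:
                self.on_zero()

fired = []
t = CountdownTimer(on_zero=lambda: fired.append(True))
t.load(3)
for _ in range(3):
    t.tick()
```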
3.3.1.2 PRPMC DAUGHTERCARD
The PrPMC Daughtercard has the following features and functions:
o Utopia Bridge FPGA to work around a bug in the PQC2 egress Utopia interface.
o Two Maker Communications MXT4400 Traffic Stream Processors for SARing and
routing of radio packets between the PQC2s and multiple PrPMCs.
o Four PrPMC (Processor PCI Mezzanine Card) slots for the processing of radio
packets
3.3.1.3 PRPMC MODULE
The PrPMC module forms the heart of the Packet Server function. The six PrPMC modules
on the PSFP provide the dense computing required for RNC User and Control plane
processing. The block diagram for the PrPMC is shown in Figure 3-35: PSFP PrPMC
Module Block Diagram.
Figure 3-35: PSFP PrPMC Module Block Diagram

Each PrPMC module features the following:
o Processor
o MPC7410 at 450MHz core frequency
o 100MHz bus clock frequency
o Address and data bus parity
o L2 Cache
o 2MB backside cache using external pipeline burst-mode SRAM
o Data bus parity
o SDRAM
o 256MB 72-bit wide ECC-protected DRAM configured as a single bank of x16
devices
o FLASH
o 32MB soldered on-board
o Memory Controller and PCI Host Bridge ASIC
o Provides bridge function between PPC60x bus, system memory, and the
PCI local bus
o 100MHz PowerPC compatible bus interface
o SDRAM interface with ECC
o 32/64 bit PCI bus interface capable of running up to 66MHz
o Single channel DMA controller
o MPIC compliant interrupt controller
3.3.1.4 POWER SUPPLY MODULE (PSM) DAUGHTERCARD
The PSM Daughtercard has the following features and functions:
o 37 to 75 Vdc input voltage range.
o 5V@7A, 3.3V@40A outputs.
o All outputs DC isolated
o Low input shutdown and automatic recovery.
o Inrush current limiting.
o Input and output EMI filtering.
o Over-voltage protection.
o Sequencing of the 3.3 Vdc and 5 Vdc outputs.
o Live insertion capability.
3.3.1.5 HOST BRIDGE
The PowerPC Host Bridge provides interconnect between the MPC7410 and the PCI bus. It
also integrates a secondary cache controller for the external L2 cache and a high-
performance system memory controller for DRAM and FLASH.
The Host Bridge features:
o 32 bit address and 64 bit data local bus (60x system bus)
o System memory interface
o Supports DRAM (page mode, EDO), and SDRAM
o 64-bit data bus that operates up to 66 MHz
o Supports 1 to 8 banks built of x1, x4, x8, x9, x16 or x18 DRAM chips
o 1 GByte of RAM space, 16 Mbytes of ROM space
o Supports writing to Flash EPROMs
o Supports parity or error-correcting code (ECC)
o External L2 cache controller
o Up to 1 MByte memory
o Direct-mapped
o Provides support for either asynchronous SRAM, burst SRAM, or pipelined
burst SRAM
o Full memory coherency
o FLASH (boot ROM) interface
o PCI interface (32-bit multiplexed address and data, at 33MHz)
The PCI interface supports both Slave and Master operation. Because there is also an
Ethernet Controller for lab debug access, a PCI arbiter has been implemented in the
Zooty FPGA to arbitrate among the PCI bus masters.
The supported PCI features include:
o PCI Specification 2.1
o burst transfers up to 32 bytes
o support of master aborts, target aborts, target re-tries and target disconnects
o posted writes when bursting to the PCI bus
3.3.1.6 PRIMARY PCI TO SECONDARY PCI BRIDGE
The PCI-to-PCI Bridge provides a high-performance connection path between two
Peripheral Component Interconnect (PCI) buses. Transactions occur between masters on
one PCI bus and targets on another PCI bus and the bridge allows bridged transactions to
occur concurrently on both buses.
The bridge supports:
o burst data transfers with pipeline architecture to maximize data throughput in both
directions
o independent read and write buffers for each direction
o up to three delayed transactions in both directions
3.3.2 DUAL-CORE PACKET SERVER OVERVIEW
3.3.2.1 SYSTEM OVERVIEW
The Dual-Core Packet Server (DCPS) is an evolution of the Packet Server Functional
Processor (PSFP).
The DCPS provides a pool of high performance processors which are responsible for the
following functions:
o High-touch Bearer processing.
o Radio protocol handling (MAC, RLC and PDCP).
o Interface bearer protocols.
o Macro-Diversity Handover (frame selection, buffering, synchronization,
combining/splitting).
o AAL2 SARing function (Segmentation And Re-assembly).
o Translation between AAL2 and IP
3.3.2.2 FEATURE LIST
The DCPS consists of:
o A dual-core PowerPC processing and datapath motherboard including:
o Three Dual-Core 1.3GHz 8641D CPUs (six PowerPC CPU cores total).
o High-performance, robust, Serial Rapid IO (sRIO) interfaces and IP packet
handling
o A maintenance processor daughterboard, (PDB4) with a PowerPC 750GX and 512
MByte PC2700 DDR memory
A block diagram of the DCPS is shown in Figure 3-36: Dual-Core Packet Server Block
Diagram.
Figure 3-36: Dual-Core Packet Server Block Diagram

3.3.2.3 PROCESSOR DAUGHTERBOARD
A new maintenance Processor Daughterboard (PDB), generally referred to as the PDB4 is
used on the DCPS. The PDB4 acts as the DCPS host processor and provides the following
features and functions:
o PowerPC 750GX @ 1GHz
o 512MB System memory.
o System Controller (Marvell Discovery 3).
o Legacy FPGA for supporting legacy RNC hardware
o FLASH - Reset Interface (FRI) CPLD
o HSCX Driver
o Ethernet PHY
o Event Log
o Clocks
3.3.2.4 DCPS MOTHERBOARD
The DCPS motherboard provides the following features and functions:
o Three MPC8641D Dual-Core processor blocks with:
o 512 MBytes of DDR2-533 memory per core (3GB total DDR memory).
o 1 MByte of FLASH memory
o sRIO datapath interface
o Datapath FPGA (PANDA) for interfacing MPC8641D CPUs to RNC datapath with:
o 8 MBytes of egress ZBT SRAM.
o 8 MBytes of ingress ZBT SRAM.
o 8 MBytes of Flash for POST test vectors and test results.
o Manufacturing Test and Diagnostic improvements.
o CPAC2 ASIC interfaces to redundant X and Y backplane switching fabrics
o Two PQC12 Queue Controller ASICs with:
o 8 MBytes of CQM (Context and Queuing Memory).
o 4 MBytes of HCM (Header Context Memory).
o 32-bit, 33MHz PCI bus to support legacy RNC peripherals.
o i960 Maintenance bus from the PDB4 to interface to the CPAC2.
o PCI to sRIO Bridge.
o Eight port sRIO switch.
o Power block consisting of:
o -48V to 3.3V and 5V main power converters.
o Inrush current limiting on the -48V input.
o Low input shutdown and automatic recovery on -48V input.
o Inboard POLs (Point-of-Load power supplies) for generating various core
voltages from the 3.3V and 5V rails.
o Power monitoring and sequencing of the core voltages.
3.3.2.5 DCPS DATAPATH
This section provides a brief overview of the DCPS datapath and its components. As the
terminology can be somewhat misleading, the following conventions are used:
o the ingress datapath is towards the fabric
o the egress datapath is away from the fabric (i.e. towards the MPC8641D end
points)
3.3.2.5.1 MPC8641D CPU TILES
The DCPS contains three MPC8641D CPU tiles. At the heart of each CPU tile is a
Freescale Semiconductor MPC8641D Dual-Core integrated host processor as illustrated in
Figure 3-37: MPC8641D CPU Tile Block Diagram.
Each core in the MPC8641D is based on a high-performance superscalar architecture
supporting multiple execution units, including 4 independent units that execute the AltiVec
instruction set architectural extension for support of 128-bit vector operations.
The high level of integration in the MPC8641D helps simplify board design and offers
significant bandwidth and performance increases. The MPC8641D has a DDR2 SDRAM
memory controller, a local bus controller (LBC), a multiprocessor programmable interrupt
controller (MPIC), two I2C controllers, a four-channel DMA controller, and a dual universal
asynchronous receiver/transmitter (DUART). For high speed interconnect, the MPC8641D
provides two sets of multiplexed pins that support two interface standards: x1/x4 serial
RapidIO (with message unit) and x1/x2/x4/x8 PCI Express. The MPC8641D also has 4
integrated 10/100/1000 Mbps Ethernet controllers. The use of these interfaces is described
in more detail below.
Figure 3-37: MPC8641D CPU Tile Block Diagram

The MPC8641D has two 64-bit (72-bit with ECC) DDR2 SDRAM controllers. Each CPU tile
incorporates 512MB of SDRAM per memory controller for a total of 1GB per CPU tile. Each
memory channel is implemented as 64Mx72 with ECC, using five 1Gbit (64M x 16) devices.
The memory can be expanded to 1 GByte using 2Gbit (128M x 16) devices.
For high speed interconnect, the DCPS uses sRIO interfaces in x1 mode at a 2.5 gigabaud
link rate. The PCI Express interfaces are not used.
Each CPU tile also includes an I2C PROM for storage (if needed) of any boot configuration
for the MPC8641D.
The MPC8641D operates at a core frequency of 1.33 GHz, an internal (MPX) bus frequency
of 533MHz, and the DDR2 memory bus frequency of 266 MHz (for a memory data rate of
533MHz).
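The memory organization above can be sanity-checked with a little arithmetic (the 64M x 72 channel needs five x16 devices, and 64M locations of 8 data bytes give 512MB per controller). Variable names are illustrative:

```python
import math

# Each MPC8641D DDR2 channel: 64M addressable locations x 72 bits
# (64 data bits + 8 ECC bits), built from 1 Gbit (64M x 16) devices.
rows = 64 * 2**20            # 64M locations per channel
data_bits, ecc_bits = 64, 8
device_width = 16            # width of one 64M x 16 device

devices_per_channel = math.ceil((data_bits + ecc_bits) / device_width)
channel_bytes = rows * data_bits // 8   # payload capacity per controller
tile_bytes = 2 * channel_bytes          # two controllers per CPU tile
```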
3.3.2.5.2 CPAC2
The CPAC2 ASIC is a companion device to the PRIZMA switch fabric device. The CPAC2
interfaces to the MSS15K switch fabrics over a proprietary DASL (Data Aligned
Synchronous Link) and to a PQC12 ASIC on a CMD 32/16 bit interface bus.
The CPAC2 interfaces to both the X and Y switch fabrics (also referred to as Path X and
Path Y). Both paths are fully independent and one can be activated or deactivated, clocked
or not clocked, tested or not tested without impacting the status of the other path.
The ingress traffic coming over the CMD interface bus from the ingress PQC12 is routed to
either fabric X or fabric Y, but never duplicated over both fabrics. The protocol engine
interfacing the CPAC2 to the CMD interface bus determines fabric selection.
The egress traffic coming from both the X and Y fabrics is merged onto the CMD interface
bus and sent to the egress PQC12. The egress protocol engine for the CMD interface bus is
informed of packet availability from each fabric.
3.3.2.5.3 PQC12
The DCPS motherboard uses egress and ingress PQC12 ASICs. Each PQC12 is
configured with 8 Mbytes of Context and Queuing Memory (CQM) and 4 Mbytes of Header
Context Memory (HCM), with memory interfaces operating at 100MHz.
The ingress PQC12 ASIC forwards Utopia Bus (16-bit @ 75MHz) data from the PANDA
FPGA to the CPAC2 CMD Bus interface (also 16-bit at 75MHz). The egress PQC12
provides the same functionality in the reverse path.
The PQC12 provides the following traffic functions:
o Segmentation of incoming frames into Passport cells (Falcon or ATM encapsulated
Passport cells).
o Addition of the Passport header (a 64 byte Passport cell comprises a 4 byte header,
54 byte payload, 2 byte multicast addresses for ATM cells only and 4 reserved
bytes).
o Traffic queuing with 3 priority levels.
o Traffic statistics collection.
o Control Path AAL5 traffic routing from the PDB4 PCI bus to the motherboard Utopia
bus.
o Network Interworking on the ingress data path.
The PQC12 is managed by the PDB4 over the PCI interface.
3.3.2.5.4 PANDA FPGA
The datapath or PANDA FPGA provides the data interworking between the Utopia
interface of the PQC12s and the sRIO interfaces of the MPC8641D CPUs. The PANDA
also performs some processing on the data as it flows through it.
The PANDA performs the following operations on the egress data path:
o Cells arriving from the PQC12 are checked for errors.
o VCIs are checked to see if the channel is enabled or not.
o Cells are processed by the appropriate function:
  o Loop back
  o AAL5 Reassembly
  o Cell Forwarding (AAL0 & AAL2)
o Packets and Cells are written directly to the memory of the target 8641D CPU.
The PANDA performs the following operations on the ingress data path:
o 8641D CPUs write (via DMA) cells and packets into buffers in the PANDA.
o The channel ID (VCI) is examined to determine the correct processing of the
Packet/Cell.
o The PANDA supports AAL5 segmentation, AAL2/AAL0 cell forwarding, raw cell
segmentation and forwarding, etc.
o Cells are sent as soon as they are filled up. AAL2 partial cells are sent at the
expiration of the timer CU.
o Data is provided to the PQC12 as cells over the Utopia 2 interface.
The PANDA has 8 MBytes of external sync SRAM for both the egress and ingress
directions. The ingress SRAM is used for counters and the egress SRAM is used for
connection configuration, AAL5 context and free buffer pool.
The PANDA also has 8 MBytes of external FLASH. The FLASH is used to store POST
(Power On Self Test) test vectors that the PANDA executes through the JTAG chain as a
POST master after a power up reset. Following the execution of POST the PANDA will be
loaded with its mission mode image.
3.3.2.5.5 SRIO PHY
The sRIO PHY is a Vitesse VSC7281, a single XGMII (10-Gigabit Media Independent
Interface) and dual XAUI (10-Gigabit Attachment Unit Interface) transceiver.
The DCPS uses the VSC7281 configured as a quad 8-bit, independent parallel-to-serial and
serial-to-parallel serializer/deserializer (SerDes). Only 2 of the 4 SerDes are used by the
DCPS, at a data rate of 2.0 gigabits per second (Gbps) using link speeds of 2.5 gigabaud.
3.3.2.5.6 SRIO SWITCH
The sRIO switch used on the DCPS is the Tundra Semiconductor Tsi564A 40 Gbits/s Serial
RapidIO switch. The Tsi564A main features are:
o Up to 4 ports in x4 LP-Serial mode.
o Up to 8 ports in x1 LP-Serial mode (each x4 port can be configured independently
as two x1 ports).
o Operating baud rates per data lane: 1.25 Gbit/s, 2.5 Gbit/s, or 3.125 Gbit/s.
o Programmable serial transmit current with pre-emphasis equalization.
o Per-port destination ID lookup table, used to direct packets through the switch.
The DCPS uses the sRIO switch to complete the datapath connection between the MSS15K
fabric and the MPC8641D compute tiles. The DCPS also uses the sRIO switch to complete
the maintenance connection between the PDB4, the MPC8641D CPU tiles, and the PANDA
FPGA.
3.3.2.5.7 PCI-TO-SRIO BRIDGE
The PCI-to-sRIO Bridge is implemented using 2 devices, the Jennic Ltd. Rio Grande and a
sRIO PHY.
The Rio Grande is a bus bridge that connects PCI devices to serial RapidIO (sRIO) devices.
The Rio Grande performs transaction mapping from PCI to RapidIO, and vice-versa. This
enables PCI based sub-systems and RapidIO based systems to interoperate.
To complete the maintenance path from the PCI bus to the sRIO interface the Rio Grande
requires an external sRIO PHY. The Rio Grande interfaces to the sRIO PHY through an
XGMII interface.
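The 64 byte Passport cell described for the PQC12 (4 byte header, 54 byte payload, 2 byte multicast address field used for ATM cells only, and 4 reserved bytes) can be checked as a packed byte layout. The field names below are illustrative, not from the ASIC documentation:

```python
# Illustrative byte layout of a 64-byte Passport cell as described for the
# PQC12: header + payload + multicast address (ATM cells only) + reserved.
PASSPORT_CELL_FIELDS = [
    ("header", 4),
    ("payload", 54),
    ("multicast_address", 2),  # used for ATM cells only
    ("reserved", 4),
]

def field_offsets(fields):
    """Return ({name: (offset, size)}, total_size) for a packed layout."""
    offsets, pos = {}, 0
    for name, size in fields:
        offsets[name] = (pos, size)
        pos += size
    return offsets, pos

OFFSETS, CELL_SIZE = field_offsets(PASSPORT_CELL_FIELDS)
```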
4 RNC SYSTEMS & ENVIRONMENT
With the exception of the UMTS PSFP and DCPS modules, the 9370 RNC is built on the
MSS15K platform hardware under an OEM supply agreement with Nortel.
The 9370 RNC is a unique product and has a separate product integrity program which runs
in parallel with that of the OEM MSS15K platform itself.
4.1 9370 RNC SINGLE MARKET PACKAGES
The 9370 RNC Single market packages are single shelf in-frame systems: a North
American variant with DS-1 timing interfaces, and an International variant with E1 timing
interfaces. With the introduction of new hardware in UA06, the market packages are further
differentiated into standard and high-capacity variants, depending on the module fill of the
base platform. These market packages are listed in Table 4-1: RNC Single Market Package.
The unused space in the upper half of the frame is available for the addition of a second
9370 RNC shelf through the RNC Expansion market package.


Figure 4-1 illustrates the composition of a single shelf 9370 RNC.
The RNC Single market package features:
o Redundant CP modules (2 performance options)
o Standard capacity CP3
o High Capacity CP4
o Redundant 16 port OC-3/STM-1 I/O cards for Iu, Iur, and Iub interfaces (2
performance options)
o Standard Capacity PQC-based 16pOC3SmIrAtm FP
o High Capacity MS3-based 16pOC3PosAtm FP
o Redundant 4 port Gigabit Ethernet I/O cards for Iu, Iur, and Iub interfaces (optional)
o BITS external timing options
o E1: 75 Ohm unbalanced or 120 Ohm balanced
o DS-1: 100 Ohm balanced.
o Packet Server scalable configurations (2 performance options):
o Standard capacity PSFP Packet Server: 4, 6, 7, 8, 10 and 12 per shelf
o High capacity Dual-Core Packet Server: 4, 6, 8, 10 and 12 per shelf
Refer to Table 4-2 for the 9370 RNC optionality matrix. The module fill and their locations in
the shelf are captured in Table 4-3: RNC Shelf Module Fill.
Table 4-1: RNC Single Market Package
9370 RNC Single Description                Market Pkg.    Control Processor
Standard Capacity, E1 timing interfaces    NTW800BFAA01   CP3
High Capacity, E1 timing interfaces        NTW800DFAA01   CP4
Standard Capacity, DS-1 timing interfaces  NTW800CFAA01   CP3
High Capacity, DS-1 timing interfaces      NTW800EFAA01   CP4

Table 4-2: 9370 RNC Module Fill Optionality Compatibility Matrix
Module       CP3  CP4  16pOC3 PQC  16pOC3 MS3  PSFP  DCPS  4pGigE
CP3           Y    N       Y           Y         Y     Y      Y
CP4           N    Y       N           Y         N     Y      Y
16pOC3 PQC    Y    N       Y           N         Y     Y      Y
16pOC3 MS3    Y    Y       N           Y         Y     Y      Y
PSFP          Y    N       Y           Y         Y     N      Y
DCPS          Y    Y       Y           Y         N     Y      Y
4pGigE        Y    Y       Y           Y         Y     Y      Y
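Table 4-2 can be read mechanically. The sketch below encodes the matrix as printed and checks the symmetry you would expect of a compatibility relation (module names as in the table; the encoding itself is illustrative):

```python
# 9370 RNC module-fill compatibility matrix from Table 4-2 (Y = compatible).
MODULES = ["CP3", "CP4", "16pOC3 PQC", "16pOC3 MS3", "PSFP", "DCPS", "4pGigE"]
ROWS = [
    "Y N Y Y Y Y Y",   # CP3
    "N Y N Y N Y Y",   # CP4
    "Y N Y N Y Y Y",   # 16pOC3 PQC
    "Y Y N Y Y Y Y",   # 16pOC3 MS3
    "Y N Y Y Y N Y",   # PSFP
    "Y Y Y Y N Y Y",   # DCPS
    "Y Y Y Y Y Y Y",   # 4pGigE
]
COMPAT = {m: dict(zip(MODULES, row.split())) for m, row in zip(MODULES, ROWS)}

def compatible(a, b):
    """True when modules a and b may share a shelf fill per Table 4-2."""
    return COMPAT[a][b] == "Y"

# Compatibility is mutual, so the matrix must be symmetric.
assert all(compatible(a, b) == compatible(b, a)
           for a in MODULES for b in MODULES)
```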
Table 4-3: RNC Shelf Module Fill
Slot # Module
0E Filler
0 Control Processor
1 Control Processor
2 Packet Server
3 Packet Server
4 Packet Server
5 Packet Server
6 Packet Server
7 Packet Server
1E Filler
8 I/O (16pOC-3/STM-1)
9 I/O (16pOC-3/STM-1)
10 Packet Server
11 Packet Server
12 Packet Server
13 Packet Server
14 Packet Server or 4pGigE
15 Packet Server or 4pGigE
Figure 4-1: RNC Single System Level Packaging
(Front view of the RNC Single shelf in frame: breaker interface panel, RNC expansion
space, RNC module fill in slots 0E, 0-7 and 1E, 8-15, lower cooling unit with fans facing
rearward, cable trays, filler panel)
4.2 9370 RNC EXPANSION AND DUAL MARKET PACKAGES
The 9370 RNC Expansion market packages are shelf/module only packages that are
installed in the upper frame space of the 9370 RNC Single market package. This MSS15K
shelf is common to all variants in the 9370 RNC market package line-up (Figure 4-1).
The RNC Expansion market package is not a standalone system and when installed
becomes an RNC Dual market package configuration. It is separately orderable and may be
used to upgrade an RNC Single configuration.
For Solution Commercialization and Supply Chain purposes the 9370 RNC Dual is available
by ordering the RNC Single plus an RNC Expansion market package. Although not an
orderable market package, the RNC Dual is a living configuration for purposes of
Engineering Change and Product Integrity tracking. Please refer to Figure 4-3: RNC Single
with Expansion Shelf (RNC Dual) which illustrates the fully populated frame of the 9370
RNC Dual configuration. The 9370 RNC Expansion market packages are listed in Table 4-4.
The RNC shelf-level module fill for the RNC Dual configuration is the same as for the RNC
Single configuration listed in Table 4-3, and features:
o Redundant CP modules (2 performance options)
o Standard capacity CP3
o High Capacity CP4
o Redundant 16 port OC-3/STM-1 I/O cards for Iu, Iur, and Iub interfaces (2
performance options)
o Standard Capacity PQC-based 16pOC3SmIrAtm FP
o High Capacity MS3-based 16pOC3PosAtm FP
o Redundant 4 port Gigabit Ethernet I/O cards for Iu, Iur, and Iub interfaces (optional)
o BITS external timing options
o E1: 75 Ohm unbalanced or 120 Ohm balanced
o DS-1: 100 Ohm balanced.
o Packet Server scalable configurations (2 performance options):
o Standard capacity PSFP Packet Server: 4, 6, 7, 8, 10 and 12 per shelf
o High capacity Dual-Core Packet Server: 4, 6, 8, 10 and 12 per shelf
Table 4-4: 9370 RNC Expansion Market Packages
9370 RNC Expansion Description             Market Pkg.    Control Processor
Standard Capacity, E1 timing interfaces    NTW810BFAA01   CP3
High Capacity, E1 timing interfaces        NTW810DFAA01   CP4
Standard Capacity, DS-1 timing interfaces  NTW810CFAA01   CP3
High Capacity, DS-1 timing interfaces      NTW810EFAA01   CP4

Figure 4-2: RNC Expansion Shelf


Figure 4-3: RNC Single with Expansion Shelf (RNC Dual)
(Front view of the fully populated frame: breaker interface panel, upper and lower cooling
units with fans facing rearward, cable trays, and upper and lower RNC shelves, each with
module fill in slots 0E, 0-7 and 1E, 8-15)
4.3 PHYSICAL, ENVIRONMENTAL, ELECTRICAL
CHARACTERISTICS
The specifications for the RNC Single configuration are listed in Table 4-5: RNC Single
Operational Characteristics.
Table 4-5: RNC Single Operational Characteristics
Characteristic                                               Unit      RNC Single
Dimensions W x D x H (without cosmetic side panels)          mm        600 x 600 x 2125
                                                             inches    23.6 x 23.6 x 83.7
Dimensions W x D x H (with cosmetic side panels and doors)   mm        660 x 732 x 2125
                                                             inches    25.9 x 28.8 x 83.7
Weight (RNC Frame)                                           kg / lbs  125 / 275
Weight (fully populated RNC Single with accessories)         kg / lbs  420 / 925
Weight (fully populated RNC Dual with accessories)           kg / lbs  612 / 1348
Voltage (Nominal)                                            VDC       -48 or -60
Maximum Power Consumption (6), with PSFP                     W         2540
Maximum Power Consumption (6), with DCPS                     W         2900
Maximum Heat Dissipation (1 shelf) (7), with PSFP            W/m2      3306
Maximum Heat Dissipation (1 shelf) (7), with DCPS            W/m2      3775
Normal Operating Temperature                                 C         5 to 40
                                                             F         41 to 104
Short Term Operating Temperatures                            C         -5 to 50
                                                             F         23 to 122
Temperature Variation                                        C/hour    30
                                                             F/hour    86
Normal Relative Operating Humidity (non-condensing) (8)      %         10 to 90 (9)
Short Term Relative Operating Humidity (non-condensing) (8)  %         5 to 90 (9)
Acoustic Noise Level (normal conditions)                     dBa       <60
Earthquake                                                             Zone 4
Altitude                                                     m         Max. 4000 above sea level

Notes:
(6) Estimated, based on a fully populated 9370 RNC shelf (i.e. 12 Packet Server FPs)
(7) Calculated based on [R40] Telcordia GR-63 NEBS, General Requirements for Network
Equipment Building Systems, Section 4.1.6, Objective O4-20
(8) Compliant to [R40] Telcordia GR-63 NEBS, General Requirements for Network
Equipment Building Systems
(9) Test results can be found in [R55] UMT/RNC/DJD/017284 ALU 9370 RNC Full H/W
Compliance Report
5 RNC SOFTWARE OVERVIEW
This chapter provides an overview of the 9370 RNC software architecture; in cases where
more detailed descriptions are available for specific domains additional references are
provided.
5.1 RNC SOFTWARE ARCHITECTURE OBJECTIVES
The 9370 RNC software architecture introduces the following key capability in the UA06.0
release:
IP in the UTRAN: UA06 introduces IP UTRAN using a pair of 4-port GE interface cards
to carry IP traffic [R70] UMT/SYS/DD/023092 IP in UTRAN FN. UA06 feature
implementation in IP UTRAN is done in the following areas:
o Hybrid Iub support on RNC. On the hybrid Iub interface, the R99, signaling, GBR
and OAM traffic remain on the ATM transport layer, while I/B HSPA (HSDPA and
E-DCH) traffic is carried over IP/Ethernet. This configuration is used to support
Macro BTSs.
o IP IuPS. All traffic carried over IuPS, including both C-Plane and U-Plane
traffic, is on IP/Ethernet. This interface is used to connect to the SGSN
through an IP network.
2x Dimensioning: The main objective of the Increased Dimensioning feature is to
increase the 9370 RNC dimensioning, allowing a greater number of NodeBs to be
supported by the 9370 RNC when PSFPs are employed, and a greater number of
NodeBs and Cells to be supported by the 9370 RNC when the DCPS is employed.
o This feature also required the introduction of service groups: NodeBs are
distributed into service groups, each mapped to a Core Process, making the
addition of new PSFP or DCPS modules invisible to the customer; see
7.12.1 Hardware Upgrade.
Bandwidth Pools and Advanced QOS: The implementation of bandwidth pools and
Advanced QOS [R73] provides the framework for 3GPP compliant Iub Alcap, Bandwidth
Pool Congestion Control, and Guaranteed Bit Rate (GBR) services for HSDPA; see [R77]
UMT/SYS/DD/023087, UTRAN Transport Architecture, UTRAN architecture and
transmission management.
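The IP-in-UTRAN transport split described above (hybrid Iub keeping R99, signaling, GBR and OAM traffic on ATM while I/B HSPA rides IP/Ethernet, and IP IuPS carrying everything over IP) can be sketched as a simple classifier. The interface and traffic-class names are illustrative, not configuration keywords:

```python
# Transport selection for the UA06 IP-in-UTRAN features described above.
# Traffic-class and interface names here are illustrative only.
ATM_CLASSES = {"r99", "signaling", "gbr", "oam"}
IP_CLASSES = {"hsdpa_ib", "edch_ib"}   # interactive/background HSPA

def utran_transport(interface, traffic_class):
    """Return the transport layer used for a traffic class on an interface."""
    if interface == "iups":
        return "IP/Ethernet"           # IP IuPS: all C-plane and U-plane traffic
    if interface == "hybrid_iub":
        if traffic_class in ATM_CLASSES:
            return "ATM"
        if traffic_class in IP_CLASSES:
            return "IP/Ethernet"
    raise ValueError("unsupported combination: %r / %r" % (interface, traffic_class))
```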









5.2 RNC SOFTWARE DOMAINS
The 9370 RNC major software domains are shown in the following diagram:
Figure 5-1 RNC Software Domains
(RNC primary software domains: Base and Operating System Software; Shelf Mngt and
Disk; OAM - CM, FM, PM; ATM and IP Routing; RNC Transport Protocols; Path and
Connection Mngt; Control Plane - Cells; Control Plane - Calls; User Plane)
Base and Operating System software comprises two main types:
Packet Server platform software, which provides a VxWorks-based foundation for the APs
Control Processor and ATM Interface module software, also based on the VxWorks OS
The Shelf Management software resides on the Control Processor (CP) and is responsible
for overall switch management via the Process Control System, including card control, disk
and file system management, software loading and upgrade, and CP equipment
protection.
The ATM and IP services software provides the external and internal connectivity within the
RNC. ATM services support VPs and VCs, PVCs and SPVCs, ATM connection
management and traffic management, equipment protection and APS. The ATM software is
located on the Control Processor and 16pOC3/STM-1 Interface host processor. IP services
support static routes, protected routes, IP forwarding tables and VLANS. The IP software is
located on the Control Processor as well as loaded into the hardware forwarding devices on
each card.
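The protected routes mentioned above amount to a primary next-hop with a pre-installed fallback. A minimal sketch of the idea, assuming nothing about the actual forwarding-table format (names are illustrative):

```python
# Minimal model of a protected static route: a primary next-hop with a
# backup that takes over when the primary path is marked down.
class ProtectedRoute:
    def __init__(self, prefix, primary, backup):
        self.prefix = prefix
        self.primary = primary
        self.backup = backup
        self.primary_up = True

    def next_hop(self):
        """Next-hop currently in use for this prefix."""
        return self.primary if self.primary_up else self.backup

route = ProtectedRoute("10.1.0.0/16", primary="192.0.2.1", backup="192.0.2.2")
assert route.next_hop() == "192.0.2.1"   # primary path in use
route.primary_up = False                 # primary fails; backup takes over
```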
RNC transport protocols make use of the ATM services to support the external RNC IuB,
IuCS, IuPS, IuR, IuBC and IuPC interfaces at a transport level; this includes SS7 control
plane protocols on the NI and PDCs.
Path and connection management software provides connectivity to external network
elements (NodeBs, SGSN, MGWs, Call Servers and other RNCs). ATM paths are
configured on the RNC, for example to NodeBs via an ATM transport network; the path
management system (Transport Bearer Manager) provides the connectivity between the
ATM Interface modules and the RNC internal APCs. Channels and calls are then
multiplexed onto the paths and managed by connection management software identified via
channel Ids (CIDs). Path management software (TBM) resides on the PMC-M and
connection management software on the TMU and NI.
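The path/channel split described above (paths configured between the ATM interfaces and the internal processors, with channels multiplexed onto them and identified by channel IDs) can be sketched as a small CID table. All names are illustrative:

```python
# Illustrative sketch of multiplexing channels onto a configured path, each
# identified by a channel ID (CID), as in the TBM/connection-management split
# described above. Not the actual TBM data structures.
class Path:
    def __init__(self, name):
        self.name = name
        self._cids = {}
        self._next_cid = 1

    def add_channel(self, endpoint):
        """Allocate a CID on this path for a new channel."""
        cid = self._next_cid
        self._next_cid += 1
        self._cids[cid] = endpoint
        return cid

    def lookup(self, cid):
        return self._cids[cid]

p = Path("nodeb-17-iub")
cid_a = p.add_channel("ue-call-A")
cid_b = p.add_channel("ue-call-B")
```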
OAM software comprises Configuration Management, Fault Management and statistics
collection or performance management (CM, FM, PM). The primary OAM software
functions are located on the CP and OMU, although local agents are found on each card.
The Control Plane is responsible for cell, NodeB and call processing; both are located on
the TMU. Functions include Radio Network Layer protocols (RANAP, RNSAP, NBAP), UE
and NodeB call processing, paging, and IuR.
The U-Plane is responsible for forwarding AAL2 cells and packets between the IuB network
and core network; the functions include power control, macro diversity, ciphering, circuit-
switched services and packet switched services, GTP tunnel termination, support of MAC
and RLC protocols. The U-plane is located on the PC and RAB processors.
The RNC software domains and their mapping to hardware are shown in the following
diagram:
Figure 5-2 9370 RNC Processor Role for a maximum configuration
(Diagram labels, as extracted: CP and OMU pairs - shelf mngt, disk/filesystem, OAM
CAS/CDL, C-plane OAM/MIB, counters; PMC-M pair - TBM path and transport, CID/UDP,
RMAN, CN-IN; NI - SS7 Xport, PDCs-SAAL, IuPC; TMU - control plane UMTS protocols,
Alcap IuCS/IuR, Alcap IuB, BW pools/CAC, UE-Call, NOB-Call; PC/RAB - U-plane: RPM,
LG, RLC, MAC, GTP, DHO, AAL2, IuB SSCOP, paths; IO - ATM interfaces; GE - IP NAT,
BW limitation; processor counts: 10 (8 with GE), 12 (10 with GE), 14 (12 with GE),
40 (32 with GE))
5.3 RNC PLATFORM SOFTWARE
The 9370 RNC uses the Passport 15000 platform software on:
Control Processor for shelf management and file system/disk management
Interface card processors: 16 port OC3 ATM and 4 port Gigabit Ethernet
Packet Server Host CPU (PDC)

Figure 5-3 RNC Platform Functions
[Figure: Control Processor — shelf control, OAM, loading, SW upgrade and control, spooling, file/disk system, ATM routing, IP routing, carrier grade. 16pOC3 ATM IF and 4pGE IF — ATM interfaces (PVCs and SPVCs), ATM cell forwarding, connection management, traffic management, sparing and APS; GE services (IP packet forwarding, protected routes, link aggregation); card and port management. Packet Server (APs, PDC, host CPUs) — RNC applications (Control Plane, Bearer Plane), CP-PS messaging, connection management, OAM and maintenance, UMTS protocol stacks, SS7 link layer and transport; PS base software with AP maintenance, sparing functions, connection management and AP messaging.]
The VxWorks OS is used on all processors in the RNC.
Control Processor platform software functions include:
Shelf-level, card and hardware component control, software control, software loading
and upgrade, spooling, file system and disk management, midplane control
OAM components, configuration (CDL and MIB), fault detection and notification,
statistics collection and spooling, network management interface
ATM services are supported on both the Control Processor and ATM interface cards:
PNNI routing and signaling components
VP and VC connection management
Traffic management
Carrier grade
OAM cell management and statistics
IP services software is similarly located on both the Control Processor card and the GE
interface card; forwarding tables are downloaded onto the PQC or RSP on all cards:
Virtual Routers
IP, UDP and TCP protocols stacks
QOS functions and Differentiated Services


Figure 5-4 Packet Server Base
The Packet Server cards (PS1 and DCPS) contain base software to support a multi-CPU
computing environment for the RNC applications. The same PS Base software is supported
on both Packet Servers, with the lower-layer BSP code and HAL drivers providing the
requisite abstractions to run on the different hardware; this software is present on the AP
processors only.
The PS Base utilities are responsible for software loading and for initialization via the
dependency manager, which ensures that code startup occurs in a well-defined manner; the
utilities also include timers and the sparing framework (used by the NI function).
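The startup-ordering role of such a dependency manager can be sketched as a topological sort; the component names below are illustrative, not the actual PS Base module names:

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

def startup_order(deps):
    """Return a component start order in which every dependency is
    started before the components that depend on it."""
    return list(TopologicalSorter(deps).static_order())

# Hypothetical PS Base components: each maps to the set it depends on.
deps = {
    "loader": set(),
    "timers": {"loader"},
    "messaging": {"loader"},
    "sparing": {"messaging", "timers"},
    "application": {"sparing"},
}

order = startup_order(deps)
# Every dependency appears before its dependent.
assert order.index("loader") < order.index("timers")
assert order.index("sparing") < order.index("application")
```

A cycle in the declared dependencies raises `CycleError`, which is the kind of misconfiguration a dependency manager must reject before startup.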
PS Base supports maintenance functions to perform local diagnostics, notify the RNC OAM
system of faults in the AP environment, and respond to processor lock/unlock commands.
PS Control Plane Application Service components are shown in the following diagram:

Device PSLAN
Vxworks & Board Support Package
HAL
Packet Server Base
Connection Services Sparing Services
CS HA
High Availability on PS
Connection Mngt
Debug Monitor
HAL
Base OMs
PSHAL Timers
Dependency Mgr
PSLAN
Loader
HW Database
vSockets
Inter PMC
PMC Core Base Layer
PMC - Interfaces
LOCAL RUNTIME BASE
Fault Mngt Address Book
Packet Server Base
software modules to
Initz Control PSE
support compute
infrastructure
SCN Mngt PSE Messaging
LAT Messaging
Diagnostics
Operating System
Hardware transparency
Figure 5-5 Control Plane Services
[Figure: each control plane card runs the Control Plane Applications over the PS CN Application Services — Counters, Call Trace, CallP run-time environment (GCCP), DDR, Fault Management, Configuration Management, ADM mediation, Association Manager (APE) and Cell Information Services — on top of the PS platform software.]
Control Plane Functions:
GCCP: the Control Plane run-time environment; in essence it encapsulates the
call control applications to provide transparency to the hardware and, to a certain
degree, the base software environment.
Distributed Data Repository (DDR): downloads data to the TMU applications and
maintains the local database on the TMU; the structure consists of a centralized agent
on the OMU and local agents on the TMUs.
Cell Information Services
Call Trace: implements a tool for tracing and logging calls
Counter framework: collects counters activated via the control plane applications.

OAM Functions: the Configuration Management component has overall control of MIB
downloading and storage to disk, and of the distribution of configuration data to applications;
CM_CA interacts with APE (Association Manager) to receive MIB Mediation Objects
(MOs) from the OMC
APE implements a proprietary layer 4 transport stack to download tables of
configuration data
The ADM element converts the external-view MOs to the internal Application Object
(AO) view, a step termed mediation
CM_LA transfers AOs to applications via a local agent on each control plane TMU.
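The mediation step can be pictured as a pure transform from the external MO view to the internal AO view; all field names here are invented for illustration, since the real attribute model is defined by the RNC MIB:

```python
def mediate(mo: dict) -> dict:
    """Convert an external Mediation Object (MO) record into the
    internal Application Object (AO) view. Field names are
    illustrative only, not the actual MIB attribute model."""
    return {
        "aoClass": mo["moClass"].lower(),
        "instance": int(mo["moInstance"]),
        # External attribute names are normalized into internal fields.
        "attributes": {k.lower(): v for k, v in mo["attributes"].items()},
    }

mo = {"moClass": "Cell", "moInstance": "42",
      "attributes": {"ScramblingCode": 7, "LocationAreaId": 1001}}
ao = mediate(mo)
assert ao["aoClass"] == "cell" and ao["instance"] == 42
```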




Figure 5-6 OAM Functions
[Figure: Mediation Objects (MOs) from the OMC-R are received by the Association Manager (APE) and passed through ADM to CM_CA on the OMU, with the MIB stored via DAS; CM_LA distributes the resulting Application Objects (AOs) to the applications (TGE/RGE, TEE/REE, TEA/REA) on the OMU and TMUs.]

5.4 RNC TRANSPORT INTERFACES
The RNC protocol stacks enable interconnection to other network elements to perform
UTRAN functions such as connection setup and release, radio bearer assignment, paging,
location tracking. RNC protocol stacks are specified in 3GPP standards to the CS and PS
core networks (IuCS and IuPS), to other RNCs (IuR), to NodeBs and UEs (IuB).
Generally, each RNC interface comprises transport layer protocols and radio network
layer protocols. The RNC protocol stacks for the core network interfaces (IuCS, IuPS, IuR)
are illustrated below.
Figure 5-7 RNC Iu Interfaces
Control Plane Stacks:
The signaling transport protocol on the RNC core network interfaces is SCCP (Signaling
Connection Control Part) derived from common channel SS7. SCCP enables the Radio
Network Layer protocols to transparently use either ATM or IP transport stacks in future
releases.
SCCP supports signaling across networks including global title address translation; MTP
(Message Transfer Part) provides routing of signaling messages between signaling points,
and the link layer is based on SSCOP or SCTP (IP) to provide a robust and reliable signaling
bearer.
The IuB interfaces to NodeBs support RNL directly on the link layer (SSCOP, SCTP).
ATM transport uses VP/VCs over AAL5 across ATM-switched networks; IP transport uses
IPv4 on Ethernet media over either routed or Ethernet-switched networks.
The RNC ATM and IP transport stacks to the NodeBs are shown in Figure 5-8 RNC IuB
Interface.
[Figure: RNC core network interface stacks, split into Radio Network Layer and Transport Layer.
ATM transport — Control Plane: RANAP (IuCS, IuPS) and RNSAP (IuR) over SCCP / MTP3B / SSCF / SSCOP / AAL5 / ATM; ALCAP Q.2630 / Q.2150 over MTP3B / SSCF / SSCOP / AAL5 / ATM for IuCS and IuR. Bearer Plane: IuUP over AAL2 / ATM for IuR and IuCS; GTP-U over UDP / IP / AAL5 / ATM for IuPS.
IP transport — Control Plane: RANAP and RNSAP over SCCP / M3UA / SCTP / IP / Ethernet. Bearer Plane: IuUP over UDP / IP / Ethernet for IuR and IuPS; IuUP over RTP / UDP / IP / Ethernet for IuCS.]
Figure 5-8 RNC IuB Interface
[Figure: RNC IuB protocols between the RNC, NodeB and UE, split into Radio Network Layer and Transport Layer.
ATM transport — Control Plane: NBAP over SSCF / SSCOP / AAL5 / ATM; ALCAP Q.2630 / Q.2150 over MTP3B / SSCF / SSCOP / AAL5 / ATM. Bearer Plane: IuB FP over AAL2 / ATM.
IP / Ethernet transport — Control Plane: NBAP over SCTP / IP / Ethernet. Bearer Plane: IuB FP over UDP / IP / Ethernet.]
Bearer Plane Stacks:
The ATM bearer plane consists of a framing protocol layered on AAL2 for IuB and IuR
interfaces, as well as IuCS circuit switched connections; GTP tunnels are implemented on
the IuPS packet switch interface.
The IP bearer plane consists of the same framing protocol layered on UDP over IP; the RTP
protocol is included on the IuCS interface to use the IETF standardized packet format for
delivering audio over the IP infrastructure.
GTP-U is a tunneling protocol used to transfer user plane data between the RNC and the
core network (SGSN) via the IuPS interface; the main functions are:
Transfer of data packets and sequence control
Tunneling and encapsulation, including a 32-bit Tunnel Endpoint Identifier (TEID): enables
support of multiple UE PDP contexts, as well as multiplexing of user data packets
destined for different addresses onto a single path (identified by two IP addresses)
Path keep-alive maintenance
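As a rough illustration of the encapsulation described above, a minimal GTP-U v1 header (the mandatory 8-byte form, with no optional fields) can be packed and unpacked as follows:

```python
import struct

GTP_VERSION = 1
GTP_PT = 1          # protocol type bit: GTP (as opposed to GTP')
MSG_TPDU = 0xFF     # G-PDU message type: carries a user data packet

def gtpu_encapsulate(teid: int, payload: bytes) -> bytes:
    """Prepend the mandatory 8-byte GTP-U header: flags, message type,
    payload length, and the 32-bit Tunnel Endpoint Identifier."""
    flags = (GTP_VERSION << 5) | (GTP_PT << 4)   # E, S, PN bits all zero
    return struct.pack("!BBHI", flags, MSG_TPDU, len(payload), teid) + payload

def gtpu_decapsulate(frame: bytes):
    """Return (teid, payload) from a frame with a plain 8-byte header."""
    flags, msg_type, length, teid = struct.unpack("!BBHI", frame[:8])
    assert flags >> 5 == GTP_VERSION and msg_type == MSG_TPDU
    return teid, frame[8:8 + length]

teid, payload = gtpu_decapsulate(gtpu_encapsulate(0xDEADBEEF, b"ip-packet"))
assert teid == 0xDEADBEEF and payload == b"ip-packet"
```

The TEID in the header is what allows many PDP contexts to share a single path between the two IP endpoints.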
The software elements providing the ATM Transport Control Plane are shown in the
diagram below; the IP Transport Control Plane implements the same model using M3UA
instead of MTP3B, and SCTP instead of SSCOP. The purpose of the SS7 stacks is to
provide transport for call processing control messages between nodes in the UTRAN. SS7
is used for IuCS RANAP, IuPS RANAP and IuR RNSAP between neighboring RNC nodes.
The SCCP, MTP3B and M3UA stack components are located on the NI in a 1:1 hot
standby sparing model. A journaling framework (JFK) is used to journal all critical context
data between the NIs. Once the active and standby NIs are synchronized, failure of the
active NI results in an activity switch to the standby NI without loss of calls.
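The journaling model can be sketched as follows; the class and method names are invented for illustration and do not reflect the actual JFK interfaces:

```python
class SparedNI:
    """1:1 hot-standby sketch: the active instance journals each
    critical context update to its standby, so an activity switch
    preserves established calls. Names are illustrative only."""
    def __init__(self):
        self.contexts = {}   # active side's call contexts
        self.standby = {}    # journaled copy held by the standby NI

    def update(self, call_id, ctx):
        self.contexts[call_id] = ctx
        self.standby[call_id] = ctx   # journal every critical update

    def activity_switch(self):
        # Standby becomes active using only the journaled contexts.
        self.contexts = dict(self.standby)
        return self.contexts

ni = SparedNI()
ni.update("call-1", {"state": "active"})
assert ni.activity_switch() == {"call-1": {"state": "active"}}
```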
SCCP (Signaling Connection Control Part) provides basic connection facilities between the
RNC and core network elements, e.g. SGSN, MSC; SCCP routing is based on
Destination Point Codes (DPCs). The MTP3B and M3UA stacks support message routing
across ATM and IP transport networks, with both associated and quasi-associated modes
being used. A route set consisting of multiple link sets is provisioned on the RNC for each
neighboring node, and each link set can consist of 1-16 links. MTP3B/M3UA link set
protection is achieved through provisioning of redundant links. By distributing these links
across the Packet Server PDC host processors, the loss of a PDC host will not result in
the failure of any link set.
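The protection idea — no link set losing all of its links when a single PDC host fails — can be sketched as a simple round-robin placement; the host and link names are illustrative:

```python
def place_links(link_sets: dict, hosts: list) -> dict:
    """Assign each link of every link set to a host round-robin, so a
    link set with two or more links never has all of its links on one
    host (provided at least two hosts are available)."""
    placement = {}
    for ls, links in link_sets.items():
        for i, link in enumerate(links):
            placement[(ls, link)] = hosts[i % len(hosts)]
    return placement

link_sets = {"to_MSC": ["l0", "l1"], "to_SGSN": ["l0", "l1", "l2"]}
hosts = ["pdc0", "pdc1"]
p = place_links(link_sets, hosts)
# Each link set survives the loss of any single host.
for ls, links in link_sets.items():
    assert len({p[(ls, l)] for l in links}) > 1
```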
Figure 5-9 Control Plane Transport Functions
[Figure: SS7 control plane software elements — AAL5/ATM terminated on 1+1 spared ATM OC3 FPs (equipment sparing and APS); the SS7 link layer (SSCF/SSCOP) distributed across the PSFP PDCs for carrier-grade response; the layer 3 protocols (SCCP, MTP3B) on 1+1 spared active/standby NI PMCs; RANAP/RNSAP CallP on the TMUs.]
The ATM SS7 control plane software architecture is shown in the above diagram:
UMTS protocol messages (RANAP, RNSAP) are received from the core network by the
RNC on the interface modules. AAL5 streams are forwarded to the PSFP PDC, where the
cells are reassembled into SSCOP frames and processed by the SSCOP link handler.
The control plane packets are sent to the NI AP and processed by MTP3B/M3UA and SCCP;
internally the RNC uses IP-based messaging to exchange packets between APs.
The NI sends RANAP or RNSAP packets to the TMU associated with the call context.
The NBAP control plane protocol is supported by the TMU using SSCOP for ATM transport
or SCTP for IP transport. Up to 8 control VCs are connected to a NodeB to support the
NodeB Control Port (NCP) and Common Control Ports (CCPs), each supporting an SSCOP
link layer session. In the case of IP transport, an SCTP association and multiple streams are
used to interconnect to the NodeB.
5.5 PATH MANAGEMENT
The RNC supports ATM paths or connections and IP connectivity to the NodeBs over the
IuB interface. On the core network side the RNC supports ATM paths on all Iu interfaces
and IP GE on the IuPS interface.
Common channels such as FACH, RACH and the paging channels, as well as calls (SRBs,
TRBs), are then multiplexed onto these paths.
In the case of ATM paths or VCCs, the channels or calls are multiplexed via AAL2
channels and identified via CIDs
In the case of IP interfaces, the channel or call is identified via a UDP port number
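The two addressing schemes above can be modelled as a single lookup keyed either by (VCC, CID) for ATM or by (local IP address, UDP port) for IP; the identifiers and context names are invented for illustration:

```python
# Channel endpoints keyed the way the text describes: ATM channels by
# (path, AAL2 CID), IP channels by (local IP address, UDP port).
channels = {}

def bind_atm(vcc: str, cid: int, context: str):
    channels[("atm", vcc, cid)] = context

def bind_ip(local_ip: str, udp_port: int, context: str):
    channels[("ip", local_ip, udp_port)] = context

bind_atm("vcc-3/1/0.42", 8, "RACH common channel")
bind_ip("10.0.0.5", 5201, "UE call #1717")

assert channels[("atm", "vcc-3/1/0.42", 8)] == "RACH common channel"
assert channels[("ip", "10.0.0.5", 5201)] == "UE call #1717"
```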
The Transport Bearer Manager is responsible for configuring paths on the RNC and for
handling path redistribution when equipment failures occur, e.g. of a Packet Server card.
Connection establishment is performed by the TMU in the control plane, and by TBM on the
PMC-M APCs in the user plane.
Figure 5-10 Path Management
The connectivity between the RNC external Iu interfaces and internal processing
components are shown in the above diagram.
[Figure: RNC ATM and IP user plane interfaces — ATM VCs (VC FP) carried over hardware ATM connections to the AAL2 MUX, RLC, GTP/LPS and DHO functions on the PC and RAB APCs; the IP user plane carried over the GE link (MAC, HW-IP, IP NAT), with calls identified via SCUDP.]
ATM endpoints: an ATM VC (PVC or SPVC) is configured on the 16pOC3/STM-1
interface module and, using an internal switch connection through the switch fabric,
is terminated on a PC APC. A group of such ATM VCs is called a Logical Group (LG);
the LG serves as an aggregation vehicle to manage multiple VCs as a single entity,
for example to redistribute LGs when a PC fails. The ATM PVCs and SPVCs
are configured by the ATM Services software; TBM interacts with the ATM Services
software to associate VCCs with LG sets on the PC processors.
IP endpoints: external static IP addresses are configured on the TMU APCs,
enabling NodeBs to send frames to the RNC using these IP addresses plus a UDP
port number to identify a common channel or call. The IP addresses are managed
by the IP Virtual Router system, and the forwarding tables are distributed to all cards in
the RNC.
The main TBM (Transport Bearer Manager) software elements and key TBM interfaces are
summarized in the following diagram.
Figure 5-11 TBM
[Figure: transport and cell resource management — TBM path and ATM CID / IP UDP management, the ALCAP stack (IuCS/IuR, IuB), bandwidth pool management and CAC, alongside ATM & IP TRM, RMAN and cell Logical Groups.]
The TBM software elements perform the following functions:
Process manager handles the TBM process initialization
Event manager receives and filters events to perform path binding and fault recovery
Database manager provides APIs to store, retrieve and sort through information related
to path management; the Provisioning manager provides a higher layer of abstraction
for provisioning: a CAS-friendly API, LG-to-PC assignment, path-to-LG assignment
Supervisor implements state machines to manage the LG aggregation of paths
Resource manager supports resource modeling for each resource required to sustain
datapath binding, complementing the Supervisor function; the resource components
include the ATMIF element, which interfaces to the platform ATM Core to receive
notifications of ATMIF media and VCC states, the Card manager, which supervises the
status of each card, and the PID Manager, which supervises the status of the base
messaging process IDs
Interface manager facilitates the APIs with other applications or interfaces:
o CAS for provisioning data
o PVC Agent interface for PVC establishment
o CS Agents to interact with connection services
IuB traffic manager supports IuB bandwidth management on the PC APC
5.6 CONTROL PLANE SOFTWARE
The control plane architecture combines the two main domains, cell management and UE
call processing, on the TMU; the 9370 RNC architecture thus provides autonomous call
processing capacity. The control plane software is shown in the following diagram:
Figure 5-12 Control Plane Components
[Figure: control plane components layered over the transport and cell resource management functions (TBM paths, ALCAP stack, ATM CID / IP UDP, bandwidth pools, CAC, TRM, RMAN, cell Logical Groups).]
The RNC implements the following control plane functions:
RNC establishes Radio Access Bearers between the UE and core network,
consisting of a Radio Bearer and a GTP tunnel; the RNC analyzes the attributes of the
requested RAB(s), evaluates the radio resources required, and then activates or
reconfigures radio channels, mapping the RAB to the radio bearer.
RNC is responsible for creating signaling bearers between the UE and core network;
a signaling radio bearer is set up when a UE requests call setup via the common
channels (RACH and FACH) using RRC messages; the RNC requests radio links to
be established over the IuB interface to the NodeB.
RNC mobility management functions maintain the UE-to-cell association using the
RRC state: idle (camping on a UTRAN cell) or connected mode {Cell-DCH, Cell-FACH,
Cell-PCH, URA-PCH (UTRAN Registration Area)}; allocates Radio Network
Temporary Identifiers for use between UEs and the UTRAN for signaling messages.
Basic handover function consists of three phases:
o Measurement (reports and criteria): UEs continuously measure the signal
strength of cells and send the reports to the RNC
o Decision: the RNC evaluates algorithm parameters and handover criteria to
trigger the handover
o Execution of the handover, involving signaling and radio resource allocation
Handover types include: softer HO (different sectors of the same NodeB), soft HO, hard
HO; inter- and intra-frequency; SRNS relocation
RNC admission control estimates whether a new call can be admitted without
degrading the bearer requirements of existing calls (the SNR budget of the cell is
not exceeded), and determines whether sufficient resources are available to meet
the QoS requirements of the new call.
RNC manages the allocation of codes to cells and channels: scrambling codes to
identify cells, and channelization codes to identify various traffic channels.
RNC selects positioning method and controls how the method is carried out within
the UTRAN; RNC connects to a stand-alone AGPS Serving Mobile Location Center.
RNC supports NodeB and cell management: including initialization support,
providing data parameters, receiving status data and measurement reports.
Paging to locate UEs for incoming calls
Maintains cell information, e.g. codes, cell ID number, location area ID and routing
area ID, power control thresholds, handover-related data, traffic parameters and
connection QoS data, neighboring cell lists
Performs a system information broadcast function to provide UEs the necessary
data to enable UTRAN communication: radio path information, radio measurement
criteria, paging data, mobility management, cell selection; this data is structured into
Scheduling Blocks and System Information Blocks.
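The channelization codes mentioned in the list above are drawn from an OVSF code tree, in which each code of spreading factor N spawns two children of spreading factor 2N. A small generator, included to illustrate the construction and the mutual orthogonality of codes of equal spreading factor:

```python
def ovsf_children(code):
    """Each OVSF code c of spreading factor N has two children of
    spreading factor 2N: (c, c) and (c, -c)."""
    return [code + code, code + [-c for c in code]]

def ovsf_layer(sf):
    """All codes of spreading factor sf (sf a power of two)."""
    layer = [[1]]
    while len(layer[0]) < sf:
        layer = [child for code in layer for child in ovsf_children(code)]
    return layer

codes = ovsf_layer(4)
assert len(codes) == 4
# Codes of the same spreading factor are mutually orthogonal.
for i, a in enumerate(codes):
    for b in codes[i + 1:]:
        assert sum(x * y for x, y in zip(a, b)) == 0
```

Allocating a code blocks its entire subtree, which is why the text speaks of managing the allocation of codes to channels rather than simply handing them out.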

5.6.1 TMU FUNCTIONS
TMUs handle dedicated and common UE call control functions.
Configuration data is provided via the MIB, and all TMUs receive the same provisioning data.
In essence, each TMU provides autonomous call processing services independent of the
other TMUs; new calls are directed to a TMU based on its current loading, or eagerness.
Each TMU shares RAB management with the other TMUs.
The TMU inter-messaging function is based on the CN-IN messaging interface. Large
messages are segmented to reduce potential head-of-line blocking and delays for critical
call control messaging.
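The segmentation idea can be sketched as follows; the MTU value is arbitrary and the real CN-IN segment format is not shown:

```python
def segment(message: bytes, mtu: int = 4):
    """Split a large inter-TMU message into MTU-sized segments so a
    long transfer cannot block critical call control messages queued
    behind it (illustrative sketch only)."""
    return [message[i:i + mtu] for i in range(0, len(message), mtu)]

parts = segment(b"ABCDEFGHIJ")
assert parts == [b"ABCD", b"EFGH", b"IJ"]
assert b"".join(parts) == b"ABCDEFGHIJ"
```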
The TMU implements the control plane for performing resource and OAM management of
NodeBs, cells and common channels
The TMU provides the following protocol functions:
SAAL UNI / SSCOP stack as the ATM link layer protocol to NodeBs
Common and dedicated NBAP ASN.1 encoding and decoding procedures
Q.AAL2 stacks for dynamic AAL2 channel establishment and release (Open IuB)
CID management to identify AAL2 channels for common channels and calls
The TMU supports the following NodeB procedures:
NBAP procedures: initialization, call setup and release, reconfiguration
NodeB notifications and resource status handling
Providing system information to NodeB
Measurement processes
5.6.2 CALL PROCESSING
TMU call processing is responsible for:
Handling aspects of UE calls: dedicated RRC and RRM procedures, and RANAP
User plane layer 1 and layer 2 configuration messaging to the RAB APCs
TMU call processing consists of a number of interacting components responsible for the
establishment and release of UE calls.

UE Call Component
The UE_Call element is responsible for the coordination of the TMU call processing
components to manage calls. Functions include:
Supports CS and PS calls from both core networking domains
RRC connection establishment and the Iu section facing the core network
RANAP protocol interactions and interfacing to the SS7 SCCP transport stack; RANAP
messaging provides the core network parameters required to manage RABs
Performs RAB assignment which involves performing a best fit mapping between the
available RABs and the UE capabilities; RABs are assigned and released, and RAB
reconfigurations may be done based on measurements; interacts with the Radio
Resource Management domain
Controls establishment, modification and release of user plane dedicated radio bearers
(MAC and RLC); controls ciphering and integrity key changes
Processes measurement data, e.g. intrafrequency, user plane, inter-RAT
Handovers of connections
Location service

UE RRC Component
Primary role is implementation of the RRC protocol for UE signaling interactions:
RRC protocol stack
Direct Transfer message routing between the Iu interfaces and UE
RRC (Radio Resource Control Protocol): operates between the RNC and UE to control the
radio bearers, transport channels and physical channels:
Dedicated Control Function Entity (DCFE): handles all signaling specific to one UE
Paging and Notification Function Entity (PNFE): handles paging messages sent to idle-
mode UEs
Broadcast Control Function Entity (BCFE): handles system information broadcasting on
broadcast and FACH logical channels
The RRC protocol messages are transported between RNC and UE using the FP bearer
channels encapsulated in MAC and RLC packet headers.
Iu Call Component
Responsible for the RNC interface to the core network for UEs:
Handles Iu core network interactions on behalf of UE Call Component
Supports dedicated RANAP protocol termination for PS and CS calls
Interacts with SCCP in respect to Iu transport connectivity
Interacts with NI for establishment of GTP tunnels and user plane connection
establishment and release
Supports classes specific to the CS and PS domains related to MIB data loading

IuCom Call Component
Provides functions to interface the RNC to the core network that are not dedicated to a
specific UE:
Interacts with the core network elements via the Iu interface
Implements connectionless procedures e.g. paging, register, reset

RNC Call Component
Responsible for controlling IuB/IuR call management, interacting with NobCall on the NodeB
side and with IuR-Call-Init and IuR-Call-Term for Serving and Drift RNC interactions:
Radio link management: supervises RL setup, addition, reconfiguration and release;
commands are forwarded to NobCall, which interfaces with the NodeB via the NBAP
protocol
Dedicated transport channel management: supervises DTrCH setup, reconfiguration
and deletion to be implemented by the user plane; incorporates DHO processing and
power control management
Routing of SRNC UE-Call commands to IuR-Call-Init and DRNC UE-Call commands to
IuR-Call-Term, and allocation of the DRNTI (Drift RNC Radio Network Temporary
Identity)

CP IuR Components
Primary role is the management of all aspects of the IuR interface for UE connections: IuR-
Call-Init is responsible for functions on the Serving RNC, and IuR-Call-Term manages the
Drift RNC functions.
IuR-Call-Init functions include:
o Handling interactions with the RNC Call component
o RNSAP protocol termination, DCH procedures
IuR-Call-Term functions include:
o Handling interactions with IuR transport bearer management for user plane
connections and IuR FP resources
o RNSAP protocol termination
Utility functions common to IuR-Call-Init and IuR-Call-Term include:
o Registration with various RNC utility services e.g. Software Bus

CP IuRCom Component
Common IuR utility providing the following functions:
Interface to SCCP for connection oriented procedures
Interface to SCCP for connectionless data transfer procedures

UE RRM Component
Responsible for radio resource management associated with a given UE:
Radio assignment
Handover decisions and physical channel control
Transport channel control
UL and DL outer loop power control
RAB admission control
RAB matching
QOS monitoring of RAB
Processing dedicated measurements from UEs
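The UL/DL outer loop power control listed above is conventionally a jump algorithm that adjusts the SIR target from block error feedback; the step size and target BLER below are illustrative values, not the RNC's actual parameters:

```python
def outer_loop_sir_target(sir_target: float, block_error: bool,
                          step_up: float = 0.5,
                          target_bler: float = 0.01) -> float:
    """Jump-algorithm sketch: raise the SIR target sharply on a block
    error, lower it slowly otherwise, so the long-run BLER converges
    toward target_bler (values illustrative)."""
    if block_error:
        return sir_target + step_up
    return sir_target - step_up * target_bler / (1 - target_bler)

assert outer_loop_sir_target(6.0, True) == 6.5   # error: jump up
assert outer_loop_sir_target(6.0, False) < 6.0   # no error: creep down
```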

RNC RRM Component
Responsible for radio resource management at the RNC level:
Manages identities used on the radio interface, allocation of uplink scrambling codes
Admission control: RABs, paging, new RRC and Drift connections
Service mapping for initial access of a RAB
Manages transport channel connections to other RNCs
Admission control is performed on new call establishment requests to ensure sufficient
resources are available to set up the call in both the UL and DL directions. Admission
control consists of two steps:
o RAB matching: performs the RAB-to-RB mapping; the RNC builds a list of
candidate RBs and selects as the reference RB the highest bit rate in the
ordered list. The reference RB can be replaced by a lower RB bit rate for
Interactive and Background traffic classes according to UE radio conditions
and cell congestion status; the selected reference RB is then combined with
the SRB and any already established RBs.
o Call admission control: checks for the availability of DL power and OVSF codes; for the
UL, the admission function checks that the load is kept within affordable noise levels;
transport and NodeB resources are also reserved
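The two admission steps can be sketched together — reference RB selection from an ordered candidate list, then a resource check on DL power and code availability; all limits and values here are invented for illustration:

```python
def select_reference_rb(candidate_kbps, cell_congested: bool):
    """Pick the highest-rate candidate RB; under congestion fall back
    one step, as described for Interactive/Background traffic."""
    ordered = sorted(candidate_kbps, reverse=True)
    return ordered[1] if cell_congested and len(ordered) > 1 else ordered[0]

def admit(dl_power_used: float, dl_power_max: float,
          codes_free: int, codes_needed: int) -> bool:
    """Call admission sketch: DL power headroom and OVSF code
    availability (UL noise and transport checks omitted)."""
    return dl_power_used < dl_power_max and codes_free >= codes_needed

rb = select_reference_rb([64, 128, 384], cell_congested=True)
assert rb == 128
assert admit(dl_power_used=17.5, dl_power_max=20.0,
             codes_free=3, codes_needed=1)
```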

RNC Com Component
Common utility responsible for distribution of common messages, not dedicated to a call:
Paging coordination and distribution
Cell information services (neighboring cells list)
UE initial access (RACH request, FACH setup)

5.6.3 CELL PROCESSING

NobCall Component
Primary function is to manage radio links on the NodeBs:
Setup, addition and release of radio links
Radio link reconfiguration, allowing the RNC to reallocate resources
Supervision of radio links, reporting of failures and radio link restoration
Measurement functions for dedicated resources
Downlink power level drift corrections
Compressed mode control
NBAP protocol termination and Transport Bearer Management for DCH channels

CellRRM Component
Primary function is to perform radio resource management at the cell and NodeB levels:
Cell RRM functions include:
o Cell resource management
o Power management
o Code tree optimization
o Common channel control
o Prioritization and scheduling of broadcast
o Admission control
o Common measurement control
NodeB RRM functions include:
o Admission control for NodeB resources
o Radio link configuration handling

Nob Component
Primary function is the OAM management related to NodeBs:
NodeB restart and notifications, resource status management
Control port admin state
Command management to lock/unlock NodeBs
NodeB relevant NBAP procedures

NobCell Component
Primary function is the OAM management related to cells:
AOs for cells are handled by this component including cell state management
NBAP procedures for cell setup, reconfiguration and deletion
Command management to lock/unlock cells
Measurement procedures
Providing SysInfo to the NodeB

NobCCH Component
Primary function is the logical OAM related to common transport channels:
AOs for common transport channels are handled by this component
NBAP procedures for CCH setup, reconfiguration and release
Transport bearer management in relation to Q.AAL2 stack
Interface with the 9370 RNC user plane for the establishment and release of common
transport channels

NoBRRC Component
Primary function is the logical OAM related to the RRC common protocol:
AOs for RRC common protocol
Manages System Information block formatting and scheduling
Handles RRC Connection Request and RRC connection setup protocol for initial UE call
establishment

5.6.4 IU PROTOCOL STACKS
The TMU supports RANAP and RNSAP Radio Network Layer control plane stacks, as
stated in the above Call Component descriptions.
The Radio Network Layer control plane protocols provide the functionality required to setup
and manage the Radio Access Bearers (RABs):
o RANAP (Radio Access Network Application Part): Iu protocol stack for both CS and
PS domains for performing overall RAB management; services are divided into two
groups:
General control services related to the whole Iu interface between the RNC and core
network, e.g. Iu fault handling to detect faults and clear connections, and providing a
means of flow control on the Iu interface for excessive user traffic.
Dedicated control services related to interactions with a single UE, enabling the
core network to control the establishment, modification and release of RABs
between the UE and CN; these also support SRNS relocation when the UE moves
from one Serving Radio Network Subsystem to another
o RNSAP (Radio Network Subsystem Application Part): provides bearer management
signaling across the Iur between the Serving RNC and Drift RNC; the SRNC can
set up and control radio links using the resources of the DRNC. The Iur interface
provides radio interface mobility between RNSs, including handover, radio resource
control and synchronization, and supports measurement reporting.

5.6.5 CONFIGURATION DATA
TMU receives configuration data via the MIB as distributed by CNP Configuration
Management.


5.7 USER PLANE SOFTWARE
User Plane software is responsible for providing Traffic Radio Bearer, Signaling Radio
Bearer and Transport Bearer services to UE calls (User Equipment) and Cell calls (i.e.,
Common Channel Access).
The PC and RAB processors implement the User Plane on the 9370 RNC and manage the
following interfaces:
1. Core Network Interface (Iu UP FP)
2. NodeB Interface (IubDCH/IubCCH UP FP)
3. SRNC/DRNC Interface (Iur UP FP)
4. Uu Interface with UE (MAC/RLC/PDCP; note that PDCP is currently not
supported)
Figure 5-13 RNC Datapath


[Diagram: the RNC bearer datapath from the NodeB (BTS) and Core Network through ATM line cards to the PCs and RABs on the Packet Server card, with the Control Processor / PMC-M performing TBM path management, ATM VC setup and IP-over-ATM routing.]
6 RNC CARRIER GRADE
One of the important aspects of a carrier grade product is high availability. In UA06, the
target is to achieve 5x9s (99.999%) availability, which translates into 5.25 minutes of
unscheduled outage per system per year. UA06 is also required to maintain the same
availability KPIs as defined in UA05. In this section, the 9370 RNC architecture and its
sparing model are examined, followed by descriptions of the availability KPIs that need
to be met in UA06. Refer to [R45] for more detailed definitions of the availability and
reliability targets.
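The 5x9s budget quoted above can be verified with a one-line calculation (a sketch; the function name is illustrative):

```python
# Translate an availability target into the allowed annual downtime.
def annual_downtime_minutes(availability: float) -> float:
    """Unscheduled outage budget per system per year, in minutes."""
    minutes_per_year = 365.25 * 24 * 60
    return (1.0 - availability) * minutes_per_year

# 99.999% ("5x9s") availability leaves roughly 5.26 minutes per year,
# which the text rounds to 5.25 minutes.
print(round(annual_downtime_minutes(0.99999), 2))
```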
6.1 SPARING MODEL
With the 9370 RNC, the PMC-M, NIs and OMU are 1:1 hot spared. RAB APs are spared in
load sharing mode, in the sense that when one AP fails, the workload of the failed AP is
taken over by multiple APs. For ATM traffic, PC APs are also spared in load sharing
mode. TMU APs are spared N:M, depending on the number of PS modules, and for IP traffic
PCs are spared N:1.
More descriptions on sparing activities are given in the following sections.
6.1.1 PMC-M SPARING
The PMC-M is not directly involved in user data traffic, in either the C-plane or the U-plane;
however, it manages and controls RNC applications and the resources used by these
applications. The PMC-M is 1:1 hot spared in the sense that the standby PMC-M is
dynamically synchronized with the active PMC-M; when the active PMC-M fails, the
standby PMC-M immediately takes over the activity without any service impact. The major
components running on the PMC-M are the resource manager (RMAN), transport bearer manager
(TBM), connection service manager (CSM) and CNAccess. The Packet Server Journaling
framework (PS JFK) [R67] is used by some software components on the PMC-M for data
synchronization between the active and standby components.
TBM/RMAN manages CID allocation, cell logical groups on RABs, and Cell and UE call IDs, and
maintains its own database. All information in the database is journalled to the standby
using PS JFK, except UE call IDs, which are journalled through PSE messages.
For ATM based transport, TBM manages AAL2 data path binding for U-plane traffic on the Iub,
Iu and Iur interfaces. Binding here means connecting ATM traffic coming in from an ATM
interface card to a particular AP. TBM path binding involves software running on the ATM
card, i.e. IuxIf, connection service software on the PDC of the PSFP card with the AP, and
TBM software running on the PMC-M. After path binding, AAL2 traffic is terminated on PCs.
Every path is hot spared by two PCs, with the standby taking over the traffic immediately
should the PC with the active path fail. Note that the same applies to the ATM card.



Figure 6-1 AAL2 Path Binding
[Diagram: the PMC-M (TBM) binds Iub, Iucs and Iur AAL2 paths from the active ATM interface card (IuxIf) to active/standby LPS pairs hosted on PC-1, PC-2 and PC-3, towards the NodeB, Core Network and Drift RNC.]



6.1.2 PC AND PATH SPARING
In ATM based transport, the concept of a Logical Path Set (LPS) is used for path sparing on
PCs to provide link redundancy. An LPS is a collection of paths, or ATM connections, and each
LPS is spared as a unit. Paths are grouped into interfaces, which are used for Iucs, Iur and Iub;
each interface is known as an Aal2If. For sparing purposes, paths are grouped into LPSes,
each of which has an active PC and a standby PC. An entire LPS is switched over at once
when the active PC fails. The LPS load rebalancing algorithm ensures active and standby
LPSes are evenly distributed across the system and the active LPSes on a PC are evenly
distributed to the rest of the PCs in the pool should the PC fail. Figure 6-2 shows a simplified
view of how LPSes are distributed in a pool of 8 PCs. In this diagram, PC1 serves in an
active role for LPS1, LPS9, LPS17 and LPS25, and at the same time serves in a
standby role for LPS5, LPS6, LPS7 and LPS8. The same applies to the other PCs.
As indicated in the figure, while each LPS or each path is 1:1 spared, PCs are spared in
load sharing mode in the sense that each PC hosts active LPSes. Traffic on a failed PC is
redirected to other PCs.
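The assignment pattern described above can be sketched as follows (an illustrative round-robin scheme, not the actual LPS rebalancing algorithm; function names are hypothetical):

```python
# Sketch: assign 32 LPSes across 8 PCs round-robin for the active role,
# place each standby on a different PC, and switch a failed PC's active
# LPSes over to their standby PCs so the load spreads across survivors.

def assign_lps(n_lps=32, n_pc=8):
    active = {lps: lps % n_pc for lps in range(n_lps)}
    # Offset the standby by 1..4 PCs depending on the "row", so the
    # standbys of one PC's active LPSes land on several different PCs.
    standby = {lps: (active[lps] + 1 + lps // n_pc) % n_pc for lps in range(n_lps)}
    return active, standby

def fail_pc(active, standby, dead_pc):
    """Switch every LPS that was active on dead_pc over to its standby PC."""
    for lps, pc in active.items():
        if pc == dead_pc:
            active[lps] = standby[lps]
    return active

active, standby = assign_lps()
active = fail_pc(active, standby, dead_pc=0)
assert all(pc != 0 for pc in active.values())  # nothing left on the failed PC
```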





Figure 6-2 Path Sparing on PC


When a PC failure is detected by the maintenance software, IuxIf and TBM are notified of
the failure, and the traffic from the ATM side is then redirected to the standby paths on
another PC. Similarly, when the active ATM card fails, traffic is switched automatically to the
previously standby ATM card thanks to Multiservice Switch 15000 APS (Automatic Protection
Switching) technology.
Figure 6-3 PC Swact shows an example of a path activity switchover event. PC-1 hosts
an active LPS and PC-2 hosts the corresponding standby LPS. When PC-1 fails,
TBM requests IuxIf on the ATM cards to bind with PC-2 and triggers the connection
service agent on the PSFP PDC to switch the PC activity state for the affected LPS.

[Diagram: 32 LPSes distributed across 8 PCs (PC-1 to PC-8); each PC hosts four active LPSes and stands by for four LPSes whose active copies reside on other PCs.]
Figure 6-3 PC Swact
[Diagram: the active LPS on PC-1 is switched over to the standby LPS on PC-2, with the bindings to ATM-1/ATM-2 updated accordingly.]
Figure 6-4 ATM Card Swact shows an example of an ATM card swact. When the ATM card
switches over, TBM/IuxIf is also responsible for re-binding paths with the newly active ATM
card.
Figure 6-4 ATM Card Swact

[Diagram: steady state, ATM-1 down and ATM-1 up phases, showing the active and standby LPS bindings between PC-1/PC-2 and ATM-1/ATM-2 being re-established on the newly active ATM card.]
6.1.3 RAB AND CELL LOGICAL GROUP SPARING
RABs provide cell U-plane sparing in the concept of a logical group. Each RAB could
contain multiple logical groups and each logical group could have multiple cells. Each logical
group is spared on another RAB which resides on a different PSFP card. Information
between the active logical group and the standby logical group is journalled through PS
J FK.
RABs connect to PCs through AAL5 virtual sockets [R68]. As shown in Figure 6-5 RAB and
Path Sparing, for each active logical group, a number of virtual sockets are created that
connect to both the active and standby PC paths. The virtual sockets for the standby logical
group are not created until the switchover happens.
Figure 6-5 RAB and Path Sparing

Similar to path sparing on PC, each RAB could contain active and standby logical groups.
While each logical group is 1:1 spared, RABs are spared in load sharing mode.
Note that UE calls on RAB are not spared. Therefore, while RAB failure has no impact on
cells, calls hosted on the failed RAB are dropped.
6.1.4 TMU SPARING FOR CELL C-PLANE
The TMU handles C-plane processing for both cells and calls. While calls are not spared on
TMUs, cells are cold spared on TMUs in N:M sparing mode. For every seven or fewer active
TMUs, one standby TMU is assigned. For a 12 PSFP card system with 14 TMUs, there are at
most two sparing TMUs. All centralized or static MIB data is delivered to both active and
standby TMUs. When an active TMU fails, the standby TMU takes over the activity and re-
creates the cells based on existing information. TMUs are cold spared, and cell re-creation
cannot be done fast enough to avoid impact on the NodeBs. The impacted NodeB detects
the cell outage and deletes the cells. The cells on the NodeB are re-created once the TMU
switchover completes, and the related U-plane cell contexts are cleared on the RABs. All
calls on those cells are dropped. Note that an audit between the NodeB and the newly active
TMU happens after the swact to ensure data consistency between the NodeB and the TMU.
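The N:M ratio above can be expressed as a small helper (a sketch; the function name is hypothetical):

```python
import math

# One standby TMU per group of up to `ratio` active TMUs, as described above.
def spares_needed(active_tmus: int, ratio: int = 7) -> int:
    """Standby TMUs required for the given number of active TMUs."""
    return math.ceil(active_tmus / ratio)

# A 12 PSFP-card system with 14 TMUs: 12 active TMUs need 2 standby TMUs,
# matching the "maximum two sparing TMUs" figure in the text.
print(spares_needed(12))
```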
Similar to the AAL2 data path binding done by TBM and IuxIf, the Iub Access component on the
TMU is also responsible for NBAP signaling and ALCAP path binding. Note that, unlike PC path
sparing, paths to standby TMUs are created only after the swact.
Figure 6-6 NBAP Signaling and ALCAP path binding

[Diagram: the Iub Access component on the active TMU (TMU-A) binds Iub NBAP signaling and ALCAP paths from the active ATM interface card (IuxIf) towards the NodeB; the standby TMU (TMU-S) has no bindings until a swact.]
6.1.5 NI AND PDC SPARING FOR SS7 STACK
Figure 6-7 SS7 Stack Distribution shows the distribution of SS7 software components in
RNC. M3UA and SCTP layers are newly introduced in UA06 for IP UTRAN. SCCP layer is
common for ATM and IP transport. As shown in Figure 6-7 SS7 Stack Distribution, NI hosts
SCCP and MTP3/M3UA layers. The SAAL-NNI or SCTP layer is hosted by PDCs of the
PSFP cards that dont have NIs on them.
Figure 6-7 SS7 Stack Distribution


Figure 6-8 Sparing Model for SS7 Stack shows the sparing model for the SS7 stack. For ATM
based transport, SCCP and MTP3 are 1:1 hot spared on the active and standby NIs. Hot
sparing here means that when an NI switchover happens, there is no service impact on existing
calls. PS JFK is used to journal data between the active and the standby. SAAL-NNI (SSCF-
NNI and SSCOP) is spared on the PDCs of PSFPs that do not host any NIs. PDCs are
spared in load sharing mode for SS7 links, each of which is hot spared on multiple PDCs
using the link set concept. When one PDC is down, the traffic on those links is carried
by the remaining PDCs.
For IP UTRAN, M3UA is 1:1 hot spared on the NIs and the stack is journalled to the standby
side. SCTP is installed on all PDCs except those on PSFPs with NIs. Each of those PDCs
has an external IP address; therefore, there are 8 or 10 external IP addresses acting as SCTP
termination points on a fully configured system. Associations on a given PDC are
distinguished by port numbers. Multiple associations are set up to a given SS7 point code
over different PDCs to achieve link redundancy equivalent to the ATM transport layer. Each
PDC also terminates multiple associations for different DPC codes. The M3UA stack is
connected to multiple SCTP associations on different PDCs and reacts to the loss of an
association by redistributing the load to that DPC over the remaining associations.
Figure 6-8 Sparing Model for SS7 Stack


6.1.6 OMU SPARING
Many OAM applications for the CNode software run on the OMU, as listed:
Configuration management
Fault management
Performance management
OBS
Stack management for the CS core and PS core
Several CallP-related functions (DDR and the CallP manager)
These applications use the CNode fault-tolerant framework for journaling, the same way as in
V5.0.
6.1.7 OC3 SPARING
Both PQC and MS3 OC3 cards provide carrier grade capability, the same as in the
previous release. In UA06.0, the hairpin removal feature is required for capacity increase. An
SPVC agent process is added on both the active and standby ATM cards, and Passport JFK is
used to journal SPVC provisioning information and dynamic information, e.g. SPVC ID, PMC ID
and their state changes. The functionality of the existing IuxIf processes on the ATM cards is
expanded to provide path binding for SPVCs, similar to what is done for PVCs.
6.1.8 CP SPARING
In UA06 base layer CP sparing functionality remains the same, i.e. hot CP equipment
protection is enabled.
6.1.8.1 INTRODUCTION OF CP4
CP4 sparing is essentially the same as CP3 sparing, with improved reliability and
performance. Compared with CP3, the reliability of CP4 is improved through the following:
New disk vendor with higher reliability numbers in the product specification
Improved hardware reliability via EMEM parity checking
Improved hardware reliability via CRC on data transfers across ATA bus
Improved hardware reliability via the use of ECC on DDR main memory.
Compared with CP3, CP4 also offers better performance:
Disk speed is increased from 4200 RPM to 7200 RPM
Memory space is increased from 256 MB to 512 MB for the first UA06 release;
the hardware is capable of 2 GB.
CP4 is a faster processor, with processing speed increased 1.25 to 2.0 times
compared with CP3.

6.2 IP UTRAN CARRIER GRADE STRATEGY AND SPARING
MODEL
To provide the RNC with a redundancy model similar to that of ATM UTRAN, IP UTRAN uses the
following strategy for IP interface connections to the external nodes:
Static protected IP routes are used for sparing at IP/L3 layer on GE cards to protect
RNC from interface card/link failures and in some circumstances router failures.
On the C-Plane, multiple associations can be used for failure protections:
o Protecting from RNC component failures. When one component in RNC fails
the association on the component is lost, another association on a spared
component can be chosen by M3UA protocol so that traffic can continue to flow.
Sparing through associations can be implemented in load sharing mode or 1:1
spared mode.
o Protecting network router failures. Associations with IP addresses on different
subnet can be used to protect RNC from router failures in the IP network
between RNC and Core Network or NodeB.
TBM does path/LG balancing similarly to ATM UTRAN. However, TBM is not
involved in data path binding, since local media is responsible for setting up the data
path. This is also true for PC failure handling. Refer to [R71] and [R72] for detailed
information.
To support the above mentioned carrier grade strategy, the sparing model for IP UTRAN
has been changed in the following aspects:
GE cards are spared in load sharing mode
New protocol stacks for IP transport are added on the NI, TMU and PDC to be spared
C-Plane traffic to the core network is protected by multiple associations on PDCs, which
are spared in load sharing mode and managed by the 1:1 hot spared M3UA on the NIs.
NAT functionality is added on the PC APs
In the following, the changes in the sparing model are examined in detail.
6.2.1 PROTECTED IP ROUTES ON 4PGE CARD FOR IP
TRANSPORT SPARING
The Passport feature Protected IP Routes for 4pGe Card introduces protected IP routes as a
mechanism for sparing at the IP/L3 layer in order to provide carrier grade (<1 sec switchover
time) support for the 4pGe FP in an IP scenario. A protected route is defined as an IP route
with a set of two or more next Hops, each of which uses a unique local IP address that can be
used to forward the IP packet. The operational state of the next Hops or interfaces of the
protected routes is monitored, and the forwarding information is optimally managed to
enable route reprogramming within 1 second in the case of an active next Hop/interface
failure. For full coverage, i.e. line protection and card protection, the adjacent router needs
to use static routing towards the GE cards to maintain the IP data flow when failures happen.
Figure 6-9: Protected Static IP Routes shows the configuration for protected IP routes
between GE cards and the IP network. GE cards are running in load sharing sparing mode
in the sense that both cards carry IP traffic. When one GE card fails, all traffic is carried by
the other GE card. Similarly, if one next Hop Router fails, all IP traffic goes through the other
next Hop router.
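The protected-route behaviour described above can be sketched as follows (hypothetical class and names; in the real system the next-hop state is driven by the FP monitoring, not by explicit calls):

```python
# Sketch: a protected IP route holds an ordered set of monitored next
# hops; when the active next hop or its interface goes down, forwarding
# is reprogrammed to the next operational hop.

class ProtectedRoute:
    def __init__(self, prefix, next_hops):
        self.prefix = prefix
        self.next_hops = list(next_hops)        # preference order
        self.state = {nh: "up" for nh in next_hops}

    def interface_event(self, nh, new_state):
        """Driven by interface/heartbeat monitoring."""
        self.state[nh] = new_state

    def forward_via(self):
        """First operational next hop wins; None means unreachable."""
        for nh in self.next_hops:
            if self.state[nh] == "up":
                return nh
        return None

# Two next hops, e.g. one per GE card towards separate adjacent routers.
route = ProtectedRoute("10.20.0.0/16", ["A.1", "C.1"])
route.interface_event("A.1", "down")   # active hop (or its GE port) fails
print(route.forward_via())             # traffic now leaves via C.1
```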
Figure 6-9: Protected Static IP Routes

In summary, the above system configuration provides the following failure protections:
When one GE card fails, the IP traffic on the failed GE card is re-directed to
the other GE card within 1 second. This applies to both ingress and egress
paths.
When ports/links on a GE card fail, the IP traffic on those ports/links is re-
directed to the other ports/links on another card within 1 second. This applies
to both ingress and egress paths.
If one of the next Hop routers fails and the routers are using static routes, the
IP traffic on the failed next Hop router is re-directed to the other router within
1 second thanks to the L3 heartbeat mechanism. This applies to both ingress and
egress paths.
6.2.2 NI AND PDC SPARING FOR C-PLANE TRAFFIC ON IU/IUR
As described in section 6.1.5, and similar to MTP3 for ATM UTRAN, M3UA is 1:1 hot spared
on the NIs with the stack journalled to the standby side, while SCTP associations are
distributed over the PDCs of PSFP cards without NIs (8 or 10 external SCTP termination
points on a fully configured system) to achieve link redundancy equivalent to the ATM
transport layer.
Protecting from RNC component failures
o As shown in Figure 6-8 Sparing Model for SS7 Stack, M3UA on NI does routing,
load balancing and redundancy management and congestion control for SCTP
links. M3UA is hot spared on NI. When the active NI fails, the standby NI takes
over activity in less than a second of time. There is no impact on existing calls
when NI switchover happens.
o SCTP associations are provisioned on the PDCs of the PSFP cards without NIs.
The SCTP links on PDCs operate in a load sharing sparing mode managed by
M3UA running on the active NI. To protect the system from a PDC failure, SCTP
endpoint IP addresses need to be provisioned such that associations to the
same Core Network appear on different PDCs.
Protecting network router failures.
o Associations with IP addresses on different subnet can be used to protect RNC
from router failures in the IP network between RNC and Core Network.

Figure 6-10 shows the IP IuPS C-Plane path redundancy between the RNC and a Core
Network; the RNC is protected from different RNC component failures as well as adjacent
router failure. In the case of multiple IP networks between the RNC and the Core Network,
multiple associations can be provisioned with IP addresses on different IP networks. Only
one redundant path is shown in this figure; the RNC supports multiple path redundancy by
provisioning multiple associations between the RNC and the Core Network. Refer to [R8] for
detailed information on IP IuPS carrier grade support.
Figure 6-10 IP IuPS C-Plane Path Redundancy between the RNC and a Core Network

[Diagram: two SCTP associations (assoc1, assoc2) from SCTP endpoints on different PDCs, managed by M3UA on the active NI, leave the RNC via GigE1/GigE2 and separate routers across the IP network towards the CNEs.]
As to the service recovery time requirements, the M3UA layer is hot spared and meets a
service recovery budget equivalent to that of the MTP3 layer. Similarly, the SCTP layer is
spared in load sharing mode and meets a service recovery budget equivalent to that of the
SAAL-NNI layer.

6.2.3 PC SPARING FOR U-PLANE TRAFFIC ON IUB
Unlike PC path sparing in ATM based transport where paths are 1:1 spared and PCs
are running in load sharing mode, Figure 14 shows a simplified version of N:1 PC
Network Address Translation (NAT) sparing model where N equals to 2. In the figure,
there are 2 active PCs, each of which has an external visible IP address. The single
standby PC has no external IP address. TBM is responsible for assigning IP
addresses to the active PCs as well as the mapping between a range of UDP ports to
UE logical groups on RABs. A U-plane VR is added for this type of traffic and local
media software connects data path from the GE card to the active PCs. Each PC
builds a port forwarder table based on the mapping to send up link traffic it receives to
the proper RABs. For down link traffic, a new type of VSocket needs to be created to
send UDP packets from RAB to NodeB IP/UDP address through the PC where the
RAB receives traffic from. As shown in the figure, when the active PC1-A crashes, the
standby PC is assigned with the external IP address e@-1 and updated by TBM with
the port forwarder table. The traffic is re-routed to PC3-S and the RABs based on the
port forwarder table. Note that the lines between the PCs and the RABs in the figure
represent the mapping relationship. The real connections are through vSockets which
will be created when calls are set up and torn down when the calls are released. The
switchover between the active PC and the standby PC is fast enough not to impact the
existing calls serviced on the RABs.
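The N:1 NAT switchover above can be sketched as follows (a simplified, hypothetical model; class and attribute names are illustrative, and the `e@-1` address is taken from the figure):

```python
# Sketch: TBM assigns each active PC an external IP address and a UDP
# port -> logical-group/RAB mapping; on a PC failure, the standby PC
# inherits both, so uplink traffic keeps reaching the right RABs.

class Pc:
    def __init__(self, name):
        self.name = name
        self.ext_ip = None          # standby PCs have no external address
        self.port_forwarder = {}    # (port_lo, port_hi) -> (logical group, RAB)

class Tbm:
    @staticmethod
    def swact(failed: Pc, standby: Pc):
        """Move the external IP and forwarder table to the standby PC."""
        standby.ext_ip = failed.ext_ip
        standby.port_forwarder = dict(failed.port_forwarder)
        failed.ext_ip, failed.port_forwarder = None, {}

pc1, pc3 = Pc("PC1-A"), Pc("PC3-S")
pc1.ext_ip = "e@-1"
pc1.port_forwarder = {(5000, 5999): ("LG1", "RAB-1"),
                      (6000, 6999): ("LG2", "RAB-1")}

Tbm.swact(pc1, pc3)                 # PC1-A crashes; PC3-S takes over
assert pc3.ext_ip == "e@-1" and (5000, 5999) in pc3.port_forwarder
```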
Figure 6-11 PC NAT Functionality for N:1 Sparing
[Diagram: the GE card feeds active PCs PC1-A (e@-1) and PC2-A (e@-2), each holding a port-to-LG/RAB mapping towards logical groups LG1-LG4 on RAB-1 and RAB-2; after the swact, standby PC3-S inherits e@-1 and its port forwarder table.]
IP data path failures between the RNC and the connected NodeBs can be detected by
a data path heartbeat mechanism within 5 seconds, after a certain number of retries fail.
Heartbeat UDP packets are sent from the TMUs to the NodeBs and returned to the
TMUs through the PCs. Once a failure is detected, an alarm is raised on the NodeB, and
this triggers the state change of the FddCells on the NodeB. All associated calls
are released.
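The retry-based detection can be sketched as follows (illustrative only; the actual retry count and heartbeat interval are NodeB/RNC configuration):

```python
# Sketch: declare an Iub data path failed after a fixed number of
# consecutive missed heartbeat replies.

def detect_failure(replies, max_misses=3):
    """replies: iterable of booleans, one per heartbeat interval.
    Returns the 0-based interval index at which failure is declared,
    or None if the path stays up."""
    misses = 0
    for i, ok in enumerate(replies):
        misses = 0 if ok else misses + 1
        if misses >= max_misses:
            return i
    return None

# Three consecutive misses -> failure declared at interval 4.
print(detect_failure([True, True, False, False, False]))
```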

6.2.4 PMCM SPARING
The functionality of the PMC-M is expanded to support IP based transport, which includes
assigning external IP addresses to PCs, keeping the mappings between PC port ranges, UE
logical groups and RABs, and re-assigning the active PC's external IP address to the
standby PC should the active fail. Because of this, extra data needs to be journalled
between the active PMC-M and the standby PMC-M.
7 OAM&P
7.1 RNC OAM STRATEGY
The 9370 RNC operations and maintenance are managed by the OMC-R, MDM and MDP
components of the Wireless-Network Management System (W-NMS).
The W-NMS is the Operation and Maintenance Centre for the RNC and Node-Bs. It consists of
the OMC-R , OMC-B, MDM and MDP.
The OMC-R manages the RNC OAM applications. The OMC-R is connected logically to the CP
or the 16pOC-3/STM-1. Messaging between the PMC-OMU and the OMC-R is over the SE/PE
(Système d'Exploitation/Poste d'Exploitation) protocol running on TCP/IP. The PMC-OMU and
PMC-TMU logical (non-hardware) alarms are sent over the SE/PE protocol.
The MDM manages the Multiservice Switch platform features such as ATM and IP transport and
port management. All hardware alarms go through the MDM. FMIP formatted data is sent
between the MDM and the CP via the OAMENET port or from the 16pOC-3/STM-1 for OAM
information such as alarms, SCN, and provisioning. The OMC uses the MDM function to
operate the RNC.
The OAM infrastructure provided by the Multiservice Switch 15000 is used on the RNC.
Additional RNC OAM functionality accesses the AP-based components on the PS modules.
Platform counters are managed by the MDP, which converts performance data from the RNC
User Plane into BDF format. The performance server collects the Control Plane OBS (counters),
call traces, and Node-B counters from its flash memory.
Node-Bs are managed by the OMC-B.
7.2 OMC-R / OMC-B
The OMC-R and OMC-B are standards-based terms defined by the 3GPP and correspond to
the wireless specific management of the RNCs and Node-Bs respectively. They consist of fault
management (FM), performance management (PM) and configuration management (CM)
functions. The OMC-R is viewed as the controlling network element manager for the RNC and
the logical part of Node-Bs. The OMC-B is the element manager of the physical part of the
Node-Bs. The OMC-R uses the MDM as a mediation layer for communication with the User
Plane and platform. The OMC-R presents an alarm display for all logical and hardware
alarms.
7.3 MDM AND MDP
The Multiservice Data Manager (MDM) manages Multiservice Switch devices. Its primary role in
W-NMS is to mediate fault and configuration information between Multiservice Switch-based
devices and higher layers of the OAM network. The MDM is responsible for fault management,
managing the IP and ATM networks, and managing the RNC platform and hardware.
MDM is a workstation-based network management system providing a full set of applications for
managing faults, configuration, accounting, performance, and security on the Multiservice
Switch 15000. It is the tools layer of the MDM that provides the management of the network.
The MDP collects platform statistics.
7.4 OAM FRAMEWORK
The OMC-R, OMC-B, MDM, and MDP are embedded in the W-NMS network management
platform which presents a unified OAM solution. The W-NMS includes a main server and a
performance server which offer a framework of common services such as administration,
performance, and alarm services for UMTS Access devices.
The W-NMS main server runs network management, fault management (including alarm
management) and configuration management functions e.g. OMC-R OAM functions. The W-
NMS performance server hosts the performance management functions including MDP and
APM for uploading data files from network elements (RNC and Node-Bs).
Overall, W-NMS delivers an integrated UMTS management platform across the radio access,
IP/ATM backbone, and service-enabling platform domains.
7.5 OFF-LINE CONFIGURATION
The Wireless Provisioning System (WPS) is a PC-based, off-line provisioning application
dedicated to preparing and auditing configuration data in the W-NMS context. The WPS was
previously known as the OCAN. It is used for re-configuration, fine tuning, bulk deployment,
radio optimization, etc. It generates configurations as specific files (data request work orders)
which are imported into the OMC. Configuration capabilities are provided for the RNC
and the Node-B.
The WPS can be installed on a standalone PC running Windows or co-reside on a PC-based
W-NMS client. A UNIX-based W-NMS client is not supported.
7.6 ROC/NOC
OAM functionality is provided by a combination of W-NMS branded software products, other
Alcatel-Lucent software products, and third-party products. The OAM architecture is based on a
tiered hierarchy of a National Operations Center (NOC) and subsidiary Regional Operations
Centers (ROCs). The OAM components are grouped logically and geographically into the ROC
and the NOC. The NOC can be connected to several ROCs.
The ROC manages the PM/CM/FM functions for a grouping of Node-Bs and RNCs and, therefore,
is not necessarily physically collocated with any one node. In order to provide site diversity,
there can be multiple ROCs, but the one-to-one relationship between a ROC and its grouping of
nodes must be kept. Backup servers can reside on one ROC to provide ROC resiliency for the
boxes of the other ROC. A given ROC (with one or more main servers and performance
servers) is targeted to manage many nodes. The NOC is essentially a reach-through to the ROC.
It is important to note that all network nodes are able to recover from failures without ROC
intervention. However, some nodes may have local craft interfaces. Thus, the availability of
OAM&P systems does not need to be as high as that of the bearer traffic.
7.7 RNC OAM
7.7.1 PLATFORM
OAM provides the framework for the configuration and supervision of the components on each
RNC plug-in module.
Packet server base maintenance (psMtc) is a maintenance framework developed to support
multiple processors on an FP and to provide specific base maintenance support for the PS.
Some of the functionality includes:
- support for loading APs
- support for the AP OSI state
- support for operator commands (lock, unlock, and reset)
The Multiservice Switch 15000 OAM components running on the CP PDC include the
component administration system (CAS) and the data collection system (DCS). The RNC AP
software environment is independent of the Multiservice Switch platform software.
Consequently, psMtc provides an OAM proxy to communicate between the PS PDC processor
and RNC applications running on each AP. The CP DCS component communicates with the
psDCS on the PS PDC and interacts with AP-based applications over the OAM proxy.
7.7.1.1 CAS
Operator addressable entities are called components. CAS provides the operator interface to
the Multiservice Switch shelf. It is responsible for maintaining an on-switch database of all
provisioned components and handling the delivery of provisioned data and operator
commands (e.g. lock, unlock, reset).
7.7.1.2 DCS
The DCS manages real-time events such as alarms, statistics, state change notifications
(SCNs), accounting records, and application specific field traces. It provides the capability to
spool many of these events to the CP disk. Data collection and reporting are managed by the
DCS located on the CP which communicates with the PS psDCS component over the OAM
proxy. The psDCS is used to communicate between the applications running on the APs and
the Multiservice Switch DCS system.
Alarms, state change notifications, and statistics are transported to the MDM from DCS in
FMIP format. Either in-band or out-of-band connectivity is used to connect the RNC to the W-
NMS framework. Alarms and field traces are saved on the first (4 GByte) partition of the CP disk.
7.7.1.3 OAM PROXY
The OAM Proxy is the mechanism used to communicate between the PS PDC and the six
APs. The OAM proxy allows AP applications to use the capabilities offered by DCS. It sends
PSE messages to the processes running on the PS APs and PEV messages to processes
running on the CP and FP PDCs. There is an AP proxy running on each AP and a proxy
running on the PDC. They communicate over a PsLAN connection. PsLAN is a message-
based interface between the PDC and the APs on the PS module.
The PDC-based psDCS receives records (alarms, SCNs, etc.) from AP applications.
Application specific session control messages run between the PP psDCS and the AP psDCS
on the PMC-M. Role queries are issued to the AP psDCS, for example.
Figure 7-1: OAM Logical View illustrates the logical connections between some of the OAM
components in the W-NMS and the RNC [10]. The connection between the W-NMS and the RNC
can be either out-of-band (W-NMS - CP) or in-band (W-NMS - 16pOC-3/STM-1). A router (not
shown) converts the W-NMS Ethernet to IP/ATM when in-band is employed.
Figure 7-1: OAM Logical View
[Figure: the W-NMS (OMC-R, OMC-B, MDM, APM, MDP) connects to the RNC over out-of-band
and in-band connectivity; within the RNC, the CP hosts DCS and CAS, and OAM proxies link the
PDC to the psDCS instances on the PMC APs.]
7.7.2 CONTROL PLANE/USER PLANE OAM APPLICATIONS
The PMC-OMU hosts fault management, configuration management, and administration. It
manages Control Plane applications running on the PMC-TMU and OMU. Node-B logical
resources (the Node-B core process, provisioning, and activation) are PMC-OMU
responsibilities. The OMC-R communicates with the PMC-OMU over the SE/PE protocol [11]
through in-band or out-of-band connectivity.
The CP is responsible for overall OAM of the Multiservice Switch-based modules and hosts
DCS, CAS and other OAM functions. FMIP information is exchanged between the CP and the
MDM for platform applications.
Notes:
10. OMC and MDM clients (e.g. PCs and workstations) are not shown with the OMC and MDM
servers.
11. The OMC-B communicates with the Node-Bs over the SE/PE protocol.
Both MIB and CAS provisioning data are maintained on the CP disk for Control Plane, User
Plane, and platform applications. The MIB is built on the OMC-R, sent by FTP to the in-service
RNC, and copied to the CP disk. A copy is maintained in PMC-OMU memory to enhance
PMC-OMU performance. The MIB is first loaded into PMC-OMU memory when the PMC-OMU
becomes active. In total, the PMC-OMU uses three copies of the MIB: two disk copies and one
memory copy.
The PSPF lock command de-loads the PMC-RABs on the PSFP card. All traffic handled
by this card is gracefully de-loaded. Once de-loading is complete, the PSFP card is reset
and transitions to the locked state. No new service is added to this card, but all existing services
provided by this card continue until they complete. Since existing services may run for a long
time, the de-loading process may take a long time as well.
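The lock and graceful de-load sequence above can be sketched as a small state machine. This is a hypothetical illustration only: the state names, methods, and service handling are assumptions, and the real PSPF lock semantics live inside the Multiservice Switch platform.

```python
# Hypothetical sketch of the graceful de-load sequence described above.
# States, method names, and service handling are illustrative only.

class PsfpCard:
    def __init__(self):
        self.state = "unlocked"
        self.services = set()

    def add_service(self, name):
        # New services are only accepted while the card is unlocked.
        if self.state != "unlocked":
            raise RuntimeError("card is locked: no new services accepted")
        self.services.add(name)

    def lock(self):
        # Stop accepting new services; drain existing ones gracefully.
        # An idle card can reset and lock immediately.
        self.state = "locked" if not self.services else "shutting-down"

    def complete_service(self, name):
        self.services.discard(name)
        # When the last existing service completes, the card resets and locks.
        if self.state == "shutting-down" and not self.services:
            self.state = "locked"

card = PsfpCard()
card.add_service("call-1")
card.lock()                      # existing call keeps running
card.complete_service("call-1")  # de-load completes, card resets
print(card.state)  # locked
```

The sketch shows why the lock may take a long time to complete: the transition to the locked state waits for the last in-progress service to finish.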
The packet server base maintenance framework (psMtc) performs the following tasks:
- handles maintenance requests from CAS to lock, unlock, or reset a PS
- determines the application running on each AP (the role, e.g. NI)
- maintains the state of each AP and notifies interested applications of any state change
- handles AP-related faults
- manages an AP heartbeat and AP recovery actions when a processor hangs
- runs an audit mechanism for triggering in-service and out-of-service diagnostics
- handles error conditions when loading applications
- allows other applications to register for notification when the AP admin state changes
- handles some of the display commands, etc.
The PSE-based control plane maintenance components hook into psMtc for registering for
de-loading, reporting SCNs for lock/unlock operations, etc. The RNC control plane
maintenance components work with psMtc to obtain the shelf view of the PMC-OMU and
PMC-TMU as well as hardware IDs. To send an alarm, the maintenance components (e.g.
RNCCnMtc) send fault reports to psMtc, and psMtc uses the existing infrastructure to report
the alarms.
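The AP heartbeat and hang-recovery behaviour listed above can be illustrated with a toy monitor. The timeout value and the recovery action taken are assumptions for illustration, not the psMtc implementation.

```python
# Toy AP heartbeat monitor, loosely modelled on the psMtc behaviour above.
# The timeout threshold and recovery action are illustrative assumptions.
import time

class HeartbeatMonitor:
    TIMEOUT_S = 3.0  # assumed hang threshold

    def __init__(self, aps):
        now = time.monotonic()
        self.last_beat = {ap: now for ap in aps}
        self.recovered = []

    def beat(self, ap):
        # Called by each AP periodically to prove it is alive.
        self.last_beat[ap] = time.monotonic()

    def check(self, now=None):
        # Any AP silent for longer than TIMEOUT_S is treated as hung.
        now = time.monotonic() if now is None else now
        for ap, last in self.last_beat.items():
            if now - last > self.TIMEOUT_S:
                self.recovered.append(ap)   # recovery action: reset the AP
                self.last_beat[ap] = now

monitor = HeartbeatMonitor(["AP0", "AP1"])
monitor.beat("AP0")
monitor.last_beat["AP1"] -= 10   # simulate AP1 going silent
monitor.check()
print(monitor.recovered)  # ['AP1']
```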
7.8 FAULT MANAGEMENT
Fault management (FM) provides monitoring capabilities to diagnose and rectify hardware,
software, and network problems. FM attempts to limit the effects of a failure on the network.
Faults generated by the RNC are received at the W-NMS where they are integrated.
7.8.1 ALARMS
Both alarm messages and notification messages are unsolicited, spontaneous messages
generated by the RNC. Alarm types are software errors, hardware faults, security violations,
QoS degradation, and warnings. An alarm is cleared once an end-of-fault message is received.
Notification messages are of two types: AttributeValueChange and StateChangeInfo.
In Multiservice Switch, alarms are generated by CDL defined components which have an OSI
state. Alarm records are sent to DCS for distribution and optionally spooled to the CP disk. As
mentioned above, all hardware alarms are sent to the MDM. These alarms are also stored on
the first (i.e. Multiservice Switch) partition on the CP disk. All PMC-OMU and PMC-TMU logical
(non-hardware) component alarms are sent to the OMC-R over the SE/PE protocol.
One of the Multiservice Switch alarm fields is Status. The Status field is MESSAGE, SET, or
CLR:
- MESSAGE - indicates a condition in which the operator may be interested
- SET - indicates a fault or failure has occurred and operator action may be required
- CLR - indicates the fault has been repaired; a clear alarm is generated
The Multiservice Switch alarm is made up of an Index Group (the first group of four digits) and a
subIndex. For example, 7008 represents a file alarm, 7039 an ATM alarm, and 7070 an
Adjunct Processor alarm. The subIndex is a four-digit number which only has significance within
its Index Group. For example, 7008 1014 is generated when a disk test has failed to complete
and the cause is unknown.
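The Index Group / subIndex structure can be illustrated with a small parser. The group names below are taken from the examples in the text; the full mapping lives in the Managed Objects Dictionary referenced next, and this record layout is an illustrative sketch, not the actual FMIP encoding.

```python
# Parse a Multiservice Switch style alarm identifier such as "7008 1014"
# into its Index Group and subIndex. Group names come from the examples
# in the text above; any group not listed is reported as "unknown".
INDEX_GROUPS = {
    7008: "file alarm",
    7039: "ATM alarm",
    7070: "Adjunct Processor alarm",
}

def parse_alarm(alarm_id, status):
    group, sub_index = (int(part) for part in alarm_id.split())
    assert status in ("MESSAGE", "SET", "CLR")  # the three Status values
    return {
        "group": group,
        "group_name": INDEX_GROUPS.get(group, "unknown"),
        "sub_index": sub_index,   # only meaningful within its Index Group
        "status": status,
    }

alarm = parse_alarm("7008 1014", "SET")
print(alarm["group_name"])  # file alarm
```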
All Alarms and State Changes are documented and tracked in a Managed Objects Dictionary:
[R42] UMT/RNC/DD/000017 V06/EN UMTS OA&M V5.0 MOD - Volume 3.
7.9 PERFORMANCE MANAGEMENT
Performance management permits users to monitor the traffic load on the network. One of the
goals is to detect any gradual degradation before it impacts QoS. RNC collects Multiservice
Switch platform statistics and UMTS application measurements.
RNC Call Trace and Object Trace provide very detailed information at the call level for one or
more specific mobiles and at the object level (such as cells or Iux interfaces), respectively.
Trace is an additional source of information to Performance Measurements and allows going
further in monitoring and optimization operations.
Unlike Performance Measurements, which are a permanent source of information, Trace is
activated on user demand for a limited period of time for specific analysis purposes.
The W-NMS OMC-R provides the means for the operator to configure Performance
Measurement and Trace parameters.
The W-NMS Performance Server runs data collection applications and reporting applications. It
collects and mediates raw counters and trace files from NEs (RNC and Node B) and stores
them for further post-processing and reporting.
7.9.1 STATISTICS & COUNTERS
RNC collects two kinds of platform counters: statistic counters (including ATM port, IP port,
Logical Processor, and Adjunct Processor counters) and accounting counters (ATM VP and
VCC accounting records). The former are collected every 15 minutes and the latter every
60 minutes. These report periods are not modifiable by the operator. RNC DCS collects (and
aggregates) platform counters from the various functional processors and writes files to the
CP disk.
MDP polls platform counter files periodically from the RNC CP via FTP. The counters are
converted from FMIP format to Bulk Data Format (BDF) and stored on the Performance server.
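The two fixed collection cadences can be sketched as follows. This is a simplified model of the schedule only; the real files are spooled by DCS in FMIP format, which is not shown.

```python
# Sketch of the two fixed platform-counter cadences described above:
# statistic counters every 15 minutes, accounting counters every 60 minutes.
# These periods are fixed by the RNC and not operator-modifiable.
STATISTIC_PERIOD_MIN = 15   # ATM port, IP port, LP, and AP counters
ACCOUNTING_PERIOD_MIN = 60  # ATM VP and VCC accounting records

def collections_in_window(window_min):
    """Number of collections of each kind over a window of minutes."""
    return {
        "statistic": window_min // STATISTIC_PERIOD_MIN,
        "accounting": window_min // ACCOUNTING_PERIOD_MIN,
    }

print(collections_in_window(120))  # {'statistic': 8, 'accounting': 2}
```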
In the 9370 RNC, all OBS, GPO, and platform counters are stored on the CP disk, from which
they are retrieved by the Performance server via FTP.
7.9.2 UMTS APPLICATION COUNTERS
UMTS Application counters are also called general purpose observations (GPOs), or GPO
counters for short. RNC collects and aggregates GPO counters from Call Processing
applications (CallP) on the TMU and User Plane applications on the RAB. Counter data are
formatted in XDR and stored on the CP disk. When a PM file is ready, the RNC notifies the
OMC-R. The Performance server downloads the PM file from the CP disk via FTP and deletes
it when done.
The RNC keeps a PM file for at most 72 hours if it is not retrieved and deleted by the
Performance server.
The reporting period is configurable by the operator: 15, 30, 45, or 60 minutes.
The RNC GPO framework supports three counter types: cumulative, load (or mean), and value
counters. The GPO counters are grouped into the following families: Radio link management,
Handover, Soft Handover, Power management, RRC connection, Iu connection, RAB and RB
management, Security, Paging, Mobility, Radio measurements, Quality of service, User Plane,
etc. (For details, refer to UMT/RNC/DD/000088 RNC System Functional Specification.)
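The three GPO counter types can be sketched as follows. The aggregation semantics shown (running sum, time-window mean, last-sampled value) are the common definitions for these counter kinds and an assumption here, not taken from the RNC source.

```python
# Sketch of the three GPO counter types named above. The aggregation
# semantics are assumed common definitions, not the RNC implementation.

class CumulativeCounter:
    """Accumulates event counts over the reporting period."""
    def __init__(self):
        self.total = 0
    def increment(self, n=1):
        self.total += n
    def report(self):
        return self.total

class LoadCounter:
    """Reports the mean of the values sampled during the period."""
    def __init__(self):
        self.samples = []
    def sample(self, value):
        self.samples.append(value)
    def report(self):
        return sum(self.samples) / len(self.samples)

class ValueCounter:
    """Reports the most recently set value (a gauge)."""
    def __init__(self):
        self.value = None
    def set(self, value):
        self.value = value
    def report(self):
        return self.value

calls = CumulativeCounter()
for _ in range(3):
    calls.increment()
load = LoadCounter()
for v in (10, 20, 30):
    load.sample(v)
print(calls.report(), load.report())  # 3 20.0
```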
7.9.3 TRACE
RNC Call Trace and Object Trace data are collected from UMTS control plane applications on
the TMU and user plane applications on the RAB. All trace data are encoded in XDR format
and spooled to the CP disk. When a Trace file is ready, the RNC notifies the OMC-R. The
Performance server downloads the Trace file from the CP disk via FTP and deletes it when
done.
The RNC keeps a Trace file for at most 24 hours if it is not retrieved and deleted by the
Performance server.
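The 72-hour PM and 24-hour Trace retention rules can be sketched as a cleanup pass. The file names and the tuple layout here are invented for illustration; only the retention limits come from the text.

```python
# Sketch of the retention rules above: PM files are kept at most 72 hours
# and Trace files at most 24 hours, unless the Performance server has
# already retrieved and deleted them. File names are invented.
RETENTION_HOURS = {"pm": 72, "trace": 24}

def expired(files):
    """Return the files whose age exceeds the retention for their kind."""
    return [name for name, kind, age_h in files
            if age_h > RETENTION_HOURS[kind]]

files = [
    ("gpo_20080527.xdr", "pm", 80),     # past 72 h: eligible for deletion
    ("gpo_20080528.xdr", "pm", 10),
    ("ctb_20080528.xdr", "trace", 30),  # past 24 h: eligible for deletion
]
print(expired(files))  # ['gpo_20080527.xdr', 'ctb_20080528.xdr']
```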
7.9.3.1 OBJECT TRACE
Object Trace is for tracing non-Subscriber or non-MS related events within an RNC. The RNC
supports the following types of object traces: OTCell, OTIuCs, OTIuPs, OTIur, OTIuBc, and
OT-RNC. Generally speaking, they are used to trace common ASN.1 3GPP protocol
messages which are not related to a specific call: OTCell is used to trace cell activities
such as cell management messages between the Node-B and RNC and RACH/FACH messages
between the UE and RNC; OTIuCs/OTIuPs trace connectionless RANAP messages; OTIur
traces common RNSAP messages; OTIuBc traces SABP messages.
7.9.3.2 CALL TRACE
Call Trace can be configured to collect certain types or all types of call-related signaling
messages, RNC internal call processing information, and traffic counters, such as call
establishment and release protocol messages, measurement control and report messages,
handover messages, RLC traffic counters, and uplink BLER counters.
The RNC supports several types of Call Trace capability, including CTA, CTB, CTG, and CTN.
CTA is invoked by the Core Network and is capable of tracing calls from a certain number of
predefined UEs.
CTB is used for tracing any calls from a predefined UE. The RNC supports 20 instances of CTB
trace. The UE is identified by IMSI, TMSI, P-TMSI, or IMEI.
CTG is used for tracing a certain number of calls from a list of predefined cells.
CTN is used for recording a large portion of the handover events which occur in an RNC in a
specified period. Handover metrics between neighboring cells are built from CTN traces and
then used for network optimization.
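The four Call Trace types above can be condensed into a small lookup. This is a restatement of the text; the 20-instance CTB limit and the UE identifier list come from the paragraphs above, and the helper function is illustrative.

```python
# Condensed restatement of the Call Trace types described above.
CALL_TRACE_TYPES = {
    "CTA": {"scope": "calls from predefined UEs", "invoked_by": "Core Network"},
    "CTB": {"scope": "any call from a predefined UE", "max_instances": 20,
            "ue_ids": ("IMSI", "TMSI", "P-TMSI", "IMEI")},
    "CTG": {"scope": "a number of calls from a list of predefined cells"},
    "CTN": {"scope": "handover events across the RNC in a specified period"},
}

def can_start_ctb(active_instances):
    """CTB is limited to 20 concurrent instances per RNC."""
    return active_instances < CALL_TRACE_TYPES["CTB"]["max_instances"]

print(can_start_ctb(19), can_start_ctb(20))  # True False
```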
7.10 CONFIGURATION MANAGEMENT
7.10.1 OVERVIEW
Configuration Management (CM), in general, provides the operator with the ability to assure
correct and effective operation of the UTRAN as it evolves. CM actions have the objective to
control and monitor the actual configuration of the Network Elements (NEs) and network
resources, and they may be initiated by the operator or by functions in the Operations Systems
(OSs) or NEs. CM actions may be requested as part of an implementation program (e.g.
additions and deletions), as part of an optimization program (e.g. modifications), and to
maintain the overall Quality of Service (QoS). CM actions are initiated either as a single action
on a NE of the network or as part of a complex procedure involving actions on many NEs.
ADM performs the mediation between the OMC transactions and target objects. It receives a
TGE (Operation General Transaction, Transaction Générale d'Exploitation) from the APE. This
TGE is decomposed into TEEs (Operation Elementary Transactions, Transactions Élémentaires
d'Exploitation) designated for Managed Objects (MOs). Each TEE is then mediated into one or
several TEAs (Elementary Applicative Transactions, Transactions Élémentaires Applicatives)
designated for Applicative Objects. An Applicative Object (AO) has its data (one or more tables
of different kinds of data) stored in the Applicative Database (BDA, Base de Données
Applicative, also called the Management Information Base, MIB). One AO corresponds to one
Target Object (one TO can be addressed by more than one AO). A Target Object is an
application running on one of the RNC cards. This application receives the TEA data.
The conversion model is bidirectional: conversion Managed Object -> Applicative Object
(TEE -> TEA(s)) and conversion Applicative Object -> Managed Object (REA(s) -> REE).
The up conversion (AO -> MO) includes message conversion. A TEE is an elementary
transaction (request); it is ASN.1 encoded and acts on one Managed Object (MO).
Please refer to Figure 7-2 for the illustration of the CM architecture.
ADM deals with two object models of the cellular network. One of them, the Managed Object
model, is close to the operator's view of the network: a tree-structured view of the different
network elements, with their attributes and possible operations.
The second model, the Applicative Object model, is not structured as a tree and is closer to the
applications, representing them as objects with different kinds of data contained in MIB tables.
ADM performs mediation between these two models. This mediation includes translation of
Managed Object attributes into Applicative Object attributes and vice versa. One Managed
Object corresponds to one or more Applicative Objects.
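The TGE -> TEE -> TEA decomposition performed by ADM can be sketched as a nested fan-out. The object names and the MO-to-AO mapping table below are invented for illustration; only the one-to-many relationships come from the text.

```python
# Sketch of the ADM mediation chain described above: one TGE is decomposed
# into TEEs (one per Managed Object), and each TEE is mediated into one or
# more TEAs aimed at Applicative Objects. The MO -> AO mapping is invented.
MO_TO_AO = {
    "Cell/1": ["aoCellConfig", "aoCellPower"],   # one MO -> several AOs
    "IubLink/7": ["aoIubLink"],
}

def decompose_tge(tge):
    """Fan a TGE out into TEEs, then each TEE into its TEAs."""
    teas = []
    for mo, attrs in tge.items():          # each (MO, attrs) pair is a TEE
        for ao in MO_TO_AO[mo]:            # mediation: TEE -> TEA(s)
            teas.append({"target_ao": ao, "attrs": attrs})
    return teas

tge = {"Cell/1": {"maxPower": 43}, "IubLink/7": {"adminState": "unlocked"}}
print(len(decompose_tge(tge)))  # 3
```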
For more detailed information on the CM architecture, please refer to the document [R46]
PE/BSC/SS/0427 BSC Control Node CM System Functional Specification.
For more detailed information on AO and MO parameters, including the MO containment tree,
please refer to the document [R47] UMT/RNC/DD/000019 OA&M RNC: SFS Mediation Device.
Figure 7-2: CM Functional Architecture
7.10.2 CONFIGURATION MANAGEMENT
On-line configuration is used to change the UTRAN configuration in real time. On-line tools
include command-line interfaces (CLI) and GUIs. For example, local operator and Telnet
sessions are hosted by the CP. These can be used to make adjustments to the network by
either tuning the performance of a network element (NE), e.g. the RNC or Node-Bs, or by
executing troubleshooting procedures.
Control Plane configuration management (CM) is performed by the PMC-OMU. It manages the
MIB information stored on the CP disk as well as in the PMC-OMU memory (RAM). For
example, when a PMC-TMU restarts, the PMC-OMU retrieves data from the MIB.
Platform and User Plane CAS-based provisioned data, maintained on the CP disk, is distributed
by the CP. The MDM provides provisioning capabilities for Multiservice Switch-based devices. It
sends provisioning information in FMIP format to the RNC.
7.10.3 OFF-LINE CONFIGURATION
Off-line configuration implies the use of a database where provisioning information for a device
is maintained and, at a later date, downloaded to the device itself. The manipulation of
configuration data is not done on the device itself. Off-line configuration is performed by the
WPS running on a client PC, which generates configurations as specific files that are imported
into the OMC.
Bulk creations and modifications are performed on the RNC from the WPS.
7.11 IN-BAND / OUT-OF-BAND CONNECTIVITY
The RNC OAM&P connectivity to the W-NMS OMC/MDM is available in two forms:
- In-band connectivity, where the OAM&P system shares the same physical links as the bearer
traffic. More formally, an in-band OAM&P system is one where the messaging between the
W-NMS and the network elements (e.g. Node-Bs and RNC) uses the same transport network
as the user data traffic.
- Out-of-band connectivity, where the messaging between the W-NMS and the network
elements uses a separate transport network (e.g. an Ethernet network) from the user data
traffic.
7.11.1 OUT-OF-BAND OAM&P
Out-of-band OAM&P connectivity is accomplished using an Ethernet port on the CP module and
an internal IP datapath. In other words, OAM messaging uses a separate Ethernet network.
Incoming packets from the OMC-R are received on the CP Ethernet port and routed to the
PMC-OMU using provisioned IP subnet entries in a routing table.
OMC-B OAM&P (and DHCP) messages are received at the CP Ethernet port and routed over a
16pOC-3/STM-1 link to the Node-Bs via IP/AAL5/ATM on the Iub interface.
MDM OAM traffic is received over the CP OAMENET port and routed to the CP.
DHCP is used for the assignment of IP addresses of UMTS nodes for OAM purposes. The
OMC-B, resident in the W-NMS, assigns the IP addresses of the Node-Bs. The RNC provides
the DHCP relay function. Optionally, an external DHCP server can assign the IP addresses.
The default is to use the OMC-B.
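The out-of-band steering described above (OMC-R packets routed to the PMC-OMU by provisioned IP subnet entries, OMC-B traffic relayed toward the Iub, MDM traffic terminating on the CP) can be sketched as a routing-table lookup. The subnets and destination labels here are invented for illustration.

```python
# Sketch of the out-of-band routing step above: incoming packets on the CP
# Ethernet port are steered by provisioned IP subnet entries in a routing
# table. The subnets and destination labels are invented for illustration.
import ipaddress

ROUTES = [  # (provisioned subnet, destination inside the RNC)
    (ipaddress.ip_network("10.10.1.0/24"), "PMC-OMU"),                # OMC-R OAM
    (ipaddress.ip_network("10.10.2.0/24"), "Iub via 16pOC-3/STM-1"),  # OMC-B/DHCP
]

def route(dst_ip):
    addr = ipaddress.ip_address(dst_ip)
    for subnet, next_hop in ROUTES:
        if addr in subnet:
            return next_hop
    return "CP"  # e.g. MDM OAM traffic terminates on the CP itself

print(route("10.10.1.5"))    # PMC-OMU
print(route("192.168.9.1"))  # CP
```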
For Control Plane debugging, remote field debugging terminals for APs are connected via an
Ethernet network to the CP.
Out-of-band OAM connectivity is shown in the figure below.

Figure 7-3: Out-of-band OAM&P Solution
7.11.2 IN-BAND OAM&P
In-band OAM connectivity is accomplished over ATM links on the 16pOC-3/STM-1 module.
Incoming packets from the OMC-R are converted from Ethernet to ATM by a router, which is
connected to the PMC-OMU with a provisioned IP datapath over ATM.
OMC-B implementation specific OAM&P (and DHCP) messages are routed over a 16pOC-
3/STM-1 link to the Node-Bs via IP/AAL5/ATM on the Iub interface.
The in-band OAM&P solution is shown in the figure below.

Figure 7-4: In-Band OAM&P Solution
7.12 UPGRADE
7.12.1 HARDWARE UPGRADE
A PS can be hot inserted into the RNC shelf without impacting the existing services. The PS will
come up automatically if the module is already provisioned. Software is downloaded once the
module and the LP are configured.
UE logical groups (LGs) and spare cell LGs are created per PMC-RAB on the new PS. UE calls
are processed on the new PMC-RABs, but cells are not re-balanced across all PMC-RABs. The
paths are re-distributed so the PMC-PC on the new PS is assigned some of the paths from the
other PMC-PCs. Cells are SWACTed to other PS PMCs, but cells are not re-balanced when a
PS goes out of service.
Cells are not automatically re-balanced across PMC-TMUs when a new PS is hot inserted, as
this impacts service. The new PMC-TMU(s) on the PS act as spares.
SaalNnis are not re-balanced and only new SaalNnis may be assigned to a new PS.
When an active PMC-TMU goes out of service, a spare PMC-TMU becomes active. The Node-B
core processes are recreated on this PMC-TMU, as are the cells from the failed PMC-TMU.
The MIB does not need to know about the relocation of the core processes.
When a PMC-TMU and PMC-OMU fail at the same time, cell SWACT on the PMC-TMU cannot
occur until the PMC-OMU SWACT is completed. Therefore, the cell outage time depends on the
PMC-OMU SWACT time, in this specific case.
Adding a new PS adds PMC-TMUs which are initially used as spares for processing UE calls.
New Node-Bs can be added to new service groups without any service outage; that is, no new
MIB build is required. Adding a new Node-B does not cause an outage on the existing
Node-Bs. To provide better load distribution, existing Node-Bs can be moved to these new
service groups. Service groups might be used to obtain a better geographical distribution or to
spread the time-of-day effect across different PMC-TMUs.
A module must be locked before it is uninstalled. The MDM receives an alarm for the module
going out of service.
7.12.2 SOFTWARE UPGRADE
The Shelf Critical upgrade is the only method to upgrade software.
The Shelf Critical upgrade consists of resetting the standby CP and loading it with the new
version of software. Once it is back in service, the standby CP communicates with the active
CP. The active CP then resets itself, causing a reset of the PSFPs and 16pOC-3/STM-1s. The
migration CP now becomes active, and the FPs and the previously active CP come up with the
new load from the newly active CP. They are loaded and provisioned in parallel.
The impact of the software upgrade on the RNC and cells is detailed in [R45]
UMT/RNC/DD/010040 V04.03 WCDMA RNC Dependability Report - UTRAN06.
It should be noted that Hitless Patching is available for critical field bug fixes. In the majority of
cases, these patches can be applied without impacting service (i.e. zero outage). See sections
below.
7.12.3 MIB UPGRADE
The MIB contains configurable parameters for the Control Plane portion of the RNC. The
managed objects (MO) are known by the OMC. A structural modification is always associated
with a software upgrade. MIB data only changes do not require a software upgrade.
An RNC software upgrade may or may not need a new version of the MIB. This depends on
whether there are MIB changes in between the two software versions involved in the upgrade. A
Shelf Critical upgrade is performed for those MIB changes that require a shelf reset.
7.12.3.1 MIB BUILD ON OMC
A full MIB build process is done on the OMC for the 9370 RNC.
Once built, the new MIB file is transferred by FTP to the CP disk in a different directory from the
directory containing the MIB used by the PMC-OMU. After it is saved to disk, it is activated with
the matching software version. It is also loaded into the PMC-OMU memory. There is a second
copy of the MIB saved in the same directory. The service interruption time is only the RNC reset
time after the MIB is activated.
An operator activation command is available at the OMC giving the operator control over when
to activate the newly built MIB. In this manner, the MIB build is separated from the MIB
activation.
The newly built MIB has data schema changes not known by the running code on the RNC. The
only way to read the changes is to replace the running code (version N) with the new code
(version N+1) in all processors that read the MIB. This requires an RNC upgrade to pick up the
MIB file.
The CP, PS, and 16pOC-3/STM-1 modules are reset as part of the software upgrade described
in Section 7.12.2 Software Upgrade. This implies the MIB has already been built and saved to
disk before the onset of the software upgrade. Please refer to section 6.1.2 System outage
measurements and objectives for the overall target outage duration for an RNC upgrade [12].
7.12.3.2 OFF-LINE MIB BUILD ON THE RNC
The MIB is built by the PMC-OMU while the RNC is off-line. The build time is greater than for an
OMC-built MIB. This option is rarely used.
7.12.4 RNC PATCHING
Patching is the process of developing a replacement piece of code, packaging the patch for
delivery, and applying the patch to the switch. Code is patched to alter its behavior in order to
fix a known problem.
7.12.4.1 MAIN PROCESSOR (PDC) PATCHING
PDC patching is a customer feature on Multiservice Switch switches for CPs and FPs. A PDC
patch can either fix a known problem or provide enhanced functionality. The modification is
limited in scope and replaces the currently active object code on a function by function basis.
Patches are either difficult or impossible to implement for initialization code (already run), in-line
functions, CDL changes, etc. A patch, which does not need a reset of any card it is being
activated on, is called a transparent application patch (TAP).
7.12.4.2 AP PATCHING
The PS Adjunct Processor (AP) patching feature adds methods for generating patches to the
AP software, as well as applying patches to the actual code running on the APs and tracking
them in switch provisioning.
AP patching is available for all AP types. Approximately 70% of AP patches are TAP-type
patches. The remaining 30% will either not be possible or will impact the PS through a
reset [13].
Notes:
12. A reset is required whether or not the MIB file comes with a software upgrade. MIB copies
are saved on disk for a MIB with a software upgrade.
13. The size of a single PMC patch object file cannot exceed 2 Mbytes due to loader allocation
limitations.
8 TECHNICAL SPECIFICATIONS
8.1 DEPENDABILITY
8.1.1 DEFINITIONS
1+1 Protection: Both the protection and working paths carry live traffic. A selector at the
switchover/router determines which one is passed through.
1:1 Protection: Either the protection or the working path carries live traffic, not both. Also used
to describe equipment protection.
1:N Protection: Generic case of 1:1 where N paths are protected by a single path. Can also
be used to describe equipment protection.
Availability: The measure (probability) of the degree to which a system is available to
perform its intended functions at any time. This uptime is often expressed as a percentage
of a calendar year, e.g. 99.999%. Five-nines availability means no more than
approximately 5.25 minutes of downtime (unavailability) per node per year. This value is the
target for total RNC unscheduled outages.
CLOS (Complete Loss of Service): A measurement of the complete (total) loss of
functionality of the entire RNC. This measurement is captured during upgrade procedures.
Critical: Impacts the ability of the system to process calls and/or data.
Defect: A defect is defined as any condition associated with the product that is not in
compliance with the requirements of this document. Cosmetic defects are not generally
included in this measurement, unless they are judged serious enough to preclude the use of
the product for normal field service.
Dependability: Collective term used to describe system performance and its influencing
factors: Reliability, Availability and Serviceability (RAS) support. It is the ability of a solution
to meet Service Providers' and their customers' expectations of failure rate, downtime and
maintenance actions.
DOA (Dead On Arrival): Any product that is unusable by Alcatel-Lucent or their customers
before commissioning is completed, including cosmetic defects, storage and shipping
damages.
Downtime: Downtime parameters are defined to be the expected long-term average annual
time spent in a down or a failed state.
Fault detection: A process that discovers the existence of faults. Fault detection can be
accomplished manually or automatically, depending on product requirements.
FIT (Failures In Time): The number of failures in 1 billion hours of operation. This is typically
a hardware measure.
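FIT and MTBF are two views of the same failure rate (1 FIT = one failure per 10^9 operating hours); a small conversion sketch with a hypothetical FIT value:

```python
def fit_to_mtbf_hours(fit: float) -> float:
    """MTBF in hours for an item with the given FIT (failures per 1e9 hours)."""
    return 1e9 / fit

# Hypothetical: a module rated at 1000 FIT has an MTBF of 1,000,000 hours.
mtbf = fit_to_mtbf_hours(1000.0)
print(mtbf, mtbf / (24 * 365))  # hours, and the equivalent in years
```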
FR: Failure Rate is the percent of products per year (%failures/yr) that experience failure at
customer sites (field). This rate is taken into account only after the initial site installation and
commissioning period has ended. This is typically a hardware measure.
FRU: Field Replaceable Unit. The physical unit that the customer would return. It contains
one or more circuit packs or OEM devices.
UA06.0 RNC Product Specification
Passing on or copying of this document, use and communication of its contents not permitted without AlcatelLucent written authorization
05.01 / EN Preliminary Page 142//160
HALT: Highly accelerated life test. HALT is a series of tests performed beyond the product
specifications in order to precipitate failures of weak components or design weaknesses.
In-service fault detection: The probability of detecting a fault in the product while it is in
service. A high level of detection will reduce the average time to repair the system.
In-service fault isolation: The probability of isolating a fault to the FRU while it is in service. A
high level of detection will reduce the average time to repair the system and reduce NFFs.
Lifetime - Useful Life: This is the period of time (in years) that the product is expected to
operate in normal field service when used in the specified operational environment. The
materials, technologies and design practices used in the manufacture of this product
must be chosen appropriately to meet this useful-life requirement. This number is expressed
as an L10, i.e., the point at which it is estimated that 10% of the population will experience
end of life.
MTBF (Mean Time Between Failures): A basic measure of reliability. The average time during
which all parts of an item perform within their specified limits during a particular
measurement period under stated conditions.
MTBF system: A measure of system reliability which includes the effects of any fault
tolerance which may exist. The average time between failures that cause a loss of a system
function defined as critical by the customer.
MTTR (Mean Time To Repair): A basic measure of maintainability. The average time it takes
to fully repair a failed system. Typically includes fault isolation, removal and replacement of
failed item(s) and checkout.
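MTBF and MTTR together determine steady-state availability via availability = MTBF / (MTBF + MTTR); a sketch with hypothetical figures (not committed RNC values):

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability of a repairable item."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Hypothetical: 100,000 h MTBF with a 4 h repair time (the MTTRS target of
# section 8.1.4) gives better than four-nines availability.
print(availability(100_000, 4))
```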
NFF (No Fault Found): NFF refers to product returned from the customer where the reason
for return cannot be determined. NFFs can be caused by poor isolation of faults by the
system, poor correlation of factory test sets to field conditions, and difficult return processes,
among other causes.
Partial outage: Partial outage for the RNC is a significant event, but one that is not classified
as a Total System Outage. A significant event is defined to be a loss of greater than 10% of
the provisioned RNC capacity for origination and/or termination (for combined voice and data
traffic) for a period greater than 30 seconds. Note that the duration of partial outages must
be weighted by the related service impact: for instance, a partial outage lasting 20 minutes,
and affecting 10% of RNC capacity accounts for 20 minutes * 10% = 2 minutes.
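The capacity weighting in the example above (20 minutes at 10% of capacity counts as 2 minutes) can be sketched as:

```python
def weighted_outage_minutes(events):
    """Sum partial-outage durations weighted by the fraction of RNC capacity
    affected; events are (duration_minutes, capacity_fraction) pairs."""
    return sum(duration * fraction for duration, fraction in events)

# The example from the text: one 20-minute outage affecting 10% of capacity.
print(weighted_outage_minutes([(20, 0.10)]))  # 2.0
```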
Reliability: The probability of successful operation, for a specified length of time, under a
specified set of operating conditions.
RR (Return Rate): Return rate is the percent of products per year (%/yr) that are returned
from the end customer to Alcatel-Lucent for any reason. This could include NFF, functional
failure, intermittent failure, cosmetic damage, damage in shipment, incorrect vintage, etc.
Serviceability: All aspects of service from a customer viewpoint. Includes Maintainability,
Installability, Problem Resolution, Order Fulfillment, etc.
SO (System Outage): A measurement of complete loss of functionality of all (total) or part of
the RNC.
SPQL (Shipped Product Quality Level): The shipped product quality level refers to the
portion of the annual defective product compared to the total annual shipped product, which
are discovered during factory audits and during product installation. It is expressed either in
UA06.0 RNC Product Specification
Passing on or copying of this document, use and communication of its contents not permitted without AlcatelLucent written authorization
05.01 / EN Preliminary Page 143//160
PPM or percentage. It is also referred to as out of box quality, average outgoing quality level,
or Dead On Arrival (DOA).
Total outage: Total outage for the RNC is the total loss of capacity for origination and/or
termination (for voice and data traffic) due to causes affecting the RNC for more than 30
seconds.
8.1.2 SYSTEM OUTAGE MEASUREMENTS AND OBJECTIVES
Reliability and Quality Measurements for Telecommunications Systems (RQMS) is a set of
specifications written by Telcordia Technologies. The reference for Wireless is GR-1929-CORE,
also known as RQMS-Wireless. A set of measurements is specified for each node, e.g.
MSC, HLR, RNC, and Node B. The most important (visible) measurements are:
Total Outage, Downtime Performance Measurement (DPM)
Partial Outage, DPM
Total Outage, Outage Frequency Measurement (OFM)
Partial Outage, OFM
System Outage (SO) is a measurement of complete loss of functionality of all (total) or part
of the RNC. The SO measurement expresses the annualized total number (outage
frequency) and total duration (downtime) of outages experienced by an average system.
These measures translate directly into system Mean Time Between Failures (MTBF) and
system availability / unavailability respectively. The System Outage measurements are
performed on all known RNC configurations.
The UA06.0 targets for scheduled and unscheduled outages are listed in [R49] and
summarized in Table 8-1 Outage Metrics.
Table 8-1 Outage Metrics

Metric                                             Duration
System Outage Downtime (min/sys/year)              5 minutes
CLOS on Upgrade (with MIB change) UA05 -> UA05.2   9 minutes
CLOS on Upgrade UA05.2.x -> UA05.2.y               7 minutes
Upgrade procedure duration                         3.5 hours
System Restart (initialization) time               5 minutes
Patching outage duration                           0 (zero)
Patching procedure duration                        <1 hour

8.1.3 RELIABILITY BLOCK DIAGRAM
A reliability block diagram (RBD) is a reliability model that presents a clear picture of
functional interdependencies and provides a framework for developing quantitative product-level
reliability estimates. An RBD should make it easy to identify single points of failure,
redundancies and all serial-parallel relationships.14

Figure 8-1: RNC Reliability Block Diagram
[Figure: series-parallel block diagram of the 9370 RNC, distinguishing failure modes that
contribute to total outage from those that contribute to partial outage.]
HS - Hot Standby; CR - Cold Redundancy; LS - Load Shared; BP - Backplane; Pwr Mod - Power Module
Failure of any other block causes a total outage, which is defined as a total loss of capacity
for more than 30 seconds. For example, both PMC-NIs (active and standby) must be out of
service to cause a total outage.
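The serial-parallel structure of Figure 8-1 translates directly into an availability estimate: a protected pair fails only when both members fail, and the blocks in series multiply. A sketch with hypothetical per-block availabilities (not the actual RNC figures):

```python
def duplex(a: float) -> float:
    """Availability of a 1+1 / 1:1 protected pair (both members must fail)."""
    return 1.0 - (1.0 - a) ** 2

def series(blocks) -> float:
    """Availability of blocks in series (failure of any one is an outage)."""
    result = 1.0
    for a in blocks:
        result *= a
    return result

# Hypothetical per-block availabilities for the total-outage path.
backplane = 0.99999                             # simplex block
protected = [duplex(0.999) for _ in range(4)]   # e.g. fabric, CPx, NI, PMC-M
print(series([backplane] + protected))
```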
8.1.3.1 DEPENDABILITY: REDUNDANCY / SPARING
See Chapter 6 for detailed descriptions of the hardware and software sparing models and
implementation.


14 The reliability block diagram was developed according to Telcordia SR-TSY-000385, Bell
Communications Research Reliability Manual.

Figure 8-1 blocks (per 9370 RNC shelf): PS1 | DCPS with RAB/PC (LS) and TMU (N+P LS);
BP (Simplex); Pwr Mod (1+1 LS); Fans (2+1 LS); Fabric (1+1 LS); CPx + OMU (1:1 HS);
16p OC-3 (1:1 HS); NI (1:1 HS); 0 or 1 pair GigE (1+1 LS); PMC-M (1:1 HS)
8.1.4 HARDWARE RELIABILITY OBJECTIVES
Reliability is a component of dependability. It is the probability that an item can perform its
intended function for a specified interval under stated conditions. Availability, another
dependability component, is the measure (probability) of the degree to which a system is
available to perform its intended functions at any time. The RNC is designed to be fault
tolerant and operate with very high reliability.
Theoretical hardware dependability targets for metrics such as Mean Time Between Failures
(MTBF) and Return Rate (RR) are listed in [R45].
A reliability analysis and prediction is performed to determine the RNC failure rate and to
identify the failure-rate drivers and critical components. The analysis is performed at 40°C
ambient, ground benign, in accordance with Alcatel-Lucent procedures. When necessary,
assessments are performed using component manufacturers' test data, field results, similar-equipment
history or laboratory tests. Estimates based on best engineering judgment should
be made when no data is available.
The failure-rate drivers are assessed for their impact on reliability. These items generally
include new technologies, short shelf life, limited operating lifetimes and special
handling requirements. Commitments to achieve specific Shipped Product Quality Level
(SPQL) and reliability targets are set up with the suppliers.
The RNC and its modules are designed for optimum component thermal positioning within
the operating constraints. A thermal analysis to determine junction and device temperatures
is performed prior to the reliability analysis and prediction.
In order to strengthen the design and achieve sufficient design margins, the Highly
Accelerated Life Test (HALT) is performed on all critical modules. Failures occurring during
testing shall be analyzed to determine the root cause and corrective action.
A Failure Mode, Effect and Criticality Analysis (FMECA) is performed. Corrective design
options or other actions to eliminate design or manufacturing risks, safety concerns and built-
in test limitations shall be documented.
The Mean Time To Restore System (MTTRS) following a critical RNC failure shall not be
greater than four hours. The MTTRS includes all corrective maintenance time and delay
times for spare units.
8.1.5 SOFTWARE RELIABILITY OBJECTIVES
Downtime due to system failures is directly related to the application failure rates,
autonomous fault detection and fault-recovery SWACT time (time to switch activity). The
potential for multiple-fault scenarios and the corresponding outage consequences is directly
related to the recovery time (restart time) of the spare components.





Table 8-2 Reliability Objectives

Component            Definition                                                              Time (Mins)
Restart Time         time from RNC restart to first cell availability                        5
PS Restart           time from restart until module is fully active (all roles assigned)     2
CP Restart           time from restart of active CPx until it is available as standby CPx    3
16pOC-3 Restart      time from restart of the active 16pOC-3 until it is available
                     as the standby 16pOC-3                                                  2
PMC-M Swact          time from active PMC-M reset until standby PMC-M takes active role      0.033
PMC-NI Swact         time from active PMC-NI reset until standby PMC-NI takes on
                     active role                                                             0.5
PMC-OMU Swact        time from active PMC-OMU reset until standby PMC-OMU takes on
                     active role                                                             0.5
CPx Swact            time from active CP3 reset until standby CP3 takes on active role       0.05
16pOC-3/STM-1 Swact  time from active 16pOC-3/STM-1 reset until standby 16pOC-3/STM-1
                     takes on active role                                                    50 ms
PMC-RAB Swact        time from PMC-RAB reset until spare LGs are re-assigned                 0.05
PMC-TMU Swact        time from active PMC-TMU reset until first cell availability            0.25

8.2 DIMENSIONING
The following dimensioning tables specify the maximum number of neighbouring RNCs,
NodeBs, and users in connected state. These figures are pro-rated with the number of
Packet Server cards in the RNC for the other configurations (4, 8, 10 and 12 Packet Server
cards).
The RNC Capacity Roadmap document provides additional details; see [R51].
Table 8-3 Dimensions for 9370 RNC with PS1
Dimension for 9370 RNC PS1 Configuration Limit
Neighbouring RNC 24
NodeBs macroBTS @ 6 cells per BTS 200
NodeBs picoBTS @ 1 cell per BTS 1200
Max number of CS RRC contexts 6048
Max number of PS (DCH+FACH) RRC contexts 9600
Max number of FACH RRC contexts 7020
Max number of CELL_PCH RRC Contexts 7020
Max number of total CELL_PCH and URA_PCH RRC contexts 32040

Table 8-4 Dimensions for 9370 RNC with DCPS
Dimensions for 9370 RNC DCPS Configuration Limit
Neighbouring RNC 24
NodeBs macroBTS @ 6 cells per BTS 408
NodeBs picoBTS @ 1 cell per BTS 2400
Max number of CS RRC contexts 12096
Max number of PS (DCH+FACH) RRC contexts 17280
Max number of FACH RRC contexts 14040
Max number of CELL_PCH RRC Contexts 14040
Max number of total CELL_PCH and URA_PCH RRC contexts 64080
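The pro-rating rule stated at the start of this section can be sketched as follows. The linear scaling and the choice of reference configuration are illustrative assumptions only; [R51] remains the authority for engineered values:

```python
def prorate(limit_at_ref: int, ps_cards: int, ref_ps: int = 12) -> int:
    """Pro-rate a dimensioning limit by the number of Packet Server cards,
    assuming linear scaling from a reference configuration (here 12 PS)."""
    return (limit_at_ref * ps_cards) // ref_ps

# Hypothetical: scaling the DCPS macro NodeB limit of 408 down to 6 PS cards.
print(prorate(408, 6))  # 204
```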

8.3 CAPACITY
The RNC Capacity Roadmap ([R51] UMT/PLM/INF/004862) is the reference document that
provides the RNC capacity figures, which are defined per RNC hardware configuration. This
document describes the assumptions and environment under which the RNC with PS1 and
the RNC with DCPS achieve a specific capacity.
The following criteria are used to formulate the capacity commitment of the RNC:
Scenarios must be verifiable either in a lab or in the field
Capacity metrics are chosen based on their suitability for network engineering

Table 8-5 Capacity for 9370 RNC with PS1

9370 RNC with PS1                        4 PS   6 PS   7 PS   8 PS  10 PS  12 PS
Node Bs @ 6 cells per BTS                  60    100    140    140    200    200
Pico Node Bs                              360    600    720    840   1200   1200
Cells                                     360    600    720    840   1200   1200
Speech (Erlangs)                         1150   1850   2000   2550   3300   3900
Mobile Office Internet (Iu App. Mbps)      44     73     88    105    147     88
R99 Service-Mix (TRBs)                    700   1100   1250   1500   2000   2500
HSPA Service-Mix (TRBs)                   700   1100   1250   1500   2000   2500

Table 8-6 Capacity for 9370 RNC with DCPS

9370 RNC with DCPS                       4 PS   6 PS   8 PS  10 PS  12 PS
Node Bs @ 6 cells per BTS                 102    170    238    340    408
Pico Node Bs                              600   1000   1400   2000   2400
Cells                                     612   1000   1428   2040   2448
Speech (Erlangs)                         2450   4000   5800   8250  10000
Mobile Office Internet (Iu App. Mbps)     110    183    263    368    440
R99 Service-Mix (TRBs)                   1400   2200   3100   4100   5000
HSPA Service-Mix (TRBs)                  1400   2200   3100   4100   5000



8.4 KEY PERFORMANCE INDICATORS
Key performance indicators (KPIs) are the criteria used to characterize the performance of an
end-to-end UMTS system. KPI targets are set such that the end-to-end network adheres to
the highest performance standards, including meeting or exceeding those set by the
customers. KPIs enable customers to offer high-value services by delivering a performance-proven
infrastructure.
There are five types of KPI categories:
Signaling Plane KPIs: delay in call setup/release, handoff, authentication, voice quality
User Plane KPIs: voice delay, data traffic delay, ping
Throughput KPIs: FTP, web download (HTTP), email
Network KPIs: success and failure statistics of certain call events
Application and Services KPIs: Call setup times, call setup success rates

Please refer to [R54] FRS 33518 UA06 Performance Targets for the detailed KPI
requirements with which the RNC complies.
8.5 SECURITY
The 9370 RNC is able to utilize existing MSS15K security features. At this time, IP Sec with
Radius authentication, IP Sec with IKE, Secure Shell, a security warning banner and security
logs are all supported.
8.5.1 IPSEC WITH RADIUS AUTHENTICATION
Allows the operator to install a Radius server on the WNMS and have the server validate
passwords to the RNC.
8.5.2 IPSEC WITH IKE
Internet Key Exchange (IKE) is the protocol used to establish an IP Sec security
association.
8.5.3 SECURE SHELL
Secure Shell (SSH) is available as a secure alternative to telnet. SSH is available as a
server on the RNC and can only be used to log in to the RNC.
8.5.4 SECURITY WARNING BANNER
Allows the operator to override the banner that is displayed after a successful telnet session
is established.
8.5.5 SECURITY LOGS
Logs operator accesses to the RNC DCS.
Please refer to [R64] UMT/RNC/DD/018227 RNC Security Enhancements HLD for more
information.
8.6 POWER
Power is supplied to the MSS15K through redundant A and B voltage feeds to the rear of
the breaker interface panel (BIP) located at the top of the cabinet. The BIP requires DC
voltage provided directly from the site power plant or from a system of AC rectifiers. With
two independent supply feeds, the B power feed maintains power to the same breaker
interface module (BIM) if the A side fails (and vice-versa). This ensures there is no single
point of failure in the system powering.
One of the BIP's functions is to provide stabilized power to the shelf assemblies and
cooling units. Both the A and B feeds provide power through the BIP to the entire shelf in a
load-sharing mode of operation under normal conditions. On failure of either the A or B
feed, the remaining supply then provides the full power load to the equipment. The
redundant A/B power feeds are routed to two (9370 RNC Single) or four (9370 RNC Dual)
breaker interface modules (BIMs). Circuit breakers on each BIM control the A and B power
supplies to the power interface modules (PIMs) on the shelf assemblies and cooling units.
Power is distributed to each shelf through redundant PIMs located along the left rear edge
of the shelf assembly. The PIMs provide additional power filtering and interface the -48 Vdc
power to the backplane. Refer to section for more information on PIMs. The BIP and its
components are shown in Figure 8-2. The rear view of the BIP showing the power entry
cabling point is shown in Figure 8-3.
Figure 8-2: BIP Location in NEBS 2000 Frame

Figure 8-3: BIP Rear View with Power Entry Points



Power is distributed to the four PIMs and the lower cooling unit from two BIMs on a single
shelf RNC. BIM A1 has circuit breakers A1.1 to A1.5 and BIM B1 has circuit breakers B1.1
to B1.5. The lower cooling unit receives power from A1.3 and B1.3. Each of the four PIMs
receives power from both an A1 and B1 breaker. The upper fabric is fed power from A1.1
and B1.1 and the lower fabric is fed power from A1.4 and B1.4. Each card slot has two
power feeds. For example, slot 2 receives power from A1.2 and B1.2. The power
distribution interconnect cabling is shown in Figure 8-4. The upper shelf has the same
breaker/BIM assignments except that BIMs A2 and B2 supply the power. The BIP lower
shelf breaker assignments on BIMs A1/B1 for the shelf slot and cooling unit are shown in
Figure 8-5.
Figure 8-4: Power Distribution Interconnect Diagram

Figure 8-5: BIM and Breaker Assignments for Lower Shelf

8.6.1 POWER CONSUMPTION
The typical and maximum power consumption values for a fully configured 9370 RNC with
12 Packet Servers are listed in Table 8-7.
Table 8-7: Component Power Consumption

9370 RNC with PSFP:
Component            Qty  Typical Power  Max. Power  Shelf Typical  Shelf Max
BIP                    1       13             21           13            21
Cooling Unit           1       80            187           80           187
Fabric                 2       50             60          100           120
PIM                    4        0              0            0             0
Alarm/BITS             1        1              1            1             1
MAC Address Module     1        0              0            0             0
CP3                    2       37             55           74           110
16pOC3 (PQC-based)     2      130            150          260           300
PSFP                  12      105            150         1260          1800
Totals                                                   1788          2539
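The shelf totals in Table 8-7 are simply quantity times per-unit power summed over the components; a quick consistency check of both columns (a sketch; units assumed to be watts):

```python
# (component, qty, typical, max) as listed in Table 8-7 for the PSFP shelf
components = [
    ("BIP", 1, 13, 21), ("Cooling Unit", 1, 80, 187), ("Fabric", 2, 50, 60),
    ("PIM", 4, 0, 0), ("Alarm/BITS", 1, 1, 1), ("MAC Address Module", 1, 0, 0),
    ("CP3", 2, 37, 55), ("16pOC3 (PQC-based)", 2, 130, 150), ("PSFP", 12, 105, 150),
]

typical_total = sum(qty * typ for _, qty, typ, _ in components)
max_total = sum(qty * mx for _, qty, _, mx in components)
print(typical_total, max_total)  # 1788 2539, matching the table totals
```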


9370 RNC with DCPS:
Component            Qty  Typical Power  Max. Power  Shelf Typical  Shelf Max
BIP                    1       13             21           13            21
Cooling Unit           1       80            187           80           187
Fabric                 2       50             60          100           120
PIM                    4        0              0            0             0
Alarm/BITS             1        1              1            1             1
MAC Address Module     1        0              0            0             0
CP3                    2       37             55           74           110
16pOC3 (MS3-based)     2      136            150          272           300
DCPS 15               12      150            180         1800          2160
Totals                                                   2340          2899

8.7 REGULATORY COMPLIANCE
8.7.1 GROUNDING
The NEBS 2000 frame is grounded to the site ground window using a cable with a straight
two-hole lug to the top front or rear pair of pre-drilled holes through the silvery strip into the
frame. The minimum cable size for grounding the frame to the site ground window is No. 6
AWG.
The frame itself is the grounding point for the hardware mounted in it. The frame has a
silvery grounding strip bonded to the front of each frame upright and across the top front.
Any MSS15K equipment mounted into the frame must use self-tapping bolts to ensure
proper ground contact to the frame.
8.7.2 POWER AND GROUNDING SAFETY STANDARDS
North American requirements are identified in the following document:
GR-1089-CORE Electromagnetic Compatibility and Electrical Safety - Generic
Criteria for Network Telecommunications Equipment (Chapter 9: Grounding and
Bonding)
European requirements are identified in the following documents:
ETS 300 132-2, Power Supply Interface at the Input to Telecommunications
Equipment (ETSI)
ETS 300 253, Earthing and Bonding of Telecommunication Equipment in
Telecommunication Centers (ETSI)

15 Power dissipation data based on estimates.
8.7.3 GENERAL
The following sections list all of the regulatory specifications for the 9370 RNC. Refer to
[R55] UMT/RNC/DJD/017284 ALU 9370 RNC Full H/W Compliance Report for (a)
product integrity compliancy (e.g. safety, EMC, thermal, mechanical and power) and (b)
regulatory compliancy.
The regulatory specifications listed in Table 8-8 are based on the following assumptions:
Regulatory approvals are required for the USA, Canada and Europe and do not
presently cover other countries. Other markets could add additional design
specifications.
The RNC will always be connected to an Underwriters Laboratory/Canadian
Standards Association (UL/CSA) certified CSU (or other Network Terminating
equipment) that provides the secondary protection for lightning and AC power
faults.
A sample of the 9370 RNC regulatory label is shown in Figure 8-6. The mandatory
regulatory markings are:
CSA bi-national logo (USA and Canada)
CE Mark (European Conformity)
CSA CB Certificate for product safety
FCC and Canadian ICES Electromagnetic Compatibility text
o Class A compliance for North America
o Class B compliance for Europe and other international markets
VCCI logo for Class B Electromagnetic Compatibility in the Japanese market
Figure 8-6: RNC Regulatory Label


Table 8-8: Regulatory Requirements Summary

Safety
  Requirement: Complies with requirements for Safety of Information Technology Equipment
  Specifications: UL Standard 60950 and CAN/CSA-C22.2 No. 60950-00; IEC 60950; EN 60950;
  AS 3260 (Australia); optical fiber standards: UL 1651 & CSA C22.2 No. 232,
  IEC 60825-1 & IEC 60825-2

EMC (Radiated Emissions)
  Requirement: Complies with requirements for Computing Devices and Information Technology Equipment
  Specifications: FCC Pt. 15, Subpart B (Class A) (USA); ETS 300 386 Ver 1.2.1 (EC); ICES-003 (CAN)

EMC (Conducted Emissions, DC power)
  Requirement: Complies with requirements for conducted emissions on DC power
  Specifications: ETS 300 386 Ver 1.2.1 (EC); ICES-003 (CAN)

EMC (Conducted Emissions, on signal leads)
  Requirement: Complies with requirements for conducted emissions on signal leads
  Specifications: ETS 300 386 Ver 1.2.1 (EC); Bell Canada DS 8465 (CAN)

EMC (Radiated Immunity)
  Requirement: Immune to radiated emissions
  Specification: ETS 300 386 Ver 1.2.1 (EC)

EMC (Conducted Immunity, DC power leads)
  Requirement: Immune to conducted emissions
  Specification: ETS 300 386 Ver 1.2.1 (EC)

EMC (Conducted Immunity, on signal leads)
  Requirement: Immune to conducted emissions
  Specification: ETS 300 386 Ver 1.2.1 (EC)

ESD
  Requirement: To meet Level 4
  Specification: ETS 300 386 Ver 1.2.1 (EC)

EFT
  Requirement: To meet Level 2
  Specification: ETS 300 386 Ver 1.2.1 (EC)

8.7.4 CUSTOMER REQUIREMENTS SUMMARY
The following section lists all of the customer hardware integrity specifications for the RNC.
The 9370 RNC Single is compliant with GR-63-CORE, GR-1089-CORE and GR-1929-CORE.
The integrity specifications listed in Table 8-9 are based on the following assumptions:
The key markets for the RNC are Europe and North America. Compliance to the
Customer requirements in those markets will satisfy the customer requirements in
most other markets.
The RNC could be located in a Central Office or in a shelter. The physical
environment is, therefore, based on a location other than a telecommunication
center.
The RNC will always be connected to a UL/CSA certified CSU (or other network
terminating equipment) that provides the secondary protection for lightning and AC
power faults.
Table 8-9: Customer Requirements Summary

Thermal
  Requirement: Reliable, long-term operation
  Specifications: Telcordia GR-63-CORE (USA/CAN); ETS 300-019 (EC)

Climatic
  Requirement: To meet commercial specification
  Specifications: Telcordia GR-63-CORE (USA/CAN); ETS 300-019 (EC)

Mechanical
  Requirement: To meet commercial specification
  Specifications: Telcordia GR-63-CORE (USA/CAN); ETS 300-019 (EC)

Reliability
  Requirement: To meet customer expectations for system reliability
  Specifications: Telcordia GR-1929-CORE, RQMS issue 1, Dec. 1999, LSSRGR Reliability
  section 12; Telcordia TR-332, issue 6, Dec. 1997, Reliability Prediction Procedure for
  Electronic Equipment

Safety (Lightning/Surge) (AC Power Fault)
  Requirement: To meet customer expectations for safety of Telecommunication and Computing Equipment
  Specification: Telcordia GR-1089-CORE (USA/CAN); intra-building surges only apply to this product

EMC (Radiated Emissions)
  Requirement: To meet customer expectations for radiated emissions of Telecommunication
  and Computing Equipment
  Specifications: Telcordia GR-1089-CORE (USA); Bell Canada DS 8465 (CAN)

EMC (Conducted Emissions, DC power leads)
  Requirement: To meet customer expectations for conducted emissions of Telecommunication/Computing Equipment
  Specifications: Telcordia GR-1089-CORE (USA); Bell Canada DS 8465 (CAN)

EMC (Conducted Emissions, on signal leads)
  Requirement: To meet customer expectations for conducted emissions of Telecommunication/Computing Equipment
  Specifications: Telcordia GR-1089-CORE (USA); Bell Canada DS 8465 (CAN); EN 55022 (EC)

EMC (Radiated Immunity)
  Requirement: To meet customer expectations for radiated immunity of Telecommunication/Computing Equipment
  Specifications: Telcordia GR-1089-CORE (USA); Bell Canada DS 8465 (CAN)

EMC (Conducted Immunity, DC power leads)
  Requirement: To meet customer expectations for conducted immunity of Telecommunication/Computing Equipment
  Specifications: Telcordia GR-1089-CORE (USA); Bell Canada DS 8465 (CAN)

EMC (Conducted Immunity, on signal leads)
  Requirement: To meet customer expectations for conducted immunity of Telecommunication/Computing Equipment
  Specifications: Telcordia GR-1089-CORE (USA); Bell Canada DS 8465 (CAN)

ESD
  Requirement: To meet customer expectations for ESD immunity of Telecommunication/Computing Equipment
  Specifications: Telcordia GR-1089-CORE (USA); Bell Canada DS 8465 (CAN)

EFT
  Requirement: To meet customer expectations for EFT immunity of Telecommunication/Computing Equipment
  Specification: Bell Canada DS 8465 (CAN)

Acoustic Noise
  Requirement: To meet customer expectations for acoustic noise
  Specifications: Telcordia GR-63-CORE (USA/CAN); ETS 300 753 (EC)

Power & Grounding
  Requirement: To meet customer expectations for power & grounding
  Specification: ETS 300 132-2 (EC)


9 ABBREVIATIONS AND DEFINITIONS
9.1 ABBREVIATIONS
3GPP Third Generation Partnership Project
ALCAP Access Link Control Application Part
APC Application Processor
ARP Address Resolution Protocol
ASN Abstract Syntax Notation
ATM Asynchronous Transfer Mode
BPDU Bridge Protocol Data Unit
BTS Base Transceiver Station
CC Congestion Control
CCP Common Communication Port
CG Carrier Grade
CoS Class of Service
CP Control Processor
CP, C-Plane Control Plane
CPSO CP Switchover
DHCP Dynamic Host Configuration Protocol
DNS Domain Name Server
DP Discard Priority
DSCP Differentiated Services Code Point
ECMP Equal Cost Multiple Path
EP Equipment Protection
ESP Encapsulating Security Payload
FACH Forward Access Channel
FP Function Processor
FQM Frame Queue Manager
GBR Guaranteed Bit Rate
Gbps Gigabits per second
HSDPA High Speed Downlink Packet Access
HS-DSCH High Speed Downlink Shared Channel
HSM Hitless Software Migration
ICMP Internet Control Message Protocol
IP Internet Protocol
IPSEC IP Security
LAN Local Area Network
LLC-SNAP Logical Link Control Subnetwork Access Protocol
LMT Local Maintenance Terminal
LOS Loss of Signal
NAT Network Address Translation
NBAP Node B Application Protocol
NCP Network Control Protocol
OAM Operations, Administration, and Maintenance
OMC Operations and Maintenance Center
PCH Paging Channel
PDR Protected Default Route
PHB Per-Hop Behavior
PQC Passport Queue Controller
QoS Quality of Service
RAB Radio Access Bearer
RACH Random Access Channel
RNC Radio Network Controller
RTO Retransmission Time Out
SC Service Category
SCTP Stream Control Transmission Protocol (IETF RFC 2960). A reliable
protocol for sending signaling packet data (Iub
Control Plane and SS7) over IP
SCTP INIT Mechanism used in the initialization of an SCTP association
SGW Security Gateway
SHTTP Secure HyperText Transfer Protocol, also Secure HTTP (IETF
RFC2660) used for the transmission of OAM parameters from
the OMC through the RNC to the Configuration Server. The
RNC does not terminate SHTTP or process the content of the
SHTTP payload.
SMLC Serving Mobile Location Center
SRB Signaling Radio Bearer
TM Traffic Management
TRB Traffic Radio Bearer
TTI Transmission Time Interval
UDP User Datagram Protocol
UE User Equipment
UMTS Universal Mobile Telecommunication System
UP, U-Plane User Plane
VR Virtual Router



END OF DOCUMENT