
IU TRANSPORT ENGINEERING GUIDE

Document number: UMT/IRC/APP/11676
Document issue:  07.02 / EN
Document status: Standard
Date:            11/06/2009
Author:          Philippe DELMAS

External document

Copyright 2007 Alcatel-Lucent, All Rights Reserved

Printed in France

UNCONTROLLED COPY: The master of this document is stored on an electronic database and is write protected; it may be altered only by authorized persons. While copies may be printed, this is not recommended. Viewing the master electronically ensures access to the current issue. Any hardcopies taken must be regarded as uncontrolled copies.
ALCATEL-LUCENT CONFIDENTIAL: The information contained in this document is the property of Alcatel-Lucent. Except as expressly authorized in writing by Alcatel-Lucent, the holder shall keep all information contained herein confidential, shall disclose the information only to its employees with a need to know, and shall protect the information from disclosure and dissemination to third parties. Except as expressly authorized in writing by Alcatel-Lucent, the holder is granted no rights to use the information contained herein. If you have received this document in error, please notify the sender and destroy it immediately.


PUBLICATION HISTORY
(External Edition / UMTS Release / Date / Reasons for change)

7.02 (Standard Edition), 11-06-09:
- 3.11.7.2: Mod, up to 19 remote SCTP endpoints per RNC.

7.01 (Standard Edition), 25-01-09:
- 3.11.7.3: added Sigtran robustness.
- 3.10.1.2.3: Mod, 16pOC3 MS3 FP ECMP.

02-12-08:
- 3.11.2: FRS89869 IuPS CP/UP traffic separation.
- 3.11.6: deadInterval reduced to 3 seconds.

01-10-08:
- Preliminary edition.

7.00, UA6, 17-09-08:
- FRS 23479 Advanced QoS Transport Framework,
- FRS 29869 RNC Dimensioning,
- FRS 33334 & 33365 RNC Hybrid,
- FRS 34202 BwPool,
- FRS 33363 IuPS over IP,
- FRS 34105 UtranSharing,
- FRS 34220 dual stack SS7.

6.00, UA5-1, 04-04-07:
- FRS18855 Utran sharing.

5.00, UA5-0, 10-01-07:
- 5: IuFlex R99/R4 CS CN vpi modification.

01-07-06:
- FRS27083: RNC AAL2 CAC enhancement.
- FRS29417: IuFlex.
- FRS30782: 16pOC3 MS3 FP.

12-06-06:
- Change of the Passport Release associated with UA4-2.
- Buffer size setting in case of shaped VPC.
- Added 3.10.2.2.1 SaalNni abnormal cases.

42.3, UA4-2
42.2, 12-04-06
42.1, 30-06-05

4.03, UA4-1, 30-06-05:
- 3, 5: update with BICN.
- DocumentReferenceNumber change.
- UA4-2 Update:
  - 3 & 5: RNC AAL2 new components,
  - 3 & 5: Q.AAL2 alternateRouting added,
  - 3: aal2LinkCac ACR enhancement.
- Wording update:
  - 3: IuCS RNC Path selection: available path added.
  - 3: AAL2 address: replace "reserved" by "not used".

4.02, 02-12-04:
- 5: ATM PointToPoint connection case removed.
- 3.11.1.2.1: CS MTP3 PC re-formulation.
- 3.4.6.4: APC Threshold value change.


4.01, 28-10-04:
- 5: ATM PointToPoint connection case removed.
- 3.11.1.2.1: CS MTP3 PC re-formulation.
- 3.4.6.4: APC Threshold value change.

4.00, 01-05-04:
- Reviewed 07-2004.
- Release 4-1 Update:
  - 3 & 5: CS CoreNetwork BICN Architecture added,
  - 3 & 5: SS7 quasiAssociatedMode update,
  - 3 & 5: AAL2 switch added,
  - 3.10: RNC1500 information added,
  - 3.5: PNNI updated,
  - 3/RNC/SS7 stack migration: update,
  - 3.3: APS update.

Na, UA3, 15/01/04:
- 3/ATM/OAM LoopBack: modification.
- 3/InverseARP: added.
- "RNC Original" is replaced by "Default Configuration".
- "WG Original" is replaced by "Alternative Configuration".

Na, UA3, 18/08/03:
- 5.5.2.1: Pathid Range Modification.
- 3/Transmission: added.
- 3/ATM/QOS: added.
- Release3 Update:
  - 3.6.6 ATM/PNNI,
  - 3.6.7 SS7 ProtocolStack migration,
  - ATM/Addressing removed.
- Review remarks incorporated.


CONTENTS

IU TRANSPORT ENGINEERING GUIDE ... 1
CONTENTS ... 5
1 INTRODUCTION ... 8
1.1 OBJECT ... 8
1.2 SCOPE OF THIS DOCUMENT ... 8
1.3 AUDIENCE FOR THIS DOCUMENT ... 8
1.4 APPROVAL & REVIEWER ... 8
2 RELATED DOCUMENTS ... 8
2.1 APPLICABLE DOCUMENTS ... 8
2.2 REFERENCE DOCUMENTS ... 9
3 TRANSPORT NETWORK LAYERS, DESCRIPTION ... 11
3.1 TRANSMISSION ... 13
3.1.1 PDH ... 13
3.1.1.1 T3 Link ... 14
3.1.2 SDH/SONET ... 15
3.1.2.1 Throughput ... 16
3.1.2.2 Transmission OAM ... 16
3.1.2.3 APS ... 18
3.2 ATM ... 23
3.2.1 ATM interface Type ... 23
3.2.2 VPC, VPT ... 23
3.2.2.1 VPC ... 23
3.2.2.2 VPT ... 24
3.2.3 Oversubscription ... 26
3.2.4 ATM CAC ... 27
3.2.5 Traffic Management ... 28
3.2.5.1 TrafficDescriptor parameter presentation ... 28
3.2.5.2 TrafficShaping ... 29
3.2.6 QOS ... 30
3.2.6.1 Buffer ... 30
3.2.6.2 Scheduling ... 32
3.2.6.3 CDV, CDVT ... 33
3.2.6.4 Discard Policy ... 34
3.2.7 Addressing ... 37
3.2.8 ATM OAM ... 37
3.2.8.1 ATM Node, OAM Type ... 38
3.2.8.2 ATM OAM Flows ... 38
3.2.8.3 ATM OAM Signals ... 40
3.2.8.4 ATM OAM Card characteristics & Compliancy ... 44
3.3 PNNI ... 44
3.3.1 Pnni Topology ... 44
3.3.2 Pnni Protocols ... 45
3.3.3 Pnni Channels ... 46
3.3.4 Pnni ReRouting ... 46
3.3.5 Pnni on UMTS Interfaces ... 48
3.3.5.1 sPVC, sPVC Hairpins ... 49
3.3.5.2 Plane description ... 49
3.4 AAL2 ... 52
3.4.1 Addressing ... 52
3.4.2 ALCAP ... 52
3.4.3 AAL2 Switching ... 53
3.5 IP ... 55
3.5.1 InverseARP ... 55
3.5.2 ECMP ... 57
3.5.3 QOS DownGrading ... 59
3.6 GTP-U ... 60
3.7 SAAL-NNI ... 60
3.8 SS7 ... 60
3.8.1 MTP2 ... 60
3.8.2 MTP3 ... 61
3.8.2.1 SS7 Network Topologies ... 61
3.8.2.2 PointCode ... 63
3.8.2.3 Service Indicator ... 64
3.8.2.4 NetWork Indicator ... 64
3.8.2.5 RouteSet, LinkSet ... 65
3.8.2.6 Loadsharing ... 67
3.8.2.7 ChangeOver ... 69
3.8.2.8 MTP3 interface name ... 70
3.8.3 SCCP ... 71
3.8.3.1 SSN ... 71
3.8.3.2 Routing ... 71
3.8.3.3 SCCP Frame ... 72
3.8.3.4 Timers ... 72
3.9 IU TOPOLOGY ... 73
3.9.1 Transmission link ... 73
3.9.2 BICN NSS18 ... 74
3.9.3 BICN NSS19 ... 75
3.9.4 UTRAN with BICN ... 76
3.9.5 Two stm1/Oc3 links on RNC ... 77
3.9.6 PP15k-POC ... 78
3.9.7 Aal2 switch ... 79
3.9.8 Quasi associated mode ... 80
3.9.9 IuFlex ... 81
3.9.10 Hybrid IuPS ... 81
3.10 RNC ATM ... 82
3.10.1 FP ... 82
3.10.1.1 16pOC3/Stm1 FP ... 82
3.10.1.2 16pOC3/Stm1 Atm/POS FP ... 83
3.10.1.3 4pOC3/Stm1 FP ... 88
3.10.1.4 PSFP/DCPS ... 89
3.10.2 SS7 Protocol Stack ... 92
3.10.2.1 Icn interface, SS7 migration from RNC-CN to RNC-IN impact ... 93
3.10.2.2 RNC architecture impact ... 93
3.10.2.3 Amount of SL per routeSet ... 95
3.10.3 Aal2 Pathid ... 96
3.10.4 Aal2 RNC Identifiers ... 97
3.10.5 Aal5 connections ... 99
3.10.6 QOS ... 100
3.10.6.1 UMTS QOS information ... 100
3.10.7 Transport Map ... 100
3.10.7.1 transportMap tables ... 103
3.10.8 Transport admission Control ... 105
3.10.9 Alcap ... 106
3.10.10 Q-AAL2 Alternate Routing ... 106
3.10.10.1 Transport User Plane aspect ... 107
3.10.10.2 Transport control Plane aspect ... 108
3.10.11 Aal2 Path assignment to PMC-PC ... 111
3.10.12 Aal2 CID Selection ... 114
3.10.13 Aal5 connections ... 116
3.10.14 IuFlex ... 117
3.10.15 Utran Sharing ... 122
3.10.15.1 Utran interface impact ... 123
3.10.15.2 Transport Identifiers ... 123
3.10.15.3 Routing Table ... 124
3.11 RNC HYBRID ... 127
3.11.1 FP ... 127
3.11.1.1 4pGE FP ... 127
3.11.2 Virtual Router RNC composition ... 127
3.11.3 LocalMedia ... 130
3.11.4 QOS ... 132
3.11.5 PDR ... 133
3.11.6 ICMP HeartBeat ... 136
3.11.7 Sigtran ... 137
3.11.7.1 M3UA ... 138
3.11.7.2 SCTP ... 140
3.11.7.3 Resiliency ... 143
3.11.8 IuFlex ... 144
3.11.9 UtranSharing ... 144
4 VARIATIONS BETWEEN RELEASES ... 144
4.1 RNC ... 144
4.1.1 16pOC3/stm1 MS3 FP ... 144
4.1.2 Hardware ... 144
4.1.3 PNNI ... 145
4.1.4 SS7 ... 145
4.1.5 AAL2 ... 146
4.1.6 IuFlex ... 147
4.1.7 IuFlex ... 148
4.1.8 SOC ... 148
4.2 PLANE DESCRIPTION ... 148
4.2.1 CS & PS Control Plane ... 148
4.2.1.1 TMU, SL mapping ... 148
4.2.2 CS User Plane ... 148
4.2.3 PS User Plane ... 149
5 TRANSPORT IDENTIFIERS ... 150
5.1 VPI ... 150
5.2 IUCS INTERFACE ... 153
5.2.1 vpi.vci ... 153
5.2.2 aal2If / IucsIf ... 154
5.2.2.1 Pathid ... 155
5.2.2.2 Aal2 QOS ... 155
5.2.3 Specific Network Topologies ... 155
5.2.3.1 AAL2 Backbone ... 155
5.3 IUPS INTERFACE ... 157
5.3.1 vpi.vci ... 157
5.4 IU/IUR PNNI SPVC HAIRPINS ... 158
5.5 FP ATTRIBUTES ... 159
5.5.1 Classical 16pOC3/Stm1 FP Attributes ... 159
5.5.2 16pOC3/Stm1 MS3 FP ... 159
5.6 TRAFFIC CONTRACT ... 159
ABBREVIATIONS ... 159


1 INTRODUCTION

1.1 OBJECT

This document describes the Transport layers on the IuCS and IuPS interfaces. It includes:
- a description of Transport in the context of UMTS,
- engineering rules,
- the configuration of the UMTS node Transport interfaces and of the backbone edge nodes.

1.2 SCOPE OF THIS DOCUMENT

The document is related to UA6 GlobalMarket. Section 4 indicates the variations from previous releases.

UTRAN Release:    UA 4-1   UA 5-0   UA 5-1
OAM Release:      OAM4.2   OAM5.0   OAM5.0
Passport Release: PCR6-1   PCR7-2   PCR8-2

The document covers:
- the IU interface,
- the Transport network layers (Transmission, ATM, SS7, IP, ALCAP),
- the ALU RNC, MGW and SGSN,
- the impact of a public backbone on the interface.

Document structure:
- Section 3 describes the IU Transport features that do not depend on the UMTS release.
- Section 4 specifies the availability of IU Transport features per UMTS release.
- Section 5 provides parameter values for different network topologies.

1.3 AUDIENCE FOR THIS DOCUMENT

Operators, R&D, WPS, VO, Trial and Network Design teams.

1.4 APPROVAL & REVIEWER

2 RELATED DOCUMENTS

2.1 APPLICABLE DOCUMENTS


2.2 REFERENCE DOCUMENTS




Internal documents:
- UMT/IRC/APP/7149: Iub TEG
- UMT/IRC/APP/11676: Iu TEG (this document)
- UMT/IRC/APP/011674: Iur TEG
- UMT/IRC/APP/12509: Addressing TEG

3GPP specifications:
- 3GPP TS 25.430: UTRAN Iub: General Aspects and Principles
- 3GPP TS 25.442: UTRAN Implementation Specific O&M Transport
- 3GPP TS 34.108: RAB definition
- 3GPP TS 25.410: UTRAN IU Interface: general aspects and principles
- 3GPP TS 25.412: UTRAN IU Interface: Signaling transport, Rel99
- 3GPP TS 25.414: UTRAN IU Interface: Data & Signaling transport
- 3GPP TS 29.060: GTP
- 3GPP TS 25.420 v3.5.0: Iur general aspects and principles
- 3GPP TS 25.422 v3.6.0: Iur Signalling transport
- 3GPP TS 23.236 v5.4.0: Intra-domain connection of RAN node to multiple CN nodes (Release 5)
- 3GPP TS 25.425 v3.6.0: Iur User Plane protocols for common Transport Channel data streams
- 3GPP TS 25.309 Rel6: FDD Enhanced Uplink, overall description, stage 2 (Release 6)

ITU-T recommendations:
- G.702: PDH, Digital Hierarchy Bit Rates
- G.703: PDH, Physical/Electrical Characteristics of Hierarchical Digital Interfaces
- G.707: Network node interface for the SDH
- G.804: ATM cell mapping into PDH
- Q.700 to Q.706: MTP NarrowBand
- Q.711 to Q.714: SCCP
- G.832: Transport of SDH elements on PDH networks
- G.783: Characteristics of SDH equipment
- G.841: Types & Characteristics of SDH Network Protection Architectures
- G.775: LOS and AIS defect detection and clearance criteria
- I.361: Specification of the ATM layer for B-ISDN
- I.610: OAM for broadband ISDN
- I.363-1: AAL1
- I.363-2: AAL2
- I.363-5: AAL5
- I.732: Functional Characteristics of ATM Equipment
- I.761: IMA
- I.762: ATM over fractional physical links
- Q.2210: MTP3 functions & messages using the services of Q.2140
- Q.2931: B-ISDN
- Q.2630.1: AAL2 Signaling, capability set 1
- Q.2630.2: AAL2 Signaling, capability set 2
- E.191: B-ISDN numbering & addressing
- X.213: Addresses
- Q.2941.2: GIT
- Q.1970 and Q.1990: Nb interface info
- Q.1950: Specifications of signaling related to Bearer Independent Call Control (BICC)
- Q.765.5: Application transport mechanism: Bearer Independent Call Control (BICC)
- Q.2150.1: Signalling transport converter on MTP3/MTP3b
- G.711: PCM of Voice Frequencies

IETF RFCs:
- RFC 1541: DHCP
- RFC 2225: IP & ARP over ATM
- RFC 1629: Guidelines for OSI NSAP allocation in the Internet
- RFC 1293: InverseARP
- RFC 826: ARP
- RFC 1485: IP over ATM
- RFC 1483: IP over AAL5
- RFC 2991: ECMP
- RFC 3331: SS7 MTP2 User Adaptation (M2UA) Layer
- RFC 3332: SS7 MTP3 User Adaptation (M3UA) Layer
- RFC 2960: SCTP, Stream Control Transmission Protocol
- RFC 3309: SCTP, Stream Control Transmission Protocol (checksum change)

ATM Forum specifications:
- atmf 0055.000: PNNI version 1.0
- af-phy-0086.000: IMA v1.0
- af-phy-0086.001: IMA v1.1
- af-phy-0130.00: ATM on fractional E1/T1
- af-ra-0105.000: Addressing, user guide, v1.0
- af-tm-0121.000: Traffic Management Specification
- af-vtoa-0078.000: AAL1
- af-cs-0173.000: Domain-based rerouting
- af-vtoa-0113.000: ATM Trunking using AAL2 for narrowband services



Telcordia / Nortel documents:
- GR253: Synchronous Optical Network (SONET)
- GR2878: ATM HSL
- 241-5701-705: NTP ATM Traffic Management
- 241-5701-706: NTP ATM Traffic Shaping and Policing
- 241-5701-707: NTP ATM Queuing and Scheduling
- 241-5701-708: NTP ATM CAC and Bandwidth
- 241-5701-702: NTP Routing and Signaling

Internal documents:
- UMT/IRC/APP/007147: PEI NodeB
- UMT/IRC/APP/7146: PEI RNC
- UMT/DCL/DD/0020: UPUG

3GPP specifications (BICN):
- 3GPP TS 23.002: R4 Network Architecture
- 3GPP TS 23.221: Architectural Requirements
- 3GPP TS 23.205: BICN; Stage 2
- 3GPP TS 29.205: Application of the Q.1900 series to BICN Architecture; Stage 3
- 3GPP TS 29.232: MGC, MG Interface, Stage 3, Release 4
- 3GPP TS 29.414: CN Nb Data Transport and Transport Signaling
- 3GPP TS 29.415: CN Nb Interface User Plane Protocols
- 3GPP TS 29.232: TFO package

Feature documents:
- [R 150] FRS 25647: aal2LinkCac evolution
- [R 151] FRS 33767: Iub over protected ATM ring
- [R 160] UMT_SYS_DD_023235: UA6_Bandwidth_Pools_FN
- [R 161] UMT/Sys/DD/023092: Hybrid Iub FN


3 TRANSPORT NETWORK LAYERS, DESCRIPTION

The Iu protocol stacks are defined by 3GPP [R13]. The scope of this TEG is the TNL (Transport Network Layer) part of the 3GPP UMTS protocol stacks:


Figure 3-1: IU CS protocol stack (RNC - UMGw). RNL: RANAP CS (UMTS CP) and UMTS UP. TNL: SCCP, and ALCAP over Q.2150.1, both carried over MTP3B / SSCF-NNI / SSCOP / AAL5 (control plane); AAL2 (user plane); all over ATM / PHY.

Figure 3-2: IU PS over ATM protocol stack (RNC - SGSN). RNL: RANAP PS (UMTS CP) and Iu PS UP. TNL: SCCP / MTP3B / SSCF-NNI / SSCOP (control plane) and GTP-U / UDP / IP (user plane), both over AAL5 / ATM / PHY.


[Figure: Iu PS over IP protocol stack between RNC and SGSN. RNL: RANAP PS (UMTS CP) and
Iu PS UP (UMTS UP). TNL CP: SCCP / M3UA / SCTP. TNL UP: GTP-U / UDP. Both planes run over
IP / Ethernet / PHY.]
Figure 3-3, IuPS over ip, protocol stack

3.1

TRANSMISSION
The transmission layer encompasses two subLayers:
- The PMD (PhysicalMedium) subLayer: type of medium: microwave, copper or optical fiber,
- The TC (TransmissionConvergence) subLayer:
  - Frame format,
  - Payload field,
  - Overhead fields: synchronization, error detection, etc.
Different transmission formats may co-exist within a telecommunications network:
- PDH (PlesiochronousDigitalHierarchy) is used for copper medium or microwave, whereas SDH/SONET
(SynchronousDigitalHierarchy / SynchronousOpticalNETwork) is commonly used for optical fiber medium but
may also be used for copper medium and microwave.
PDH specifies the following link levels: ITU: E1, E2, E3, E4, E5; ANSI: T1, T2, T3, T4.
- SDH/SONET links typically provide higher throughput than PDH links. PDH links may be encapsulated within
SDH/SONET containers.
- Ethernet.

3.1.1

PDH:
Different kinds of PDH links are specified either by the ITU or by ANSI.
- ITU PDH links:
- E1, 2048 Kbps,
- E2, 8448 Kbps, multiplex of 4 E1,
- E3, 34368 Kbps, multiplex of 4 E2 (16 E1),
- E4, 139 264 Kbps, multiplex of 4 E3 (64 E1), or
- E5, 564 992 Kbps, multiplex of 4 E4 (256 E1).
- ANSI PDH links are called: T1, T2, T3, and T4:
- T1, 1544 Kbps,
- T2, 6312 Kbps, multiplex of 4 T1,
- T3, 44736 Kbps, multiplex of 7 T2 (28 T1),


- T4, 274 176 Kbps, multiplex of 6 T3 (168 T1).
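The rates above are not simple multiples of their tributaries: each multiplexing stage adds framing and justification overhead. As an illustration (a sketch, using the kbps values listed above), the per-stage overhead can be computed:

```python
# PDH multiplex overhead per stage (rates in kbps, taken from the lists above):
# an E2 carries 4 E1s but runs faster than 4 x 2048 kbps, and similarly for
# the other stages; the difference is framing/justification overhead.
stages = [
    ("E2", 8448, 4, 2048),   # E2 = 4 * E1
    ("E3", 34368, 4, 8448),  # E3 = 4 * E2
    ("T2", 6312, 4, 1544),   # T2 = 4 * T1
    ("T3", 44736, 7, 6312),  # T3 = 7 * T2
]
for name, rate, n, tributary in stages:
    print(f"{name}: overhead = {rate - n * tributary} kbps")
```

The 552 kbps computed for T3 matches the T3 overhead figure quoted later in this section.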

3.1.1.1 T3 LINK:
The T3 link results from the multiplexing of seven T2 tributaries.
The T3 signal may be either channelized or unchannelized.
- Channelized means that the T3 signal results from two multiplexing stages: T3 = 7 * T2 and T2 = 4 * T1.
A channelized T3 is the result of the multiplexing of 28 DS1 links.
A channelized T3 is composed of 7 * 4 * 24 = 672 timeSlots.
- Unchannelized means that the T3 payload is filled with bulk data, either cell direct mapping or PLCP based.
Only channelized T3 is described in this section, since unchannelized T3 is not used in the UMTS network.

              7 * T2                     T3                   T3 overhead
Throughput    7 * 6312 = 44 184 Kbps    44 736 Kbps          552 Kbps
# Bits        7 * 789 = 5523 bits       4760 bits (note)     56 bits
# Timeslots   7 * 96 = 672 ts           672 ts               2 ts

Note:
T3 is defined as a 44 736 Kbps throughput and a 4760-bit multiframe size; therefore 1.174 T3 multiframes are
transmitted every 125 µs.
Detail: 44.736 Mbps * 125 µs = 5592 bits are transmitted; 5592 bits / 4760 bits = 1.174 T3 multiframes.
The T3 multiframe is composed of 4760 bits:
- 4704 bits payload, and
- 56 bits overhead.
T3 user throughput in cells/s: [44 736 Kbps / (53 * 8)] * 4704 / 4760 = 104 268 cells/s
Rule: IuTEG_T3_1
T3 ATM throughput is 104 268 Cells/s.
The T3 multiframe is composed of 7 frames of 680 bits, called M-subframes.
One M-subframe is divided into 8 blocks of 85 bits. One block is composed of one overhead bit followed by 84
payload bits. Hence 7 * 8 = 56 overhead bits within a T3 multiframe.
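The cell-rate figure in rule IuTEG_T3_1 can be reproduced from the numbers above (line rate, ATM cell size, and payload fraction of the multiframe); a quick check in Python:

```python
# T3 ATM cell rate: line rate divided by the ATM cell size (53 octets),
# scaled by the payload fraction of the multiframe (4704 of 4760 bits).
line_rate_bps = 44_736_000
cell_bits = 53 * 8
payload_fraction = 4704 / 4760

cells_per_second = line_rate_bps / cell_bits * payload_fraction
print(int(cells_per_second))   # -> 104268, as stated in rule IuTEG_T3_1
```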

[Figure: T3 multiframe structure. Seven M-subframes; the first overhead bit of each subframe is one of
X1, X2, P1, P2, M1, M2, M3, and within each subframe the remaining overhead bits are the framing bits
F1..F4 interleaved with the C-bits Ci1..Ci3, each overhead bit preceding an 84-bit payload block.]
Figure 3-4 T3 multiframe structure


Overhead bits functions:
X-bits (X1, X2):
X1 and X2 are used to indicate received errored multiframes to the remote-end (remote alarm indication
RAI or yellow signal);
X1 = X2 = 1 during error free condition,


X1 = X2 = 0 if Loss of Signal (LOS), Out of Frame (OOF), Alarm Indication Signal (AIS) or Slips are
detected in the incoming signal.
P-bits (P1, P2):
P1 and P2 are used for performance monitoring; these bits carry parity information calculated over the 4704
payload bits in the preceding multiframe:
P1 = P2 = 1 if the digital sum of all payload bits is one,
P1 = P2 = 0 if the digital sum of all payload bits is zero.
The P-bits are calculated and may be modified at each section of a facility; therefore, the P-bits provide
SECTION performance information NOT end-to-end performance information.
Multiframe alignment signal (M1, M2, M3):
The multiframe alignment signal 010 (M1 = 0, M2 = 1, M3 = 0) is used to locate all seven M-subframes,
within the multiframe.
M-subframe alignment signal (F1, F2, F3, F4):
The M-subframe alignment signal 1001 (F1 = 1, F2 = 0, F3 = 0, F4 = 1) is used to identify the overhead bit
positions.
C-bits (C11, C12, C13, C21, ... Cij, ... C73):
Used for bit Parity, or bit stuffing.

3.1.1.1.1

OAM:

Alarm Indication Signal (AIS):

The AIS signal consists of setting the information bits to a 1010... sequence, starting with a binary one (1)
after each M-bit, F-bit, X-bit, P-bit, and C-bit.
The C-bits are set to binary zero (C1 = 0, C2 = 0, C3 = 0).
The X-bits are set to binary one (X1 = 1, X2 = 1).
Idle Signal (Idle):
The Idle signal consists of setting the information bits to a 1100... sequence, starting with a binary one (1)
after each M-bit, F-bit, X-bit, and C-bit. The C-bits are set to binary zero (C1 = 0, C2 = 0, C3 = 0) in the 3rd
M-subframe (C31, C32, C33); the remaining C-bits (three C-bits in M-subframes 1, 2, 4, 5, 6, and 7) may be
individually set to 1 or 0, and may vary with time. The X-bits are set to binary one (X1 = 1, X2 = 1).

3.1.1.1.2

PASSPORT INTERFACE CARD:

The 4pDS3 FP channelized to the T1 level is supported on the PP15K used as POC, not on the RNC-AN (based on PP7K).
Up to 56 IMA linkGroups may be configured on a 4pDS3 FP.
Remarks:
The 4pE3/DS3 FP clear channel is supported neither by the RNC nor by the NAM.
A channelized 4pE3 FP does not exist.

3.1.2

SDH/SONET
Refer to [R32, 49, 50, 110].
SONET is specified in ANSI recommendations, whereas SDH is specified in ITU recommendations.
Differences between SDH and SONET:
- Frame field terminology,
- Minor differences in the application of certain overhead bytes; this level of detail is beyond the scope of this
document.


3.1.2.1 THROUGHPUT
Two levels of SDH/SONET signals are specified within UMTS network.
These levels and associated throughputs are presented in the following table:

SDH Level            SONET Level                 Line Throughput   User Throughput   User Throughput
                                                                   (Mbps)            (Cells/s)
STM1 ClearChannel    STS3 = OC3 Concatenated     155.52 Mbps       149.76 Mbps       353 207
STM4                 STS12 = OC12                622.08 Mbps       599.04 Mbps       1 412 828

STM-n stands for SynchronousTransferModule level n. It identifies the level of SDH signal.
STS-n stands for SynchronousTransferSignal level n. Electrical specification for signal generation level.
OC-n stands for OpticalCarrier level n: Optical specification for signal generation level.
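The user throughputs above follow from the SDH frame structure. As a sketch (using the standard STM-1 layout: 9 rows by 270 columns of bytes at 8000 frames/s, with 9 columns of section overhead plus 1 column of path overhead):

```python
# STM-1 structure: 9 rows x 270 columns of bytes, 8000 frames per second.
# 9 columns of section overhead + 1 column of path overhead leave 260
# payload columns, giving the 149.76 Mbps user rate quoted above.
rows, cols, frames_per_s = 9, 270, 8000
line_rate = rows * cols * 8 * frames_per_s          # bits per second
user_rate = rows * (cols - 10) * 8 * frames_per_s   # payload bits per second
cells_per_second = user_rate // (53 * 8)            # one ATM cell = 53 octets
print(line_rate, user_rate, cells_per_second)       # -> 155520000 149760000 353207
```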

3.1.2.2 TRANSMISSION OAM


Severe problems in signal transmission are notified by means of Maintenance Signals and Status Signals.
Maintenance signals result from problems detected on the incoming SDH/SONET signal.
At the Transmission layer, four levels of OAM flows are defined. These OAM flows carry Maintenance and Status
signals related to different SDH/SONET sections:
- Regenerator Section OAM flow:
  carries Maintenance and Status signals related to the SDH RegeneratorSection / SONET Section.
- Multiplex Section OAM flow:
  carries Maintenance and Status signals related to the SDH MultiplexSection / SONET Line.
- LowOrder Section OAM flow:
  carries Maintenance and Status signals related to the SDH lowOrder PathSection / SONET Path.
- HighOrder Section OAM flow:
  carries Maintenance and Status signals related to the SDH highOrder PathSection / SONET Path.

The mechanisms to provide OAM functions and to generate Transmission OAM flows depend on the transport
mechanism of the transmission system as well as on the supervision functions contained within the physical layer
termination functions of the equipment. Three types of transmission can be provided in ATM networks:
- SDH-based transmission systems;
- PDH-based transmission systems;
- Cell-based transmission systems.

Rule: IuTEG_SDH_OAM_1
On ALU nodes, the OAM Flow Type of Transmission implemented is SDH/PDH based. Not Cell based.

The following transmission status events may be detected:

- LOS (Loss Of Signal): disconnection, idle signal, unequipped signal [R25, R80].
  An LOS defect is detected when an all-zeros pattern on the incoming SDH/SONET signal lasts 100 µs
  or longer. If an all-zeros pattern lasts 2.3 µs or less, an LOS defect is not detected.
  The 16-port OC-3 function processor does not monitor signal level for loss.
- LOF (Loss Of Frame):
  An SEF (Severely Errored Framing) condition is detected when the incoming signal has a minimum
  of four consecutive errored RSOH A1-A2 framing patterns.


A LOF defect is triggered when an SEF condition persists for 3 ms.
The following Maintenance signals may be generated at the different transmission OAM levels:
- AIS (Alarm Indication Signal):
  The AIS signal notifies the adjacent downstream node that a failure has occurred upstream. AIS may be
  generated at MultiplexSection, LowOrder and HighOrder PathSection level.
  The SDH MS-AIS is renamed L-AIS in SONET.
  AIS triggers: LOS or LOF condition within 125 µs on the incoming link.
  The MS-AIS is generated in an STM-N / OC-N that contains a valid MultiplexSection overhead, the
  K2 byte indicating MS-AIS, and an all-ones pattern in the payload:
      K2 byte, bits 6, 7, 8 = 111: MS-AIS
  The HO and LO P-AIS are coded as all-ones in the container and Path pointers.
- RDI (Remote Defect Indication):
  The RDI signal notifies the adjacent upstream node that a failure has been detected downstream.
  RDI may be generated at MultiplexSection, LowOrder and HighOrder PathSection level.
  The SDH MS-RDI is renamed L-RDI in SONET.
  RDI triggers:
  - LOS or LOF condition within 125 µs,
  - Reception of an AIS signal.
  Remark: FERF is the old name for RDI.
  The MS-RDI is coded within the K2 byte:
      K2 byte, bits 6, 7, 8 = 110: MS-RDI
  The HO and LO RDI are coded in one bit of the HO / LO container overhead.
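The K2-byte codings above can be summarized in a small decode helper (a sketch; the function name is ours, not an MML or Passport API):

```python
# Decode the maintenance indication carried in bits 6-8 of the K2 byte,
# per the codings above: 111 -> MS-AIS, 110 -> MS-RDI.
def k2_maintenance(k2_byte):
    bits_678 = k2_byte & 0b111          # bits 6, 7, 8 are the 3 least significant
    return {0b111: "MS-AIS", 0b110: "MS-RDI"}.get(bits_678, "none")

print(k2_maintenance(0b00000111))  # -> MS-AIS
print(k2_maintenance(0b00000110))  # -> MS-RDI
print(k2_maintenance(0b00000000))  # -> none
```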
Examples of AIS and RDI generation on UMTS interface composed of SDH nodes:
[Figure: AIS/RDI generation example across an SDH chain (PTE - regenerator - ATM crossConnect - PTE,
working line 1 and protection line 0, multiplex sections 1 and 2). A LOS condition on one line causes
the downstream node to generate AIS and the path terminating equipment to return P-RDI upstream;
K1/K2 MSOH bytes carry the corresponding indications.]
Figure 3-5 AIS/RDI generation, example 2

3.1.2.3 APS
Reference documents: [R32 & R33 & R80].
Each Iu ATM SDH line is duplicated on the RNC, referred to as 1+1 port redundancy, to ensure that the network
will continue operating when a single SDH line is defective. The duplicated SDH line is located either:
- In the same card, in the MSA32E1/Oc3 case; this is referred to as intraCard APS, or
- In a twin card, in the 16pOC3 FP case; this is referred to as interCard APS.

One line is configured as the Working line, whereas the associated line is configured as the Protected Line. See
MML attributes:
ATTRIBUTE Aps workingLine (working)
ATTRIBUTE Aps protectionLine (protection)
Both the Working and the Protected lines carry the same payload in the transmit direction, but only the Working
line is used for the received payload.
At node startup, the workingLine is selected for receiving user payload data.
Nodes permanently monitor the quality of the received signal. Based on these measurements, each node either
keeps the working line selected or decides to select the protectionLine for receiving user payload data; this
operation is referred to as an APS Switch.

[Figure: APS mechanism between the RNC and a UMTS node. The 16pOC3 cards transmit the same signal
on both the working and the protected STM1 lines through the SDH network (duplication of the egress
stream); on receive, each end decides which ingress stream to take into consideration.]

Figure 3-6 APS mechanism

Rule: IuTEG_SDH_APS-1
It is recommended to activate APS on:
- All the Nortel MGW and SGSN interfaces,
- All the RNC 16pOC3 FP ports.
It is recommended to configure them as follows:
- WorkingLine: Stm1 links on the 16pOC3/Stm1 FP located on RNC-IN Slot 8,
- ProtectedLine: Stm1 links on the 16pOC3/Stm1 FP located on RNC-IN Slot 9.
Rule: IuTEG_SDH_APS-2
Even if RNC-CN does not support APS, it is recommended to activate LAPS on the RNC


16pOC3/Stm1 FP port dedicated to Icn interface.
Abnormal case:
In some cases, operators may decide temporarily not to connect the second fiber on some interfaces.
In such a context:
- Aal2If constraint: if the LAPS component is configured on one port supporting aal2 vccs, LAPS must be
configured on all RNC 16pOC3 FP ports supporting aal2 vccs, even if the second fiber is not connected, in
order to ensure proper behavior of the equipment.
- No constraint on the other application: AtmMpe.
- Moreover, to minimize outage duration, for conformity with the R&D platform RNC configuration and to
facilitate the introduction of future features (e.g. Y-Splitter),
it is recommended to configure LAPS on each RNC 16pOC3/Stm1 FP port.
Rule: IuTEG_SDH_APS-3
It is recommended to configure the LAPS component between each port of the pair of
16pOC3/Stm1 FPs even if the protected fiber is not connected to the RNC 16pOC3/Stm1
FP.

Remark:
If the LAPS component is configured while the protected fiber is not connected, one port is in LOS condition
and alarms appear on the management platform for the non-connected fiber; to remove these useless alarms,
either lock the non-populated port (lock Lp/9 Sdh/i) or activate the "alarm filtering" (W-NMS).

3.1.2.3.1 APS OPTIONS (CONFIGURABLE)

- Unidirectional/bidirectional mode:
  - Unidirectional mode: APS switching occurs only in the node which detects the misbehavior; there is no
    APS switching in the remote node.
    Hence, on the node which detects the traffic disturbance, the selected line is switched from the working
    line to the protected line, whereas on the adjacent node the selected line remains the working line.
    Even in unidirectional APS, the K1 byte is still used to inform the remote SDH node of the local action.
    Moreover, K2 byte/bit 5 is set to 0 to reflect unidirectional mode.
  - Bidirectional mode: APS switching occurs first in the node which detects the misbehavior, and then in
    the remote node.
    The node which detects misbehavior on a link invokes APS and informs the remote node, by means of the
    SDH MSOH K1 byte on the protected line, that this link is experiencing a defect.
    The K1 byte is set with the channel number (0 or 1 in case of 1+1 redundancy) and the nature of the
    defect which occurred on this link, e.g. SF, SD.
    Hence, on both adjacent nodes, the selected line is switched from the working line to the protected line.
    The K2 byte is transmitted on the protected line too.

Remark:
According to the SONET recommendation, each node at the extremity of a SONET link has to be configured
with the same mode, unidirectional or bidirectional. If the nodes are configured differently, each node will
apply unidirectional mode.
See MML attribute: ATTRIBUTE Aps mode
Unidirectional is the default mode on the RNC.
- Revertive mode:



After APS switching has been invoked, the APS feature allows, once the misbehavior has been corrected,
coming back to the initial configuration, if the Revertive option is enabled.
Such information is exchanged between SDH nodes by means of the K1 byte set with:
- WaitToRestore: the protection line is active and the working line has recovered from a Signal Fail or
Signal Degrade. After the period defined by the attribute waitToRestorePeriod, the working line will
automatically revert to being the received active line and the request will change to noRequest.
- ReverseRequest: this request is only applicable when the provisioned mode is bidirectional.
- DoNotRevert: the protection line is active and the Revertive option is not activated. The working line has
recovered from a signalFail or signalDegrade, or a forcedSwitch or manualSwitch request has been
cleared.
- NoRequest: the working line is active and no other requests are in effect.
APS option rules:
Rule: IuTEG_SDH_APS-4
It is recommended to set APS in bidirectional mode on the RNC, UMGW and USGSN nodes, with the
exception of the following cases, where APS has to be configured in unidirectional mode:
- Icn, since the RNC-CN does not support APS,
- VPT 2-port configurations,
- Alcatel UMTS node connected to an otherVendor node which does not provide APS.
Besides, per default, the Revertive mode is activated.

Remark:
The bidirectional APS setting allows identifying without any doubt the working fiber.
The unidirectional APS setting is slightly more robust because it can tolerate a failure on the working fiber
transmit side together with a failure on the protected fiber receive side.
See MML attributes:
ATTRIBUTE Aps revertive
When this attribute is yes, the Aps will revert the receive active line from protection back to working when
a working line request is cleared, after the provisioned waitToRestorePeriod has expired.
ATTRIBUTE Aps waitToRestorePeriod (wtrPeriod)
This attribute specifies the time during which the protection line will remain the received active line after
the working line recovers from the fault that caused the switch.

3.1.2.3.2 APS TRIGGERS

APS invocation is triggered on two different criteria: SF (SignalFail) or SD (SignalDegrade):

- SF criteria, based on the following indicators/conditions:
  - LOS (LossOfSignal) condition: it results from the reception, during at least 100 µs, of an SDH frame
    filled with only zeros.
    Example: this event occurs on link failure.
  - LOF (LossOfFrame) condition: it results from the reception of 4 consecutive errored frames.
    Example: bad timing, faulty A1, A2 bytes.
  - Reception of MS-AIS.
  - Reception of MS-RDI (TBC).
  - The SF BER (BitErrorRate) threshold is reached.




The SF BER threshold is fixed to 10e-3.
The BER is calculated on the multiplex section overhead: it counts discrepancies between the received
BIP-24xN (Bit Interleaved Parity) B2 bytes within the SDH MultiplexSection overhead and the even
parity code calculated on the received SDH frame.
SF must be detected within 0.08 seconds.
Example: extremely bad fiber, or attenuation problems.
- SD criteria, based on one indicator:
  - The SD BER (BitErrorRate) threshold is reached. The SD BER threshold is in the range 10e-3 to 10e-10.
    The BER is calculated on the multiplex section overhead, in the same way as for SF.
  See MML attribute:
  ATTRIBUTE Aps signalDegradeRatio (sdRatio)
  This attribute specifies the minimum BER for which a Signal Degrade failure is declared. Its value
  is the exponent of the BER, with possible values of -5 through -9 inclusive, which correspond to a
  BER range of 10e-5 through 10e-9.
The switch initiation time for a Signal Degrade varies depending on the observed BER:
    BER      Switch Initiation Time
    10e-3    10 milliseconds
    10e-4    100 milliseconds
    10e-5    1 second
    10e-6    10 seconds
    10e-7    100 seconds
    10e-8    16 minutes 40 seconds
    10e-9    2 hours 46 minutes 40 seconds
The clearing threshold for a Signal Degrade is one-tenth of the signalDegradeRatio; for example, if the
provisioned signalDegradeRatio is -5, the corresponding clearing threshold will be 10e-6. The
clearing time varies depending on the clearing threshold (not the observed BER), and can be
determined from the above table.
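The initiation times in the table follow a simple pattern: each decade of BER multiplies the time by ten. A sketch reproducing the table (the helper name is ours, not an MML attribute):

```python
# Signal Degrade switch initiation time as a function of the observed BER
# exponent, matching the table above (10e-3 -> 10 ms ... 10e-9 -> 10000 s):
# t = 10 ** (exponent - 5) seconds for a BER of 10e-exponent.
def sd_initiation_time_s(ber_exponent):
    return 10.0 ** (ber_exponent - 5)

for exp in range(3, 10):
    print(f"BER 10e-{exp}: {sd_initiation_time_s(exp)} s")
```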

Moreover, in bidirectional mode, when an SDH node is notified by means of the K1 and K2 bytes that APS has
been invoked in the remote node, APS is then invoked on the local node.
See MML attributes for APS monitoring:
ATTRIBUTE Aps nearEndRequest (neReq)
ATTRIBUTE Aps timeUntilRestore
This attribute indicates the amount of time until the received active line is automatically switched back
from the protection line to the working line.
The following figures represent failure conditions which trigger APS in UMTS nodes:



[Figure: APS switch on LOS condition. A LOS condition on the working line (line 1) leads the detecting
node to invoke APS and select the protected line (line 0); the switch is signalled on the protected line
via the K1/K2 bytes (K1 indicating SF on the working channel), while MS-RDI (K2 bits 6-8 = 110) is
returned on the failed section. No Path OAM signal is generated.]
Figure 3-7 APS switch on LOS Condition

[Figure: APS switch on MS-AIS reception. A LOS condition upstream causes a regenerator to forward
MS-AIS (K2 bits 6-8 = 111) downstream; the receiving node invokes APS, selects the protected line and
signals the switch via the K1/K2 bytes, while MS-RDI (K2 bits 6-8 = 110) is returned upstream. No Path
OAM signal is generated.]
Figure 3-8 APS switch on MS-AIS reception


Remark:
ATM switches and UMTS Nodes take appropriate action on reception of Transmission OAM signals, or
under Transmission defect conditions:
- APS invocation,
- F4, F5 OAM message generation.


3.2 ATM
The ATM implemented on the IU interface is compliant with ITU-T I.361.
Cell header format and encoding are according to NNI on the RNC and WG IU interfaces, as defined in 3GPP.
UMTS interfaces may go through an ATM backbone, either public or private.
A private ATM backbone is a network belonging to the UMTS operator, whereas a public ATM backbone refers to
a network managed by a third-party operator; it is commonly called an ASP, which stands for ATM Service Provider.

3.2.1 ATM INTERFACE TYPE

The ATM implemented on the RNC is compliant with ITU-T I.361.
Two types of interfaces are specified in ATM: UNI and NNI. On a UNI interface the VPI field is 8 bits long,
whereas on an NNI interface the VPI field is 12 bits long.
See MML: AtmIf maxVpiBits
A public ATM backbone provides UNI accesses.
Private ATM backbones may provide NNI accesses.

3.2.2 VPC, VPT
A VPC (Virtual Path Connection) is an ATM connection resulting from the grouping of different VCCs. A VPC is
identified by a Vpi (Virtual Path Identifier).
The AtmForum defines the notion of a VP endPoint. A VP endPoint is the node where the VCCs within a VPC are
grouped together. This function is realized in Passport by means of the VPT (VirtualPathTermination) component.
The goal is to group several VCCs within one VPC, by means of a VPT, and to apply a common treatment to the
set of VCCs, e.g.: TrafficManagement, OAM.

3.2.2.1 VPC
The intention of this section is to summarize the contexts where VPCs are required.
Configuring a VPC becomes useful when crossing a policed ATM backbone. For such a configuration, VPCs are
configured on the UMTS nodes and trafficShaping is activated at the VP level.
According to the network topology and specific customer requirements, one or several VPCs may be configured
per RNC on the IU interface.
In the case of the Alcatel coreNetwork, one VPC may be configured per RNC, since CS and PS traffic go through
the WG.
When the UMGW and USGSN are not collocated, at least two VPCs may be configured on the RNC: one dedicated
to CS traffic (user and control plane), the second dedicated to PS traffic (user and control plane).
Rule: IuTEG_VP_01
When ASP is included on the Iu interface and provides policed access, it is recommended to invoke
TrafficShaping at VP level in the UMTS nodes.
[Figure: Iu VPC examples, each link over STM1. Top: the RNC carries one VP CS and one VP PS through
an ATM switch. Bottom: the RNC is connected through ATM switches to a non-collocated UMGW (VP CS)
and USGSN (VP PS).]


VP ServiceCategory:
It is recommended to configure the Iu Vpc with the rtVBR serviceCategory.
Setting the VP serviceCategory to rtVBR instead of CBR provides the Vp connection with a longer
bufferSize, e.g.: the Passport CBR default bufferSize is 96 cells, whereas the rtVBR default bufferSize is
480 cells.
Within the ASP, the CBR serviceCategory might be reserved for traffic more vulnerable to delay: GSM,
ATM CES.
Rule: IuTEG_VP_02
VPC ServiceCategory is set to rtVBR
VP TrafficDescriptor parameters:
When trafficShaping is activated at the VP level, the TrafficDescriptor configured on the VPC is taken into
account in the egress traffic regulation.
Remark:
The 16pOC3 FP provides (BasicVPT, singleRate shaping): only the PCR within the VPC trafficDescriptor
is taken into account in the trafficShaping.
The 4pOC3 FP provides (Basic+StandardVPT, dualRate shaping): the PCR, SCR and MBS within the VPC
trafficDescriptor are taken into account in the trafficShaping.
The VPC TrafficDescriptor parameter values are the result of a dimensioning exercise based on the customer
trafficModel and customer bandwidth limitations.
Without a customer trafficModel, the VPC trafficDescriptor may be established based on the IU bandwidth
limitation and CAC constraint, with the PCR set to the IU bottleneck bandwidth.
TrafficRegulation mechanisms:
When an ASP is included on the Iub interface and provides policed access, it is recommended to group the
Vccs into a single Vp and to invoke TrafficShaping at the VP level on either the POC or the RNC side.
Rule: IuTEG_VP_03
When an ASP is included on the Iub interface and provides policed access, it is recommended
to activate the TrafficShaping at Vp level, on the RNC side or on the POC side (if present).

Moreover, when activating trafficShaping at the VP level on one port of the 16pOC3/Stm1 FP, the size of the
assigned perVc queue needs to be configured with an appropriate value; see 3.2.6.1.
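Dual-rate shaping and policing on PCR, SCR and MBS corresponds to the ATM Forum GCRA (leaky bucket). A minimal conformance sketch (cell arrival times in abstract slot units; the parameter values are illustrative, not a Passport configuration):

```python
# GCRA(I, L): a cell arriving at time t conforms if t >= TAT - L, where TAT is
# the theoretical arrival time, I the increment (1/rate) and L the limit
# (burst tolerance). Dual-rate checking applies GCRA twice: once for PCR and
# once for SCR with an MBS-derived limit.
def gcra_conforms(arrival_times, increment, limit):
    tat = 0.0
    for t in arrival_times:
        if t < tat - limit:
            return False            # non-conforming cell
        tat = max(t, tat) + increment
    return True

burst = [0, 1, 2, 3, 4]             # 5 back-to-back cells, one per slot
print(gcra_conforms(burst, 1, 0))   # True: conforms at the peak rate
print(gcra_conforms(burst, 4, 0))   # False: exceeds the sustainable rate alone
print(gcra_conforms(burst, 4, 12))  # True: burst absorbed, limit = (MBS-1)*(Is-Ip)
```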

3.2.2.2 VPT
Passport supports two types of VPT: basicVPT and standardVPT.
- BasicVPT is available on APC based cards, e.g.: 16pOC3 FP,
- StandardVPT and basicVPT are available on AQM based cards, e.g.: 4pOC3 FP.
TrafficShaping at the VP level:
- On an APC based card, when configuring a basicVPT on a port, activation of trafficShaping is not allowed. To
activate trafficShaping at the VP level, the VPT is configured on one port and traffic is diverted to a second port
where trafficShaping is activated. This is called the two-port solution.
- On an AQM based card, a one-port solution allows activating trafficShaping at the VP level.


[Figure: APC two-port solution. Port 1 terminates the VPT: per-VC queues (one weighted queue per VCC)
and common queues per emission priority (EP0: CBR, EP2/EP3: rtVBR, EP4: nrtVBR, EP7: UBR) feed a
connection scheduler (WFQ or SFQ) and a class scheduler (EP & MBG) towards a linkClass queue per EP.
The aggregate is diverted over the backplane to port 2, where the same queuing structure transmits the
shaped VPC.]
Figure 3-9 APC, two ports solution

Configuration of a basicVPT consists of:

- Identifying the VPC by means of a Vpi, Vpi range [0, 4095],
- Grouping VCCs within the VPC,
- Configuring the kind of OAM loopBack: segment or endToEnd,
- If trafficShaping is required, activating it on a second port.

AtmIf n1
  VPT vpi
    VPd
      - segmentLoopBack (on, off, sameAsInterface)
      - endToEndLoopBack (on, off, sameAsInterface)
AtmIf n2
  VPC vpi
    VPd
    TM
      - trafficShaping (disabled, sameAsCa)

Figure 3-10 basicVPT configuration




Configuration of a standardVPT consists of:
- Identifying the VPC by means of a Vpi, Vpi range [0, 4095],
- Grouping VCCs within the VPC,
- Configuring the kind of OAM loopBack: segment or endToEnd,
- Activating the trafficShaping.

AtmIf n1
  VPT vpi
    VPd
      - segmentLoopBack (on, off, sameAsInterface)
      - endToEndLoopBack (on, off, sameAsInterface)
    TM
      - trafficShaping (disabled, sameAsCa)
      - serviceCategory,
      - Tx TrafficDescriptor type & parameters,
      - Rx TrafficDescriptor type & parameters

Figure 3-11 standardVPT configuration

VPT-CAC:
When configuring a VPT, the VPT-CAC is invoked. Its action consists of checking that the VPT ECR
(bandwidth reserved for the VPT) is higher than the sum of the ECRs of the VCCs grouped under the VPT.
The VPT-CAC is not required. To bypass its action, it is suggested to use overSubscription under the VPT.
Remark:
Another way to bypass the VPT-CAC consists of removing the Ca component under the VPT. This is
allowed for a basicVPT, but not for a standardVPT.

3.2.3

OVERSUBSCRIPTION
When configuring Passport, part or all of its port capacity is allocated to one or several bandwidthPools
(bandwidthPool is a Passport component). One to five bandwidth pools are available. ATM ServiceCategories are
then associated with these bandwidthPools.
Rule: IuTEG_OvS_01
It is recommended to allocate all port capacity to one bandwidth pool, and associate all
service categories to this bandwidth pool.

The setting of bandwidthPools is described below.

Passport
E1: 1.92 Mbps

BwPool-1
100%

All SC

Figure 3-12 OverSubscription


The bandwidth pool is equal to 100% of the link capacity in the case of normal subscription.


The bandwidth pool is higher than 100% of the link capacity in the case of oversubscription.
Passport nodes permit oversubscription of each bandwidth pool up to 128 times the port capacity.
Oversubscription is configured when setting the bandwidth pool component:
ATTRIBUTE AtmIf CA bandwidthPool (bwPool)
Rule: IuTEG_OvS_02
Alcatel does not provide any recommendation on the oversubscription factor value to
configure in a UMTS network.
If no UMTS traffic study is done to determine a specific oversubscription factor
value, keep the default value of 100%.
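The interplay between port capacity, pool size and oversubscription factor can be sketched as follows (the helper name and the E1 example are ours; an E1 ATM payload of 1.92 Mbps corresponds to the figure shown above):

```python
# Bandwidth pool capacity seen by the CAC: the port capacity, in cells/s,
# scaled by the oversubscription factor (100% = no oversubscription).
def pool_capacity_cps(port_rate_bps, oversubscription_pct):
    cells_per_second = port_rate_bps // (53 * 8)   # one ATM cell = 53 octets
    return cells_per_second * oversubscription_pct // 100

e1_atm_payload = 1_920_000                      # 1.92 Mbps, as in Figure 3-12
print(pool_capacity_cps(e1_atm_payload, 100))   # -> 4528 cells/s
print(pool_capacity_cps(e1_atm_payload, 200))   # -> 9056 cells/s at 200%
```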

3.2.4

ATM CAC
CAC (Connection Admission Control) is an algorithm invoked at AtmConnection setup. It verifies that the
bandwidth required for the new atmConnection is below the bandwidth available on the physical link.
For a Permanent VC, CAC is invoked in the provisioning phase, whereas for a Switched VC it is invoked in
the establishment phase.
Based on atmConnection trafficDescriptor, CAC calculates the ECR (EquivalentCellRate) associated to this
atmConnection. ECR is the bandwidth required for an atmConnection from the CAC point of view.
AtmConnection is rejected when it requires more bandwidth than available at physical link.
Two kinds of CAC algorithms are implemented in Passport at ATM level:
- ACAC (Actual CAC) and
- GCAC (Generic CAC).
ACAC:

ACAC is an Alcatel algorithm implemented in Passport.


ACAC is a hop-by-hop reservation scheme performed in the egress and ingress direction regardless of the
connection type: PVC, PVP, SPVC, SPVP, SVC and SVP. ACAC is invoked each time PVCs are
provisioned or SVCs are established.
ACAC calculates atmConnection ECR, based on the following ATM QOS and TrafficManagement
parameters:
- Buffer size,
- LinkRate,
- ServiceCategory,
- CLR (CellLossRate),
- PCR, SCR, CDVT, MBS
Moreover ACAC checks that the sum of the ECRs of all atmConnections configured on a link is lower than the link
bandwidth.
If not, some atmConnections are rejected.
To avoid this situation, before the provisioning phase, when determining the number of
atmConnections per link and the atmConnection trafficDescriptors, the ACAC algorithm Excel macro is used
to estimate the bandwidth (ECR) required per atmConnection. Once the atmConnection bandwidth (ECR) is
known, the number of atmConnections per link and the atmConnection trafficDescriptors may be tuned so
that the sum of the ECRs of all atmConnections configured on a link stays below the link bandwidth.
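The admission condition above can be sketched in a few lines; this is a minimal illustration (not the Passport implementation), where the ECR values are assumed to come from the ACAC Excel macro:

```python
def link_fits(link_rate_cps, ecr_list_cps):
    """ACAC admission condition sketch: the sum of the ECRs of all
    atmConnections provisioned on a link must stay at or below the
    link rate. All rates are expressed in cells per second."""
    return sum(ecr_list_cps) <= link_rate_cps

# Example: an E1 ATM link carries about 4528 cells/s (1.92 Mbit/s / 424 bits per cell)
e1_rate = int(1.92e6 / 424)        # ~4528 cells/s
ecrs = [1200, 1200, 900, 800]      # illustrative per-connection ECRs from the macro
print(link_fits(e1_rate, ecrs))    # True: 4100 <= 4528
```

If the check fails, either the number of atmConnections or their trafficDescriptors must be reduced before provisioning.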
GCAC:

GCAC is a CAC algorithm specified in atmForum.


It is invoked when setting PNNI atmConnections in the PNNI sPVC originating node.
Moreover GCAC algorithm is used in RNC AAL2 CAC and RNC PC CAC, when setting aal2 connections.


GCAC is based on simplified formula:
Rule: IuTEG_CAC_O1
GCAC ECR = 2*PCR*SCR/(PCR+SCR)
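The rule above can be expressed directly; a minimal sketch with illustrative rates:

```python
def gcac_ecr(pcr, scr):
    """GCAC simplified ECR from rule IuTEG_CAC_O1:
    ECR = 2*PCR*SCR / (PCR + SCR). PCR and SCR in cells per second.
    The result always lies between SCR and PCR."""
    return 2.0 * pcr * scr / (pcr + scr)

print(gcac_ecr(1000, 500))   # 666.67 cells/s, between SCR and PCR
print(gcac_ecr(1000, 1000))  # 1000.0: for CBR-like traffic (PCR = SCR), ECR = PCR
```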

3.2.5

TRAFFIC MANAGEMENT
At the ATM level, the bandwidth per ATM Connection is managed by means of TrafficDescriptor parameters.
TrafficDescriptor parameters may be involved in traffic regulation, only if trafficRegulation mechanism is activated.
Regulation of traffic on Egress side is achieved by trafficShaping whereas regulation of traffic on Ingress side is
achieved by means of Policing.

3.2.5.1 TRAFFICDESCRIPTOR PARAMETER PRESENTATION


TrafficDescriptors are presented in a PowerPoint file stored on the Transport web site. Here is an extract from this file.
The picture below indicates the relationship between PCR, SCR and MBS.
Let us call δ the ATM cell transmission duration.
δ depends on the physical link throughput:
- δ = 220 µs for an E1,
- δ = 2.83 µs for an STM1.

For a VBR service category, traffic is considered bursty, and cells are then sent within a burst. Such traffic is
described at ATM level by means of two throughput parameters, PCR and SCR, and the size of the burst, MBS.
Let us call TAT the theoretical arrival time, that is, the time at which a cell is expected. One TAT is defined for
each throughput (PCR and SCR); they determine the date at which a cell is candidate for transmission.
For each throughput a cell transmission period is calculated: T = 1/PCR, Ts = 1/SCR. The TAT is determined based on
these periods.
Moreover a tolerance applies to each period: CDVT applies to T, BT applies to Ts.
On setting CDVT, the trafficRegulation mechanism allows backToBack cells (cells transmitted at linkRate). The number
of cells sent backToBack (N) depends on the CDVT value: N = Integer[ 1 + CDVT / (T − δ) ].
On setting BT, the trafficRegulation mechanism allows ATM bursts (cells transmitted at PCR). The number of cells within
an ATM burst depends on the BT value: MBS = Integer[ 1 + BT / (Ts − T) ].
Remark: On Passport, BT is not set; MBS is set.
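The two formulas above can be checked numerically; a small sketch with illustrative periods (T, Ts, δ in microseconds):

```python
def back_to_back_cells(cdvt_us, t_us, delta_us):
    """Number of cells allowed back-to-back at link rate:
    N = Integer[1 + CDVT / (T - delta)], where T = 1/PCR and
    delta is the cell transmission duration on the physical link."""
    return int(1 + cdvt_us / (t_us - delta_us))

def mbs_from_bt(bt_us, ts_us, t_us):
    """Burst size allowed at PCR: MBS = Integer[1 + BT / (Ts - T)],
    where Ts = 1/SCR and T = 1/PCR."""
    return int(1 + bt_us / (ts_us - t_us))

# Illustrative values: T = 100 us, Ts = 800 us, delta = 2.83 us (STM-1)
print(back_to_back_cells(cdvt_us=300, t_us=100, delta_us=2.83))  # 4
print(mbs_from_bt(bt_us=1400, ts_us=800, t_us=100))              # 3
```

Note that the second call is consistent with the Figure 3-13 context (MBS = 3 when BT = 2*(Ts − T)).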


[Figure content: dual-rate shaping of a continuous ATM-user flow — context: PCR = linkRate, CDVT = 0, SCR = 1/8 of the link rate, MBS = 3, hence BT = 2*(Ts − T). The first MBS = 3 cells are sent at PCR (period T = 1/PCR), the following cells at SCR (period Ts = 1/SCR), each SCR period carrying the tolerance BT.]

Figure 3-13 PCR, SCR, MBS representation


When the ATM-User provides a continuous flow of traffic, it is observed that the first MBS cells are sent at PCR while the
following cells are sent at SCR.

3.2.5.2 TRAFFICSHAPING
TrafficShaping is presented in a PowerPoint file stored on the Transport web site. Here is an extract from this file.
Traffic shaping regulates egress traffic according to trafficDescriptor parameter values.
Depending on the line card, different kinds of trafficShaping are available: SingleRate (Linear) and DualRate shaping.
On the 16pOC3/STM1 FP, only LinearShaping is available, whereas on the 4pOC3 both kinds of trafficShaping are available.
LinearShaping regulates egress traffic according to PCR, whereas DualRateShaping regulates egress traffic
according to PCR, SCR and MBS. DualRateShaping may also be called VBR shaping.
When a continuous flow of traffic has to be transmitted and singleRateShaping is activated, cells are sent at PCR:
[Figure content: single-rate shaping of a continuous ATM-user flow — every cell is sent at PCR, one TAT per period T = 1/PCR.]

Figure 3-14 SingleRate trafficShaping


When a continuous flow of traffic has to be transmitted, and dualRate Shaping is activated, MBS cells are sent at
PCR, following cells are sent at SCR:
[Figure content: dual-rate shaping of a continuous ATM-user flow — the first MBS cells are sent at PCR (period T = 1/PCR), the following cells at SCR (period Ts = 1/SCR, with tolerance BT per period); cell 6 suffers an additional delay, so the whole frame is delayed by 4*Ts.]

Figure 3-15 DualRate trafficShaping
TrafficShaping is recommended on UMTS nodes, when policing is activated in the downstream ATM node (e.g.:
ATM Backbone).
For such a configuration trafficShaping will be activated at VP level. Therefore before activating trafficShaping, it is
necessary to group VCCs within a VPC by means of VPT.
Rule: IuTEG_ATM-TM_1

As long as no public ATM backbone is involved on the UMTS interface,
Alcatel doesn't recommend activating the TrafficShaping or Policing functions, in
order to avoid cell delay and cell discard.

3.2.6

QOS
At ATM level, QOS is handled by means of:
Parameters:
- AtmForum parameters: ServiceCategory, CTD, CDV, CLR,
- Alcatel parameters: EmissionPriority, DiscardPriority.
Mechanisms:
- Scheduling (buffering).

3.2.6.1 BUFFER
A buffer may be provisioned either per ATM Connection (VCC or VPC) or per serviceCategory; this is referred to as
perVcQueuing or commonQueuing respectively.


[Figure content: AQM card queuing — incoming cells are mapped, according to the AQM mapping table, from serviceCategory and CLP to an EmissionPriority (EP0 to EP7) and a DiscardPriority (DP0 to DP3); cells are buffered in per-VC or common queues per EP, served by the connection scheduler (WFQ or SFQ) into per-EP linkClass queues, which the class scheduler empties toward the link egress.]

Figure 3-16 Buffers within AQM card

Rule: IuTEG_Buffer_01
In the context of UMTS network, only perVcQueuing is configured.

Within a serviceCategory, the buffers allocated to each atmConnection have the same characteristics (size, thresholds).
The size of the buffer depends on the serviceCategory. For the most urgent serviceCategory (CBR), the buffer is
short to minimize delay, whereas for a less urgent serviceCategory (nrtVBR), the buffer is larger since delay is more
acceptable.
On the other hand, when the buffer is short, cells are better candidates for discard than when the buffer is larger.
The buffer size is configurable; nevertheless, within the context of UMTS networks the default values are used, with
the exception of the shaped VPC queue case. Indeed the default queue size calculated by the Passport for a shaped
VPC configured on the 16pOC3/Stm1 FP is too short:
Rule: IuTEG_Buffer_02
On the 16pOC3/Stm1 FP port configured with shaped rtVbr Vpc, set:
atmIf Ca rtVbr TxQueueLimit = 4000 cells
The Vpc being configured with:
atmIf Vpc Vpd Tm txQueueLimit (txQlim) = sameAsCa

Example of MML commands for CBR serviceCategory buffer size setting:


ATTRIBUTE AtmIf CA Cbr txQueueLimit (txql)
This attribute specifies the default maximum queue length for the emission queues used to buffer the traffic
of the CBR service category. It is used as the basis for setting both:
- the queue length and
- the discard thresholds
The discard thresholds are set at approximately 35, 75 and 90 percent of the scaled queue limit for traffic
at discard priority 3 (DP=3), DP=2 and DP=1 respectively.

When this attribute is set to autoConfigure, an appropriate value is selected based on the card type. It is set
to 96 for the following Passport ATM cards (DS1, E1, DS3, E3, and OC-3).
For ATM IP FPs, the per-VC queue limit may be overridden for a permanent connection by specifying a
value in the Vcd Tm or Vpd Tm txQueueLimit attribute. The operational value of the maximum length of a
queue (common or per-VC) is indicated by the Vcc Tm, Vpc Tm, or Vp Tm txQueueThresholds attribute.
Default autoConfigure
ATTRIBUTE AtmIf Vcc Vcd Tm txQueueLimit (txQlim)
This attribute specifies an override to the default transmit queue limit for this connection.
A value of sameAsCa means that the default per-VC transmit queue limit, as defined by the CA service
category, is used for this connection.
Default sameAsCa
Per Card and per serviceCategory, a summary of the default buffer size values:

16pOC3/Stm1 FP and 4pOC3/Stm1 FP:


xpOC3 FP
serviceCategory   BufferSize maximum   BufferSize effective   ratio
CBR                      96                  86                90%
rtVBR                   480                 432                90%
nrtVBR                10240                7680                75%

Figure 3-17 xpOC3 FP buffer size

MSA32 FP:
MSA32
serviceCategory   BufferSize maximum   BufferSize effective   ratio
CBR                      96                  86                90%
rtVBR                   288                 259                90%
nrtVBR                 1792                1344                75%

Figure 3-18 MSA32 Buffer Size

Within these tables:


- BufferSize/Maximum is the size of the buffer, configured in ATTRIBUTE AtmIf CA Cbr txQueueLimit (txql).
- The Ratio is the buffer threshold at which Clp0 cells are discarded.
- The effective BufferSize is the buffer occupancy at which incoming cells, whatever their CLP value, are
discarded.
BufferSize/Effective = Ratio * BufferSize/Maximum
Rule: IuTEG_Buffer_03
When setting trafficDescriptor, BufferSize/Effective is used by the CAC for processing ECR.
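The relationship between the provisioned queue limit, the discard thresholds (approximately 35, 75 and 90 percent of the queue limit, as stated in the txQueueLimit attribute description) and the effective buffer size can be sketched as follows; the helper name is illustrative:

```python
def discard_thresholds(tx_queue_limit, ratios=(0.35, 0.75, 0.90)):
    """Approximate per-VC discard thresholds, in cells, for DP3, DP2
    and DP1 traffic as fractions of the provisioned txQueueLimit."""
    return {dp: int(tx_queue_limit * r)
            for dp, r in zip(("DP3", "DP2", "DP1"), ratios)}

# nrtVBR on an xpOC3 FP: maximum queue 10240 cells, effective size 75% of it
print(discard_thresholds(10240))   # {'DP3': 3584, 'DP2': 7680, 'DP1': 9216}
print(int(10240 * 0.75))           # 7680, the effective buffer size of Figure 3-17
```

The effective buffer size (ratio * maximum) is the value the CAC uses when processing the ECR, per rule IuTEG_Buffer_03.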

3.2.6.2 SCHEDULING
At ATM level, QOS is provided by means of Scheduling mechanism. Scheduling takes into account the ATM
Connection ServiceCategory Parameter.
The scheduling applies to egress traffic only.
This section describes Scheduling within Passport node.
Passport provides a two stage Scheduling algorithm:
- Connection Scheduling and
- Class Scheduling.

On a Passport node, each ServiceCategory value is mapped to an EmissionPriority parameter value, EP being a
Passport internal parameter.
At provisioning time, the QOS mapping table is configured:
1/ per port, an EP value is associated to each SC,
2/ a set of perVC queues is available on each EP.
ATM Connections are configured with SC and TrafficDescriptor:
3/ a VCC is configured with a specific SC (an EP is associated to this SC, and a pool of perVC queues is
associated to this EP),
4/ a perVC queue within the EP pool of queues is assigned to the VC,
5/ cells to be transmitted on this VCC are buffered in the queue dedicated to this VCC.
When traffic is running:
- Connection Scheduler (WFQ or SFQ):
Within each EP pool of queues, cells are extracted from each perVC queue and stored in the associated
linkClass queue.
With WFQ, the amount of cells extracted per perVC queue depends on the perVC weight; with SFQ, the
frequency of polling of each perVC queue depends on the shaping rate (T = 1/PCR).
- Class Scheduler:
Each linkClass queue (5 for APC, 8 for AQM) is polled according to its priority.
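The two-stage decision can be sketched as follows. This is a deliberately simplified model (not the Passport implementation): the class scheduler is strict priority across EPs, and the connection scheduler is approximated by serving the heaviest backlogged per-VC queue; names and structures are illustrative.

```python
from collections import deque

def schedule_next(ep_queues):
    """Two-stage scheduling sketch: serve the most urgent EP (lowest
    number) that has cells; within that EP, pick the non-empty per-VC
    queue with the largest weight (a crude WFQ approximation).
    ep_queues maps EP -> list of (weight, deque_of_cells)."""
    for ep in sorted(ep_queues):                       # EP0 is most urgent
        backlogged = [(w, q) for w, q in ep_queues[ep] if q]
        if backlogged:
            weight, queue = max(backlogged, key=lambda wq: wq[0])
            return ep, queue.popleft()
    return None, None

eps = {0: [(1, deque(["cbr-cell"]))],
       4: [(3, deque(["vc1-cell"])), (1, deque(["vc2-cell"]))]}
print(schedule_next(eps))   # (0, 'cbr-cell'): EP0 is served before EP4
print(schedule_next(eps))   # (4, 'vc1-cell'): heavier weight served first
```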

[Figure content: the same AQM two-stage scheduling as Figure 3-16 — per-VC and common queues per EmissionPriority, connection scheduler (WFQ or SFQ) feeding per-EP linkClass queues, class scheduler serving them toward the link egress.]

Figure 3-19 Example of the Passport AQM Scheduling

3.2.6.3 CDV, CDVT


Cell Delay variation is the jitter introduced in the transmission of ATM cells due to buffering and Scheduling.


Increasing the buffer size, or selecting a lower priority serviceCategory, increases the maximum possible CDV;
on the other hand, it decreases the probability of cell loss.
Decreasing the buffer size, or selecting a higher priority serviceCategory, decreases the maximum possible CDV;
on the other hand, it increases the probability of cell loss.
The selection of an SC for an ATM Connection, or of the buffer size for a serviceCategory, is based on a tradeoff
between CDV and cell loss.
When no trafficShaping applies, and for the highest priority serviceCategory, the CDV may be calculated in the
following way: CDV = BufferSize / EgressLinkThroughput.
Some adaptations of this formula have to be done for lower priority SCs.
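For the highest priority serviceCategory the formula is easy to evaluate; a minimal sketch (an ATM cell is 53 bytes = 424 bits):

```python
def max_cdv_us(buffer_size_cells, link_rate_mbps):
    """Worst-case CDV for the highest-priority serviceCategory with no
    trafficShaping: CDV = BufferSize / EgressLinkThroughput.
    Result in microseconds."""
    cell_time_us = 424.0 / link_rate_mbps    # us to transmit one cell
    return buffer_size_cells * cell_time_us

# CBR queue of 96 cells draining on an STM-1 (~149.76 Mbit/s payload rate)
print(round(max_cdv_us(96, 149.76)))  # ~272 us
```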
Due to the CDV introduced by the ATM backbone, ATM cells arrive in the downstream node before or after the TAT
(Theoretical Arrival Time).
In order to reduce the amount of non-conform cells in the downstream node, the CDVT
(CellDelayVariationTolerance) has to be set. The CDVT unit is microseconds.
According to the CDVT value, cells arriving before the TAT, within the limit of CDVT, are declared conform.
[Figure content: two timing diagrams comparing conformance, with T = 1/PCR in both cases — with CDVT ≤ (T − δ), e.g. CDVT = (T − δ), a cell arriving early by up to CDVT before its TAT is still conform; with CDVT > (T − δ), e.g. CDVT = 2*(T − δ), cells arriving up to two cell periods before their TAT remain conform.]

Figure 3-20 CDVT representation

3.2.6.4 DISCARD POLICY


The discard policy depends on the type of card. A card is populated with one of two TrafficManagement devices:
- APC+PQC,
- AQM+PQC.
AQM Characteristics:
- 8 EmissionPriority and 4 DiscardPriority
- TrafficShaping available on up to two EP (from 0 to 7)
- Single and DualRate shaping,
- Basic and Standard VPT.
ATM Card populated with AQM: 4pOC3/STM1 FP, MSA32,
APC characteristics:
- 5 EP (EP0, EP2, EP3, EP4 and EP7),

- TrafficShaping available on EP0 only.


- singleRate Shaping only,
- basic VPT only.
ATM Cards populated with APC: 16pOC3/STM1 FP,
AQM Discard Policy description:
According to the atmConnection serviceCategory and the CLP field value within the received cell, the Passport
node associates a DiscardPriority value to the received cell.
According to the buffer load and the DiscardPriority value associated to the received cell, the cell is either buffered or
discarded.


The AQM perVC buffers are implemented with 3 hardcoded thresholds, called CongestionControl levels,
involved in the discard policy. The CC level thresholds are identical for each serviceCategory:

[Figure content: per-VC buffer congestion states from least to most severe — CC3 (below 35% fill): no cell discarded; CC2 (from 35%): the node discards DP3 cells; CC1 (from 75%): the node discards DP2 and DP3 cells; CC0 (from 90%): the node discards DP1, DP2 and DP3 cells.]

Figure 3-21 CongestionControl levels on AQM card.


The buffer CC thresholds work in conjunction with the DiscardPriority value associated to a serviceCategory.
Each serviceCategory is associated with two DiscardPriority values: one for cells with the CLP field set to 0, and a
second for cells with the CLP field set to 1.

[Figure content: QOS mapping table example — EmissionPriorities from most urgent (EP0) to least urgent (EP7), with CBR mapped to EP2, rtVBR to EP3, nrtVBR to EP4 and UBR (CLP0+1) to EP7; each SC has two DiscardPriority values (e.g. CBR CLP0 → DP1, nrtVBR CLP0 → DP2, CLP1 cells → DP3), DiscardPriorities ranging from DP0 (most important) to DP3 (least important).]

Figure 3-22 QOS mapping table example


On reception of a cell from the backplane, a discardPriority value is assigned to the cell according to the QOS
mapping table, the cell CLP value and the serviceCategory of the atmConnection.
E.g.: on reception of a cell with CLP=1 within an nrtVBR atmConnection, DiscardPriority 3 is associated
to the cell.
Then the discardPriority value associated to the cell is compared with the buffer load, resulting in
storage or discard of the cell:

When the buffer load is below the CC2 threshold (35%), all cells candidate for transmission are accepted in the
buffer.
When the buffer load has reached the CC2 threshold, cells with discardPriority value 3 are discarded, whereas
cells with discardPriority values 2, 1 and 0 are stored in the buffer.
When the buffer load has reached the CC1 threshold (75%), cells with discardPriority values 3 and 2 are discarded,
whereas cells with discardPriority values 1 and 0 are stored in the buffer.
When the buffer load has reached the CC0 threshold (90%), cells with discardPriority values 3, 2 and 1 are discarded,
whereas cells with discardPriority value 0 are stored in the buffer.
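The decision can be sketched as a small function. The (SC, CLP) → DP mapping below is an assumption mirroring the worked examples of Figure 3-23 (CBR CLP0 → DP1, nrtVBR CLP0 → DP2, CLP1 → DP3); the real table is configurable per port.

```python
# Assumed AQM mapping, mirroring the Figure 3-23 examples: (SC, CLP) -> DP
DP_MAP = {("cbr", 0): 1, ("cbr", 1): 3,
          ("rtVbr", 0): 1, ("rtVbr", 1): 3,
          ("nrtVbr", 0): 2, ("nrtVbr", 1): 3}

def admit(sc, clp, buffer_fill_ratio):
    """Discard decision sketch: compare the DP assigned to the cell
    with the hardcoded CC thresholds (35%, 75%, 90% of the queue)."""
    dp = DP_MAP[(sc, clp)]
    if buffer_fill_ratio >= 0.90:
        return dp < 1           # only DP0 cells still accepted
    if buffer_fill_ratio >= 0.75:
        return dp < 2           # DP2 and DP3 cells discarded
    if buffer_fill_ratio >= 0.35:
        return dp < 3           # DP3 cells discarded
    return True                 # below the first threshold: accept all

print(admit("nrtVbr", 1, 0.40))  # False: DP3 cell, queue past 35%
print(admit("nrtVbr", 0, 0.40))  # True: DP2 cell accepted until 75%
```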

[Figure content: worked example on an AQM card — on a CBR VCC, the EP0 queue is in state CC2: the CLP0 cell (Cell1, DP1) is accepted while the CLP1 cell (Cell2, DP3) is rejected; on an nrtVBR VCC, the EP4 queue is in state CC2: the CLP0 cell (Cell1, DP2) is accepted while the CLP1 cell (Cell2, DP3) is rejected.]

Figure 3-23 Discard Policy on AQM card, based on an example of QOS mapping table.

APC Discard Policy description:


According to the AtmConnection serviceCategory, the CLP field value and the buffer load, the received
cell is either stored or discarded.


The APC perVC buffers are implemented with 2 hardcoded thresholds, called CongestionControl levels,
involved in the discard policy. The CC level thresholds are set per serviceCategory:

[Figure content: per-serviceCategory CongestionControl thresholds on APC — for CBR and rtVBR, CLP1 cells are discarded from 38% buffer fill and CLP0 cells from 90%; for nrtVBR, CLP1 cells are discarded from 32% and CLP0 cells from 75%.]

Figure 3-24 CongestionControl levels on APC card.

There is no DiscardPriority parameter on APC-based cards; the cell CLP value is directly compared to the buffer
occupancy:

[Figure content: worked example on an APC card — on a CBR VCC, the EP0 queue is in state CC2: the CLP0 cell (Cell1) is accepted while the CLP1 cell (Cell2) is rejected; on an nrtVBR VCC, the EP4 queue is in state CC2: the CLP0 cell (Cell1) is accepted while the CLP1 cell (Cell2) is rejected.]

Figure 3-25 Discard Policy on APC card, based on an example of QOS mapping table.

3.2.7

ADDRESSING
Since UNI signaling or PNNI is configured in the network, ATM addresses are required. ATM addresses are called
AESA (ATM End System Address). An AESA is structured according to the NSAP format (Network Service Access
Point).
In order to establish sPVCs or SVCs across the network, ATM source and destination points need to be uniquely
identified with ATM addresses.

3.2.8

ATM OAM
Refer to [R36 & R40].


The purpose of the Transport OAM signals is to detect and isolate equipment failures within the transport path. The
Transport OAM information is handled at Transmission level (see Transmission/OAM section) and at ATM level.
The ATM OAM information is bidirectional and is carried once the ATM connection is established.
The ATM OAM PDU is directly encapsulated within the ATM cell payload, without AAL (ATM AdaptationLayer).

3.2.8.1 ATM NODE, OAM TYPE


Within an ATM network, different types of node are considered from an OAM perspective:
- VPC/VCC endPoint node:
ATM node where ATM-User SDUs are inserted in ATM cell payload.
- VPC/VCC connectionPoint node:
ATM node where only ATM Connection switching occurs, no ATM-USER SDU insertion.
- VPC/VCC segmentPoint node:
ATM node which initiates and terminates Segment OAM flows.
The OAM Type of node is defined by means of configuration:
- An endToEnd F4 OAM VCC is automatically created when a VPT or a VPC NEP is configured,
- An endToEnd F5 OAM trafficFlow is automatically created when a VCC is configured.
Segment end point may be provisioned by means of the following MML:
- ATTRIBUTE AtmIf oamSegmentBoundary (sb)
- ATTRIBUTE AtmIf Vpc Nrp oamSegmentBoundary (sb)
- ATTRIBUTE AtmIf Vcc Nrp oamSegmentBoundary (sb)
The ATM Connection OAM function type may be displayed by means of the following MML:
- ATTRIBUTE AtmIf Vpc connectionPointType (cpt)
- ATTRIBUTE AtmIf Vcc connectionPointType (cpt)
- ATTRIBUTE AtmIf Vpt connectionPointType (cpt)
- ATTRIBUTE AtmIf Vpt Vcc connectionPointType (cpt)
This attribute reflects the role of the connection component at this interface.
- Cpt = connectionEndPoint indicates that user cells, endToEnd OAM cells, and segment OAM cells are
processed by the connection component.
- Cpt = segmentEndPoint indicates that user cells and endToEnd OAM cells are relayed by the connection
component, while segment OAM cells are processed by the connection component.
- Cpt = connectingPoint indicates that user cells, end-to-end OAM cells, and segment OAM cells are relayed by
the connection component.
- Cpt = unknown indicates that the connection component is inactive.

3.2.8.2 ATM OAM FLOWS


ATM OAM flows are called either F4 OAM and F5 OAM flows according to whether they are related to a VP or a
VC Connection.
Both F4 and F5 ATM OAM flows may be specified as an EndToEnd OAM flow and a Segment oriented OAM
flow:
- EndToEnd OAM flow:
The EndToEnd OAM flow is transmitted between two adjacent ATM connection EndPoint nodes. An ATM
Connection endpoint node is the node where the ATM-User PDU is encapsulated within the ATM cell
payload.
- Segment OAM flow:
The Segment OAM flow is transmitted between two nodes at the extremity of an OAM boundary. OAM
boundaries are configured (see [I610]).


[Figure content: a NodeB–RNC path crossing two ATM switches — the endToEnd OAM flow runs between the VCC end points (NodeB and RNC), while the segment OAM flow runs between the nodes at the OAM boundary (the ATM switches), over the VCC/VPC.]
See Passport MML attributes:
- ATTRIBUTE AtmIf oamSegmentBoundary (sb)
- ATTRIBUTE AtmIf Vpc Nrp oamSegmentBoundary (sb)
These attributes specify whether the interface/AtmConnection is on an OAM segment boundary.
The ATM OAM Information purpose is Fault detection, Fault Location and Performance monitoring:
- Fault Management:
- Alarm monitoring:
- AIS (AlarmIndicationSignal), F5 : VC-AIS or F4 : VP-AIS,
- RDI (RemoteDefectIndication), F5: VC-RDI, F4: VP-RDI,
- Connectivity verification:
- LoopBack,
- ContinuityCheck, Not supported by the 16pOC3/Stm1 FP.
- Performance Management:
Not supported by the 16pOC3/Stm1 FP. Not covered in this document
The EndToEnd and Segment F4 OAM flows are carried into dedicated VCCs:
- VPI: any allowed VPI values,
- VCI=3, for segment OAM flow,
- VCI=4, for endToEnd OAM flow,
The EndToEnd and Segment F5 OAM flows are not carried in dedicated VCCs, but transmitted together with ATM-User traffic in VCCs.
Within a VCC, the OAM traffic is distinguished from user traffic by means of PayloadType field (ATM Header PT
field):
PT = 100 for Segment F5 OAM traffic,
PT = 101 for endToEnd F5 OAM traffic,
PT = 000 for ATM-User traffic.
F4/F5 OAM traffic is carried within ATM cells, coded as follows:

Cell Header | OAM Type | Function Type | Function Specific Field | Reserved | CRC-10
5 bytes     | 4 bits   | 4 bits        | 45 bytes                | 6 bits   | 10 bits

Figure 3-26 ATM OAM Cell Format
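The 48-byte OAM cell payload of Figure 3-26 can be assembled as follows. This is a sketch under stated assumptions: the CRC-10 generator polynomial x^10+x^9+x^5+x^4+x+1 (0x633) is the standard ATM OAM one, and the CRC is assumed computed over the payload with the CRC bits zeroed; the helper names are illustrative.

```python
def crc10(data: bytes) -> int:
    """Bitwise CRC-10, generator x^10+x^9+x^5+x^4+x+1 (0x633),
    processed MSB-first over the given bytes."""
    crc = 0
    for byte in data:
        for i in range(7, -1, -1):
            bit = (byte >> i) & 1
            top = (crc >> 9) & 1
            crc = (crc << 1) & 0x3FF
            if bit ^ top:
                crc ^= 0x233       # 0x633 with the x^10 term dropped
    return crc

def build_oam_payload(oam_type: int, func_type: int, specific: bytes) -> bytes:
    """Assemble the 48-byte OAM payload: OAM type / function type
    nibbles, 45-byte function-specific field, 6 reserved bits and
    10 CRC bits packed into the last two bytes."""
    assert len(specific) == 45
    body = bytes([(oam_type << 4) | func_type]) + specific
    crc = crc10(body + b"\x00\x00")   # CRC over payload with CRC zeroed
    return body + bytes([(crc >> 8) & 0x03, crc & 0xFF])

payload = build_oam_payload(0b0001, 0b1000, bytes(45))  # Fault Mgmt / LoopBack
print(len(payload))  # 48
```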


The table below indicates the OAM cell field values used within UMTS networks:



OAM Type:      0001 = Fault Management
Function Type: 0000 = AIS
               0001 = RDI
               0100 = ContinuityCheck
               1000 = LoopBack

When the LoopBack is activated, loopBack related information is inserted in the OAM Cell Function specific field:

LoopBack indication | Correlation Tag | LoopBack Location ID | Source ID | Unused
1 byte              | 4 bytes         | 16 bytes             | 16 bytes  | 8 bytes

Figure 3-27 LoopBack OAM Cell structure


Where:
- LoopBack indication:
The source loopBack point fills this field with the value: 0000 0001. The destination loopBack point returns
the value: 0000 0000.
- Correlation Tag:
Identifier of the loopBack invocation checked in loopBack source point on reception of loopBack answer.
- LoopBack Location ID:
Identification of the node where loopback has to occur.
- Source ID:
Coded as an option, identification of the loopBack source point.
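The 45-byte function-specific field of Figure 3-27 can be packed as follows; a minimal sketch with illustrative values (the all-ones location ID conventionally means "loop back at the connection end point"):

```python
def loopback_specific_field(correlation_tag: bytes,
                            location_id: bytes,
                            source_id: bytes,
                            is_request: bool = True) -> bytes:
    """Pack the loopBack function-specific field: indication byte
    (0x01 on insertion, 0x00 on return), 4-byte correlation tag,
    16-byte loopback location ID, 16-byte source ID, 8 unused bytes."""
    assert len(correlation_tag) == 4
    assert len(location_id) == 16 and len(source_id) == 16
    indication = b"\x01" if is_request else b"\x00"
    return indication + correlation_tag + location_id + source_id + bytes(8)

field = loopback_specific_field(b"\x00\x00\x00\x01",
                                b"\xff" * 16,   # all-ones: loop at end point
                                bytes(16))
print(len(field))  # 45
```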

3.2.8.3 ATM OAM SIGNALS


3.2.8.3.1

AIS (ALARMINDICATIONSIGNAL):
The purpose of AIS is to notify the downstream endPoint node of a failure that has occurred between
endPoint nodes.
The AIS is sent downstream to all affected active VPCs (resp. VCCs) from the VPC (resp. VCC) connection
point (ATM cross-connect) which detects the VPC (resp. VCC) failure.
F4/F5 AIS OAM Cells triggers:
- SDH/PDH condition indicating physical link failure E.g. LOS.
- Reception of SDH MS-AIS, PDH AIS,
- LCD,
- IMA LinkGroup failure, when RNC IMA releases VCC, ATM layer generates AIS OAM
notification in both directions. RNC IN VCC NEP notifies application of VCC failure, and
application takes appropriate action.
- Loss of Continuity at ATM layer detected by LoopBack or ContinuityCheck when activated.
- VC-AIS may be triggered by VP-AIS.
The AIS is usually sent at a frequency of 1 cell/s. AIS cell generation stops when the failure condition is
removed.
The AIS generation feature is systematically activated when configuring a static atmConnection (PVC or PVP).
AIS generation becomes an option when configuring a dynamic atmConnection (PNNI sPVC).
This option is available at the source and destination of the PNNI sPVC through attributes:
- Atmif Vcc Dst Config aisGeneration,


- Atmif Vcc Src aisGeneration.


Rule: IuTEG_ATM-OAM_1

It is highly recommended to activate the aisGeneration attribute on the source and destination of
sPVCs dedicated to UserPlane and OAM traffic.
It is not recommended to activate the aisGeneration attribute on the source and destination of
sPVCs dedicated to ControlPlane, since RNC-CN doesn't manage the AIS signal.
The reason for activating the aisGeneration attribute on the source and destination of sPVCs dedicated to UserPlane
and OAM traffic is that if a VCC goes down, the Aal2If/Path associated to this VCC is informed of the VCC
failure and then doesn't allow calls on this Path.
When aisGeneration is activated on the source and destination of an sPVC, failure of the sPVC triggers
AIS generation on the permanent atmConnections to which the sPVC is stitched at the source and destination:

[Figure content: a PVC – PNNI sPVC – PVC chain across two ATM switches — when the sPVC fails, F5 AIS is generated toward both PVCs by the sPVC end points (calledPartyNumber and callingPartyNumber sides) where aisGeneration is activated.]

Upon receiving an F4/F5 AIS OAM cell, the VPC/VCC endPoint node returns an F4/F5 RDI to alert upstream
intermediate and endPoint nodes that a failure has been detected downstream.

3.2.8.3.2

RDI (REMOTE DEFECT INDICATION):


The RDI is used to return an indication to the upstream VP/VC endPoint node that the receiving end has
detected an incoming section defect or is receiving AIS.
The RDI is an upstream signal, whereas AIS is a downstream signal.
The F4/F5 endToEnd RDI is generated by a VP/VC endPoint node, whereas the F4/F5 segment RDI is
generated by a segment endPoint node or a VP/VC endPoint node.
F4/F5 RDI OAM Cells triggers:
- SDH/PDH condition indicating physical link failure E.g. LOS in Segment boundary node or
VPC/VCC EndPoint node.
- Reception of SDH MS-AIS, PDH AIS in a Segment boundary node or VPC/VCC EndPoint node.
- Reception of F4/F5 AIS EndToEnd in a VPC/VCC EndPoint node,
- Reception of F4/F5 AIS Segment in a Segment boundary node,
The RDI frequency generation is usually 1 cell/s. RDI generation stops when failure detection is removed.



[Figure content: on a LOS condition at a transmission section, P-AIS then F4/F5 AIS propagate downstream through the VPC/VCC connectionPoint node; the downstream VPC/VCC endPoint node returns endToEnd F4/F5 RDI upstream, and MS-RDI is returned at transmission level.]

Figure 3-28 F4/F5 RDI generation on Reception of F4/F5 AIS


[Figure content: on a LOS condition, MS-AIS is received by the VPC/VCC endPoint node, which generates endToEnd F4/F5 RDI upstream across the connectionPoint node toward the far endPoint node.]

Figure 3-29 F4/F5 RDI generation on Reception of MS-AIS


[Figure content: on a LOS condition detected at a path termination, Path-RDI is returned at transmission level and endToEnd F4/F5 RDI is generated by the VPC/VCC endPoint node toward the far endPoint node across the connectionPoint node.]

Figure 3-30 RDI generation on LOS Condition


3.2.8.3.3

LOOPBACK:
This feature consists in inserting a pattern in an ATM Connection and indicating to the remote node that this
pattern has to be returned.
The loopback initiator node checks that the received pattern is the expected one, and decides to keep the
atmConnection active or to disable it.
The purpose of loopBack is to:
- Protect against mis-configuration of the VPC/VCC translation in a connectionPoint node, and verify
the continuity of an atmConnection.
Rule: IuTEG_ATM-OAM_1
Therefore ATM loopbacks should be enabled during I&C and disabled once I&C
operations are terminated.

- Detect failures within an ATM network whose nodes don't handle the ATM OAM signals AIS and RDI.
Loopback configuration options:
The LoopBack signal is configured either at VC level (F5 OAM) or at VP level (F4 OAM).
The LoopBack is configured either on a predefined OAM segment, or between VPC/VCC
endpoints. Loopback is then named either Segment OAM loopback or endToEnd LoopBack.
The endPoint or connectionPoint inserts the bit pattern 0000 0001 in one OAM flow (F4/F5, Segment/endToEnd) and expects, within a timeout, the bit pattern 0000 0000 to be returned by the destination node.
Such an operation is performed without having to take the ATM Connection out of service.
- LoopBack mechanism:
The loopback pattern is inserted every 30 seconds by the connection end-points,
A loopback is considered successful when the loopback pattern is returned by the remote endpoint within 5 seconds,
It takes 15 to 45 seconds to detect a connection failure (3 consecutive loopback failures),
After a failure is detected, loopback cells are inserted every second.
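The timer values above can be tied together in a small state-machine sketch (illustrative Python, not ALU code; the class and attribute names are invented for the example): probing runs every 30 seconds, a reply is awaited for 5 seconds, three consecutive misses declare the connection failed, and probing then accelerates to one loopback per second.

```python
# Sketch of the loopback supervision logic described above.
NORMAL_INTERVAL_S = 30   # loopback pattern inserted every 30 s
FAILED_INTERVAL_S = 1    # after failure detection, inserted every second
REPLY_TIMEOUT_S = 5      # reply must come back within 5 s
MAX_CONSECUTIVE_FAILURES = 3

class LoopbackSupervisor:
    def __init__(self):
        self.failures = 0
        self.connection_up = True

    def insertion_interval(self):
        # Once the connection is declared failed, probing accelerates to 1 s.
        return NORMAL_INTERVAL_S if self.connection_up else FAILED_INTERVAL_S

    def on_loopback_result(self, reply_received: bool):
        if reply_received:
            self.failures = 0
            self.connection_up = True
        else:
            self.failures += 1
            if self.failures >= MAX_CONSECUTIVE_FAILURES:
                self.connection_up = False
```

With 3 consecutive misses needed and a 30 s insertion period, the detection delay depends on where in the 30 s cycle the failure occurs, which is why the guide quotes a 15 to 45 second range rather than a single value.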
Loopback recommendations:
Since the number of IuPS UP VCCs is limited (up to 3 x 4 VCCs):
Rule: IuTEG_ATM-OAM_2
On IuPS UP, it is recommended to activate endToEnd loopback at VC level (F5
OAM flow), or at VP level (F4 OAM) if VPT is activated.

The number of IuCS VCCs may be very large; instead of systematically activating loopBack, it is then preferable to ascertain that the F4/F5 OAM AIS/RDI signals are activated within each node of the ATM backbone.
If the F4/F5 OAM AIS/RDI signals are not available in otherVendor nodes, then for security reasons loopback can be activated on some RNC atmConnections.
Rule: IuTEG_ATM-OAM_3
On the IU interface, ascertain that F4/F5 OAM AIS/RDI signals are activated in each
node of the ATM backbone.
If the F4/F5 OAM AIS/RDI signals are not available on otherVendor nodes within the
ATM backbone, then loopBack must be activated on the UMTS nodes.
On 16pOC3/STM1 FP, up to 50 loopbacks may be activated simultaneously.

Remark:


When activating loopBack in the RNC, it is assumed that intermediate equipment is transparent to this signal and that the far end supports loopBack (Alcatel NodeB and coreNetwork nodes support loopBack).
Moreover, without a LocationID value inserted in the loopBack signal, it is not guaranteed that the far-end node which answers is the expected one.
Loopback Passport Commands:
ATTRIBUTE AtmIf endToEndLoopback (eeLbk)
ATTRIBUTE AtmIf segLinkSideLoopback (segLkLbk)
ATTRIBUTE AtmIf Vpc Vpd endToEndLoopback (eeLbk)
ATTRIBUTE AtmIf Vpc Vpd segLinkSideLoopback (segLkLbk)
ATTRIBUTE AtmIf Vcc Vcd endToEndLoopback (eeLbk)
ATTRIBUTE AtmIf Vcc Vcd segLinkSideLoopback (segLkLbk)
Interworking with otherVendor Nodes
If otherVendor coreNetwork nodes do not support loopBack, or are configured with loopBack deactivated, then the loopBack mechanism must NOT be activated in the Alcatel RNC.

3.2.8.3.4 CONTINUITYCHECK:
The continuity check mechanism consists of periodically sending cells from the endPoint node at a predetermined interval (every second), so that the connectingPoints and the remote endPoint can distinguish between a connection that is idle and one that has failed.
The continuity check can detect failure that AIS cannot, such as an erroneous VP cross-connect change
(VPI translated to incorrect value).
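As a sketch of the receiver-side logic (illustrative Python; the 3.5 s supervision window is an assumption taken from common I.610 practice, not a value stated by this guide): since the far endPoint emits a CC cell every second while the connection is idle, a monitoring point that sees no cell at all for a few seconds can declare a loss of continuity rather than mere idleness.

```python
# Sketch: distinguish an idle connection (CC cells still arriving) from a
# failed one (nothing arriving at all).
LOC_WINDOW_S = 3.5  # assumed supervision window, equipment-specific

def connection_state(last_cell_age_s: float) -> str:
    """last_cell_age_s: seconds since the last user or CC cell was seen."""
    return "alive" if last_cell_age_s <= LOC_WINDOW_S else "loss-of-continuity"
```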

3.2.8.4 ATM OAM CARD CHARACTERISTICS & COMPLIANCY:


16pOC3/STM1 FP:
Partly compliant with I.610:
- Supported:
- Fault management:
- Alarm Indication Signal (AIS), Remote Defect Indication (RDI),
- LoopBack.
- Not Supported:
- PerformanceManagement,
- FaultManagement: ContinuityCheck.

3.3 PNNI
PNNI stands for Private Network to Network Interface; it is specified by the atmForum.
The PNNI recommendations encompass both topology and protocol notions.
PNNI is used on ATM interfaces to simplify configuration and to increase the reliability and availability of the network: it allows shorter outage times and transparent recovery from network outages by means of dynamic rerouting mechanisms.
Therefore, a better GOS behavior is expected when PNNI is configured.

3.3.1 PNNI TOPOLOGY:
A PNNI network may be configured either as a flat network or a hierarchical network.

A PNNI flat network is composed of one single routing area; as a consequence, all nodes within the network receive the same routing information and have knowledge of the complete network.
A PNNI two-layer hierarchy is split into several routing areas. A routing area is called a peerGroup. The routing information is flooded within a peerGroup.
Through the configuration, each node is assigned to a specific peerGroup, e.g. all UTRAN nodes connected to an AggregationNode may be grouped into one peerGroup, and UTRAN nodes connected to another AN may be grouped into another peerGroup.
Within each peerGroup, a peerGroupLeader is designated; it ensures the routing between nodes belonging to different peerGroups.
The ATM addressing plane is specified in such a way as to reflect this topology choice:
Case of a flat network, the 20-byte ATM address is split into 3 fields:
- One field identifies the PNNI network,
- A second field identifies the node within the PNNI network,
- A third field identifies the user (interface, application, ...) within the node.
Case of a two-layer hierarchy network, the 20-byte ATM address is split into 4 fields:
- One field identifies the PNNI network,
- A second field identifies the peerGroup within the PNNI network,
- A third field identifies the node within the peerGroup,
- The fourth field identifies the user (interface, application, ...) within the node.
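The hierarchical field split can be illustrated as follows (Python sketch; the individual field widths are arbitrary example values, only the 20-byte total is fixed, and the actual widths are chosen by the operator when the addressing plane is defined):

```python
# Illustrative split of a 20-byte AESA for the two-layer hierarchy case.
# Field widths below are assumptions for the example only.
FIELDS = (("network", 7), ("peerGroup", 6), ("node", 6), ("user", 1))

def split_aesa(aesa: bytes) -> dict:
    assert len(aesa) == 20, "an AESA is always 20 bytes long"
    out, pos = {}, 0
    for name, width in FIELDS:
        out[name] = aesa[pos:pos + width]
        pos += width
    return out
```

The flat-network case is the same sketch with the peerGroup field folded into the node field.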

3.3.2 PNNI PROTOCOLS:
PNNI RoutingProtocol:
Objective: Distribution of the Topology information within the peerGroup.
On reception of routing protocol messages, the nodes update their routing tables with reachability, QOS,
resource availability information.
The PNNI Routing protocol encompasses Hello packets and PTSEs (PNNI Topology State Elements).
The PNNI Routing protocol is carried over the RCC VCC (RoutingControlChannel) identified by Vpi 0 and Vci 18.
PNNI Signaling Protocol:
Objective: callControl of atmConnections.
The PNNI Signaling protocol is responsible for atmConnection establishment, release and rerouting.
This protocol is based on B-ISDN [R44 & 90] with some additional functionalities: sourceRouting, crankBack.
The PNNI Signaling protocol is carried over the Signaling VCC identified by Vpi 0 and Vci 5.
Protocol stack (top to bottom):
- PNNI Signalling Protocol, carried over SAAL-UNI (SSCF-UNI Q.2130, SSCOP Q.2110, CPCS) on AAL5,
- PNNI Routing Protocol, carried directly on AAL5,
- ATM,
- Physical layer.

Figure 3-31 PNNI Protocol Stack


3.3.3 PNNI CHANNELS:
Within the UMTS context, PNNI is used for handling atmConnections, called sPVCs.
A PNNI sPVC is configured only in the node where the connection originates.
The PNNI sPVC configuration consists of specifying:
- The AESA of the destination ATM port,
- The PVC configured on the destination port, onto which the PNNI sPVC is switched.

[Diagram omitted: three ATM switches, CallingParty, Intermediate and CalledParty. The calling VCC 3/33 is configured with CalledAddress = AESA_1 and Called VCC = 2/32; the sPVC crosses the Intermediate switch to the called port AESA_1.]

On the AtmSwitch CallingParty, VCC 3/33 is configured; it is switched to VCC 2/32 in the AtmSwitch CalledParty by means of the sPVC automatically established through the Intermediate AtmSwitch.

Figure 3-32 sPVC example


Since the sPVC is configured in the originating node, the sPVC is automatically established from the originating node to the terminating node through each intermediate node/interface.
In the sPVC establishment phase, the GCAC is invoked on the originating interface and the ACAC is invoked on each intermediate atm interface:
- The GCAC takes into account the bandwidth required for the sPVC, through its trafficDescriptor, and the available bandwidth within the network. Indeed, the routing table within the originating node is periodically informed of the available bandwidth within the PNNI network thanks to the routing protocol.
- The ACAC compares the bandwidth required for the sPVC to the available bandwidth on the intermediate atm interface.
On each intermediate atm interface, the atm connection identifiers (Vpi, Vci) are assigned to the Pnni sPVC.
Within the Passport and Passport-based nodes, the selection of the atm identifiers depends on the Node Identifier: NodeId.
The NodeId attribute is located under atmRouting Pnni ConfiguredNode.
On an intermediate atm interface, if the sPVC source node has a higher nodeId than the sPVC destination node, the following is then assigned locally to the sPVC:
- Vpi = 0 and a Vci in the range [minAutoSelectedVciForVpiZero ; maxAutoSelectedVciForVpiZero].
- If no more Vcis are available for Vpi = 0, then a Vci is chosen in the next Vpi, selected in the range [minAutoSelectedVciForNonZeroVpi ; maxAutoSelectedVciForNonZeroVPI].
Otherwise, the sPVC originating node allows the sPVC destination node to assign the VPI/VCI.
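The auto-selection rule reads as follows in a Python sketch (illustrative only; the numeric VCI ranges stand in for the minAutoSelectedVci.../maxAutoSelectedVci... attributes and are assumed values):

```python
# Sketch of the nodeId-based VPI/VCI auto-selection described above.
MIN_VCI_VPI_ZERO, MAX_VCI_VPI_ZERO = 32, 1023   # assumed attribute values
MIN_VCI_NONZERO, MAX_VCI_NONZERO = 32, 1023     # assumed attribute values

def assign_identifiers(local_node_id, remote_node_id, used):
    """Return (vpi, vci) chosen locally, or None to let the peer choose.

    `used` is the set of (vpi, vci) pairs already allocated on the interface.
    """
    if local_node_id <= remote_node_id:
        return None  # the higher-nodeId side performs the selection
    for vci in range(MIN_VCI_VPI_ZERO, MAX_VCI_VPI_ZERO + 1):
        if (0, vci) not in used:
            return (0, vci)
    # Vpi 0 exhausted: move to the next Vpi with the non-zero-Vpi range.
    vpi = 1
    for vci in range(MIN_VCI_NONZERO, MAX_VCI_NONZERO + 1):
        if (vpi, vci) not in used:
            return (vpi, vci)
    return None
```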

3.3.4 PNNI REROUTING:
Rerouting is a PNNI mechanism which secures dynamic atm connections.
On a link or node failure within the PNNI network, the node where the sPVC was initiated re-establishes the connection on another path.
Rerouting is referred to as Hard Rerouting in [R62], and as Connection Recovery in the NTP.
The rerouting mechanism provides route recovery across a PNNI routing domain, but is not performed on routes set through different PNNI routing domains. When a failure occurs outside of the routing domain, no rerouting operation can be performed.



It is the responsibility of the sPVC calling endpoint to try to re-establish the connection.
Within a PNNI peerGroup (the whole PNNI network in case of a flat network), the node which detects the link or node failure sends a RELEASE message to both the sPVC originating node and the sPVC destination node.
The original connection segment is released before the establishment of the rerouting connection segment.
On reception of the RELEASE message in conversation phase, if connection recovery has been activated for the call (see MML: AtmIf PNNI Ebr ConnectionRecovery), the ATM node where the sPVC originates determines an alternative path for re-establishing the sPVC.
The sPVC originating node blocks the release message and attempts to establish an alternative connection segment to the destination node.
The sPVC destination node also blocks the release message of the call and waits for the sPVC originating node to establish the alternative connection segment.
If a route exists, a new establishment procedure is initiated on a new path and the connection is restored over the new route. Otherwise, the connection is cleared back to its endpoints through normal call clearing procedures.
In order to reroute the connection, the rerouting node must find a route that meets or exceeds the applicable QOS characteristics requested by the atm connection.
Rerouting triggers:
The rerouting mechanism is always triggered by a failure in a PNNI routing domain.
Failure events:
- Reception of PNNI messages:
- RELEASE or RELEASE COMPLETE with a re-routing cause IE,
- SAAL failure,
- Restart or Status messages with an incompatible state.
- The Hello protocol runs as long as the link is operational. It can therefore act as a link failure detector when other mechanisms fail. The Hello protocol monitors the status of the Pnni Routing Svc between the two LGNs to increase robustness.
- Failure of the Pnni Routing Svc indicated from lower levels (ATM, PHY, and Signaling) is treated as a linkDown event. Procedures to re-establish the Svc are followed.
- Expiry of timer T310 or T303.

The following table provides the duration for re-routing a set of sPVC connections, according to the type of Passport
equipment failure (card, shelf):
Recover 10,000 connections: 13 sec
FP reset, 45,000 connections: 2 min
Shelf reset, 200,000 connections: 35 min

Figure 3-33 sPVC Recovery Time for PCR 2.2


If unsuccessful, the node retries at an interval specified by a provisionable timer. The failure detection time depends on the type of failure (for example link, VC, Function Processor, and so forth).


[Diagram omitted: on a LOS condition, the detecting PNNI node sends 1/ RELEASE toward both sPVC endpoints; the sPVC originating node then sends 2/ SETUP, propagated hop by hop through the PNNI/ATM nodes over an alternative path.]

Figure 3-34 PNNI re-Routing operation


See MML:
ATTRIBUTE AtmIf Vcc Src retryCount
This attribute indicates the number of failed attempts to set up the soft PVP or soft PVC since the last time
the connection failed.
Values Decimal (0..4294967295)
ATTRIBUTE ARtg Pnni failedRoutingAttempts
This attribute counts all calls routed through PNNI which failed. The counter wraps to zero when it exceeds
the maximum value.

3.3.5 PNNI ON UMTS INTERFACES


In spite of the PNNI advantages, Alcatel does not recommend PNNI for a few network topologies:
Rule: IuTEG_PNNI_1
Alcatel does NOT recommend to configure PNNI on Iu interface if:
1/ the Iu interface goes through a large atm Pnni backbone,
2/ the Iu interface goes through a Policed atm Pnni backbone.

Comments:
- Case 1: a large atm Pnni backbone on the Iu interface:
As a consequence of the Pnni routing flooding, a significant PNNI routing traffic load would be submitted to the RNC, reducing its performance for UMTS traffic.
- Case 2: a policed atm Pnni backbone on the Iu interface:
As long as only Pnni sPVCs are configured on UMTS Iu nodes, such a topology leads to shaping at Vc level, a situation which Alcatel never recommends.

Alcatel suggests PNNI for the following network topologies:

- Case of Iu UMTS nodes connected together without an intermediate Atm Backbone,


- Case of a non-policed ATM backbone within the IU interface: an atm backbone with a limited number of atm switches. A network-dedicated study must be done to verify that the ATM backbone does not generate an excessive PNNI routing traffic load.

Rule: IuTEG_PNNI_2
Alcatel supports PNNI for the following network topologies:
Case of Iu UMTS nodes connected together without an intermediate Atm Backbone,
Case of a small, non-policed atm backbone.

When PNNI is configured on the IU interface, an ATM addressing plane has to be defined.

3.3.5.1 SPVC, SPVC HAIRPINS


Iub sPVCs originate in the RNC-IN.
It is planned to initiate Iu/Iur sPVCs on the RNC-IN side, too.
Rule: IuTEG_PNNI_3
Iu, Iur and Iub sPVCs originate in the RNC (Src component).

The current UMTS applications (aal2If, atmMpe, SS7) don't allow the configuration of PNNI sPVCs; these applications support only PVCs. Therefore the PNNI sPVC hairpins are configured on the 16pOC3 FP.
The PNNI sPVC hairpins are specified as UNI atm interfaces.
The application is linked to the user side of the Pnni sPVC hairpin, whereas the PNNI sPVC is configured on the network side of the Pnni sPVC hairpin.
Two Pnni sPVC hairpins dedicated to the Iub interface are already configured on the RNC. Two more Pnni sPVC hairpins are configured for handling the Iu/Iur Pnni traffic.
Rule: IuTEG_PNNI_4
On RNC 16pOC3/Stm1 FP, two Pnni sPVC Hairpins are dedicated to Iu/Iur traffic.
Beside two different Pnni sPVC Hairpins are dedicated to Iub traffic.

Remark:
Iu/Iur Pnni traffic and Iub Pnni traffic are not carried over the same hairpin, for bandwidth and atm connectivity reasons.
Moreover, handling Iub, Iu and Iur traffic on the same hairpin would require revisiting the service category assigned to the Iu, Iur and Iub atm connections.
The number of sPVC hairpins dedicated to Iu/Iur traffic is the result of a UMTS traffic dimensioning study.

3.3.5.2 PLANE DESCRIPTION

The following sections describe PNNI options per traffic plane.

3.3.5.2.1 IU CS/PS CONTROL PLANE

It is suggested to establish the ControlPlane ATM connections by means of PNNI.

With the SS7 protocol stack implemented in the RNC-IN, a hairpin is required within the RNC-IN:



[Diagram omitted: IU CP path. On the RNC-IN, the SS7 SLs feed static CP PVCs into the hairpin (Nep/Rp); the CP sPVC (Src, CallingPyAd) is established via PNNI signaling across the AN and terminates (Dst, CalledPyAd) on a static PVC toward the CN SaalNNI shelf.]

Figure 3-35 IU CP path, SS7 handled by RNC-IN


Each SL configured in the SS7 application is linked to a PVC configured on the user side of the Pnni sPVC hairpin.
The CP Pnni sPvc is configured on the network side of the Pnni sPVC hairpin. The configuration consists of assigning to the sPvc:
- The local PVC configured on the hairpin,
- The atm address (AESA) of the destination port, e.g. a port in the AggregationNode,
- A PVC configured on the destination port,
- QOS and trafficManagement information.
In the case of an Alcatel coreNetwork, the sPVC terminates on the AN, where the sPVC traffic is switched onto a PVC terminating on a SaalNNI I/O FP card.

3.3.5.2.2 IU PS USER & IUPC PLANE

The configuration of a Pnni sPvc under the atmMpe/Ac component is not carrierGrade; as a result, Alcatel doesn't recommend Pnni on the IuPS user plane.
Rule: IuTEG_PNNI_5
The IuPS UP Vcc remains a Permanent Vc.
It is not recommended to configure Pnni sPvc on IuPS UP.
Remark: the Pvc reliability is provided by ECMP.

3.3.5.2.3 IU CS USER PLANE

Many UP VCCs may be required on the IuCS UserPlane; up to 384 VCCs have been suggested to cover the case of a fully meshed network. Even if current configurations don't reach this limit, the number of IuCS UP VCCs may be significant, so Alcatel supports Pnni on the IuCS User Plane.
Within the RNC, the IuCS UP traffic arises out of an aal2If/Pathid application. A pathid is linked to a PVC configured on the user side of the Pnni sPvc hairpin.
The CS UP Pnni sPvc is configured on the network side of the Pnni sPVC hairpin. The configuration consists of assigning to the sPvc:
- The local PVC configured on the hairpin,
- The atm address (AESA) of the destination port, e.g. a port in the AggregationNode,
- A PVC configured on the destination port,
- QOS and trafficManagement information.
In the case of an Alcatel coreNetwork MGW, the sPVC terminates on the AN, where the sPVC traffic is switched onto a PVC terminating in a MGW shelf.
terminating in a MGW Shelf.


[Diagram omitted: IU CS UP path. On the RNC-IN, the aal2If/Path application feeds CS UP PVCs into the hairpin user side (Nep); on the hairpin network side (Src), the CS UP sPVC is established via PNNI signaling across the AN (UNI toward the RNC) and terminates (Dst) on static CS UP PVCs toward the MGw VSP shelf.]

Figure 3-36 IU CS UP path

3.3.5.2.4 OAM PLANE

On IU, one InBand OAM Vcc carries an aggregate of OAM traffic from NodeB, RNC-AN, RNC-IN and RNC-CN.
Using Pnni for the Oam Vcc doesn't simplify the configuration; since this flow is IP-based, ECMP may be configured for reliability.

Rule: IuTEG_PNNI_6
The Iu OAM VCC remains a permanent VC; there is no need for PNNI.

3.3.5.2.5 IU TRAFFIC SUMMARY

When PNNI is configured on IU, the IU link(s) carry the following sources of traffic:
- Iu sPVC hairpin traffic (CS UP, CS CP and PS CP),
- IuPS UserPlane traffic,
- OAM traffic.
Moreover, the Iu link(s) also carry the PNNI intrinsic traffic: Pnni Routing and Signaling.
RoutingControlChannel traffic:
The RCC Vcc (0-18) is automatically configured by the system with the following TrafficDescriptor:

PNNI RCC PVC 0-18, TDT=8

AAL | SC    | TDT | PCR (Cell/s) | SCR (Cell/s) | MBS (Cells) | CDVT (µs) | EP | ECR
5   | rtVBR | 8   | 906          | 453          | 171         | 0         | 3  | 627

Figure 3-37 PNNI RCC trafficDescriptor


One RCC Vcc is set up per PNNI hierarchical level.
The RCC Vcc does not go through the hairpin.
Signaling Channel traffic:
The Signaling Vcc (0-5) is automatically configured by the system with the following trafficDescriptor:


PNNI SIG PVC 0-5, TDT=6

AAL | SC    | TDT | PCR (Cell/s) | SCR (Cell/s) | MBS (Cells) | CDVT (µs) | EP | ECR
5   | rtVBR | 6   | 250          | 160          | 5           | 0         | 3  | 193

Figure 3-38 PNNI SignalingChannel TrafficDescriptor

3.4 AAL2
AAL2 is the protocol used over ATM for the IuCS UP trafficFlow.
At the AAL2 layer, bearers are identified by a PathId and a CID (Channel Identifier). A bearer is established dynamically by means of ALCAP signaling.
Moreover, addresses are specified at the AAL2 layer.

3.4.1 ADDRESSING:
The A2EA (AAL2 Service Endpoint Address) is the address used at the aal2 layer. It is required to control the establishment of the AAL2 bearers.
This address is used on the IuCS interface for establishing an aal2 CID between the RNC-IN and the UMGW/VSP. Therefore, an aal2 address is configured per VSP card.
Moreover, an aal2 address is used on the Iur interface for establishing aal2 CIDs; therefore one aal2 address is configured per RNC.
The Transport addressing recommendations are gathered in a dedicated TEG called the AddressingTEG. This document covers A2EA and AESA.

3.4.2 ALCAP
AAL2 signaling protocol Capability Set 1 [R45 & R46], chosen as ALCAP (Access Link Control Application Part), is the signaling protocol used to control AAL2 connections on the IuCS and Iur interfaces.
The establishment and release of User Plane transport bearers for the IuCS interface are initiated by the RNC using ALCAP messages, typically in response to the RANAP RAB assignment procedures.
Aal2 connections are identified by means of a CID and a PATHID. The CID identifies an Aal2 connection, whereas the PATHID identifies an Aal2 path. There are up to 248 Aal2 connections within each Aal2 path.
The remote aal2 node is identified by means of A2EA addresses.
The A2EA, PATHID and CID are carried in ALCAP ERQ messages for establishing the Aal2 connections.
ALCAP identifiers are used only in the Transport Network Control plane:
- On the IU interface, the RNC is in charge of allocating a CID.
- On the Iur interface, the SRNC is in charge of allocating a CID.
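A minimal sketch of the CID allocation performed by the RNC/SRNC (illustrative Python, names invented; the CID range 8..255, which yields the 248 connections per path quoted above, follows the AAL2 signaling convention that values 0-7 are reserved):

```python
# Sketch of a per-path CID allocator, as done by the RNC on IU and by the
# SRNC on Iur.
CID_RANGE = range(8, 256)  # 248 usable values per path

class Aal2Path:
    def __init__(self, path_id: int):
        self.path_id = path_id
        self.in_use = set()

    def allocate_cid(self):
        for cid in CID_RANGE:
            if cid not in self.in_use:
                self.in_use.add(cid)
                return cid
        return None  # path full: 248 connections already established

    def release_cid(self, cid: int):
        self.in_use.discard(cid)
```

The allocated (PathId, CID) pair is then carried, together with the A2EA, in the ALCAP ERQ message.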


[Call flow between RNC-IN, RNC-CN, MGw and MGC:
1. RANAP RAB AssignmentRequest (TransportLayerAddress: A2EA),
2. CID allocationRequest / CID allocationResponse,
3. ALCAP ERQ (A2EA, Pathid, CID),
4. ALCAP ECF,
5. RANAP RAB AssignmentResponse.]

Figure 3-39 Aal2 Connection establishment call flow

3.4.3 AAL2 SWITCHING:
An AAL2 switch is an interconnection system at AAL2 level; it switches an incoming AAL2 connection onto an outgoing AAL2 connection. An AAL2 connection is identified by a CID/Pathid.
ALCATEL doesn't provide an AAL2 switch in the UMTS network; nevertheless an ALCATEL UMTS node may be integrated in a UMTS network that includes otherVendor AAL2 switch(es).
The Alcatel RNC may have to interwork with an AAL2 switch. There is still no case of an Alcatel CoreNetwork interworking with an AAL2 switch, therefore this case is not covered in this document.
Transport ControlPlane:
When an AAL2 switch is inserted on the IuCS interface in front of the Alcatel RNC, the ALCAP routeSet is identified by the RNC PC and the AAL2 switch PC.
In the AAL2 switch, the destination AAL2 endNode is identified by its A2EA. The A2EA is used to determine a CID/Path toward the remote AAL2 endNode; the AAL2 endNode is either a MGW, or a driftRNC in case the Iur traffic goes through the AAL2 switch together with the IuCS traffic.
For that purpose, the AAL2 switch is configured with a translation table, associating a PC to each AAL2 endNode, per A2EA.


[Diagram omitted: ALCAP control-plane protocol stack across AAL2 EndNode 1 (SS7 PC 1, AAL2 A2EA 1), the AAL2 switch (SS7 PC 3, translation table A2EA -> PC -> Path) and AAL2 EndNode 2 (SS7 PC 2, AAL2 A2EA 2). ALCAP runs over MTP/SaalNNI signaling links (SL 1, SL 2) carried on VCCs over, e.g., E1 links (capacity = 4528 cell/s). The ERQ toward A2EA 2 enters the switch on PathId 11 / Cid 22 and leaves on PathId 21 / Cid 88; the ECF is returned hop by hop.]

Figure 3-40 AAL2 Switch ProtocolStack CP


Transport UserPlane:
When an AAL2 switch is inserted on the IuCS interface in front of the Alcatel RNC, an Aal2If value is assigned to each AAL2 switch; under this Aal2If value is configured the set of AAL2 paths serving the AAL2 switch.
Therefore the AAL2 paths are initiated in the RNC and terminate in the AAL2 switch.
In case the RNC handles both IuCS and Iur traffic on a same transmission link terminating on an AAL2 switch, the CIDs within a path may be allocated either to IuCS traffic or to Iur traffic.
The AAL2 switch is configured with a translation table associating, to each AAL2 endNode, a list of PCs to reach that A2EA and a set of AAL2 paths per PC; an AAL2 endNode being either a MGW or a driftRNC.
Within the AAL2 switch, an outgoing CID/Path is associated to the incoming CID/Path as a result of the ALCAP bearer establishment phase.
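The user-plane consequence of the establishment phase can be sketched as a translation table (illustrative Python; the class and method names are invented for the example):

```python
# Sketch of the AAL2 switching step: the entry
# (incoming path, incoming CID) -> (outgoing path, outgoing CID)
# is installed during ALCAP bearer establishment, then applied per packet.
class Aal2Switch:
    def __init__(self):
        self.table = {}

    def install(self, in_path, in_cid, out_path, out_cid):
        # Result of the ALCAP establishment phase (ERQ/ECF exchange).
        self.table[(in_path, in_cid)] = (out_path, out_cid)

    def switch(self, in_path, in_cid):
        # Per-packet lookup; None means no bearer is established.
        return self.table.get((in_path, in_cid))
```

For example, with the values used in the figures of this section, installing (PathId 11, CID 22) -> (PathId 21, CID 88) makes the switch forward CID 22 of path 11 onto CID 88 of path 21.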


Example: CID 22 on PathId 11 is switched onto CID 88 on PathId 21.

[Diagram omitted: user-plane protocol stacks of the two AAL2 EndNodes and the AAL2 switch; the AAL2 paths (PathId 1/11/12 on one side, PathId 2/21 on the other) are carried over VCCs (VCC 11, 12, 21) on, e.g., E1 links (capacity = 4528 cell/s).]

Figure 3-41 AAL2 Switch ProtocolStack UP


Remark: this network architecture may become more complex when a SignalingTransferPoint node is inserted.

3.5 IP
On IU interface, IP over ATM is involved in UMTS PS UserPlane and UMTS OAM traffic.

3.5.1 INVERSEARP
Inverse ARP provides a method for dynamically discovering the IP address of the remote IP host connected to a VCC.
InverseARP is a protocol defined in [R68 & R69].
InverseARP uses the services of ATM/AAL5. Protocol stack (top to bottom): ARP / IP, LLC/SNAP, AAL5, ATM, Layer 1.
An IP node sends an InvARP Request to the remote IP node and expects to receive an InvARP Reply including the remote host IP address. The ARP table is then updated with the mapping between the VCC and the host IP address.
Within Passport-based nodes, the ARP table is updated periodically, according to the MML settings.
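A sketch of the requester-side behaviour (illustrative Python; names are invented, and the 5-minute default mirrors the Passport autoRefreshTimeout default described below):

```python
# Sketch: learn which remote IP address sits behind each VCC, and age the
# entries so they are periodically refreshed by a new InvARP Request.
class InvArpTable:
    def __init__(self, timeout_min=5):       # default autoRefreshTimeout
        self.timeout_min = timeout_min
        self.entries = {}                    # (vpi, vci) -> (ip, age_left)

    def on_invarp_reply(self, vpi, vci, remote_ip):
        self.entries[(vpi, vci)] = (remote_ip, self.timeout_min)

    def tick_minute(self):
        """Age all entries by one minute; return the expired VCCs, on which
        the caller re-sends an InvARP Request."""
        expired = []
        for key, (ip, age) in list(self.entries.items()):
            if age - 1 <= 0:
                expired.append(key)
                del self.entries[key]
            else:
                self.entries[key] = (ip, age - 1)
        return expired
```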


[Diagram omitted: two nodes connected by Vcc x-y. The left node (Pp 2, IpLogicalInterface IP@2, AtmMpe Ac AC2) sends an inAtmARP Request on Vcc x-y; the right node (Pp 5, IpLogicalInterface IP@5, AtmMpe Ac AC5) answers with an inAtmARP Reply carrying IP@5, and the left node's ARP table is updated with the entry VpiVci: VCC x-y -> IP: IP@5 (with an entry age). Configuration per node:
COMPONENT Vr ProtocolPort (Pp): Pp n
Vr Pp linkToMedia (media): linkToPP
COMPONENT Vr Pp IpPort
Vr Pp IpPort arpStatus: auto
Vr Pp IpPort arpNoLearn: dynHost enable
Vr Pp IpPort mediaType: atmMpeLlcEncap
COMPONENT Vr Pp IpPort IpLogicalInterface: IP@n
AtmMpe encapType: llcEncap
AtmMpe linkToProtocolPort: Pp n
COMPONENT AtmMpe Ac: AC n
AtmMpe Ac atmConnection (link): Vcc x-y
(The ATM@ is left empty in the case of a PVC.)]

Figure 3-42 inverseARP


When inverse ARP is absent in the remote node or not supported by the ATM interface, the IP address of the remote host must be provisioned using the Passport feature called StaticArp.
On the RNC-CN, InverseARP is not supported; therefore on the Icn interface the RNC-IN is configured with StaticArp.
Remarks:
- InverseARP is the Passport default configuration.
- On one Virtual Router, some ProtocolPorts may be configured with StaticARP, whereas other ProtocolPorts are configured with InverseARP.
- Mapping information from StaticARP overwrites mapping information from InverseARP.
- Do not configure StaticARP at one end of a connection and InverseARP at the other end.

Rule: IuTEG_ARP_1
Within UMTS nodes, on IP over ATM interfaces, InverseARP is configured except for
Interface terminating on otherVendor nodes not supporting InverseARP.
The choice of dynamic or static InverseARP impacts the Passport ATM MPE component configuration.
When configuring an ATM MPE component, the type of encapsulation must be specified.
Encapsulation type values:
- IpVcEncapsulation: set when only IP packets are inserted in ATM cells. No InverseARP packet may be inserted; therefore with such an encapsulation mechanism InverseARP cannot be configured and only StaticArp applies.
- LLCEncapsulation: set when IP packets, InverseARP packets and other kinds of packets are inserted in ATM cells. An LLC and SNAP overhead is then added to the encapsulated packet. Within the SNAP overhead, two bytes are filled with the EthernetType to reflect the type of inserted packet:
- EthernetType = 0x0800 for an IP packet,
- EthernetType = 0x0806 for an InverseARP packet.
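The role of the two EthernetType values can be seen in the LLC/SNAP header build-up (illustrative Python following the standard RFC 2684 layout: LLC AA-AA-03, SNAP OUI 00-00-00, then the two EthernetType bytes):

```python
# Sketch of LLC/SNAP encapsulation over AAL5: the same VCC can carry IP
# and InverseARP packets, distinguished by the EthernetType field.
LLC_SNAP = bytes.fromhex("aaaa03") + bytes(3)   # LLC header + SNAP OUI
ETYPE_IP = 0x0800
ETYPE_INARP = 0x0806

def llc_encapsulate(payload: bytes, ethertype: int) -> bytes:
    return LLC_SNAP + ethertype.to_bytes(2, "big") + payload

def classify(frame: bytes) -> str:
    etype = int.from_bytes(frame[6:8], "big")
    return {ETYPE_IP: "IP", ETYPE_INARP: "InverseARP"}.get(etype, "other")
```

With IpVcEncapsulation this 8-byte header is absent, which is precisely why InverseARP packets can no longer be distinguished or carried.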

Rule: IuTEG_ARP_2
When configuring Passport ATM MPE,
EncapsulationType must be set with LLC Encapsulation value.
Remarks:
- If StaticArp is configured for all next-hop Ip@ used on the PP, EncapsulationType = IpVcEncapsulation may be configured.
- Moreover, on the RNC-IN, when VPT is configured, DynamicARP works correctly only if the ATTRIBUTE AtmIf faultHoldOffTime is set to 0 on the port where the VPT is configured.

Rule: IuTEG_ARP_3
If VPT is activated, set ATTRIBUTE AtmIf faultHoldOffTime = 0 on the RNC port where
the VPT is configured.
Passport MML for dynamic InverseARP:
COMPONENT Vr Ip Arp DynHostEntry (DynHost)
ATTRIBUTE AtmMpe encapType (etype)
Passport MML for staticARP:
COMPONENT Vr Ip Arp HostEntry (Host)
This component defines a static host entry in the ARP table.
ATTRIBUTE Vr Ip Arp Host permanentVirtualCircuitNumber (pvcNo)
This attribute specifies the number of the PVC for the static host entry.
ATTRIBUTE Vr Ip Arp autoRefreshTimeout
This attribute defines the timeout value, in minutes, which is assigned to updated ARP entries, or newly
created ARP entries.
The range for the timeout is 1 minute to 1440 minutes (24 hours). Default 5.
ATTRIBUTE Vr Pp IpPort arpNoLearn:
When this attribute is set to disable, the ARP table is automatically updated with InverseArpResponses.

3.5.2 ECMP
ECMP (EqualCostMultiPath) allows the distribution of IP traffic onto up to three different VCCs.
Two kinds of ECMP are specified in [R72]: packet-based and flow-based ECMP.

Rule: IuTEG_ECMP-1
On Passport nodes, only flow-based ECMP is available.
Flow-based ECMP means that IP sessions are distributed on different bearers. Packets within one session are carried on the same bearer.
In the context of the IU interface, a session is identified by the PMC-RAB IP@ and the USD IP@.
When ECMP is configured in the RNC or the aggregationNode, the VCCs carrying IP traffic may be configured on the same or on different physical links.
ECMP may be required for reliability reasons; in that case the VCCs involved in ECMP are configured on different physical links, either on the UMTS node or on an upstream node, so as to provide alternative routes in case of VCC/route failure.
ECMP works with the following IP routing protocols: OSPF and static routes.
Remark:
Remark:


OSPF is not configured on UMTS networks.
ECMP does not require any configuration on Passport (hence there is no specific ECMP attribute). ECMP is
integrated in the IP software package.

Rule: IuTEG_ECMP-2
ECMP allows a maximum of three next hop IP addresses. Therefore ECMP allows
loadSharing over up to 3 parallel ATM links.
ECMP will automatically run if the following conditions are met:
- the links run the same routing protocol (OSPF and static routes are supported),
- the links are on the same virtual router,
- the links have routes assigned that lead to the same destination,
- the links have the same cost: e.g. for static routes, CAS command "add vr/0 ip static route/x.x.x.x,x.x.x.x,0" (the last field is the metric of the route).

ECMP does not generate any protocol overhead (no polling algorithm or end-to-end protocol). It is a local algorithm
which derives common routes within the local database and applies a flow-based load sharing algorithm (e.g.
IP@ + TCP port 20 in the case of FTP).
It means that the node has to determine locally which routes it should use for loadSharing.
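The flow-based selection can be sketched as follows. This is an illustrative Python sketch, not the actual Passport algorithm: it assumes the flow key is simply the pair of session IP addresses, hashed to pick one of the (up to three) equal-cost next hops. The addresses used are hypothetical.

```python
import hashlib

def select_next_hop(src_ip, dst_ip, next_hops):
    """Pick one of up to 3 equal-cost next hops for an IP flow.

    All packets of one session (same address pair) hash to the same
    next hop, so packets within a session stay on the same bearer.
    Illustrative mapping only, not the exact Passport algorithm.
    """
    key = f"{src_ip}-{dst_ip}".encode()
    digest = int(hashlib.md5(key).hexdigest(), 16)
    return next_hops[digest % len(next_hops)]

# Two hypothetical equal-cost next hops:
hops = ["10.20.0.151", "10.20.0.2"]
chosen = select_next_hop("172.253.20.126", "10.30.0.5", hops)
assert chosen in hops
# The same flow always re-selects the same next hop:
assert select_next_hop("172.253.20.126", "10.30.0.5", hops) == chosen
```

Because only the flow key is hashed, a route/VCC failure changes the set of next hops and therefore the mapping, but packets of a surviving flow are never split across bearers.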
VR/1 IP Static Route /IP@2, DestMask, Tos
  Nh / IP@8
  Nh / IP@9

[Figure: the RNC (IP@1 = 172.253.20.126) reaches the Host destinations via two equal-cost next hops, IP@8 = 10.20.0.151 and IP@9 = 10.20.0.2, each behind an MPE; each path carries a set of UBR, nrtVBR, CBR and rtVBR VCCs on a separate AtmIF (814, 815) of a different port.]

Figure 3-43 ECMP, example with 2 routes.


The IP traffic between the RNC and the CN is loadshared on the RNC side between the two sets of 4 PS UP PVCs,
configured on two different STM1s.


3.5.3

QOS DOWNGRADING
When a UP PS VCC of IpCos (i) fails, its traffic is diverted to a UP PS VCC of IpCos (i-1), in other words, a VCC with
lower QoS.
This mechanism allows traffic to continue to be routed. However, it doesn't guarantee QoS, since traffic falls back
to a lower-IpCos VCC.
[Figure: PS UP PVCs CBR (IpCos 3), rtVBR (IpCos 2), nrtVBR (IpCos 1) and UBR (IpCos 0) between the RNC and the CoreNetwork over two STM1s; when the STM1 carrying the CBR and rtVBR PVCs fails, F5 OAM AIS/RDI is returned and their traffic is diverted onto the nrtVBR PVC (carrying IpCos 1+2+3) of the surviving STM1.]
When the physical link supporting the CBR and rtVBR PVCs becomes unavailable, the ATM backbone node
which detects the failure returns an F5 OAM AIS or RDI signal to the RNC for the concerned VCCs.
On reception of the F5 OAM AIS/RDI signal, the Passport diverts the VCC traffic to a lower-QoS PVC that is still available.
It has been discovered that AIS may not always be generated (depending on the failure scenario); therefore for IuPS
VCCs we also use F5 OAM end-to-end loopbacks. These loopbacks must be enabled on the VCCs at the RNC. Not
receiving a loopback response also causes the VCC to be taken out of service, and IP CoS redirection takes
place. If there is no lower IP CoS path (VCC in this case), packets are redirected to a higher IP CoS.
The loopback mechanism is strongly recommended.
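The redirection order can be sketched as follows; an illustrative sketch only, assuming the fallback search tries each lower IpCos in turn and only then falls back to a higher IpCos (the exact Passport selection logic may differ):

```python
def redirect_cos(failed_cos, available):
    """Pick the fallback IP CoS when the VCC for failed_cos is down.

    Lower CoS values are tried first (i-1, i-2, ...); if no lower
    CoS path is available, packets fall back to a higher CoS.
    Assumes the four IpCos levels 0..3 used on the IuPS UP VCCs.
    """
    for cos in range(failed_cos - 1, -1, -1):   # lower CoS first
        if cos in available:
            return cos
    for cos in range(failed_cos + 1, 4):        # else a higher CoS
        if cos in available:
            return cos
    return None                                 # no path left at all

assert redirect_cos(3, {0, 1, 2}) == 2   # CBR fails -> rtVBR VCC
assert redirect_cos(0, {1, 2, 3}) == 1   # UBR fails -> nrtVBR VCC
```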
Remark:
As indicated, when QoS downgrading is invoked, QoS is no longer guaranteed to an ATM user. To avoid QoS
downgrading:
- Configure 2 or 3 sets of PS UP VCCs, and group them into different VPCs. The VPCs are configured on different
ATM backbone paths. When a failure occurs within the ATM backbone, UMTS nodes are notified of the VP failure
by means of VP-AIS/RDI. Traffic is then diverted, using ECMP, to the available VPCs.
- PNNI sPVCs.


3.6

GTP-U:
Reference: [R16].
The GTP-U tunnel is initiated in the RNC, transits through the SGSN and terminates in the GGSN.
In the UMTS network, GTP version 1 is applicable.
Remark: since the SGSN has to deal with both 2G and 3G nodes, the SGSN supports GTP version 0 to interwork with GPRS nodes
and GTP version 1 to interwork with the UMTS RNC.
On the RNC side, GTP is only used over IuPS UP. User traffic is encapsulated in a GTP-U tunnel. Several GTP-U
tunnels are transported over an IP flow.
Rule: IuTEG_GTP-U_1
The RNC only deals with GTP version 1.
The SGSN deals with GTP version 0 and version 1. GTP version 1 is used in the UMTS network.
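For illustration, a minimal sketch of the GTP-U v1 encapsulation performed on IuPS UP. The field layout follows the GTPv1 specification referenced as [R16] (first octet 0x30 = version 1, GTP protocol type, no optional fields; message type 0xFF = G-PDU carrying a user packet); the TEID and payload values are hypothetical.

```python
import struct

def gtpu_encapsulate(teid, payload):
    """Build a minimal GTP-U v1 G-PDU: 8-byte header + user payload.

    0x30 = version 1, protocol type GTP, no optional fields present.
    0xFF = message type G-PDU (a tunnelled user packet).
    The length field counts the bytes after the mandatory header
    (here: the payload only, since no optional fields are used).
    """
    header = struct.pack("!BBHI", 0x30, 0xFF, len(payload), teid)
    return header + payload

pdu = gtpu_encapsulate(0x1234ABCD, b"ip-user-packet")  # hypothetical TEID
assert pdu[0] == 0x30 and pdu[1] == 0xFF
assert len(pdu) == 8 + 14
```

Several such tunnels, distinguished only by their TEID, share one IP flow between the RNC and the SGSN, which is why the transport layer sees a single flow per PMC-RAB/USD address pair.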

3.7

SAAL-NNI:
The SaalNni provides services to the MTP3B layer, and uses services from the ATM layer.
The SaalNni consists of the following sub-layers: SSCF-NNI, SSCOP and AAL5.
The SSCF maps the requirements of the layer above to the requirements of SSCOP. SAAL connection
management, link status and remote processor status mechanisms are also provided.
The SSCOP is a connection-oriented protocol. It provides mechanisms for the establishment and release of
connections and the reliable exchange of signaling information between signaling entities.
Each MTP3 SL is carried over one SSCOP connection.
SAAL-NNI adapts the upper layer protocol to the requirements of the lower ATM cells.
Each SSCOP connection is carried over one VCC. One VCC carries one SSCOP connection.

3.8

SS7
An SS7 signaling network is composed of SPs (SignalingPoint), each identified by a PC (pointCode). Moreover,
STPs (SignallingTransferPoint) may be inserted in the network. An STP is also identified by a PC.
An SS7 signaling network composed of only SPs is called associatedMode, whereas an SS7 signaling network
composed of SPs and STPs is called quasiAssociatedMode.
Between adjacent SPs, and between an adjacent SP and STP, MTP3 links called SLs (signalingLink) are configured.
The SLs between two adjacent SPs, or between an SP and an STP, are grouped within an LS (linkSet).
Moreover, in the case of quasiAssociatedMode, one routeSet per destination SP is configured in the originating SP; the
routeSets are composed of all the linkSets allowing the DPC (DestinationPointCode) to be reached.
Within the UMTS network, the SS7 protocol stack is composed of MTP3B and SCCP on the Iu and Iur interfaces, and
MTP2 on IuCS.

3.8.1

MTP2
The MTP2 protocol is only implemented on the IuCS interface, between the WG and the 3G-MSC.
MTP2 is an HDLC based protocol that ensures an error free connection by protecting the data with a CRC and
retransmitting frames that are in error. As with many other HDLC based protocols, MTP2 also implements flow
control.

3.8.2

MTP3
MTP3 is level 3 of the SS7 protocol stack. There are two kinds of MTP3 defined by the standards:

MTP3 NarrowBand:
This is the standard MTP layer 3 from [R29] that supports an MSU userField of up to 272 bytes in length.

MTP3 BroadBand:
This is the MTP layer 3 broadband variant that supports an MSU userField of up to 4091 bytes in length [R43].
The MTP3B payload and header maximum length is 4091 bytes. The MTP3B header includes the SIO and the
routingLabel.
MTP3 NarrowBand is used in 2G networks, whereas MTP3 BroadBand is the chosen solution for 3G
networks, and is referred to as MTP3b.

3.8.2.1 SS7 NETWORK TOPOLOGIES


ITU specifies two SS7 network topologies, called associated mode and quasiAssociatedMode.
SS7 associated mode:
Definition: point-to-point SS7 connections are configured between two SignalingPoints.

[Figure: two SPs, each with its own PC, directly connected by F links; the SLs between them form 1 LS, with up to 16 SLs within the LS.]

Figure 3-44 Associated mode


In terms of the SS7 network topology, no STP (SignalingTransferPoint) is included between the SPs.
Rule: IuTEG_MTP3_1
When MTP is configured in the associated mode, only one linkSet is configured per
routeSet.
Each SL is mapped to a dedicated ATM VCC, called a controlPlane (CP) VCC.
Remark:
In associated mode, there is a one-to-one relationship between a routeSet and a linkSet, since they both
identify an SS7 route between two SPs.
SS7 quasiAssociatedMode:
Definitions: SignalingTransferPoints are inserted between SignalingPoints.
Point-to-point SS7 connections are configured between one SignalingPoint and one SignalingTransferPoint.
A SignalingTransferPoint is a SignalingPoint without an MTP-User.
SPs and STPs are each identified by a pointCode value.
SignalingLinks are configured between an SP and the adjacent STPs. The SLs serving one STP are gathered within one
LinkSet. The LS is identified by the local PC and the PC of the adjacent STP.
The SP is then configured with one or several routeSets, each identified by the PC of the local SP and the PC of the
remote SP.
The LinkSets are then assigned to the routeSets.



[Figure: routeSet1 (OPC, DPCx) and routeSet2 (OPC, DPCy), each composed of linkSet 1 (OPC, STP1 PC) and linkSet 2 (OPC, STP2 PC), each linkSet containing signallingLinks 1 to 4.]

Figure 3-45 QuasiAssociatedMode configuration

[Figure: SPs connected to STPs over A links (one LS per STP), the STPs interconnected by C links; a routeSet contains up to 16 SLs.]

Figure 3-46 QuasiAssociatedMode


Notation:
- F links are SLs between two adjacent SPs,
- A links are SLs between an SP and an adjacent STP,
- C links are SLs between two adjacent STPs.
Rule: IuTEG_MTP3_2
The RNC supports quasiAssociated mode. The UMGW and USGSN support only associated mode.
When MTP is configured in the quasiAssociatedMode, the RNC allows up to two linkSets per
routeSet.
At the transmitting node, the routeSet traffic is loadshared between the LSs and between the SLs within each LS.
The loadSharing mechanism takes into consideration the SLS field value within the MTP frame for selecting the LS
and the SL. Moreover, the MTP3 loadSharing mechanism may be influenced by setting, in the RNC, different priority
values for the SLs within an LS, and/or different priority values for the LSs within a routeSet.
Example of a quasiAssociatedMode topology on the UTRAN interfaces:



[Figure: RNC (PC10) connected via STP PC6 and STP PC7 to the MGC (PC1), MGw (PC2), SGSN (PC3) and driftRNCs (PC4, PC5). Configured routeSets and linkSets on the RNC:
- routeSet RANAP CS, PC10-PC1,
- routeSet ALCAP, PC10-PC2,
- routeSet RANAP PS, PC10-PC3,
- routeSet RNSAP, PC10-PC4,
- LinkSet 1, PC10-PC6,
- LinkSet 2, PC10-PC7.]

Figure 3-47 quasiAssociatedMode in the context of the UMTS network

3.8.2.2 POINTCODE
At the MTP3 layer, each SP and STP is identified by means of a PointCode.
Rule: IuTEG_MTP3_3
From an SS7 routing perspective
- The RNC node is represented by one single ownPC.
- The MGC node is represented by one single ownPC,
- The MGW node is represented by one single ownPC,
- A neighbor RNC is represented by one single ownPC.
The RNC PC is configured to exist in one network and should have only one network
indicator (NI=0, 1, 2 or 3).
On the RNC, the OPC=0 and DPC=0 are not allowed.

The PointCode is an MSU field of:

- 14 bits in length, according to ITU [R29],
- 24 bits in length for ANSI (or in some China networks).
It should be noted that within the UMTS network, SS7 is configured with ITU pointCodes; indeed 3GPP only
defines 14-bit point codes.
The 14-bit ITU pointCode may be parsed in several ways, depending on the convention used within the specific
country (e.g. 3-8-3 or 4-7-3). In the case of the more common 3-8-3 format, the pointCode 3112 may be represented as
1.133.0, as shown below:


PC: 3112 (binary 00 1100 0010 1000 on 14 bits), parsed 3-8-3:
bits 14-12: 001 = 1
bits 11-4: 1000 0101 = 133
bits 3-1: 000 = 0

Figure 3-48 pointCode coding formats
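The parsing above can be illustrated with a small sketch that splits a 14-bit pointCode according to a configurable field-width convention (the 4-7-3 result shown is derived from the same bit pattern and is given for illustration):

```python
def format_pc(pc, widths=(3, 8, 3)):
    """Render a 14-bit ITU pointCode in dotted form, MSB field first.

    widths gives the per-field bit widths of the national convention,
    e.g. (3, 8, 3) or (4, 7, 3).
    """
    assert 0 <= pc < 2 ** sum(widths)
    fields = []
    for w in reversed(widths):          # peel fields off the LSB end
        fields.append(pc & ((1 << w) - 1))
        pc >>= w
    return ".".join(str(f) for f in reversed(fields))

assert format_pc(3112) == "1.133.0"             # 3-8-3 convention
assert format_pc(3112, (4, 7, 3)) == "3.5.0"    # same bits, 4-7-3 split
```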


Following picture summarizes required PointCodes within UMTS network:

[Figure: MGC (PC30), MGws, RNCs and SGSN (with pointCodes such as PC10, PC31, PC39, PC40, PC59, PC60) interconnected through an SS7 network of STPs (PC20, PC21, PC22, PC23), each node identified by its own pointCode.]

Figure 3-49 Required Point Codes

3.8.2.3 SERVICE INDICATOR


MTP3 users are ALCAP and SCCP.
MTP3 identifies its users by means of the SIO (ServiceIndicatorOctet) sub-field:

Rule: IuTEG_MTP3_4
ServiceIndicator = 0011 identifies SCCP
ServiceIndicator = 1100 identifies ALCAP

3.8.2.4 NETWORK INDICATOR


The NetworkIndicator is an MTP3 frame field two bits in length.
A NetworkIndicator value identifies a semaphore network.
A pointCode is defined within a semaphore network; therefore a pointCode value has to be unique within the
semaphore network identified by a NetworkIndicator value.
The same pointCode value may be used in different semaphore networks.
NetworkIndicator values defined in the ITU recommendations:
- NI = 0 International,
- NI = 1 Spare for international use,
- NI = 2 National,


- NI = 3 National spare.
All four NI values are available in the RNC and the CoreNetwork.
Rule: IuTEG_MTP3_5
It is recommended to configure IuCS and IuPS within the national spare semaphore
network, NI=3.
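The NI and SI values above can be tied together with a small sketch that decodes the SIO octet. It assumes the standard ITU layout (NI in bits 8-7, spare in bits 6-5, SI in bits 4-1):

```python
def parse_sio(sio):
    """Split the SIO octet into (networkIndicator, serviceIndicator).

    Assumed ITU layout: bits 8-7 carry the NI, bits 4-1 the SI
    (bits 6-5 are spare).
    """
    return (sio >> 6) & 0b11, sio & 0b1111

SI_NAMES = {0b0011: "SCCP", 0b1100: "ALCAP"}

ni, si = parse_sio(0b11_00_0011)   # NI=3 (national spare), SI=SCCP
assert ni == 3 and SI_NAMES[si] == "SCCP"
ni, si = parse_sio(0b11_00_1100)   # NI=3, SI=ALCAP
assert ni == 3 and SI_NAMES[si] == "ALCAP"
```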

3.8.2.5 ROUTESET, LINKSET


Case of quasiAssociatedMode:
A routeSet is configured between two SPs (MTP3 endPoints), whereas a linkSet is configured between an SP
and an adjacent STP (MTP3 switchingPoint).
A routeSet is composed of one or two routes (linkSets), each linkSet supporting up to 16 links. Where
2 routes are configured with equal loadSharing priorities, the maximum number of links across the
two linkSets is 16.
The routeSet is identified by means of the OPC-DPC of the extremity SPs (Originating and Destination
PC).
The linkSet is identified by means of the SP PC and the STP PC.
A linkSet is composed of one or several signalingLinks, up to 16.
Case of associated mode:
A routeSet and a linkSet have the same definition. They are configured between two adjacent SPs.

Application to UMTS Iu interface:


Case1: Assume a combined CS coreNetwork (R99 UMGW)
- One routeSet is dedicated to the CS Domain; within this routeSet are carried the RANAP CS and ALCAP
protocols,
- One routeSet is dedicated to the PS Domain; within this routeSet is carried RANAP PS.



[Figure: RNC (PC x) facing a combined CoreNetwork. RouteSet OPC x / DPC y towards the 3G-MSC/MGw (PC y) carries RANAP CS (over SCCP) and ALCAP, over MTP3b / SAAL-NNI / AAL5 / ATM / PHY STM1, together with the AAL2 VCCs. RouteSet OPC x / DPC z towards the SGSN (PC z) carries RANAP PS (over SCCP) on the same stack.]

Figure 3-50 One routeSet per Domain (R99 UMGW case)

Case2: Assume a BICN R4 CS architecture

- One routeSet is configured between the RNC and the MGC; within this routeSet is carried the
RANAP CS protocol,
- One routeSet is configured between the RNC and each MGW (as many routeSets as there are
MGWs). This routeSet carries the ALCAP protocol,
- One routeSet is dedicated to the PS Domain; within this routeSet is carried RANAP PS.



[Figure: RNC (PC x) with routeSet RANAP CS (OPC x / DPC y) towards the MGC (PC y), routeSet RANAP PS (OPC x / DPC z) towards the SGSN (PC z), and one routeSet ALCAP per MGW: OPC x / DPC a, b, c, d towards the VSP4e MGWs (PC a, b, c, d), each MGW also terminating its AAL2 VCCs. The control plane protocols (RANAP over SCCP, ALCAP) run over MTP3b / SAAL-NNI / AAL5 / ATM / PHY STM1 (AAL5 VCCs).]

Figure 3-51 One routeSet per SGSN, MGC and MGW (R4 BICN case)

3.8.2.6 LOADSHARING:
On initiating a new MTP-User connection (SCCP or ALCAP), the MTP SLS (SignalingLinkSelection) field value is
incremented.
Associated mode topology:
Based on the SLS value, the loadSharing algorithm selects an SL (SignalingLink) within the LS (LinkSet),
and transmits the MTP-User information on the designated SL.
The SLS field remains unchanged during the life of the MTP-User connection. Therefore all subsequent
MSUs relative to this MTP-User connection are transmitted on the same selected signalingLink.
On the next MTP-User connection, the assigned SLS value is incremented, and a new SL is selected based on
the new SLS value.
The loadSharing algorithm guarantees a well-balanced traffic distribution over all SLs within the LS if the
number of SLs per LS is equal to a power of 2, e.g. 2, 4, 8 or 16.
Rule: IuTEG_MTP3_7
The number of SLs per LS has to be equal to a power of 2, e.g. 2, 4, 8 or 16.
As a result, the offered load is evenly assigned to the full range of possible SLS values, which in turn are
evenly distributed over the available SLs. This ensures that the SLs within the selected linkSet all carry an
equivalent load.
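The power-of-2 rule can be checked with a small sketch, assuming the illustrative mapping SL index = SLS mod #SL (the exact selection function is implementation-specific): with 16 SLS values, only SL counts of 2, 4, 8 or 16 divide the load evenly.

```python
from collections import Counter

def sl_for_sls(sls, n_sl):
    """Illustrative selection: SL index = SLS mod number of SLs."""
    return sls % n_sl

def distribution(n_sl):
    """Count how many of the 16 SLS values land on each SL."""
    return Counter(sl_for_sls(sls, n_sl) for sls in range(16))

assert set(distribution(4).values()) == {4}     # 4 SLs: perfectly even
assert set(distribution(8).values()) == {2}     # 8 SLs: perfectly even
assert set(distribution(3).values()) == {5, 6}  # 3 SLs: uneven load
```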
Example of traffic distribution according SLS value and amount of SLs per LS:


SLS     #SL/LS = 2   #SL/LS = 4   #SL/LS = 8   #SL/LS = 16
0000        0            0            0             0
0001        1            1            1             1
0010        0            2            2             2
0011        1            3            3             3
0100        0            0            4             4
0101        1            1            5             5
0110        0            2            6             6
0111        1            3            7             7
1000        0            0            0             8
1001        1            1            1             9
1010        0            2            2            10
1011        1            3            3            11
1100        0            0            4            12
1101        1            1            5            13
1110        0            2            6            14
1111        1            3            7            15

There is a one-to-one relationship between an SL and a CP VCC.

Up to 16 SLs may be configured per linkSet/routeSet; as a result up to 16 CP VCCs may be configured
per DPC.
QuasiAssociatedMode topology:
Based on the SLS value, the loadSharing algorithm first selects an LS, and then an SL within the selected LS.
The MTP-User information is transmitted on the designated SL.
The loadSharing algorithm guarantees a well-balanced traffic distribution over all LSs, and all SLs within each
LS, if the number of LSs is a power of 2 (e.g. 2) and the number of SLs per LS is a power of 2 (e.g. 2, 4, 8 or 16).
Rule: IuTEG_MTP3_8
The number of LSs per routeSet = 2.
The number of SLs per LS must be equal to a power of 2, e.g. 2, 4, 8 or 16.
As a result, the offered load is evenly assigned to the full range of possible SLS values, which in turn are
evenly distributed over the available LSs and the SLs within each LS. This ensures that the LSs and SLs within
the selected routeSet all carry an equivalent load.
Illustration of the loadSharing:



[Figure: the LS selector first picks LS 0 or LS 1 from the SLS value; the SL selector then picks SL 0-3 within the chosen LS. SS7 topology example: on the SP, two LSs are configured, each populated with 4 SLs towards its STP, within one routeSet.]

Figure 3-52 LoadSharing example

3.8.2.7 CHANGEOVER
Refer to [R29].
When an MTP-User session is established on an SL and the SL becomes unavailable (due to failure, blocking or
inhibiting), MTP invokes the ChangeOverOrder.
The MTP ChangeOverOrder mechanism diverts traffic from the failed SL to all available SL(s) within the linkSet. If there
is no more available SL within the LS, traffic from the failed SL is diverted to the SL(s) of the second LS configured in the
routeSet (case of quasiAssociatedMode).
As a result, the alternative SLs carry their own traffic plus a share of the traffic from the failed SL.
This is the reason why an SL is dimensioned to less than its maximum capacity (typically 0.4 Erlang) under non-fault
conditions.
When an SL becomes unavailable, some MSUs may have been lost. Therefore MTP3B requests from level 2 (SSCF-NNI)
an indication of its unacknowledged MTP MSUs. Level 2 returns the BSN (Backward Sequence Number) to MTP3B.
The MTP3B stores the retrieved MSUs and re-sends them on an alternative link.
Within MTP, a variety of SignalingNetworkManagement messageGroups are defined. Some examples of messages
related to traffic rerouting are as follows:
- ChangeOverOrder,
- ChangeBackOrder.
On SL failure detection, the ChangeOverOrder message allows the diversion of traffic from the faulty SL to
alternate SLs while avoiding MSU (MessageSignalUnit) loss, duplication or mis-sequencing, whereas the
ChangeBackOrder diverts traffic back to the original SL once it becomes active again.
The ChangeOver mechanism consists of sending an MSU containing a ChangeOver message on an alternate SL.
Since this order is acknowledged, messages in the retransmission buffer of the faulty SL are transferred on the
alternative link.

ChangeOver triggers:
A link failure indication is obtained from level 2. The indication may be caused by:


- Failure detected by ATM, by means of ATM OAM signals: AIS, RDI;
- Failure of signaling terminal equipment;
- Reception of consecutive link status signal units indicating out of alignment, out of service, normal or
emergency terminal status;
- A request (automatic or manual) obtained from a management or maintenance system. MTP3 tests the SL;
if the far end doesn't answer, changeOver is initiated.
Moreover, a signaling link which is available (not blocked) is recognized by level 3 as failed when a
changeover order is received.

Once the failed SL recovers, messages related to the MTP-User session are transmitted again on their original
SL, after invocation of the MTP ChangeBack mechanism.
Each SL is mapped to a dedicated ATM VCC; these are called controlPlane (CP) VCCs.
Therefore, if the CP VCCs related to one routeSet are configured on two different physical links, then the traffic
related to this routeSet is shared over two physical links. Moreover, when a physical link fails, the SLs it supports fail
too, and traffic from these SLs is diverted to another SL / VCC carried by another physical link using the MTP3
ChangeOver procedure.
By creating ControlPlane VCCs on different physical links, reliability is provided along the path up to the Core.
This implies the duplication of VCCs along the path.
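The diversion of traffic from a failed SL can be sketched as a remapping of the SLS values it served. This is an illustrative model only; the real changeover procedure also retrieves and retransmits the unacknowledged MSUs as described above.

```python
def remap_after_failure(sls_map, failed_sl, surviving_sls):
    """Redistribute the SLS values of a failed SL over surviving SLs.

    sls_map: dict SLS value -> SL index. Flows of unaffected SLS
    values stay where they are; only the failed SL's flows move,
    spread round-robin over the surviving SLs.
    """
    moved = sorted(s for s, sl in sls_map.items() if sl == failed_sl)
    new_map = dict(sls_map)
    for i, sls in enumerate(moved):
        new_map[sls] = surviving_sls[i % len(surviving_sls)]
    return new_map

before = {sls: sls % 4 for sls in range(16)}        # 4 SLs in the LS
after = remap_after_failure(before, failed_sl=1, surviving_sls=[0, 2, 3])
assert 1 not in after.values()                      # failed SL drained
assert all(after[s] == before[s] for s in before if before[s] != 1)
```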
Illustration of the COO mechanism:
[Figure: LS 0 contains SL 0-3 and LS 1 contains SL 0-3. Assume LS0/SL1 fails: the LS and SL selectors redistribute the 16 SLS codes (0000-1111) over the remaining SLs. SS7 topology example: an SP connected to two STPs, one LS per STP, within one routeSet.]

Figure 3-53 COO Example

3.8.2.8 MTP3 INTERFACE NAME


The ITU has given names to the different kinds of MTP3 links which may be encountered. The following figure gives the
names of these interfaces:


[Figure: SPs and mated STP pairs interconnected by the named link types: F-links between two SPs, A-links between an SP and an STP, C-links between the STPs of a mated pair, and B- and E-links between STP pairs.]

Figure 3-54 MTP3 interface names

3.8.3

SCCP
Refer to [R30].
SCCP is compliant with the ITU White Book (1996).
SCCP class 0 and class 2 are required on the Iu interface; SCCP class 1 and class 3 are not supported.
SCCP provides basic connectionless and connection-oriented facilities to its user.

3.8.3.1 SSN
At the SCCP layer, RANAP is identified by means of the SSN (SubSystemNumber) field. There is no distinction
between the PS and CS domains:

Rule: IuTEG_SCCP_1
SSN = 1000 1110 identifies RANAP CS&PS,
SSN = 1000 1111 identifies RNSAP.

3.8.3.2 ROUTING
At the SCCP level, two kinds of routing are specified: routing on DPC or routing on GT (GlobalTitle).
When routing on DPC, SCCP provides to MTP a PC which is directly understandable.
When routing on GT, the SCCP frame is filled with a calledPartyAddress and a callingPartyAddress, which are called
GTs. A GT cannot be used as such for routing; a translation is required. By means of the GTT (GlobalTitle
Translation table), a GT is translated into a DPC + SSN; the DPC is used for routing purposes.

E164/GTT (GlobalTitleTranslation) Address Requirements:

In the SGSN, each USC card is a PC to the HLR when using the Virtual Point Code scheme, which uses Global
Titles. It should be noted that the configuration for the GT (Global Title) format must include an indication of
whether the stack is ANSI or ITU based.
The RNC doesn't support routing on Global Title.

Rule: IuTEG_SCCP_2
Routing on DPC is recommended. Within the RNC, GTT routing is not supported, so
routing on DPC is the only supported format

3.8.3.3 SCCP FRAME

In the connectionless mode, 3 kinds of frame are specified: UDT, XUDT and LUDT.
- UDT:
The UDT is the basic SCCP connectionless frame supporting a payload length of up to 255 bytes; such an SCCP
frame may be transmitted over either MTP3N or MTP3B.
- XUDT (Extended UDT):
When the SCCP-User message length is greater than the UDT payload size, a segmenting/reassembly function
for protocol classes 0 and 1 is provided. In this case, the SCCP at the originating node or in a relay node
segments the SCCP-User message into multiple segments, each encapsulated in an SCCP XUDT
frame.
If the network is based on MTP3B, there is no need for XUDT.
- LUDT (Long UDT): an SS7 node is able to manage LUDT if the SCCP uses the services of MTP3B. The
LUDT payload size is 3952 bytes.
On the RNC, MGW and SGSN Iu interfaces, since they use the services of MTP3B, the SCCP layer is able to
manage LUDT (Long UnitData) frames.
On the RNC side:
SCCP connectionless:
- The RNC supports reception of UDT, XUDT or LUDT,
- The RNC always sends UDT. LUDT may be sent to carry RESET RESOURCE, depending on the
size of the message.
On the MGW side:
On reception of LUDT frames on the IuCS interface, the MGW segments the received SCCP frame into several
UDT frames transmitted on the IuCS interface, since MTP3 NarrowBand is implemented on the IuCS interface.
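The frame-type choice described above can be summarized in a small decision sketch, using the payload limits quoted (255 bytes for UDT, 3952 bytes for LUDT over MTP3B):

```python
def connectionless_frame(payload_len, over_mtp3b):
    """Pick the SCCP connectionless frame type for a message.

    UDT carries up to 255 bytes. Over MTP3B a longer message fits
    a single LUDT (up to 3952 bytes); over narrowband MTP3 it must
    instead be segmented into XUDT frames.
    """
    if payload_len <= 255:
        return "UDT"
    if over_mtp3b and payload_len <= 3952:
        return "LUDT"
    return "XUDT (segmented)"

assert connectionless_frame(200, over_mtp3b=True) == "UDT"
assert connectionless_frame(1000, over_mtp3b=True) == "LUDT"
assert connectionless_frame(1000, over_mtp3b=False) == "XUDT (segmented)"
```

This is why the MGW, which terminates MTP3B on IuCS but MTP3 NarrowBand towards the CS core, must segment a received LUDT into several UDTs.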

3.8.3.4 TIMERS
Two inactivity control timers are specified at each end of an SCCP connection:
- the receive inactivity control timer T(iar), and
- the send inactivity control timer T(ias).

Rule: IuTEG_SCCP_3
The length of the receive inactivity timer must be longer than the length of the longest send
inactivity timer in the surrounding nodes.
It might be advantageous to make sure that the inactivity receive timer T(iar) is at least twice
the inactivity send timer T(ias) of the remote SCCP node.
It might be advantageous to make sure that the inactivity send timer T(ias) is at most half the
inactivity receive timer T(iar) of the remote SCCP node.
Having a T(iar) value at least twice the T(ias) value avoids the situation where the loss of one single IT (InactivityTest)
message (e.g. due to short-term MTP congestion) causes the inadvertent release of an otherwise inactive SCCP connection.
Loss of more messages (e.g. due to SP failure) will however still cause the connection to be released.
When any message is sent on a connection section, the send inactivity control timer is reset.
When any message is received on a connection section, the receive inactivity control timer is reset.
When the send inactivity timer T(ias) expires, an IT message is sent on the connection section.
Default values:


- T(ias) = 300 seconds,
- T(iar) = 660 seconds.
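The T(iar)/T(ias) relation can be expressed as a simple configuration check; a sketch for validating provisioned values against Rule IuTEG_SCCP_3 (function name and shape are illustrative, not a product tool):

```python
def check_inactivity_timers(local_t_iar, remote_t_ias):
    """Check that the local receive timer is at least twice the
    remote send timer, so that one lost IT message does not release
    an otherwise healthy SCCP connection."""
    return local_t_iar >= 2 * remote_t_ias

# Default values quoted above: T(ias) = 300 s, T(iar) = 660 s
assert check_inactivity_timers(local_t_iar=660, remote_t_ias=300)
assert not check_inactivity_timers(local_t_iar=500, remote_t_ias=300)
```

With the defaults, two consecutive IT messages (sent every 300 s) must be lost before the 660 s receive timer expires.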

Rule: IuTEG_SCCP_4
On the RNC, one pair of T(ias), T(iar) values is configured at the SCCP layer; hence the
values specified affect all Iu and Iur connections.
Remark:
The RNC doesn't offer the ability to configure a pair of T(ias), T(iar) values per provisioned DPC.

3.9

IU TOPOLOGY
On the Iu interface, different transport topologies may occur:
- ATM point-to-point connections or an ATM backbone,
- Different UMTS interfaces on one transmission link, or one UMTS interface on two or more
transmission links,
- SS7 associated mode or quasiAssociated mode,
- AAL2 switching function,
- Transmission network,
- Interworking with other-vendor equipment.
The following sections depict the different topologies which may occur on the Iu interfaces.

3.9.1

TRANSMISSION LINK:
The required number of STM1 links depends on dimensioning studies. For each working STM1 line, one protection
STM1 line is configured (APS).
The RNC 16pOC3/Stm1 PQC FP attributes must be set in such a way as to satisfy the requested connectivity.



[Figure: RNC i (1<i<24), with its 16pOC3/STM1 FPs (PS-FP 0 and PS-FP 1), connected through an ATM backbone over APS-protected STM1 links to the UMGw and USGSN 4pOC3/STM1 FPs, either directly or via aggregation nodes (AN); the USGSN is shown with its MAP, USD, USC, SG, I/O (saal) and Bill/LI cards.]

Figure 3-55 IU Transmission connectivity


Remark:
4pOC12/STM4 FP is available from a product point of view; its usage is not considered in the Reference
Architecture.

3.9.2

BICN NSS18:
The BICN is the name of the R4 CS coreNetwork. It is an enhancement of the R99 CS coreNetwork which consists
in splitting the UMTS signaling and bearer functions into two different kinds of node. Within the BICN, the MGC
is in charge of handling the UMTS callControl RANAP protocol, whereas the MGW is in charge of handling the bearer
and the transport callControl ALCAP protocol. The BICN is composed of one MGC and one or several MGWs.
In the Alcatel BICN solution, the VSP4e acts as one MGW.
The NSS18 release provides VoATM only.
The 2pGpDsk I/O FP (GeneralServicesProcessor with Disk) provides IP connectivity between the MGW shelf or
the aggregationNode and the MGC.
Either the 4pOC3/Stm1 singleMode or the 16pOC3/Stm1 singleMode FP provides the connectivity to the ATM-based
IuCS interface.



[Figure: NSS18 BICN IuCS configurations:
1/ MGw shelf behind the AN: IuCS CP via the USP and I/O (saal) FPs, AN ATM FPs towards the RNC-IN; RANAP VPI = 1-32, Transport VPI = 33-88;
2/ MGw shelf behind the AN, with 2pGpDsk FPs towards the MGC over 100BT;
3/ MGw shelf without AN, Ethernet links in the MGw shelf: on the Iu/Iur UNI STM1, 2 CS RANAP VPIs, up to 10 ALCAP VPIs, 1 PS RANAP VPI and up to 20 RNSAP VPIs;
4/ MGw shelf without AN, ATM connection to the CO through the ATM backbone (VP switching) and an MSS15, with RanapSigtran, Ranap and Alcap VPIs.]

Figure 3-56 NSS18 BICN supported IuCS configurations

3.9.3 BICN NSS19:
The NSS19 release introduces VoIP.
The MGW Shelf is populated with the 4pGigabit Ethernet I/O FP which provides connectivity to the IP backbone.
Either the 4pOC3/Stm1 singleMode or the 16pOC3/Stm1 singleMode FP provides the connectivity to the atm-based
IuCS interface.
Within the BICN, different configurations occur:
1. The MGW Shelf is composed of VSP4e MGW and I/O FP cards; the MGW Shelf is connected directly to the
IuCS interface without AN. The MGW Shelf communicates with the CO through the 4pGigEthernet FP (case of
Nb VoIP).
2. The MGW Shelf may also transmit the IuCS CP traffic to the CO through an atm interface; a MSS15k is then
required on the CO to convert the Transport layers: ATM -> Ethernet.



[Figure: two NSS19 IuCS configurations —
1/ MGw Shelf without AN, case Nb VoIP: 4pGigE FPs toward an ERS8600 and the IP backbone (Nb); Ranap VPI between the MGW and the MGC; IuCS Ranap Vpi, IuCS Alcap + UP Vpi, IuPS Vpi and Iur Vpi on the STM1 UNI toward the RNC-IN,
2/ MGw Shelf without AN, Atm connection to the CO: Atm backbone VP switching and a MSS15 toward the MGC; one VPI carries the RANAP atm dialogue between the MGW and the MGC.]

Figure 3-57 NSS19 BICN supported IuCS configurations

3.9.4 UTRAN WITH BICN:

Description:
On RNC side, one physical link is shared by Iu and Iur traffic.
Another physical link is in charge of carrying Iub traffic. The UTRAN RNC interfaces go through an ATM
backbone.
This configuration is the most restrictive when defining Vpi values.



[Figure: per NodeB, 1 VCC OAM and 1 VP (VCC OAM, VCC UP DS & NDS, VCC CP & CCP) over E1 toward the RNC-IN; up to 200 NodeBs per RNC. On the Iu/Iur STM1 of the 16pOC3/STM1 FP: IuCS Ranap Vpi, IuCS Alcap + UP Vpi, IuPS Vpi (VPI PS = 25 < 48), Iur Vpi (VPI 3 < 22) and RanapSigtran Vpi; an optional AN, then Atm backbone VP switching toward the MGw Shelves (up to 64 RNCs per MGw shelf, one RNC connected to up to 10 MGws), the MGC (via MSS15 and USP) and the USGSN; up to 20 neighbor (drift) RNCs per RNC on the Iur.]

Figure 3-58 IuCS and Iur sharing the same link, Iub is carried on another link.

3.9.5 TWO STM1/OC3 LINKS ON RNC:

Description:
On RNC side, different physical links are dedicated to different UTRAN interfaces. Moreover, on the RNC IU
interface, CS and PS are carried on two STM1 links; an ATM backbone is inserted on the Iu interface:
On the RNC IU interface:
- Two STM1 links are configured per RNC on the IU interface.
- The set of CS UP Vccs is split in two VPs, one VP on the first STM1 link, and one VP on the second STM1 link. Both VPCs have the same VPI.
- The set of PS UP Vccs is duplicated on both STM1s; on each STM1 the PS UP Vccs are grouped within a VPC. PS UP traffic is loadshared on the two links.
- Moreover, CS CP Vccs are grouped within the CS VP, and PS CP Vccs are grouped within the PS VP.
On the CoreNetwork IU interface:
- The number of STM1 or STM4 links depends on the results of dimensioning activities.
- VP_CS and VP_PS are distributed through the ASP on different WG links in such a way as to comply with the link engineering limits and port capacity.
- VP_CS and VP_PS from different links on the RNC must egress the ASP on different WG links in such a way as to keep only two VP values per RNC. Else 4 VP values are required per RNC on the IU.
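The CS split / PS duplicate logic above can be sketched as a small model. This is a minimal sketch assuming the VPI values of Figure 3-59 (VP_CS = 1, VP_PS = 2) and hypothetical VCC identifiers; it only illustrates the layout rule, not actual provisioning.

```python
# Sketch of the dual-STM1 Iu VP layout described above (illustrative values):
# - CS UP VCCs are split into two VPs with the same VPI, one VP per STM1 link,
# - PS UP VCCs are duplicated on both links (load-shared).
VP_CS, VP_PS = 1, 2  # VPI values as in Figure 3-59; assumption for the example

def iu_vp_plan(cs_up_vccs, ps_up_vccs):
    """Return the per-link VP content for the two Iu STM1 links."""
    half = len(cs_up_vccs) // 2
    return {
        "STM1_1": {VP_CS: list(cs_up_vccs[:half]), VP_PS: list(ps_up_vccs)},
        "STM1_2": {VP_CS: list(cs_up_vccs[half:]), VP_PS: list(ps_up_vccs)},
    }

plan = iu_vp_plan(cs_up_vccs=[100, 101, 102, 103], ps_up_vccs=[200, 201])
# CS VCCs are split between the links, PS VCCs are present on both:
assert plan["STM1_1"][VP_CS] == [100, 101]
assert plan["STM1_2"][VP_CS] == [102, 103]
assert plan["STM1_1"][VP_PS] == plan["STM1_2"][VP_PS] == [200, 201]
```

Note how both links carry the same VPI values, which is what allows the ASP to keep only two VP values per RNC.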


[Figure: RNCs (RncId = 1, 2, ..., 24), each with 2 VPs (VP_CS = 1, VP_PS = 2) per STM1 toward the AggregationNode; the ATM backbone VP-switches the VPs onto the Core Network links STM1_1/STM4_1 ... STM1_x/STM4_x, keeping VP_CS = 1 and VP_PS = 2 on the Iu.]

Figure 3-59 VP configuration, 2 IuCS links per RNC

3.9.6 PP15K-POC:
All UTRAN RNC interfaces go through a PP15k-POC or a PP7k-POC.
[Figure: RNC-SDH nodes (16pOC3 FPs) connected to the PP15k-POC (16pOC3 FPs) carrying Iub PNNI sPVCs and Iu/Iur PVCs; up to ten 4pDS3 FPs toward the Iub; 4pOC3 FPs toward the ANs, then IuCS PVCs to the MGws and IuPS&PC & Iur PVCs to the USGSN; IuPC over Ethernet toward the SAS.]

Figure 3-60 Topology with PP15k-POC, CS and PS on dedicated links



[Figure: same topology as Figure 3-60, except that IuCS PVCs + IuPS&PC & Iur PVCs share the same 16pOC3 link between the PP15k-POC and the AN.]

Figure 3-61 Topology with PP15k-POC, CS and PS on same link


RNC-SDH is populated with one pair of 16pOC3 FPs configured in APS mode.
Four OC3 concatenated links are configured on the RNC / POC interface; all UTRAN traffic flows go through this
interface.
Two kinds of ATM connections are configured between RNC-IN and POC:
- PNNI sPvcs for Iub atmConnections,
- Pvcs for IU and Iur atmConnections, as long as PNNI is not available on IU/Iur.

POC is connected to one or several RNC-SDH nodes.
It is used as a concentrator point for all UTRAN ATM connections.
POC is populated with:
- Two pairs of APS protected 16pOC3 FPs configured with concatenated OC3 links:
  - One pair of 16pOC3 FPs is dedicated to connections with the RNC,
  - The other is dedicated to connections toward the USGSN and UMGW.
    Either one OC3c link is dedicated to IuCS traffic and one OC3c link to IuPS+IuPC traffic, or
    one OC3c carries IuCS + IuPS + IuPC.
- Up to ten 4pDS3 channelized FPs.
Two DS1 links are reserved per nodeB.
OCAN doesn't cover the POC configuration.

3.9.7 AAL2 SWITCH:
On the IuCS interface, one or several AAL2 Switch node(s) are inserted.
Moreover, as an option, the RNC handles IuCS and Iur traffic on the same STM1(s); the adjacent AAL2 switch is in charge
of switching Iur traffic to the drift RNCs and IuCS traffic to the MGW(s).
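The routing role of the AAL2 switch can be sketched from its translation tables (Figure 3-62 lists PC, A2EA and Paths per remote node). This is a minimal sketch; the table entries, A2EA strings and path identifiers are hypothetical, not taken from a real configuration.

```python
# Minimal sketch of the AAL2 switch translation tables of Figure 3-62:
# one table per direction, keyed by destination A2EA, giving the remote
# pointCode and the outgoing AAL2 paths. All values are hypothetical.
iucs_table = {  # one entry per MGw
    "a2ea-mgw-1": {"pc": 2, "paths": [10, 11]},
}
iur_table = {   # one entry per (drift) RNC
    "a2ea-rnc-4": {"pc": 4, "paths": [20]},
}

def route_alcap(dest_a2ea):
    """Pick the translation entry used to forward an ALCAP establish request."""
    for table in (iucs_table, iur_table):
        if dest_a2ea in table:
            return table[dest_a2ea]
    raise KeyError(f"no translation entry for {dest_a2ea}")

assert route_alcap("a2ea-mgw-1")["pc"] == 2      # IuCS leg toward the MGw
assert route_alcap("a2ea-rnc-4")["paths"] == [20]  # Iur leg toward the drift RNC
```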



[Figure: the RNC (16pOC3/STM1 FP, ALCAP/MTP3/Saal/ATM Transport CP stack) carries IuCS (& Iur) toward the AAL2 Switch through the ATM backbone, with IuCS & IuR common resources: ALCAP SLs and Paths. The AAL2 Switch forwards IuR (ALCAP SLs, Paths, optional CP VCCs) to the drift RNCs and IuCS (ALCAP SL, Paths, optional CP VCCs) to the CS coreNetwork (from another vendor).
MTP3 RouteSet configuration: OPC = RNC PC, DPC = AAL2 Switch PC.
IuR translation table, per RNC: PC, A2EA, Paths. IuCS translation table, per MGw: PC, A2EA, Paths.]

Figure 3-62 Topology with AAL2 switch

3.9.8 QUASI ASSOCIATED MODE:

On the IuPS, IuCS or/and Iur interface may be inserted one or two STP(s).
The STP is a MTP3 interconnection node, and is therefore involved in the UMTS ControlPlane.
By inserting STPs within a semaphore network, the meshed topology becomes a star topology.
Such a semaphore network is called quasiAssociated mode.
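The mesh-to-star benefit can be made concrete by counting linkSets. This is a minimal sketch under the assumption that all MTP3 traffic transits the STPs; the node counts are illustrative only.

```python
# Why quasi-associated mode scales better: linkSets needed per node.
# In associated (meshed) mode each node needs one linkSet per peer;
# with STPs each node only needs one linkSet per STP, while routeSets
# (one per destination PC, as in Figure 3-63) remain unchanged.
def linksets_meshed(n_nodes: int) -> int:
    """LinkSets per node in a full mesh of n_nodes signalling points."""
    return n_nodes - 1

def linksets_quasi_associated(n_stps: int) -> int:
    """LinkSets per node when all MTP3 traffic goes through the STPs."""
    return n_stps

# Example: 8 signalling points (RNC, MGws, SGSN, drift RNCs...) and 2 STPs:
assert linksets_meshed(8) == 7
assert linksets_quasi_associated(2) == 2
```

With the two STPs of Figure 3-63, the RNC keeps four routeSets (RANAP CS/PS, ALCAP, RNSAP) but only two linkSets.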

[Figure: CallServer PC1, MGw PC2, SGSN PC3, driftRNCs PC4/PC5, RNC PC10 and STPs PC6/PC7, connected by SLs/LSs. RNC configuration:
- routeSet RANAP CS, PC10-PC1,
- routeSet ALCAP, PC10-PC2,
- routeSet RANAP PS, PC10-PC3,
- routeSet RNSAP, PC10-PC4,
- LinkSet 1, PC10-PC6,
- LinkSet 2, PC10-PC7.]


Figure 3-63 quasi associated mode topology
See the Erreur ! Source du renvoi introuvable. and 3.8.2.1 for the Alcatel nodes' compliancy with this topology.

3.9.9 IUFLEX
The IuFlex feature provides the RNC with the ability to connect to several MSCs (MGCs in the context of BICN) and
several SGSNs. This feature impacts the CS and PS coreNetwork nodes and the RNC. All the coreNetwork nodes
belong to the same UMTS network.
[Figure: IuFlex configuration — the RNC connects to a MGC Pool (MGC1/NRI=1, MGC2/NRI=2, ..., MGC5/NRI=5, up to MGC17/MGC18, each with its own PC, Nc interfaces between MGCs), each MGC controlling up to 10 MGws over Mc; per MGw: 12 UP vcc, 16 alcap vcc and 8 CP vcc toward the RNC (12 UP vcc and 16 alcap vcc only for some MGws); and to a SGSN Pool (SGSN 1/NRI=1, SGSN 2/NRI=2), with 4 UP vcc and 4 CP vcc per SGSN.]

3.9.10 HYBRID IUPS



[Figure: Hybrid IuPS — R99 traffic, signaling, Oam and the Hspa stream go from the NodeB and RNC over OC3 through the ATM backbone toward the CS core and the SGSN; Hspa I/B (optional) goes over GE through the IP backhaul toward the SGw and the SGSN.]

Figure 3-64, Hybrid IuPS


The backhaul must be composed of at least one IP router.

3.10 RNC ATM

3.10.1 FP
The RNC-IN is populated with the 16pOC3/Stm1 FP.
The Aggregation Node is populated either with the 4pOC3/Stm1 FP or the 16pOC3/Stm1 FP.
The UMGW and USGSN are populated with the 4pOC3/Stm1 FP for interface connectivity.
Summary of ATM card characteristics:

                          4pOC3/STM1 FP        16pOC3/STM1 FP
#OC3/STM1 links           4                    16
TrafficManagement ASIC    AQM                  APC
TrafficShaping            Single & Dual rate   SingleRate

For USGSN intershelf communication, up to three STM1 links are configured per shelf populated with USD cards.
The limitation of 3 STM1s on USD shelf interfaces is a result of a Passport ECMP limitation.
For each working line STM1, one protection line STM1 is configured.

3.10.1.1 16POC3/STM1 FP

The 16pOC3/STM1 FP is based on the APC TrafficManagement ASIC. Its ATM characteristics are those of the APC:

3.10.1.1.1 TRANSMISSION

The 16pOC3 manages 16 STM1/OC3 ports.
- APS:


1+1 inter-card line APS. APS configured for lines physically connected to different cards, also called dualFP APS.

Rule: IuTEG_16pOC3/Stm1 FP_1


When configuring dual-FP line APS on the 16-port OC-3 ATM FP, configure a pair of ports on
two adjacent FPs and the pair shares the same port number.

Line switching time in case of a fault (SF/SD on the line) is within 50 ms.

The APS is compliant either with [R80] or [R33 annex B].
When setting the 16pOC3/Stm1 FP, the reference recommendation is specified through the attribute:
ATTRIBUTE Laps protocol.
The value standard indicates that 1+1 linear APS is used, as described in the recommendations [R80] and
[R33 7].
The value g841AnnexB indicates that optimized 1+1 bidirectional switching is used, as described in the
standard [R33 Annex B].

Rule: IuTEG_16pOC3/Stm1 FP_2


When the 16pOC3/Stm1 FP is configured to behave according to [R33 annex B], then:
- ATTRIBUTE Laps mode must be set to bidirectional,
- ATTRIBUTE Laps Revertive must be set to noRevertive,
- ATTRIBUTE Laps holdOffTime (hoTime) must be set to 0.

The APS is configured through the LAPS component in the PP15k.

3.10.1.1.2 ATM

- ATM VPI/VCI range:
  - VPI range:
    - For UNI interfaces, the VPI field is coded on 8 bits, therefore the range of VPI values is [0, 255].
    - For NNI interfaces, the VPI field is coded on 12 bits, therefore the range of VPI values is [0, 4095].
  - VCI range:
    - For APC based cards, the VCI range is [32, 16383].
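The VPI ranges above follow directly from the header field widths (8 bits on UNI, 12 bits on NNI); a one-line check:

```python
# ATM VPI value ranges derived from the cell header field widths
# stated above: UNI = 8-bit VPI, NNI = 12-bit VPI.
def vpi_range(nni: bool) -> range:
    """All VPI values representable in the header field."""
    bits = 12 if nni else 8
    return range(0, 2 ** bits)

assert max(vpi_range(nni=False)) == 255   # UNI: [0, 255]
assert max(vpi_range(nni=True)) == 4095   # NNI: [0, 4095]
```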
AtmConnection capacity:
The 16pOC3/STM1 FP capacity is set through the connCap & protConnCap parameters. See [R200] 5.
ATM TrafficManagement:
16pOC3 FP characteristics:
#OC3/STM1 links:          16
TrafficManagement ASIC:   APC
TrafficShaping:           SingleRate, available on EP0 only, basic VPT

3.10.1.2 16POC3/STM1 ATM/POS FP

The 16pOC3/Stm1 ATM/POS FP is supported since Passport release PCR5-2 on PP15k, PP20K and RNC1500.
Prerequisites: before replacing the 16pOC3/Stm1 ATM FP by the 16pOC3/Stm1 ATM/POS FP,
- The Pnni Hairpin Removal feature is applied,
- QOS information mapping: the IpCos parameter has been migrated to the DiffServ parameter before
introduction of the new FP.

The FP supports 16 ATM over SONET/SDH links or 16 POS links.

The 16pOC3/Stm1 ATM/POS FP is also known as the 16pOC3/Stm1 MS3 FP, MS3 being the name of the FP's new architecture.

3.10.1.2.1 TRANSMISSION CHARACTERISTICS:
- 16 OC3/Stm1 ports, not channelized,
- SONET and SDH transmission types; SONET ports and SDH ports may be configured on one FP,
- Pluggable optics: long, intermediate and short reach single-mode optics,
- APS/MSP: 1+1 interCard APS, with options unidirectional/bidirectional, revertive or not revertive, equivalent
  to the APC based 16pOC3/Stm1 FP. The APS SD BER threshold is configurable,
- Equipment Protection (EP): the EP switching is triggered by a card removal, a card hardware failure, a software
  failure (causing a card crash) or a manual card reset,
- OAM signals: LOS, LOF, LOP, AIS, MS-RDI, FEBE, P-RDI, BIP, Header Error Check Sequence,
- Clock: the clockingSource attribute does not support the value Line.

3.10.1.2.2 ATM CHARACTERISTICS:
- atmInterface type: UNI, NNI,
- Vpi range: [0, 255] for UNI, [0, 4095] for NNI,
- Vci range for user traffic: [32, 65535],
- The Vpi and Vci ranges are further limited by the conmap parameters,
- The FP resources:

Rule: IuTEG_16pOC3/Stm1 atm/Pos_01

- Up to 45000 atmConnections per FP,
- Up to 32000 perVc queues,
- Since only perVc queues are used in the context of UMTS, set the Lp/Eng/Arc component to 32000,
- Up to 16000 atmConnections per port.
Remark:
- If more than 32000 atmConnections are configured, then there are not enough perVc queues; the
  atmConnections are assigned either to common queues or to a mix of common and perVc queues.

AtmConnection types: Pvc, Pvp, Svc, Svp, sPvc, and sPvp,


VPT:
- BasicVpt. StandardVpt is not supported. As a result the Vpd/vptType attribute must be set to basic.
- The VPT CAC is optional.
- FP VPT capacity:

Rule: IuTEG_16pOC3/Stm1 atm/Pos_02

Up to 512 basicVPTs per FP, up to 256 basicVPTs per port.

AtmSignaling: Pnni and Hpnni v1.0, Uni 3.0/3.1/4.0,


EFCI: One EFCI threshold is set per CC (CongestionControl level).
Not Supported: CES.

3.10.1.2.2.1 QOS/QOSINFORMATION:
- serviceCategories: Cbr, rtVbr, nrtVbr, Ubr, Ubr with Mdcr,
- EP (EmissionPriority) range: [0, 7],

Rule: IuTEG_16pOC3/Stm1 atm/Pos_03

Mapping between ServiceCategory and EmissionPriority:
Two different SCs cannot be mapped to one EP, unless they are both shaped.

EP CBR, EP rtVBR, EP nrtVBR, EP UBR



Not supported: ABR serviceCategory.

3.10.1.2.2.2 QOS/SCHEDULING:
The QOS scheduling is provided by the GQM ASIC:
- Connection scheduler (same mechanism as for the AQM based FP):
  - WFQ:
    The connection queue weight is either proportional to ECR (default), PCR or SCR, or
    the connection queue weight is explicitly configured; weight range [1, 4095],
  - SFQ:
    The shaping rate is either the PCR or both PCR and SCR,
- ClassScheduler:
  - AbsolutePriority for EP0 and EP1,
  - MBG available for EP2 to EP7.

Once the absolute EPs (EP0 and EP1) have been served, the 16pOC3PosAtm FP divides the total residual bandwidth
among the EPs proportionally to the MBG configured against each EP. This is referred to as Weighted Fair Queuing
(WFQ), where the MBG represents the weight of the EP. The priority of the EP has no impact on the amount of
bandwidth it receives.
As a consequence the MBG has a major impact on the QOS: cell loss, delay and jitter.
Remark:
The 16pOC3/STM1 ATM FP first allocates to each EP its minimum guaranteed bandwidth, and then
allocates the remaining bandwidth among the EPs according to a strict priority, such that a lower EP may
not receive any extra bandwidth if the higher EPs have consumed it all.
The MBG has precedence over EP0 and EP1.
This type of scheduling is referred to as Priority Guaranteed Queuing (PGQ).

3.10.1.2.2.3 MBG:
The MBG parameter configured against a nonAbsolute EP is taken into consideration by the ClassScheduler.
The MBG is set through the command:
ATTRIBUTE AtmIf Ep minimumBandwidthGuarantee (minBw).
The parameter is set either with:
- a MBG value in the range [0, 100], or
- the value Priority.
Mixing priority and MBG provisioning is not supported on the 16pOC3/Stm1 Atm/Pos FP. What is supported
is either specifying the priority option for all EPs, or the explicit configuration of the MBG value for all EPs
used on one atmInterface.

Rule: IuTEG_16pOC3/Stm1 atm/Pos_04

On one 16pOC3/Stm1 Atm/Pos FP atm interface:
- if a MBG value is configured against an EP, then a MBG value must be configured against
  each non absolute EP (EP2 to EP7),
- else, the minimumBandwidthGuarantee is set with the Priority value for each non absolute EP.
A mix of explicit MBG values and Priority is not allowed on an atm interface.

The explicitly configured MBG values must satisfy the following rules:

Rule: IuTEG_16pOC3/Stm1 atm/Pos_05

If each non absolute EP is configured with an explicit MBG value, then on one atm interface
the MBG values must be chosen in such a way to satisfy:
Sum of MBG, EP2 to EP7 = 100%
MBG EP2 > MBG EP3 > MBG EP4 > MBG EP5 > MBG EP6 > MBG EP7.
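The two constraints of rule IuTEG_16pOC3/Stm1 atm/Pos_05 lend themselves to a simple configuration check. This is a sketch; the sample MBG values are illustrative, not recommended settings.

```python
# Sketch of a check for rule IuTEG_16pOC3/Stm1 atm/Pos_05: explicit MBG
# values on one atm interface must sum to 100% and be strictly decreasing
# from EP2 to EP7. Sample values are illustrative only.
def mbg_config_valid(mbg_by_ep: dict) -> bool:
    """mbg_by_ep maps EP number (2..7) to its explicit MBG value in %."""
    values = [mbg_by_ep[ep] for ep in sorted(mbg_by_ep)]
    sums_to_100 = sum(values) == 100
    strictly_decreasing = all(a > b for a, b in zip(values, values[1:]))
    return sums_to_100 and strictly_decreasing

assert mbg_config_valid({2: 40, 3: 25, 4: 15, 5: 10, 6: 6, 7: 4})
assert not mbg_config_valid({2: 40, 3: 25, 4: 15, 5: 10, 6: 5, 7: 4})   # sums to 99
assert not mbg_config_valid({2: 20, 3: 20, 4: 20, 5: 20, 6: 10, 7: 10}) # not decreasing
```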
Expected QOS behavior according to the explicitly configured MBG value or with Priority:
1/ Case of MBG = explicitly configured value in the range [0, 100]:
The MBG configured against the non absolute EPi (EP2 to EP7) specifies the proportion of the bandwidth
assigned to the EPi after the absolute priority EPs have been served.
Eg, case of 4 non absolute EPs (EP2, EP3, EP4, EP7):
GuaranteedBw(EP3) = [linkRate - (EP0+EP1) traffic] * MBG_EP3 / (MBG_EP2 + MBG_EP3 + MBG_EP4 + MBG_EP7)
The MBG does not protect low priority traffic against starvation resulting from the two absolute
priorities.
Rule: IuTEG_16pOC3/Stm1 atm/Pos_06


On GQM based FP, the MBG assigned against an EP has no precedence over absolute
EmissionPriority (EP0, EP1).

Remark:
- On the APC based card, the guaranteed bandwidth = MBG * link bw; in other words the guaranteed
  bandwidth does not take into consideration the AbsolutePriorities EP0 and EP1.
- If real time traffic (eg: CBR or rtVbr) is mapped to EP2, then the MBG assigned to EP7 adds potential
  delay and jitter on the real time traffic.
- An EP class queue is able to transmit beyond its guaranteed bandwidth if bandwidth is available.
- If the two absolute EPs (EP0 and EP1) don't consume any bandwidth, then whatever the MBG configured
  against the non absolute EPs, the non absolute EPs may transmit up to the linkRate.
See MBG value in 5.
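The proportional-share formula above can be sketched numerically. This is a minimal model of the GQM class-scheduler behavior described in the text; the link rate (an approximation of the STM1 ATM cell rate) and the MBG values are illustrative assumptions.

```python
# Sketch of the WFQ class-scheduler behavior of the GQM based FP:
# EP0/EP1 are served first, then the residual link bandwidth is shared
# among the non absolute EPs in proportion to their MBG values.
# Rates are in cells/s and purely illustrative.
def guaranteed_bw(link_rate, absolute_traffic, mbg_by_ep):
    """Guaranteed bandwidth of each non absolute EP after EP0+EP1 are served."""
    residual = link_rate - absolute_traffic
    total_mbg = sum(mbg_by_ep.values())
    return {ep: residual * mbg / total_mbg for ep, mbg in mbg_by_ep.items()}

# ~353207 cells/s approximates an STM1 ATM payload; EP0+EP1 consume 53207:
bw = guaranteed_bw(353207, 53207, {2: 50, 3: 30, 4: 15, 7: 5})
assert round(bw[3]) == 90000               # 300000 * 30 / 100
assert round(sum(bw.values())) == 300000   # the whole residual is distributed
```

As the last remark above states, if EP0/EP1 are idle (`absolute_traffic = 0`), the non absolute EPs may share the full link rate regardless of their MBG values.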

2/ Case of MBG = Priority:

A value of priority means the EP is scheduled according to a strict priority scheme without providing any
additional bandwidth guarantee to the EP.
The system assigns a weight to each non absolute EP, in such a way as to obtain a Priority Guaranteed
Queuing behavior. The weight assigned to each non absolute EP depends on how many EPs are used.
See MBG value in 5.

3.10.1.2.2.4 QOS/DISCARD MECHANISM:

Same mechanism as for the AQM based FP.
Four discardPriority thresholds are set on a connection queue. Default values:

            DP0     DP1     DP2     DP3
Threshold   100%    90%     75%     35%

The discard thresholds apply to the common and the perVc queues.
The four DP thresholds are not configurable.
The mapping of (SC+CLP) to DP (DiscardPriority) is hardcoded:

Rule: IuTEG_16pOC3/Stm1 atm/Pos_07

DP        Clp0    Clp1
Cbr       DP1     DP3
rtVbr     DP1     DP3
nrtVbr    DP2     DP3
Ubr       DP3     DP3

WiseDiscard:
Within each of the 4 CCs (congestionControl levels), the EPD, PPD and EFCI thresholds are set:

Rule: IuTEG_16pOC3/Stm1 atm/Pos_08

The CC, EPD, PPD and EFCI thresholds are hardcoded with the following values:

perVc Queue
Thresholds   DP      PPD     EPD
CC0          100%    99%     90%
CC1          90%     89%     80%
CC2          75%     74%     65%
CC3          35%     34%     25%
EFCI         35%

Remark:
On the AQM based card, the EPD thresholds are configured through the epdOffset attribute, which is not
supported by the GQM.

3.10.1.2.2.5 TRAFFICMANAGEMENT:
The trafficManagement features are provided by the GQM:
- Policing:
  - dual rate GCRA, discarding, tagging,
- TrafficShaping:
  - SingleRate and dualRate trafficShaping,
  - The ShapeFairQueueing may apply to up to 4 EPs, with EPs in the range [0, 7],
  - Unshaped atmConnections are permitted on a shaped EP.

Rule: IuTEG_16pOC3/Stm1 atm/Pos_09

The minimum shaping rate is 100 cells/s.

3.10.1.2.2.6 AAL5:
- EPD, PPD, LPD and W-RED supported
  (only EPD is supported in the APC based 16pOC3/Stm1 FP).

3.10.1.2.2.7 OAM:
- According to I.610,
- PM and CC not supported.

3.10.1.2.3 IP CHARACTERISTICS:

The FP IP forwarding capacity: 2.5 Gb/s.


FP characteristics:
The FP supports:
- Static routing,
- MPE,
- inverseARP,
- LocalMedia,
- ECMP.
The FP does not support:
- DHCP relayAgent.
QOS:
The IPCos parameter is removed and replaced by the DSCP parameter.
As a result the DSCP is mapped directly to a SC. The removal of the IpCos parameter must be done before
introducing the 16pOC3/Stm1 POS/Atm FP in the RNC.
VR/n
|-------- DiffServ/I (new)
|-------- IP

ECMP:
The FP supports flow based ECMP with up to 3 nextHops.
To activate IP route traffic loadSharing amongst several paths:
- At least 2 and up to 3 nextHops are configured under the IP route,
- The ecmpMode parameter is set to perFlowEnh.

Rule: IubTEG_16pOC3/Stm1 atm/Pos_12

ecmpMode = perFlowEnh when ECMP is required.

The AtmMpe Pnni sPvc Src/Dest carrierGrade.
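Flow-based ECMP means each flow is pinned to one of the configured nextHops, so packets of a flow never reorder across paths. The sketch below illustrates that behavior with a generic hash; the hash function and addresses are assumptions for illustration, not the Passport's actual perFlowEnh algorithm.

```python
# Sketch of flow-based ECMP: a flow (src, dst) is deterministically hashed
# to one of up to 3 configured nextHops, so all packets of the flow take
# the same path. CRC32 stands in for the real (unspecified) flow hash.
import zlib

def pick_next_hop(src, dst, next_hops):
    """Deterministically map a flow (src, dst) to one nextHop."""
    assert 2 <= len(next_hops) <= 3, "ECMP on this FP: 2 or 3 nextHops"
    h = zlib.crc32(f"{src}->{dst}".encode())
    return next_hops[h % len(next_hops)]

hops = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]  # hypothetical nextHops
first = pick_next_hop("192.168.1.10", "172.16.0.5", hops)
# The same flow always maps to the same nextHop:
assert all(pick_next_hop("192.168.1.10", "172.16.0.5", hops) == first
           for _ in range(100))
```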

3.10.1.2.4 TRAFFICMEASUREMENT
- All the Sonet and SDH spooled statistics from the Passport are supported,
- All the Atm interface and Vpt spooled statistics from the Passport are supported.

3.10.1.2.5 HARDWARE:
- The FP is based on the GQM ASIC,
- Carrier grade: Hitless Software Migration (HSM),
- Y-Splitter is supported,
- Atm-Spy is supported,
- GQM:
  - Provides both perSC common queues and perVc queues,
  - The conmap parameters are no longer configured; the FP supports a dynamic management of the atmConnection
    identifier resources.

3.10.1.3 4POC3/STM1 FP

3.10.1.3.1 TRANSMISSION

The 4pOC3/Stm1 FP manages 4 OC3/Stm1 ports.
- APS:
Supports SONET or SDH line APS (single-FP or dual-FP) between predesignated pairs of ports.
When configuring dual-FP line APS on the 4-port OC-3 ATM FP, configure a pair of ports on two adjacent
FPs.

Rule: IuTEG_4pOC3/Stm1 FP_1


When configuring dual-FP line APS on the 4-port OC3/Stm1 FP, configure a pair of ports on two
adjacent FPs and the pair shares the same port number.

For example, configure port 0 on the FP in slot 2, and port 0 on the adjacent FP in slot 3.

The 4pOC3/Stm1 FP is compliant with [R33] section 7.1 and [R80] 5.3. The 4pOC3/Stm1 FP is not compliant
with [R33 annex B].

The APS is configured through the LAPS component in the PP15k.

3.10.1.3.2 ATM

4pOC3 FP characteristics:
#OC3/STM1 links:          4
TrafficManagement ASIC:   AQM
TrafficShaping:           Single & Dual rate, available on two EPs, Standard & Basic VPT

3.10.1.4 PSFP/DCPS

The UA6 RNC is populated either with PSFP or DCPS FPs.


The PSFP/DCPS FP handles the userPlane traffic and the controlPlane traffic (NBAP, RANAP, RNSAP, ALCAP,
SCCP, MTP3, SAAL-NNI and SCTP).
A PSFP/DCPS FP is composed of 6 PMC cards and one PDC.
A PMC-Role is assigned to each PMC. Six different PMC roles are specified: PMC-PC, PMC-RAB, PMC-M,
PMC-NI, PMC-TMU and PMC-OMU.
The PMC role is driven by the PMC position in the PSFP and the PSFP position in the shelf:
- The PMC-M are hosted by the DCPS FP cards in the first and the second slots,
- The PMC-OMU are hosted by the third and fourth PSFP cards,
- The PMC-NI are hosted by the third and fourth PSFP cards,
- There is one PMC-PC per PSFP card,
- There are one or two PMC-TMU per PSFP card. There are 14 PMC-TMU on a RNC composed of 12 DCPS FPs
  and 12 PMC-TMU on a RNC composed of 10 PSFPs. The PMC-TMU is in position 1 on each PSFP card. The
  PSFP cards with two PMC-TMU are located in slots 10 and 11 and the second PMC-TMU is in PMC position 5,
- The PMC-RAB role is assigned to all the remaining PMCs.
Besides:
- The active and standby CP are in slots 0 and 1 respectively,
- The active and standby 16pOC3 FP are located in slots 8 and 9 respectively,
- The active and standby 4pGE FP are located in slots 14 and 15 respectively.
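The role counts above can be summarized per RNC flavour. This is a sketch derived only from the figures stated in this section (ATM RNC = 12 DCPS FPs, Hybrid RNC = 10 DCPS FPs); it is a bookkeeping aid, not a provisioning tool.

```python
# Sketch of the PMC role counts per RNC flavour, using the figures
# given in the text: ATM RNC = 12 DCPS FPs, Hybrid RNC = 10 DCPS FPs.
def pmc_role_counts(hybrid: bool) -> dict:
    n_fp = 10 if hybrid else 12
    return {
        "PSFP/DCPS": n_fp,
        "PMC-PC": n_fp,                    # one per FP
        "PMC-TMU": 12 if hybrid else 14,   # one per FP, plus two doubled-up cards
        "PMC-RAB": 32 if hybrid else 40,
        "PMC-M": 2,                        # 1+1 redundant pair
        "PMC-OMU": 2,                      # 1+1 redundant pair
        "PMC-NI": 2,                       # 1+1 redundant pair
    }

assert pmc_role_counts(hybrid=False)["PMC-TMU"] == 14
assert pmc_role_counts(hybrid=True)["PMC-RAB"] == 32
```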

3.10.1.4.1 THE PDC AND PMC-ROLE DESCRIPTION:

PDC:
Each PDC provides management functions for its one DCPS FP card only.


Moreover the PDC is in charge of handling the SaalNNI and the Sctp.
The SaalNNI and Sctp traffic is distributed over all the PDCs, with the exception of those running the PMC-NI.
Each PDC is configured with one external IP@ which is used as the RNC CP IP@ on the IuPS interface.

PMC-OMU:
The OAM functionality is implemented on 2 PMC-OMU components located on different PSFP cards.
The PMC-OMU handles also the CBS traffic (IuBC).
The sparing scheme is 1+1. The PMC-OMU switch produces no interruption of service except in case of
double fault scenarios.
PMC-M:
The PMC-M hosts the CID management and the Alcap for the Iub, Iur and IuCS interfaces.
The RNC is populated with two PMC-M components, working in a 1+1 redundancy scheme.
The two PMC-M components reside on different PSFP cards.
The PMC-M switch produces no interruption of existing calls except in case of double fault scenarios. New
calls cannot be initiated for approximately 5 seconds.
PMC-NI:
The MTP3b and SCCP protocols are implemented on two PMC-NI components located on different
DCPS FP cards.
Moreover the PMC-NI handles the locationServices traffic (IuPC).
The RNC is populated with two PMC-NI components, working in a 1+1 redundancy scheme.
The sparing scheme is 1+1. The PMC-NI switch produces no interruption of existing calls or service except
in double fault scenarios.
The PMC-NI are on PSFP cards 2 and 3.
PMC-TMU:
The PMC-TMU is in charge of processing the UMTS protocols: RANAP, RNSAP, and NBAP (supports
RRM, RRC, and aal2Link CAC).
Moreover the PMC-TMU is in charge of the UMTS heartBeat.
Up to 14 PMC-TMU components in the ATM RNC, and up to 12 PMC-TMU in the Hybrid RNC, are spread
over all the DCPS FPs.
The sparing scheme is N+1 if there are 7 or fewer PSFPs, and N+2 otherwise.
If a PMC-TMU fails, the control planes for the NodeBs and cells managed by that TMU are automatically
taken over by one of the spare PMC-TMU within 30 seconds, but individual radio links are lost. Individual
calls are managed by all the PMC-TMU in load sharing mode. If a PMC-TMU fails, the calls on that PMC-TMU are lost.

PMC-RAB:
The UMTS user plane is handled in up to 40 PMC-RAB within an ATM RNC, and in up to 32 PMC-RAB
within a Hybrid RNC, spread over all the PSFPs in load sharing mode.
Cell common channel user plane functionality is spared, so if one PMC-RAB fails its cell common channel
user plane load is taken over by up to 5 other PMC-RAB on other DCPS FP cards with no loss of common
channel service.
Individual call user planes are managed by all the PMC-RAB in load sharing mode.
If a PMC-RAB fails, the calls on that PMC-RAB are lost.
Each PMC-RAB is configured with one external IP address used as the RNC IuPS UP IP@.

PMC-PC:
Up to 12 PMC-PC within an ATM RNC and up to 10 PMC-PC within a Hybrid RNC.
PMC-PC sparing scheme: N+1.
RNC ATM interface, PMC-PC function:
- The PMC-PC is the aal2 Path point of termination.
- The PMC-PC achieves the aal2 to UDP conversion.
- The aal2 paths and the PMC-PC functionality are spared, so if one PMC-PC fails its paths and load
  are spread over up to 5 other PMC-PC located on other DCPS FP cards without interruption of cell
  common channels or calls in progress.
RNC Iub UP IP interface, PMC-PC function:


- The PMC-PC is the IP route point of termination.
- One external IP@ is assigned to each of the N PMC-PC. No IP@ is assigned to the redundant PMC-PC.
- The PMC-PC IP@ policyAssignment is: movable.
- The PMC-PC IP@ is used as the RNC Iub UP IP@.

3.10.1.4.2 RNC PSFP/DCPS FP COMPOSITION:

Two cases are taken into consideration: the atm RNC and the Hybrid RNC.
A Hybrid RNC is populated with both 16pOC3 FP and 4pGE FP interface cards, whereas the ATM RNC is populated
with only the 16pOC3 FP interface card.
Case of the atm RNC:
Since 2 RNC slots are consumed by the interface cards, the RNC hardware composition becomes:
- Up to 12 PSFP,
- Up to 12 PMC-PC,
- Up to 14 PMC-TMU,
- Up to 10 PDC-saalNNI,
- Up to 40 PMC-RAB.

[Figure: the 16 RNC slots — CP3 in slots 0 & 1, 16pOC3 FPs in slots 8 & 9, DCPS 0 to 11 in the remaining slots; each DCPS carries its Pdc-saalNni plus PMC-PC, PMC-TMU and PMC-RAB, with the PMC-M, PMC-OMU and PMC-NI pairs on their dedicated cards.]

Figure 3-65, ATM RNC DCPS FP composition


Case of the hybrid RNC:
Since 4 RNC slots are consumed by the interface cards, the RNC hardware composition becomes:
- Up to 10 DCPS FP,
- Up to 10 PMC-PC,
- Up to 12 PMC-TMU,
- Up to 8 PDC-SCTP/SaalNNI; the PDC handles both SCTP and SaalNNI,
- Up to 32 PMC-RAB.


(figure: hybrid RNC slot map, slots 0&1, 8&9 to 13 and 14&15 — DCPS FPs 0 to 9 hosting PDC, PMC-RAB, PMC-TMU, PMC-PC, OMU, PMC-M and NI processors, plus the 16pOC3 FP and 4pGE FP interface cards and CP3)
Figure 3-66, hybrid RNC DCPS FP composition


Rule: IuTEG_PSFP
The RNC supports up to 10 PSFP/DCPS FP when populated with two 16pOC3 FP and two 4pGE FP.

3.10.2 SS7 PROTOCOL STACK:


Initially the SS7 protocol stack was implemented on the RNC-CN. The SS7 has been migrated to the RNC-IN:
- MTP3/SCCP layers and SAAL-NNI layers are handled in the RNC-IN, while
- RANAP, ALCAP and RNSAP are still handled by the RNC-CN.
The RNC-IN is the termination point of the following layers on the Iu and Iur connection control plane:
- MTP-3B
- SCCP
- SAAL-NNI
- AAL5/ATM/L1
The following SS7 limitations apply to the RNC:

Rule: IuTEG_RNC-SS7_1
The SS7 resources in the RNC are limited to:
- The RNC is identified by one single own PC,
- Up to 1024 signalingLinks,
- Up to 64 DPC,
- Up to 64 linkSets,
- Up to 64 routeSets,
- Up to 10 IuCS Alcap PointCodes,
- Up to 10 Iur Alcap PointCodes per driftRNC.


(figure: the RNC-CN TMUs host RANAP, RNSAP and ALCAP; the RNC-IN hosts SCCP and MTP3B/Q2150-1 on the PSFP AP-NI and SAAL-NNI on the PSFP PDC, over AAL5/ATM/L1; RNC-CN and RNC-IN are linked by the Icn interface)
Figure 3-67 CP Protocol Stack on RNC


Remark:
MTP3B and ALCAP communicate through STC (AAL type 2 SignallingTransportConverter on broadband MTP
also called Q2150.1).

3.10.2.1 ICN INTERFACE, SS7 MIGRATION FROM RNC-CN TO RNC-IN IMPACT:


The SSCOP connections terminate in the RNC-IN.
The MSUs (MTP frames) received from the Iu and Iur interfaces on the CP vccs are treated in the RNC-IN.
Moreover the SCCP frames are also treated in the RNC-IN.
The ALCAP, RANAP and RNSAP messages extracted from the MTP and SCCP userField are transmitted to the
RNC-CN within the still configured Icn internal vccs.
Therefore the Iu and Iur CP vccs are no longer switched in the RNC-IN but only terminate on the RNC-IN.
The Icn interface ATM configuration becomes:
- Iub CP and CCP vccs: still VC switched in the RNC-IN to the RNC-CN, therefore the Icn vccs dedicated
to the Iub CP and CCP vccs are still configured,
- TMU Icn internal vccs:
These vccs carry traffic between:
- RNC-IN MTP3 and RNC-CN MTP3-User (Alcap),
- RNC-IN SCCP and RNC-CN SCCP-Users (Ranap and Rnsap),
- OMU Icn internal vccs:
These vccs are loaded with the internal Management traffic terminating on the OMU.

Rule: IuTEG_RNC-SS7_3
- The Iu and Iur CP vccs are no longer VC switched on the Icn interface.
- The Icn internal vccs are still configured:
- 1 internal vcc per OMU (2 OMUs within a RNC-CN), and
- 1 internal vcc per existing TMU (up to 14 TMUs within a RNC-CN).
- The Iub CP & CCP vccs are still VC switched on Icn dedicated vccs.

3.10.2.2 RNC ARCHITECTURE IMPACT:


Two new elements are configured in RNC-IN for handling MTP3 and SaalNNI:
- A pair of PSFP PMC are reserved for handling SCCP/MTP3B layers, called AP-NI,


- Software in the PSFP PDC for handling SaalNNI layers.

AP-NI (MTP3b, SCCP endPoint):


A pair of PMC processors has been allocated for SS7 layers. The AP-NIs are allocated to fixed PMC,
depending on the card availability.
These processors operate in a hot spared (1+1) configuration with one elected as active and the other as
passive. Failure of the active AP-NI results in the processes on the passive AP-NI taking activity without
the loss of any ongoing calls.
When the system is initialized, the loader controller ensures that the AP-NI processors are allocated to:
- different PSFP cards,
- PSFP with no PMC-M.
(i.e.: In a system with four PSFP cards these master processors are distributed so that each PSFP contains
one of the PMC-M processors or one of the AP-NI processors).
MTP3 and SCCP protocol timers are specified per routeSet.
PSFP PDC (SaalNNI endPoint):
The saalNNI layers are deployed on the PDC processors of some PSFP cards:
- In an RNC with four or more PSFP cards, the PDC processors on the same PSFP cards as the AP-NI
processors do not handle SaalNNI connections.
- In an RNC with fewer than four PSFP cards no exclusion is made; the saalNNI connections are spread
over all the PSFPs.

Rule: IuTEG_RNC-SS7_4
When the RNC is populated with at least 4 PSFPs, the SaalNNI connections are spread over
[#PSFP - 2] PSFPs.
Up to 10 PSFPs are involved in the SaalNNI traffic.
One saalNNI connection is established per MTP3 signalingLink.
A set of protocol timers is associated per SaalNNI instance.

(figure: example RNC-IN with six PSFPs — two PSFPs host the PMC-M processors, two others host the AP-NI pair, the PDCs with saalNNI sit on the PSFPs without AP-NI, and the remaining PMCs are PMC-RB and PMC-PC)
Figure 3-68 Example of PSFP component role assignment.

PSFP Shelf A/B:


At the initial PSFP activation, the software assigns each PSFP to one of two logical shelves, called shelf A
and shelf B.
The system assigns the active AP-NI to a PSFP belonging to one type of shelf, whereas the standby AP-NI
is assigned to a PSFP belonging to the other type of shelf.



Furthermore, the SaalNNI connections supporting the SLs of one LS are spread across the PSFPs
belonging to the different shelves; at least one SaalNNI connection of one LS is assigned to each logical
shelf.
E.g.:
Assume an RNC configured with:
- 12 PSFPs, of which 10 PSFPs are available for the SaalNNI PDC,
- 4 LinkSets configured: LS1, LS2, LS3 and LS4,
- Each LS populated with 4 SLs.
The 10 PSFPs are first evenly distributed in shelf A and shelf B. Then the SLs within one LS are alternately
assigned to a shelf A PSFP and a shelf B PSFP.
Remark:
For the even LS, the SL assignment is started on the A side; for the odd LS, the SL assignment is started on
the B side.

PSFP#   Shelf A/B   LS# - SL#
1       B           1-0, 3-2
2       A           1-1, 3-3
3       B           1-2, 4-1
4       A           1-3, 4-0
5       B           2-1, 4-3
6       A           2-0, 4-2
7       B           2-3
8       A           2-2
9       B           3-0
10      A           3-1
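The spreading rule above (odd LS starting on the B side, even LS on the A side, successive SLs alternating between the shelves) can be expressed as a small algorithm. The following Python snippet is an illustrative sketch only — the function and the odd/even PSFP-to-shelf split are assumptions, not the RNC software — but it reproduces the example assignment:

```python
from itertools import cycle

def assign_sls(num_psfp=10, num_ls=4, sls_per_ls=4):
    """Spread the SLs of each LS over the SaalNNI PSFPs of shelves A and B.

    Assumptions for this sketch: odd-numbered PSFPs sit on shelf B and
    even-numbered PSFPs on shelf A; each shelf is consumed cyclically.
    """
    shelf = {"A": cycle(range(2, num_psfp + 1, 2)),  # PSFP 2, 4, 6, ...
             "B": cycle(range(1, num_psfp + 1, 2))}  # PSFP 1, 3, 5, ...
    table = {}  # (ls, sl) -> psfp
    for ls in range(1, num_ls + 1):
        side = "B" if ls % 2 else "A"          # odd LS start on the B side
        for sl in range(sls_per_ls):
            table[(ls, sl)] = next(shelf[side])
            side = "A" if side == "B" else "B"  # alternate the shelves
    return table

table = assign_sls()
# e.g. LS1-SL0 lands on PSFP 1 (shelf B) and LS1-SL1 on PSFP 2 (shelf A)
```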

Abnormal cases:
- In case of sscop connection failure, due to a PS-FP or PDC-saalNni failure, the RNC does not re-establish
the sscop connection on another PSFP or PDC-saalNni. To cover such an event, the RNC
diverts the traffic from the failed sscop connection to a still alive sscop connection belonging to the
same linkSet.
- The saalNni connections are not rebalanced after deleting one saalNni connection or removing one
PSFP hosting a saalNni connection.
- The saalNni connections are not rebalanced after adding one saalNni connection or one more PSFP.

3.10.2.3 AMOUNT OF SL PER ROUTESET:


The determination of the amount of SLs per routeSet must satisfy several criteria:
- At least 2 SLs must be configured per routeSet; the traffic is then secured thanks to the MTP3 COO
mechanism,
- The amount of SLs per routeSet should be a power of 2 (i.e.: 2, 4, 8, 16); these SL numbers
assure an even traffic distribution over all the SLs,
- The SaalNNI traffic must be distributed over all the PSFPs in charge of SaalNNI; such a condition
depends on the RNC configuration,
- Moreover, in case of interworking, the amount of SLs per routeSet must satisfy the otherVendor constraints.
Remark: The amount of SLs does not result from the interface bandwidth, since each SL is carried over an ATM vcc
(the ATM vcc bandwidth is configurable through a trafficDescriptor).



Summary of the different RNC SS7 routeSets:

RouteSet      #RouteSets
IuCS RANAP    1
IuCS ALCAP    Up to 10
IuPS RANAP    1
Iur RNSAP     Up to 24
Iur ALCAP     Up to 10 per drift RNC (note)

Note: Case of interworking with otherVendor RNC.


Moreover, taking into consideration that:
- there is only one IuCS RANAP routeSet and one IuPS RANAP routeSet, whereas there are up to 20 Iur RNSAP
routeSets,
- the CS and PS routeSet traffic volume is higher than the Iur routeSet traffic volume,
then, to achieve a good distribution of the signaling traffic on the PSFPs, more SLs are assigned to the
IuCS routeSet and the IuPS routeSet than to the Iur routeSets.
The following table suggests the amount of SLs per routeSet, the target being to distribute the signaling traffic
among the SaalNNI PDCs:

Rule: IuTEG_RNC-SS7_5
(table: suggested amount of SLs per routeSet for RNC configurations with 10 or 12 PSFPs, i.e. 10 PDC SaalNNI;
columns: IuCS RANAP, IuCS ALCAP, IuPS RANAP, Iur RNSAP and Iur ALCAP; for Iur RNSAP and Iur ALCAP:
2 or 4 SLs per driftRNC, see notes 1 and 2)

Note 1:
The target is to balance the Iur signaling traffic on each PDC-SaalNNI. A good balance depends on the amount of
PDC-SaalNNI and the amount of driftRNC.
Suggestion:
If the amount of driftRNC is lower than half the amount of PDC SaalNNI, then configure 4 SLs per
driftRNC.
Else configure 2 SLs per driftRNC.
Note 2:
In case of interworking with an otherVendor driftRNC, Iur Alcap vccs may be requested. Iur Alcap vccs are not
configured in case of an Alcatel driftRNC.
Remarks:
- In case of quasi-associated mode, each linkSet associated with a specific routeSet should include an
identical number of links,
- These values may be re-evaluated in case of otherVendor node constraints.
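Note 1 above can be condensed into a one-line helper. This is only a sketch of the suggested dimensioning; the function name is illustrative, not from the RNC software:

```python
def iur_sls_per_drift_rnc(num_drift_rnc: int, num_pdc_saalnni: int) -> int:
    """Suggested amount of SLs per driftRNC for the Iur routeSets (Note 1):
    4 SLs when the amount of driftRNC is lower than half the amount of
    PDC-SaalNNI, else 2 SLs."""
    return 4 if num_drift_rnc < num_pdc_saalnni / 2 else 2

# e.g. 4 driftRNC with 10 PDC-SaalNNI -> 4 SLs per driftRNC
```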

3.10.3 AAL2 PATHID


The RNC supports up to 810 aal2 paths per configured PSFP or DCPS.



UA6               4 PSFP/DCPS   6 PSFP/DCPS   7 PSFP/DCPS   8 PSFP/DCPS   10 PSFP/DCPS   12 PSFP/DCPS
Max # aal2 paths  3240          4860          5670          6480          8100           9720

This capacity is shared between all the aal2 interfaces of the RNC.
The pathId must be unique within an aal2If, even if the pathIds belong to different bandwidthPools.
The pathId is coded on 2 bytes, in the range [0, 65535].

Rule: IuTEG_RNC_aal2Path_1
The RNC supports up to 9720 paths, identified by a pathId in the range [0, 65535].
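The capacity table above is simply the per-FP figure of 810 aal2 paths multiplied by the amount of PSFP/DCPS; a one-line sketch (illustrative naming):

```python
AAL2_PATHS_PER_FP = 810  # per configured PSFP or DCPS

def max_aal2_paths(num_fp: int) -> int:
    """RNC-wide aal2 path capacity, shared between all aal2 interfaces."""
    return AAL2_PATHS_PER_FP * num_fp

# e.g. max_aal2_paths(12) -> 9720, the UA6 RNC maximum
```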

3.10.4 AAL2 RNC IDENTIFIERS:


One IucsIf instance is assigned per CS coreNetwork node. The IucsIf identifies one CS coreNetwork userPlane the
RNC is connected to:
Application to R4 MGW:
One IucsIf instance identifies the group of MGWs (1 MGW = 1 VSP4) which composes the CS
coreNetwork the RNC is connected to, whatever the presence and the amount of aal2 switches on the IuCS
interface.
Application to IuFlex:
The RNC being connected to up to 16 CS coreNetwork nodes, one iucsIf instance is configured in the RNC
per CS coreNetwork node.
Application to R99 MGW:
One IucsIf instance identifies the group of UMGW (UMGW = the entire set of VSP2), whatever the
presence and the amount of aal2 switches on the IuCS interface.

Rule: IuTEG_RNC-aal2-Identifier_1
One IucsIf instance is configured in the RNC per CS coreNetwork node, whatever the amount
of MGW within the CS coreNetwork node.
Up to 16 IucsIf instances may be configured in the RNC allowing connection to up to 16 CS
coreNetwork nodes.
Besides, all the aal2 paths configured between the RNC and one remote aal2 endPoint node are configured under an
aal2If instance.
Application to R4 MGW:
All the aal2 Paths terminating on one single MGW are configured under one aal2If instance.
In case of aal2Switch(s) inserted on IuCs interface, the remote aal2 endPoint node being the aal2 switch,
then all the paths terminating on one aal2 switch are configured under one RNC aal2If instance.
Application to R99 MGW:
All the aal2 Paths terminating on the UMGW (the entire set of VSP2) are configured under one aal2If
instance.
In case of aal2Switch(s) inserted on IuCs interface, the remote aal2 endPoint node is the aal2 switch, and
then all the paths terminating on one aal2 switch are configured under one aal2If instance.

Rule: IuTEG_RNC-aal2-Identifier_2
One RNC aal2If instance is configured per peer aal2 endPoint node (a MGW, an aal2Switch
or a neighbor RNC).
Furthermore all the aal2If instances identifying one CS coreNetwork node are grouped under one single IucsIf
instance which identifies the CS coreNetwork node.
Application to R4 MGW:
One or several aal2If instances configured under one IucsIf instance.
Application to IuFlex:


One IucsIf instance is configured per CS coreNetwork node. Under each IucsIf instance are gathered all the
aal2if instances identifying the MGW belonging to this CS coreNetwork node.
Application to R99 MGW:
One single aal2If instance configured under one IucsIf instance.

Rule: IuTEG_RNC-aal2-Identifier_3
All the aal2If identifying the MGW from a specific coreNetwork node are gathered under the
IucsIf instance identifying the CS coreNetwork node.
Up to 10 aal2If instances under one IuCsIf instance.
On Iur interface, one IurIf instance is created per neighboring RNC; one or several aal2If instances is/are assigned to
the iurIf instance, according to the presence of aal2 switch(s) on Iur interface.
Case of aal2 switching of combined IuCS/Iur:
One or several aal2Switches may be placed in front of the RNC.

Rule: IuTEG_RNC-aal2-Identifier_4
Case of aal2Switch on IuCS (and as an option on Iur): one aal2If instance is configured per
adjacent aal2Switch.
One aal2If instance assigned to an adjacent aal2Switch may be shared as an option between IucsIf and
IurIf.
The aal2If instances assigned to the adjacent aal2Switches are grouped under the IuCsIf instance, and as an
option under one or several IurIf instances.
The aal2 Paths configured between the RNC and the adjacent all2Switches carry IuCS aal2 traffic and as an
option Iur aal2 traffic.
Range of the RNC aal2 components:

Rule: IuTEG_ RNC aal2 Component _5
- Up to 16 IucsIf instances, range: [0, 15],
- Up to 24 IurIf instances,
- Up to 200 IubIf instances,
- Up to 10 aal2If instances under an IucsIf instance,
- Up to 10 aal2If instances under an IurIf instance,
- Up to 10 aal2If instances per remote aal2 endPoint node (A2EA).
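The ranges above lend themselves to a mechanical configuration check. The sketch below models the interface-to-aal2If assignment as plain dictionaries and validates the limits of the rule; all names are illustrative, not from the RNC software:

```python
MAX_IUCSIF, MAX_IURIF, MAX_IUBIF = 16, 24, 200
MAX_AAL2IF_PER_IF = 10  # per IucsIf instance and per IurIf instance

def check_aal2_limits(iucsifs, iurifs, iubifs):
    """iucsifs/iurifs: dicts mapping an interface instance name to the
    list of aal2If instances configured under it; iubifs: list of IubIf."""
    assert len(iucsifs) <= MAX_IUCSIF, "more than 16 IucsIf instances"
    assert len(iurifs) <= MAX_IURIF, "more than 24 IurIf instances"
    assert len(iubifs) <= MAX_IUBIF, "more than 200 IubIf instances"
    for name, aal2ifs in list(iucsifs.items()) + list(iurifs.items()):
        assert len(aal2ifs) <= MAX_AAL2IF_PER_IF, f"{name}: >10 aal2If"
    return True

check_aal2_limits({"iucsIf/0": ["aal2If/1", "aal2If/2"]},
                  {"iurIf/0": ["aal2If/3"]}, ["iubIf/0"])
```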
Representation of the RNC aal2 configuration:
First mapping table: aal2if instance assignment to the A2EA



RNC
  Qaal2 Parameters: Originating A2EA, Qaal2 timers
  Qaal2endPoints:
    Aal2endPoint/1 = A2EA 1: Qaal2Route/1 = aal2If 1, cost 1
    Aal2endPoint/2 = A2EA 2: Qaal2Route/1 = aal2If 2, cost 1

Figure 3-69 RNC aal2 mapping table2

Second mapping table: aal2Paths and ALCAP DPC assignment to the aal2if instance

aal2If i: Paths; Alcap bearer: adjacent aal2 node; AlcapConn: DPC, NI
aal2If j: Paths; Alcap bearer: adjacent aal2 node; AlcapConn: DPC, NI

iucsIf 1 UserPlane: aal2If i, aal2If j
iurIf 1, 2 UserPlane: aal2If i, aal2If j

Figure 3-70 RNC aal2 mapping table1

3.10.5 AAL5 CONNECTIONS


The amount of aal5 connections supported by the RNC depends on the amount of PSFP/DCPS within the RNC:

UA6             4 PSFP/DCPS   6 PSFP/DCPS   8 PSFP/DCPS   10 PSFP/DCPS   12 PSFP/DCPS
Service groups                                            10             12
Max # aal5 vcc  1800          3000          4200          6000           7200

This capacity is shared between the 3 utran interfaces.


3.10.6 QOS
3.10.6.1 UMTS QOS INFORMATION

The Umts qos information taken into consideration by the RNC are:
- The TC (trafficClass): Conversational, Streaming, Interactive and Background,
- The ARP (AllocationRetentionPriority):
The ARP specifies the relative importance, compared to other UMTS bearers, for allocation and retention
of the UMTS bearers.
The ARP attribute is a subscription attribute which is not negotiated from the mobile terminal.
In situations where resources are scarce, the relevant network elements can use the ARP to prioritize
bearers with a high ARP value over bearers with a low ARP value when performing admission control.
ARP range value: [1, 2, 3]
- The THP (TrafficHandlingPriority):
The THP specifies the relative importance for handling of all the SDUs belonging to the radio access
bearer, compared to the SDUs of other bearers.
The THP applies only to the interactive trafficClass umts flows.
Within the interactive trafficClass, there is a definite need to differentiate between bearer qualities. This is
handled by using the THP attribute, allowing the RAN to schedule traffic accordingly.
THP range value: [1, 2, 3], with THP=1 being the highest priority and THP=3 the lowest priority.
Moreover the ALU RNC introduces one more qos information taken into consideration in the qos mapping table:
- RbSetQos:
  On the IuCS interface:

  Traffic class    RbSetQos
  Conversational   Conversational
  Streaming        Streaming

Provided the Transport qos information are consistently mapped to each other, the final objective is to map each
different UMTS flow to a specific DSCP value, in such a way that each umts flow receives the expected qos treatment
and is correctly marked.
On the IuCS interface, the UMTS qos information is mapped to the Transport qos information through the
configurable TransportMap RNC component.
On the IuPS interface, the qos mapping is configured as in the previous release, without transportMap.

3.10.7 TRANSPORT MAP


The transportMap applies to the IuCS interface and does not apply to the IuPS interface.
The transportMap realizes Classification for the atm and ip interfaces, and in addition Marking for the ip interface:
- Classification:
- Up to 4 UMTS streams are identified, called: qos0, qos1, qos2 and qos3,
- The classification is achieved based on the Umts qos information: trafficClass, RbSetQos, ARP-PL, THP,
- The congestionManagement applies per umts stream.
- Marking:
- A DSCP value is assigned per Umts stream identified by the Umts qos information: trafficClass,
RbSetQos, ARP-PL and THP.
Remark: The transportMap is the way to configure the Umts to Transport qos mapping table within the RNC.
Several transportMap tables may be configured within the RNC.



Rule: IuTEG_RNC-Atm_ TMap _1
The RNC may be configured with up to 64 transportMap components.
A transportMap is linked either to an aal2If bwPool, or directly to an aal2If when no bwPool is configured under the
aal2If (case of IucsIf or IurIf). One transportMap may be shared between several instances of aal2If bwPool and aal2If,
and may also be shared between different utran interfaces.

Rule: IuTEG_RNC-Atm_ TMap _2


Each BwPool (or Aal2If if no BwPool declared) must be linked to exactly one TransportMap.

A transportMap is configured with the following parameters:

RNC-IN
TransportMap i
  interfaceType: Iub, Iur, IuCS (not IuPS)
  preference: sharedForAllTrafficTypes, or primaryForTrafficType (Iub case only)
  transportServiceEntry 1: [TC, RbSetQos, ARP-PL, THP] => qos i, ulDlPreference
  ...
  transportServiceEntry n: [TC, RbSetQos, ARP-PL, THP] => qos i, ulDlPreference

Figure 3-71, transportMap parameters
- InterfaceType:
Values: [Iub, IuCS, Iur (not IuPS)]
One transportMap may be configured with different interfaceType values; in other words, a
transportMap may be allocated to different Utran interfaces.
- Preference:
Values: [sharedForAllTrafficTypes, primaryForTrafficType]
The setting of the Preference parameter determines the nature of the bwPool the transportMap is
connected to.
When several bwPools serve one nodeB (e.g.: two bwPools under one aal2If, or one
aal2If/bwPool and one ipIf/bwPool) and several transportMap tables map the requested
trafficType (trafficType = TC + RbSetQos), then the transport bearer selection gives higher
priority to the transportMap set with Preference=primaryForTrafficType than to the transportMap
set with Preference=sharedForAllTrafficTypes.

Rule: IuTEG_RNC-Atm_ TMap _3


The interfaceType and Preference are mandatory parameters within the transportMap.
Rule: IuTEG_RNC-Atm _ TMap _4
The primaryForTrafficType Preference value is available only on the Iub interface.

- Transport serviceEntry:
- Inputs: TC, rbSetQos, ARP-PL, THP.
Each input may be set with a specific value, or with the ignore value when the input parameter
does not apply to the UMTS flow.
- Outputs:
 qos i: with i in the range [0, 1, 2, 3], the associated transport qos value involved in
both the atm and ip interfaces,
 DSCP: the DSCP value is not taken into consideration on the ATM interface.
INPUT values:  TC | rbSetQos | ARP-PL | THP
OUTPUT values: qos i | DSCP | ulDlPreference

- ulDlPreference:
Values: [noPreference, preferredForUl, preferredForDl, preferredForUlAndDl],
This parameter should be set for the UMTS traffic requesting different qos treatments in the uplink
and downlink directions, e.g. hs-dsch/eDch. This parameter should be ignored for the UMTS
traffic requiring the same qos treatment in both the uplink and downlink directions.
The received RNL radioBearerType value is compared to the ulDlPreference for the selection of
a transportServiceEntry.
The ulDlPreference allows the transport admissionControl to select different bandwidth pools for
the call uplink legs and downlink legs, e.g.: eDch versus hs-dsch.
When two transportEntries match the UMTS qos information, one set with
ulDlPreference=preferredForUlAndDl and the second set with ulDlPreference=noPreference,
then the transport bearer selection gives higher priority to the transportEntry set with
ulDlPreference=preferredForUlAndDl than to the transportEntry set with ulDlPreference=noPreference.

Rule: IuTEG_RNC-Atm _ TMap _5


The ulDlPreference is supported only on the Iub, for both the atm and the ip interfaces.
Remark: one or several transportServiceEntry instances are configured under one transportMap instance.

Rule: IubTEG_RNC-Atm _ TMap _6


A specific set of transportServiceEntry input values must be unique under one
transportMap instance.
Nevertheless a specific set of transportServiceEntry input values may be replicated
into different transportMap instances with different output values.
The UMTS traffic qos requirement is compared to the combination of the transportServiceEntry and ulDlPreference
by the transport admissionControl for selecting the transport bearer; a transport bearer being either a qos vcc within an
aal2 bwPool or an ipFlow within an ip bwPool.
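The matching and precedence described above can be modeled in a few lines. The sketch below is an illustrative reading of the selection, not the RNC implementation: an entry matches a flow when each input equals the flow's value or is set to ignore, and a directional entry wins over a noPreference entry.

```python
IGNORE = "ignore"

def entry_matches(entry, flow):
    """entry and flow are dicts with the keys tc, rbSetQos, arpPl, thp."""
    return all(entry[k] in (IGNORE, flow[k])
               for k in ("tc", "rbSetQos", "arpPl", "thp"))

def select_entry(entries, flow, direction):
    """Pick a matching transportServiceEntry for 'ul' or 'dl', preferring
    an entry whose ulDlPreference covers the requested direction."""
    preferred = {"ul": ("preferredForUl", "preferredForUlAndDl"),
                 "dl": ("preferredForDl", "preferredForUlAndDl")}[direction]
    matches = [e for e in entries if entry_matches(e, flow)]
    directional = [e for e in matches if e["ulDlPreference"] in preferred]
    return (directional or matches or [None])[0]

entries = [
    {"tc": "interactive", "rbSetQos": IGNORE, "arpPl": IGNORE, "thp": 1,
     "ulDlPreference": "noPreference", "qos": 2},
    {"tc": "interactive", "rbSetQos": IGNORE, "arpPl": IGNORE, "thp": 1,
     "ulDlPreference": "preferredForDl", "qos": 1},
]
flow = {"tc": "interactive", "rbSetQos": "interactive", "arpPl": 2, "thp": 1}
select_entry(entries, flow, "dl")  # the directional (qos 1) entry wins
```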



(figure: under the IubIf, aal2If i carries BwPool/x — qos/i and qos/j paths — linked to TransportMap 1 (interfaceType: Iub/IuCS, Iur; Preference: sharedForAllTrafficTypes) and BwPool/y — qos/k paths — linked to TransportMap 2 (interfaceType: Iub; Preference: primaryForTrafficType); the IucsIf and IurIf aal2If instances are linked to congestionManagement instances defining the qosBackPressureThreshold (qosBPt) and qosDiscardThreshold (qosDt) per qos 0-3)
Figure 3-72, transportMap example

3.10.7.1 TRANSPORTMAP TABLES:

Two transportMap tables are specified for the IuCS interface, which also apply to the Iub and Iur interfaces.
The distinction between them is related to whether Iub Hspa Streaming is activated or not.
It is expected that only the conversational trafficClass applies on the IuCS interface; therefore only the tse 3, 4 and 5 from
TM/1 are taken into consideration on the IuCS interface.
- TransportMap/1:
ApplicationContext:
- UA5 backward compatibility,
- InterfaceType: Iub, IuCS and Iur,
- Hspa streaming: hspa streaming NOT supported,
- Preference: SharedForAllTrafficTypes,
- Transport: atm, enhancedQos not activated.



(table: 38 transportServiceEntries — tse 1 Common, tse 2 signaling, tse 3-5 conversational, tse 6-14 streaming, tse 15-32 interactive, tse 33-38 background — combining rbSetQos, ARP and THP values (or ignore) and mapping each entry to a qos class 0-3, DSCP default and ulDlPreference noPreference)
Table 3-1, ATM sharedForAllTrafficTypes withOut Iub hspa Streaming


- TransportMap/3:
ApplicationContext:
- InterfaceType: Iub, IuCS and Iur,
- Hspa streaming: hspa streaming supported,
- Preference: SharedForAllTrafficTypes,
- Transport: atm, enhancedQos activated.



(table: the same 38 transportServiceEntries as in Table 3-1, with the qos class assignments adjusted for hspa streaming support)
Table 3-2, ATM sharedForAllTrafficTypes with Iub hspa Streaming

3.10.8 TRANSPORT ADMISSION CONTROL


Ref: [R1].
The transport admissionControl applying to the RNC Iur interface is described in [R1, 3/RNC atm].
The RNC must be configured with equivalentBitRate and maxBitRate parameter values dedicated to the IuCS
interface.
The configured Iu equivalentBitRate and maxBitRate are involved in the admissionControl regulation and
sent in the Alcap ERQ linkCharacteristics to the remote aal2 endPoint node.

Each IuCS interface is configured with a cacm parameter specifying the granularity of the transport admissionControl
regulation.

Rule: IuTEG_admissionControl_01
It is recommended to regulate the aal2 traffic at aal2 interface bandwidth, in other words
cacm=aal2if.
Cacm=Qos or Path is reserved for specific contexts, e.g.: interworking with some
otherVendor nodes.
If cacm=Path on the IuCS or Iur interfaces, it is necessary to modify the default CID selection algorithm (see
3.10.13). Indeed, on the IuCS and Iur interfaces, the LoadBalancingMethod parameter is set by default to the
value PC, which assures a well balanced load on each PMC-PC but gives no guarantee of well balanced traffic on each
connection of either the IuCS or Iur interface.
Since the aal2linkCac is configured to achieve a trafficRegulation at Path level, it is recommended to set the
LoadBalancingMethod to Link.
Rule: IuTEG_ admissionControl _02


If Cacm=Path then set RNC/LoadBalancingMethod = Link, else keep LoadBalancingMethod
= PC.
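With cacm=aal2if, the regulation can be pictured as a single bandwidth check at interface scope. The toy model below (names and units are illustrative, not the RNC algorithm) admits a call when the sum of the admitted equivalentBitRates stays within the aal2If bandwidth:

```python
class Aal2IfCac:
    """Toy transport admissionControl at aal2If granularity (cacm=aal2if)."""

    def __init__(self, bandwidth_kbps: float):
        self.bandwidth = bandwidth_kbps
        self.admitted = 0.0  # sum of the admitted equivalentBitRates

    def admit(self, equivalent_bit_rate: float) -> bool:
        """Admit the call if its equivalentBitRate still fits, else reject."""
        if self.admitted + equivalent_bit_rate <= self.bandwidth:
            self.admitted += equivalent_bit_rate
            return True
        return False

cac = Aal2IfCac(bandwidth_kbps=1000)
cac.admit(400)  # admitted: 400 <= 1000
cac.admit(700)  # rejected: 400 + 700 exceeds the aal2If bandwidth
```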

3.10.9 CONGESTIONMANAGEMENT
The congestionManagement currently does not apply to the IuCS traffic; nevertheless the aal2If under the
iuCsIf are linked to the table CM/1 (congestionManagement/1).

3.10.10 ALCAP
Reference: [R45 & 46]
The Alcap protocol is enhanced in the RNC so as to improve the interworking with the aal2Switches on the
UTRAN.
The RNC includes the aal2Qos information in the transmitted ERQ message.
The aal2Switch, now aware of the requested aal2Qos, can seize downstream a path with the same aal2Qos.
The RNC currently does not allow the bandwidth modification via Q.2630.2; this is the reason why the RNC
Alcap is only partly compliant with [R46].
3.10.11 Q-AAL2 ALTERNATE ROUTING


The qaal2AlternateRouting feature is an optional RNC feature. It impacts the IuCS interface, the Iur interface or both,
when one or more aal2 switches are inserted on these interfaces.
The qaal2AlternateRouting feature is not supported on the Iub interface.
Remark:
In case of one aal2Switch inserted on the IuCS interface, the alternateRouting feature allows diverting the
traffic from/to a direct RNC/MGW path to/from a RNC/aal2Switch/MGW path.
The objectives of the qaal2AlternateRouting feature are:
- Protection of the Iu/Iur interfaces against adjacent AAL2 switch failure,
- Load balancing over the aal2 routes.



A remote aal2 endPoint node is either a MGW or a neighbor RNC. An aal2 endPoint node is identified by one
A2EA.
The feature consists in filling the RNC AAL2 Address translation table with at least two routes.

Rule: IuTEG_RNC-Qaal2 alternateRouting_1


The RNC Qaal2 AlternateRouting feature enables the RNC to reach the remote aal2 endPoint nodes
through up to 10 different routes, in other words up to 10 Alcap DPC, up to 10 adjacent aal2 switches.

3.10.11.1 TRANSPORT USER PLANE ASPECT:


Here below a typical network architecture where the feature may be configured in the RNC: the RNC reaches MGw 1 (A2EA c) and MGw 2 (A2EA d) through AAL2 switch 1 and AAL2 switch 2, over aal2If 1 and aal2If 2 (Paths 1, 2, ...) under IucsIf 1, and reaches a neighbor RNC (A2EA b) over the IurIf 1 Paths.

Figure 3-73 Q-Aal2 AlternateRouting network architecture, Transport UserPlane


The aal2 paths are configured between adjacent aal2 nodes (either aal2endPoint or aal2switch).
In the RNC, all the paths terminating in one adjacent node are grouped within an aal2If component instance.

Rule: IuTEG_RNC-Qaal2 alternateRouting_2


In the RNC, as many aal2If instances are configured as there are adjacent aal2 nodes.
The interface between the RNC and the remote aal2 endPoint node (MGW or neighboring RNC) is identified in the RNC by an IucsIf or IurIf component instance:
- One IucsIf instance is configured per MGC (a.k.a. callServer), in other words one IucsIf instance for the pool of CS coreNetwork MGWs.
- One IurIf instance is configured per neighboring RNC.

Rule: IuTEG_RNC-Qaal2 alternateRouting_3


The RNC is configured with:
- One IucsIf instance to identify the CS coreNetwork,
- As many IurIf instances as there are neighboring RNCs, to identify each aal2 interface to the neighboring RNCs.

Within the RNC, each IucsIf and IurIf instance is linked to one or up to 10 aal2If instance(s).
A cost value is configured against each aal2If linked to an A2EA.
The cost parameter range is [0, 5].
One aal2If may be linked to several A2EA; different cost values may be configured against one aal2If linked to
different A2EA.
This cost value is used by the RNC to determine the aal2 traffic load distribution over the different aal2 routes
available for reaching the remote aal2 endPoint node.
The aal2 route selection algorithm is specified as follows:
1/ The aal2If with the lowest cost is chosen before the aal2If instances with higher costs.
2/ If multiple aal2If are provisioned with the same cost, then the routing function load balances between these aal2Ifs in a round-robin fashion.
The cost parameter configuration determines the RNC behavior in terms of aal2 route seizure:
1/ Preferred/FallBack:
The Preferred/Fallback route selection is implemented by provisioning the preferred route for an A2EA at a lower cost, and the fallback route(s) at higher costs.
2/ RoundRobin:
Round-robin route selection is implemented by provisioning all routes for an A2EA at the same cost.
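The two provisioning styles above can be sketched in a few lines. This is an illustrative model only: the route records, the aal2If names and the round-robin bookkeeping are assumptions, not the RNC implementation.

```python
from itertools import cycle

# Illustrative route records for one A2EA: (aal2If name, provisioned cost 0..5).
routes_for_a2ea = [
    ("aal2If/1", 1),   # preferred route (lowest cost)
    ("aal2If/2", 2),   # fallback routes at the same higher cost:
    ("aal2If/3", 2),   # round-robin between aal2If/2 and aal2If/3
]

class RouteSelector:
    """Lowest cost first; round-robin among aal2If provisioned at the same cost."""
    def __init__(self, routes):
        costs = sorted({c for _, c in routes})
        # One member list and one round-robin iterator per cost tier, lowest first.
        self._tiers = [[r for r, c in routes if c == cost] for cost in costs]
        self._rr = [cycle(tier) for tier in self._tiers]

    def select(self, available):
        for tier, rr in zip(self._tiers, self._rr):
            for _ in range(len(tier)):     # try each tier member once
                aal2if = next(rr)
                if aal2if in available:
                    return aal2if          # route seized
        return None                        # no route left: selection fails

sel = RouteSelector(routes_for_a2ea)
```

With all routes available, `select()` always returns aal2If/1 (Preferred/Fallback behavior); if aal2If/1 is unavailable, successive calls alternate between the two equal-cost fallbacks aal2If/2 and aal2If/3 (RoundRobin behavior).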
To handle the Qaal2 alternateRouting feature, at least two aal2If instances are linked to each IucsIf / IurIf; in case of failure of the selected aal2If, an alternate aal2If is seized.
Once the aal2If instance has been selected, the Path selection within the aal2If is still based on PMC-PC load balancing, aal2 PC-CAC and aal2LinkCac.
Each aal2If instance can support one or several UMTS aal2 interface(s).

3.10.11.2 TRANSPORT CONTROL PLANE ASPECT:


Here below a typical network architecture where the feature may be configured in the RNC: in associated mode, the RNC (PC n) has direct linkSets/signalingLinks (LS/SL) to AAL2 switch 1 (PC l) and AAL2 switch 2 (PC m), which are themselves linked to the MGws (PC o, PC p) and to the neighbor RNC (PC i).

Figure 3-74 Transport ControlPlane, associatedMode



The RNC is configured with the peer aal2 endPoint A2EA: either the full 20-byte A2EA or just an addressing prefix.
For each remote aal2 endPoint node identified by its own A2EA, the RNC aal2 address translation table is filled with the PCs of the adjacent aal2 switches that allow reaching the remote aal2 endPoint node.

Rule: IuTEG_RNC-Qaal2 alternateRouting_4


As many Alcap DPCs are configured as there are adjacent aal2 nodes (an adjacent aal2 node being either an aal2Switch or an aal2endPoint node).
In other words, as many Alcap DPCs are configured as there are aal2If instances configured for the IuCS and Iur interfaces.
The RNC AAL2 address translation table is filled with up to 10 DPCs for each remote aal2 endPoint address.
As many routeSets are configured as there are adjacent aal2 nodes (an adjacent aal2 node being either an aal2Switch or an aal2endPoint node). These routeSets carry ALCAP signaling.
The SS7 network may be either an associated mode or a quasi associated mode network or mixed of both topologies.

In quasiAssociated mode, the RNC (PC n) reaches AAL2 switch 1 (PC l) and AAL2 switch 2 (PC m) through routeSets via STP 1 (PC j) and STP 2 (PC k); the AAL2 switches are connected to the MGws (PC o, PC p) and to the neighbor RNC (PC i) in associated mode.

Figure 3-75 Transport ControlPlane, quasiAssociatedMode


Path selection algorithm:
On reception of a RAB AssignmentRequest including the A2EA of the remote aal2 endPoint node, the RNC:
- Identifies the aal2If instance(s) associated with the adjacent aal2 node(s),
- Selects the aal2If with the lowest cost, and the associated PC,
- Selects a Cid within a path belonging to the selected aal2If, a path connected to the least loaded PMC-PC; if there are several Paths under the PMC-PC, selects the least loaded path,
- Invokes the Aal2linkCac,
- Invokes the Aal2 PC-CAC,
- Sends to the adjacent aal2 node identified by the selected PC an ERQ including the selected Path, the selected Cid and the remote aal2 endPoint A2EA.

Qaal2 alternateRouting invocation triggers:

The Qaal2 alternateRouting is invoked if:
- The Aal2linkCac fails,
- The adjacent aal2 node PC is down,
- Within the selected aal2If, all Paths are down,
- Within the selected aal2If, no CID is available,
- An ATM OAM defect signal is received for the VCCs supporting the Paths within an aal2If.
Remark: the qaal2 alternateRouting is not invoked on PC-CAC failure.
Configuration model:
The IuxIf component is removed and replaced by the IucsIf/IurIf (/IubIf) and aal2If components.
Mapping between A2EA and aal2If:
RNC Qaal2 parameters: originating A2EA, Qaal2 timers, and the Qaal2endPoints:
- Aal2endPoint/1 = A2EA 1:
Qaal2Route/1 = aal2If 1, cost 1
Qaal2Route/2 = aal2If 2, cost 2
- Aal2endPoint/2 = A2EA 2:
Qaal2Route/1 = aal2If 1, cost 3
Qaal2Route/2 = aal2If 2, cost 1

Figure 3-76 Qaal2 alternateRouting, Mapping between A2EA and aal2If


Mapping between aal2If and ( Paths and Alcap DPC):



Each aal2If instance groups its Paths and its Alcap bearer toward the adjacent aal2 node (AlcapConn: DPC, NI). Each iubIf instance references a SignalingBearer and one userPlane aal2If; the iucsIf and iurIf instances reference one or several userPlane aal2If instances (aal2If i, aal2If j).

Figure 3-77 Qaal2 alternateRouting, Mapping between aal2If and (Paths and Alcap DPC)
Remark:
The Alcap and qaal2AlternateRouting features are not supported under the IubIf.

3.10.12 AAL2 PATH ASSIGNMENT TO PMC-PC


The RNC/PMC-RB (radioBearer) does not handle aal2 traffic. The PMC-PC component is in charge of converting
the UTRAN Aal2 traffic into SCUDP format supported by the PMC-RAB (IP over Aal5).
The PMC-PC is a sub component of the PSFP. There is only one PMC-PC per PSFP and up to 12 PSFP per RNC.
At RNC initialization, an algorithm within TBM (TransportBearerManagement) is responsible for the assignment of
the already configured UTRAN aal2 Vccs to the PMC-PCs.
The Iu-CS/Iub/Iur aal2 vccs originate and terminate in a PMC-PC. The aal2 vcc cells are carried by the Passport from the 16pOC3/STM1 FP to one PMC-PC to be processed. The processing consists of:
- AAL2 multiplexing/demultiplexing,
- AAL2 SARing,
- The conversion aal2 cells <-> IP packets.


The RNC-IN is composed of PS FP 0 to PS FP 11; each PS FP hosts one PMC-M, a set of PMC-RBs and one PMC-PC. The ATM-side aal2 Paths, grouped in aal2If instances (aal2If i, c, d) toward the UMGw, the NodeBs (Iub) and the neighbor RNC (Iur), are assigned to the PMC-PCs.

Figure 3-78 RNC PS-FP aal2 path assignment


RNC concepts definitions:
- PathGroup:
A pathGroup is a set of aal2 Paths with the same aal2Qos, configured under one aal2If instance.
Example, Aal2If i:
- PathGroup 1, Qos 0: Path 1, Path 2
- PathGroup 2, Qos 1: Path 3, Path 4
- PathGroup 3, Qos 2: Path 5
- Qos 3: no path

Figure 3-79 pathGroup definition


- LPS (Logical Path Set):
A LPS is a set of aal2 Paths. A LPS is assigned to two PMC-PCs: the LPS is active on one PMC-PC and standby on the second PMC-PC; in other words, all the paths within a LPS are assigned to the same active and standby PMC-PC.
On PMC-PC failure, all the paths within a LPS are switched together to the standby PMC-PC.
One or several LPS are assigned to one PMC-PC.
One LPS is composed of paths belonging to one or several pathGroups.


The LPS active on one PMC-PC are evenly distributed over the other PMC-PCs as standby LPS, in order to still ensure an even distribution of the traffic over the PMC-PCs in case of a PMC-PC failure.
- LPS Group:
The LPSGroup concept applies only to the IuCS and Iur interfaces; it does not apply to the Iub interface.
A LPSGroup is always composed of 12 LPS.
There are 7 LPSGroups within the RNC.
The pathGroups are distributed over the different LPSGroups with the goal of having the same cumulative ECR on each LPSG; a pathGroup is assigned to the LPSG with the least cumulative ECR.
One or several pathGroups may be assigned to one LPSGroup.
The paths within the pathGroup are then evenly distributed over the 12 LPS within the assigned LPSG.
Remark: If #Paths within a pathGroup < 12, then some LPS will be populated with one path from this pathGroup whereas other LPS will not be populated with any path from this group.

Algorithm:
On Iub interfaces:
- The aal2If are sorted according to their cumulative ECRgcac,
- The paths from the aal2If with the highest cumulative ECR are assigned first, to the LPS with the least cumulative ECR; in case of identical cumulative ECR on two LPS, the LPS with the lowest number of Paths is selected.
All the paths from a nodeB, whatever the aal2Qos, are assigned to one single LPS.
Example: all the paths of Aal2If i (PathGroup 1, Qos 0: Paths 1, 2; PathGroup 2, Qos 1: Paths 3, 4; PathGroup 3, Qos 2: Path 5) are assigned to LPS 1, itself assigned to PMC-PC i.

Figure 3-80 Iub Path assignment to PMC-PC


On IuCS and Iur interfaces:
On IuCS, one aal2 Qos is configured, and there are as many aal2 interfaces on the RNC as connected MGWs; the RNC is thus configured with as many pathGroups as connected MGWs.
On the Iur interface, two aal2Qos are configured; the RNC is thus configured with two pathGroups per Iur interface.
- PathGroup to LPSG assignment:
The pathGroups are sorted according to their cumulative ECRgcac.
From the pathGroup with the highest cumulative ECR down to the pathGroup with the lowest cumulative ECR, each pathGroup is assigned to the LPSG with the lowest cumulative ECR.
- Path to LPS assignment:
Within each pathGroup assigned to one LPSG, the paths are sorted according to their ECRgcac.
PathGroup per pathGroup, the paths are assigned in a round-robin fashion to the 12 LPS within the LPSG, from the path with the highest ECR down to the path with the lowest ECR.
Application: assuming a RNC composed of 12 PMC-PCs and an IuCS pathGroup composed of 12 paths, the IuCS pathGroup is assigned to one LPSG, and the IuCS paths are evenly distributed over the 12 LPS within the selected LPSG.
- Finally, the LPS are assigned to the PMC-PCs:
Since #PMC-PC <= #LPS per LPSG (= 12), one or several LPS may be assigned to one PMC-PC.


Once all the paths are assigned to the different LPS, each LPS is assigned to a PMC-PC, with the goal of having the same number of LPS per PMC-PC (+/- 1 LPS), in other words with the goal of having the same cumulative ECR on each PMC-PC.

c/s
c/s

4
5

c/s
c/s

2
3

1
1

order

c/s

LPSG1
LPS 1 LPS 2 LPS 3 LPS 4 LPS 5 LPS 6 LPS 7 LPS 8 LPS 9 LPS 10 LPS 11 LPS 12

LPSG2
LPS 1 LPS 2 LPS 3 LPS 4 LPS 5 LPS 6 LPS 7 LPS 8 LPS 9 LPS 10 LPS 11 LPS 12

order

Assuming:
Same TD for all path within a pathGroup.
1 LPSG (1)
1 IuCS pathGroup qos0
8 Iucs paths qos0 cum ECR = 158720
2 Iur pathGroup qos0
1 Iur paths qos0
cum ECR = 19840
1 Iur paths qos0
cum ECR = 19840
2 Iur pathGroup qos1
1 Iur paths qos1
cum ECR = 37200
1 Iur paths qos1
cum ECR = 37200

order

Exemple:
It is assumed that 5 pathGroups have been assigned to the LSPG1; the figure represents the distribution of the paths
to the LSP within the LPSG1. The LSPG 2 to 7 are not shown on this figure.

LPSG7
LPS 1 LPS 2 LPS 3 LPS 4 LPS 5 LPS 6 LPS 7 LPS 8 LPS 9 LPS 10 LPS 11 LPS 12

Conclusion:
The LPS1 is populated with 1 Iucs path and 1 Iur qos1 path,

The LPS12 is populated with 1 qos0 Iur path from one RNC and 1 Iur qos1 path from a second RNC
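The assignment steps above can be sketched as follows. The data structures and the way ties are broken are illustrative assumptions, not the TBM implementation; note also that with no other pathGroups present, each group lands in a different empty LPSG, unlike the worked example where all five share LPSG1.

```python
N_LPSG, N_LPS = 7, 12   # 7 LPSGroups of 12 LPS each (IuCS/Iur case)

def assign(path_groups):
    """path_groups: {name: [(path_name, ecr_gcac), ...]}.
    Returns the cumulative ECR per LPSG and the path lists per LPS."""
    lpsg_ecr = [0] * N_LPSG
    lps = [[[] for _ in range(N_LPS)] for _ in range(N_LPSG)]
    # 1/ PathGroup to LPSG: groups sorted by cumulative ECR (highest first),
    #    each assigned to the LPSG with the lowest cumulative ECR.
    ordered = sorted(path_groups.items(),
                     key=lambda kv: sum(e for _, e in kv[1]), reverse=True)
    for _, paths in ordered:
        g = min(range(N_LPSG), key=lpsg_ecr.__getitem__)
        lpsg_ecr[g] += sum(e for _, e in paths)
        # 2/ Path to LPS: round-robin over the 12 LPS, highest ECR first.
        by_ecr = sorted(paths, key=lambda pe: pe[1], reverse=True)
        for slot, (name, _) in enumerate(by_ecr):
            lps[g][slot % N_LPS].append(name)
    return lpsg_ecr, lps

# Input matching the worked example: 1 IuCS pathGroup (8 paths), 4 Iur pathGroups.
pgs = {"iucs_qos0": [("iucs_p%d" % i, 19840) for i in range(8)],
       "iurA_qos0": [("iurA_q0", 19840)], "iurB_qos0": [("iurB_q0", 19840)],
       "iurA_qos1": [("iurA_q1", 37200)], "iurB_qos1": [("iurB_q1", 37200)]}
lpsg_ecr, lps = assign(pgs)
```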

3.10.13 AAL2 CID SELECTION


Since the Iub, Iur and IuCS aal2 vccs are assigned to PMC-PCs, a call is managed by one or two different PMC-PCs, depending on whether the Iub aal2 vcc and the IuCS aal2 vcc are assigned to the same or to different PMC-PCs.
Moreover, if SHO (softHandover) occurs, more PMC-PCs may be involved in the call.
Furthermore, the PMC-RAB involved in the call is selected amongst the least loaded PMC-RABs. Therefore the selected PMC-RAB is not always located on the PSFP hosting the PMC-PC involved in the call; several PSFPs may thus be involved in a call.
Here below an example of the RNC-IN resources involved in a call: the PMC-PC within PSFP-0 is assigned to the Iub aal2 vcc, the PMC-PC within PSFP-2 is assigned to the IuCS aal2 vcc, and the PMC-RAB within PSFP-1 is selected for managing the call:



Figure 3-81 PS-FP involved in a call


The IuCS coreNetwork is identified by one IucsIf value at the RNC level. One aal2If is populated with at least two
aal2 Paths. The IuCS Aal2 Vccs / Paths are assigned to different PMC-PCs (see 3.10.12 Path Assignment).
Since an Aal2If is populated with several Paths, on call establishment a Path has to be elected to handle the call.
The PMC-PC loadSharing is handled by TBM (Transport Bearer Manager), which resides on the PSFP PMC-M component. TBM talks to the Aal2If entities on the ATM FPs and receives information about the aal2 Vcc load on these FPs.
Remarks:
- All the aal2 CIDs in a single Vcc are processed by the same PMC-PC.
- With respect to the actual traffic processing, the traffic from a single Vcc handled by a single PMC-PC can
be spread on multiple PMC-RAB, even located on different PSFP, even on a PSFP where the PMC-PC is
not involved in the connection.

Criteria for electing an IuCS aal2 Path:


The setting of the loadBalancingMethod parameter determines the CID selection:
- loadBalancingMethod = PC: a CID is seized on a path assigned to the least loaded PMC-PC,
- loadBalancingMethod = link: a CID is seized on the least loaded path within the aal2If, in such a way as to have well-balanced traffic over all the paths serving one direction.
Rule: IuTEG_CID-Distribution_1
It is recommended to configure the loadBalancingMethod (lbm) with PC on the IuCS and Iur interfaces, in such a way that the RNC PMC-PCs are equally loaded, with the exception of the context cacm = Path.
In case of interworking with an otherVendor CS coreNetwork, the CS coreNetwork may expect the traffic to be well balanced over the different paths; in such a case the loadBalancingMethod (lbm) is set to Link.
Remark: the loadBalancingMethod is set to link on the Iub interface.
- PMC-PC Selection:
The PMC-PC selection is invoked at call setup time. It consists in:
- Identifying all available IuCS aal2 paths and associated PMC-PCs, based on the received A2EA and the RNC configuration,


- Selecting the least loaded PMC-PC from the set of PMC-PCs which host available Paths for the requested Aal2If/QOS.
The current PMC-PC load is estimated based on the already established calls and their associated qaal2equivalentBitRate and EquivalentSDUSize.
Qaal2equivalentBitRate and EquivalentSDUSize values are provisioned against each RB on the RNC.

PC-CAC:
The PC-CAC is invoked after PMC-PC selection.
Based on the requested Qaal2equivalentBitRate and EquivalentSDUSize, the PC-CAC determines if the
new request can be accommodated by the selected PMC-PC without exceeding the PMC-PC CPU load
threshold.
If the PMC-PC CPU load threshold is exceeded, the call will be rejected (by the RNC); otherwise the call is
accepted and the current estimated load on the PMC-PC is updated.

AAL2 Link CAC:
The AAL2 link CAC:
- Chooses the least loaded Path from the set of Paths assigned to the selected PMC-PC (the Path load is estimated based on the Qaal2equivalentBitRate of the already established calls),
- Checks that there is an available CID, and updates the current CID usage count,
- Checks per Aal2If that the bandwidth reserved for the already established calls plus the bandwidth requested for the new call is lower than the ACR assigned to the Aal2If. If this check fails, the new call is rejected.
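The PC-CAC and link CAC checks can be sketched together as follows. The capacities, bit rates, CID range handling and class names are illustrative assumptions, not provisioned RNC values.

```python
class Aal2Path:
    def __init__(self, name, acr):
        self.name = name
        self.acr = acr                 # bandwidth assignable on this path
        self.reserved = 0              # bandwidth of the already established calls
        self.free_cids = list(range(8, 256))   # CIDs 8..255 carry user channels

class PmcPc:
    def __init__(self, name, cpu_threshold):
        self.name = name
        self.cpu_threshold = cpu_threshold
        self.load = 0                  # CPU load estimate of the established calls

def admit(pmc_pc, paths, eq_bit_rate, cpu_cost):
    """PC-CAC then AAL2 link CAC; returns (path, cid) or None (call rejected)."""
    # PC-CAC: reject if the PMC-PC CPU load threshold would be exceeded.
    if pmc_pc.load + cpu_cost > pmc_pc.cpu_threshold:
        return None
    # Link CAC: least loaded path first, needs a free CID and spare bandwidth.
    for path in sorted(paths, key=lambda p: p.reserved):
        if path.free_cids and path.reserved + eq_bit_rate <= path.acr:
            pmc_pc.load += cpu_cost
            path.reserved += eq_bit_rate
            return path, path.free_cids.pop(0)
    return None

pc = PmcPc("pmc-pc/0", cpu_threshold=100)
paths = [Aal2Path("path/1", acr=20000), Aal2Path("path/2", acr=20000)]
first = admit(pc, paths, eq_bit_rate=12200, cpu_cost=1)   # seizes CID 8 on path/1
```

A second 12200 call then lands on path/2 (path/1 has no spare bandwidth), and a third is rejected by the link CAC, mirroring the reject behavior described above.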

Criteria for electing a CID once the Path is chosen:

The RNC provides two different algorithms for electing a CID within a chosen path: either the first free CID or the least recently used CID is chosen.
The cidSelectMethod parameter value determines the algorithm:
- cidSelectMethod = roundRobin: the first free CID is seized.
- cidSelectMethod = standardQ2630: the least recently used CID is seized.
Rule: IuTEG_CID-Distribution_2
- Set cidSelectMethod = standardQ2630 for the aal2If assigned to the Iur interface, or assigned to both Iur+IuCS (case of an aal2Switch),
- Else set cidSelectMethod = roundRobin.
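The two policies can be sketched over one path's CID pool. This is an illustrative model only; the RNC's internal bookkeeping is not published, and the class and method names are assumptions.

```python
from collections import OrderedDict

class CidPool:
    """CID pool of one aal2 Path; user channels use CIDs 8..255."""
    def __init__(self):
        # Free CIDs kept in release order (initially numeric order).
        self._free = OrderedDict((cid, None) for cid in range(8, 256))

    def seize_first_free(self):
        """cidSelectMethod = roundRobin: the first free CID is seized."""
        cid = min(self._free)
        del self._free[cid]
        return cid

    def seize_least_recently_used(self):
        """cidSelectMethod = standardQ2630: the least recently used CID is seized."""
        cid, _ = self._free.popitem(last=False)   # oldest free entry
        return cid

    def release(self, cid):
        self._free[cid] = None    # re-appended: most recently used from now on

pool = CidPool()
a = pool.seize_least_recently_used()   # CID 8 (never used, hence oldest)
pool.release(a)
b = pool.seize_least_recently_used()   # CID 9: the just-released CID 8 is skipped
```

A plausible motivation for the least-recently-used policy on Iur/IuCS is that a just-released CID is not immediately reused toward the peer node, leaving time for the peer's own release handling to complete; the source does not state the rationale explicitly.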

Abnormal cases:
If an AAL2 Path goes disabled (e.g. on reception of an ALCAP BLO message), its contribution to the Aal2If ACR is removed.
No new CID allocation/setup is allowed on an AAL2 Path while it is out of service.
In case of PMC-PC failure, the PMC-PC is no longer selected by the RNC.
Note:
Depending on the interface, the RNC-CN may ask TBM to loadshare first based on PMC-PC load or on Path load: in the case of Iub it asks for Path-based loadSharing; in the case of IuCS it asks for PMC-PC-based loadSharing.

3.10.14 AAL5 CONNECTIONS


The number of aal5 connections supported by the RNC depends on the number of PSFP/DCPS within the RNC:



UA6                   4 PSFP/DCPS   6 PSFP/DCPS   8 PSFP/DCPS   10 PSFP/DCPS   12 PSFP/DCPS
Service groups                                                  10             12
Max # aal5 vcc        1800          3000          4200          6000           7200

Table 3-3, #aal5 connections


This capacity is shared between the 3 utran interfaces.

3.10.15 IUFLEX
References: [R19, FRS 29417]
The IuFlex feature allows a RNC to be connected to several CS coreNetwork nodes and several PS coreNetwork nodes belonging to one single operator.
It is achieved by a RNC routing function which selects one CN node in a pool of CN nodes, based on information derived from the UE identifiers (TMSI, P-TMSI, IMSI, IMEI).
Objectives:
- Allow the RNC to be connected to several MSCs and several SGSNs,
- Allow capacity upgrades by adding CN nodes to the pool-area, and increase service availability since other CN nodes may provide service in case one CN node in the pool-area fails,
- Robustness in case of MSC/SGSN failure,
- Reduction of inter-CN-node updates, handovers, relocations and HLR update traffic.
Remark:
In the IuFlex context, a MSC is either a R99 MSC (one Node handling both signaling and bearers) or a MGC and
the collection of MGw under control of the MGC.
Definitions:
- Pool area:
A pool area is a collection of RNC all connected to the same group of MSC or the same group of SGSN:
- A CS poolArea is a collection of RNC nodes all connected to the same group of MSC nodes.
- A PS poolArea is a collection of RNC nodes all connected to the same group of SGSN nodes.
Besides:
- A MSC pool: is a collection of MSC that serves one or several CS poolAreas.
- A SGSN pool: is a collection of SGSN that serves one or several PS poolAreas
- PoolArea overlapping: one RNC may belong to multiple CS poolAreas or multiple PS poolAreas.

The whole operator network may be configured as one poolArea or may be configured as multiple poolAreas.
The change of a pool-area is not visible to the MS.



(Figure: MSC 1 to MSC 7 are grouped into CS pool-area 1 and CS pool-area 2; the RAN nodes serving Areas 1 to 8 are grouped into PS pool-area 1 and PS pool-area 2, served by SGSN 1 to SGSN 6; the CS and PS pool-areas overlap over the RAN nodes.)

NetworkResourceIdentifier (NRI):
Each CN node which supports the Intra Domain Connection of RAN Nodes to Multiple CN Nodes (a.k.a. IuFlex) is configured with one or several of its own NRI (Network Resource Identifier) values.
Rule: IuTEG_IuFlex-01
One or several NRI must be assigned to each coreNetwork node.
A NRI value must not be shared between two different coreNetwork nodes within a umts domain.
Remark: More than one NRI may be assigned to a CN node.

Besides, the 3Gpp has specified a new NRI field within the temporary UE identifiers: TMSI (CS domain) or P-TMSI (PS domain). The NRI field within the UE identifiers is set by the serving CN node.
The NRI has a flexible length between 0 and 10 bits. The NRI is coded in bits 23 to 14 of the TMSI or P-TMSI. Regardless of the NRI length, the most significant bit of the NRI is always in bit 23 of the TMSI or P-TMSI.
Assuming a NRI length of 10 bits, the NRI is coded in TMSI bits 14 to 23.
Assuming a NRI length of 5 bits, the NRI is coded in TMSI bits 19 to 23:
TMSI bit: 31 ... 24 | 23 22 21 20 19 | 18 ... 0
                      NRI (MSB in bit 23)
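The NRI extraction from a (P-)TMSI is a simple bit operation; here is a sketch (the function name is illustrative):

```python
def extract_nri(tmsi, nri_length):
    """Extract the NRI from a 32-bit (P-)TMSI.

    The NRI MSB is always bit 23; a length-L NRI occupies
    bits 23 down to 24 - L.
    """
    if nri_length == 0:
        return None          # IuFlex not activated
    shift = 24 - nri_length  # bit position of the NRI LSB
    return (tmsi >> shift) & ((1 << nri_length) - 1)

# 5-bit NRI: bits 23..19 of the TMSI.
tmsi = 0b10110 << 19
nri = extract_nri(tmsi, 5)   # 0b10110 = 22
```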

Within the RNC, the NRI is used by the routing function to select one CN node in a MSC pool or in the SGSN pool.
The NRI length is configured in the RNC.
Rule: IuTEG_IuFlex-02
When IuFlex is supported, the RNC must be configured with a NRI length > 0.
A NRI length=0 indicates that the feature is not activated in the RNC. The calls are then all sent to one and the same
core network node (the core network with the lowest Cs/PsCoreNetworkAccess instance identifier).

Rule: IuTEG_IuFlex-03
The NRI length must be the same in all the RNC within one CS pool area.
The NRI length must be the same in all the RNC within one PS pool area.
In case of overlapping poolArea, the NRI length must be the same in all the RNC within the multiple
CS/PS pool areas involved in the overlapping.

The NRIs of the CS and PS domains are independent of each other, as the PS and CS domain CN nodes are addressed independently.
The same NRI value may therefore be used to point to both one MSC and one SGSN.
Intra Domain NAS Node Selector (IDNNS):

(NAS: NonAccessStratum)
The IDNNS is an IE of the Initial Direct Transfer message sent by the UE.
The IDNNS contains the routingParameter. The routingParameter is a bit string of 10 bits.
The routingParameter value is derived either from:
- The TMSI or P-TMSI if available: the routingParameter is equal to the NRI, which is extracted from the TMSI or P-TMSI; or
- The IMSI or IMEI if the (P-)TMSI is not available: routingParameter = DecimalToBinary [(IMSI div 10) mod 1000], in the range [0, 999].
Besides, the IDNNS also contains an indication of which UE identity the routingParameter is derived from (TMSI, IMSI, IMEI).

NNSF: NAS Node Selection Function:


The NNSF is the RNC routing function allowing the selection of one CN node based on the routingParameter value
received from the UE.
The RNC is configured with two translation tables per UMTS domain (CS and PS):
- The NRImapping tables:
- The CS NRImapping table assigns a MSC node to each received routingParameter value derived from a TMSI or P-TMSI,
- The PS NRImapping table assigns a SGSN node to each received routingParameter value derived from a TMSI or P-TMSI.
- The Vmapping tables:
- The CS Vmapping table assigns a MSC node to each received routingParameter value derived from an IMSI or IMEI,
- The PS Vmapping table assigns a SGSN node to each received routingParameter value derived from an IMSI or IMEI.

Rule: IuTEG_IuFlex_04
Within the RNC, the whole Vmapping table range [0, 999] needs to be assigned to the
existing core network nodes of a given domain.
On reception of the RRC Initial Direct Transfer message from a UE, the RNC extracts:
- The routingParameter from the IDNNS IE,
- The UE identifier type used to derive the routingParameter.
If the received routingParameter value is derived from the TMSI or P-TMSI, the RNC determines the received NRI value based on the configured NRILength, then selects a CN node based on the received NRI and the NRImapping table.
If the received routingParameter value is derived from the IMSI or IMEI, the RNC selects a CN node based on the received routingParameter value and the Vmapping table.
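The NNSF lookup for one domain can be sketched as follows; the table contents, thresholds and instance names are illustrative assumptions, not a provisioned configuration.

```python
nri_length = 5
# NRImapping: provisioned NRI values -> CN node (unassigned values -> no node).
nri_mapping = {4: "csCoreNetwAccess/0", 9: "csCoreNetwAccess/1"}
# Vmapping: the whole [0, 999] range must be assigned (rule IuTEG_IuFlex_04).
v_mapping = {v: ("csCoreNetwAccess/0" if v < 500 else "csCoreNetwAccess/1")
             for v in range(1000)}

def select_cn_node(routing_parameter, derived_from):
    """routing_parameter: the 10-bit value received in the IDNNS IE."""
    if derived_from in ("TMSI", "P-TMSI"):
        # The NRI is the nri_length most significant bits of the 10-bit field.
        nri = routing_parameter >> (10 - nri_length)
        return nri_mapping.get(nri)     # None: no CN node assigned to this NRI
    # IMSI/IMEI case: routing_parameter = (IMSI div 10) mod 1000.
    return v_mapping[routing_parameter]

imsi = 208015123456789
rp = (imsi // 10) % 1000                 # 678
node = select_cn_node(rp, "IMSI")        # falls in the 500-999 Vmapping range
```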
Example of NRImapping table and Vmapping table for 24 MGC/MSC and 24 SGSN:



RNC Vvalue mapping tables:

  Vvalue    CS CN address
  000-199   csCoreNetwAccess/0    iucsIf/0    MSC1 PC
  200-399   csCoreNetwAccess/1    iucsIf/1    MSC2 PC
  ...
  800-999   csCoreNetwAccess/23   iucsIf/23   MSC23 PC

  Vvalue    PS CN address
  000-199   psCoreNetwAccess/0    iupsIf/0    SGSN1 PC
  200-399   psCoreNetwAccess/1    iupsIf/1    SGSN2 PC
  ...
  800-999   psCoreNetwAccess/23   iupsIf/23   SGSN23 PC

RNC NRI mapping tables:

  NRI       CS CN address
  NRI = x   csCoreNetwAccess/0    iucsIf/0    MSC1 PC
  NRI = y   csCoreNetwAccess/1    iucsIf/1    MSC2 PC
  ...
  NRI = z   csCoreNetwAccess/23   iucsIf/23   MSC23 PC

  NRI       PS CN address
  NRI = x   psCoreNetwAccess/0    iupsIf/0    SGSN1 PC
  NRI = y   psCoreNetwAccess/1    iupsIf/1    SGSN2 PC
  ...
  NRI = z   psCoreNetwAccess/23   iupsIf/23   SGSN23 PC

The number of entries in the NRI table depends on the NRI length; the RNC allows up to 1024 entries.
The number of entries in the V table is 1000. The OAM provides means to fill this table easily, e.g. by ranges.
Besides, for both the NRI and the V tables, the RNC allows up to 24 routing combinations, since the csCoreNetworkAccess and psCoreNetworkAccess parameters are each in the range [0, 23]; as a consequence each RNC may be connected to up to 24 MGCs and up to 24 SGSNs:

Rule: IuTEG_IuFlex_05
One RNC may be connected to up to 24 MSC and up to 24 SGSN.

MML
- RNC/CsFlexService/n, with n in the range [0, 3]:
- NRIparams /NRI length:
class3 parameter,
Definition:
Length of the NRI field within the TMSI.
Value: [0, 10] bits.

Rule: IuTEG_IuFlex_06
The NRI length can be set to a non zero value, only if the NRImapping table and
Vmapping table are configured.
- NRIparams /NRI mapping table:
class3 parameter,
Definition: Assignment of the CS CN nodes to the NRI values. Some NRI values may not be assigned to a specific CN node.
The CS CN node is represented in the table by the associated CsCoreNetworkAccess instance.
Value: range [0, 1023] or Null.

- NRIparams /NullNRI:
class3 parameter,
Definition: The RNC applies load balancing between the CN nodes for calls whose NRI value equals the defined nullNRI value.
Value: range [0, 1023].
This is only needed if the core network supports core network offload procedures.

- Vmapping table:
class3 parameter,
Definition:
Assignment of a CS CN node to each routingParameter value derived from an IMSI or IMEI.
The CS CN node is represented in the table by the assigned CsCoreNetworkAccess instance.
Value: [0, 999].

Rule: IuTEG_IuFlex_07
Each value from the Vmapping table must be mapped to a CS CN node.
- RNC/PsFlexService/n, with n in the range [0, 3]:
- NRIparams /NRI length:
class3 parameter,
Definition:
Length of the NRI field within the P-TMSI.
Value: [0, 10] bits.
Rule: IuTEG_IuFlex_08
The NRI length can be set to a non zero value, only if the NRImapping table and
Vmapping table are configured.
- NRIparams /NRI mapping table:
class3 parameter,
Definition: Assignment of the PS CN nodes to the NRI values. Some NRI values may not be assigned to a specific CN node.
The PS CN node is represented in the table by the assigned PsCoreNetworkAccess instance.
Value: range [0, 1023] or Null.

- NRIparams /NullNRI:
class3 parameter,
Definition: The RNC applies load balancing between the CN nodes for calls whose NRI value equals the defined nullNRI value.
Value: range [0, 1023].
This is only needed if the core network supports core network offload procedures.

- Vmapping table:
class3 parameter,
Definition:
Assignment of a PS CN node to each routingParameter value derived from an IMSI or IMEI.
The PS CN node is represented in the table by the assigned PsCoreNetworkAccess instance.
Value: [0, 999].

Rule: IuTEG_IuFlex_09
Each value from the Vmapping table must be mapped to a PS CN node.
- RNC/PsCoreNetworkAccess/n, with n: [0, 23] => 24 instances, i.e. up to 24 SGSN:
- Plmn Id.
- IupsIf:
Class0 parameter,
Definition:
The IupsIf instance identifies one SGSN in the RNC.
Value: [0, 16383].
- PC:
Definition:
PC of one SGSN.
Value: [0, 16383].

- RNC/CsCoreNetworkAccess/n, with n: [0, 23] => 24 instances, i.e. up to 24 MSC:
- Plmn Id.
- IucsIf:
Class0 parameter,
Definition:
The IucsIf instance identifies one MSC in the RNC.
Value: [0, 16383].
- PC:
Definition:


PC of one MGC.
Value: [0, 16383].

CS Domain:
csFlexService/0 is configured with the NRIlength, the nullNRI value and two tables:
- NRImapping tab (up to 24 entries): NRI=x -> csCoreNetwAccess/0, NRI=y -> csCoreNetwAccess/1, ..., NRI=z -> csCoreNetwAccess/23.
- Vmapping tab (up to 24 entries): V=000-199 -> csCoreNetwAccess/0, V=200-399 -> csCoreNetwAccess/1, ..., V=800-999 -> csCoreNetwAccess/23.
Each csCoreNetwAccess/n instance points to its IucsIf/n (DPC of the CS CN node) and to the userPlane aal2If/i, aal2If/j instances with their Path(s).

PS Domain:
psFlexService/0 is configured with the NRIlength, the nullNRI value and two tables:
- NRImapping tab (up to 24 entries): NRI=x -> psCoreNetwAccess/0, NRI=y -> psCoreNetwAccess/1, ..., NRI=z -> psCoreNetwAccess/23.
- Vmapping tab (up to 24 entries): V=000-199 -> psCoreNetwAccess/0, V=200-399 -> psCoreNetwAccess/1, ..., V=800-999 -> psCoreNetwAccess/23.
Each psCoreNetwAccess/n instance points to its IuPsIf/n (DPC of the PS CN node).
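The NRI/nullNRI/Vmapping selection logic described above can be sketched as follows. This is an illustrative model only: the constant names and dict/list structures are assumptions, not RNC MML attributes; the routingParameter V is derived from the IMSI as V = (IMSI div 10) mod 1000, as defined for IuFlex in TS 23.236.

```python
import random

# Illustrative psFlexService configuration (assumed names, not MML attributes)
NULL_NRI = 15                                         # nullNRI value
NRI_MAPPING = {3: "psCoreNetwAccess/0",               # NRImapping tab
               7: "psCoreNetwAccess/1"}
V_MAPPING = [(range(0, 500), "psCoreNetwAccess/0"),   # Vmapping tab
             (range(500, 1000), "psCoreNetwAccess/1")]

def select_cn_node(nri, imsi):
    """Return the PsCoreNetworkAccess instance serving the call."""
    if nri == NULL_NRI:
        # nullNRI: load balancing between the configured CN nodes
        return random.choice(sorted(set(NRI_MAPPING.values())))
    if nri in NRI_MAPPING:
        return NRI_MAPPING[nri]
    # No usable NRI: fall back to the Vmapping tab, with the
    # routingParameter V derived from the IMSI (TS 23.236)
    v = (int(imsi) // 10) % 1000
    for v_range, node in V_MAPPING:
        if v in v_range:
            return node
    raise ValueError("V value %d not mapped to a CN node" % v)
```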

3.10.16 UTRAN SHARING


References: [FRS 18855]
The objective of the UTRAN sharing feature is to allow the UTRAN network to be shared between several UMTS
operators:

Rule: IuTEG_utranSharing_1
The UTRAN is shared between up to 4 operators.
The RNC equipment is then composed of as many logical RNC as there are Plmn sharing the utran.
The RNC is partly or completely shared between the Plmn.
The set of allowed radioBearers is the same for all the Plmn.
The transportMap configured in the RNC applies to all the Plmn.
The admissionControl equivalentBitRate values configured in the RNC apply to all the Plmn.

3.10.16.1 UTRAN INTERFACE IMPACT:


- Iub interface impact:
  - Both RNC and NodeB may be shared between all the Plmn,
  - A shared RNC may have Iub interfaces with both shared and not shared NodeB,
  - The Iub userPlane Transport resources are either shared between all the PLMN sharing the utran or dedicated per PLMN sharing the utran (see [1] per Plmn bwPool):
    - One or several bwPool(s) is/are configured per Plmn. Within these bwPools are grouped the userPlane resources (aal2 Path, ipFlow) dedicated to the Plmn.
    - The cp vcc, oam vcc and the 16 ccp vcc are common to all the Plmn.
- Iur interface impact:
  - A shared RNC may have Iur interfaces with both shared and not shared RNC,
  - The Iur user and control plane Transport resources are common to all the PLMN sharing the utran,
  - No Iur interface is configured between the two logical RNC.
- Iu interface impact:
  - Each Plmn has its own CS and PS coreNetwork nodes,
  - The Iu user and control plane Transport resources are dedicated per Plmn. On the RNC are configured one IuCS and one IuPS interface per Plmn.
  - The IuBC vcc is shared between the different PLMN sharing the RNC.
  - The UtranSharing feature is compatible with the BICN and IuFlex features.
  - IuPS: utranSharing applies to both the atm and the ip interfaces.

3.10.16.2 TRANSPORT IDENTIFIERS:


The RNC equipment is composed of one logical RNC per PLMN sharing the RNC equipment.
One logical RNC is identified by its global RNC Id:
Global RNC Id = PLMN Id + RNC Id = MCC + MNC + RNC Id
One RNC Id is configured in the RNC equipment; it is common to all the logical RNC.

Rule: IuTEG_utranSharing_2
The RNC Id numbering plan of all the Plmn sharing the utran must be coordinated.
The RNC equipment is identified by its own single ss7 PC within one networkIndicator (NI) common to all the
Plmn sharing the utran.

Rule: IuTEG_utranSharing_3
The ss7 PC must be coordinated between all the Plmn sharing the utran.
The ss7 nodes from all the Plmn sharing the utran must be identified within the same NI.
The RNC is identified by one A2EA, and exchanges traffic with utran aal2 nodes (NodeB, neighbor RNC and MGw):

Rule: IuTEG_utranSharing_4

The A2EA numbering plan must be coordinated between all the Plmn sharing the utran.
The Plmn sharing the UTRAN are responsible for distributing among themselves the vpi.vci and pathId ranges assigned to the Iu interface in the 5.

3.10.16.3 ROUTING TABLE:


Within the RNC, the routing to the appropriate coreNetwork is based on the operator PLMN Id (MCC and MNC)
identifying the PLMN.
One csFlexService instance and one psFlexService instance are assigned to each Plmn sharing the utran.
Remark: up to 4 instances of each csFlexService and psFlexService component, in the range [0, 3].
Each csFlexService and psFlexService instance is configured with the subComponent operator Plmn Id identifying
the operator.

Rule: IuTEG_utranSharing_5
Different Plmn Id must be assigned against different csFlexService instances.
Different Plmn Id must be assigned against different psFlexService instances.
Each MSC/MGC is identified by one csCoreNetwAccess instance in the range [0 23], each SGSN is identified by
one psCoreNetwAccess instance in the range [0 23].

Rule: IuTEG_utranSharing_6
Different csCoreNetwAccess instance values must be assigned to MSC/MGC whether or not they belong to the same Plmn.
Different psCoreNetwAccess instance values must be assigned to SGSN whether or not they belong to the same Plmn.
Remark: the RNC supports up to 24 MSC/MGC and up to 24 SGSN, this capacity is spread over the different Plmn
sharing the utran.
Per Plmn sharing the utran, the csCoreNetwAccess component instance(s) is/are assigned to one csFlexService, and
the psCoreNetwAccess component instance(s) is/are assigned to one psFlexService.

Rule: IuTEG_utranSharing_7
The csCoreNetwAccess component must be configured with the same PLMN Id as the csFlexService component it is assigned to.
The psCoreNetwAccess component must be configured with the same PLMN Id as the psFlexService component it is assigned to.

Rule: IuTEG_utranSharing_8
One csCoreNetwAccess instance must not be assigned to two different csFlexService instances.
One psCoreNetwAccess instance must not be assigned to two different psFlexService instances.

For each PLMN sharing the RNC, the Vvalue and NRI mapping tables are configured under the associated csFlexService and psFlexService components.
The RNC allows configuration of one Vvalue translationTable, one NRI translationTable and one NRI length per PLMN sharing the RNC.
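A configuration check enforcing rules IuTEG_utranSharing_5 to _8 on the CS side could look like the following sketch; the dict model of the csFlexService/csCoreNetwAccess components is purely illustrative, not an actual RNC data model.

```python
def check_cs_flex_config(flex_services):
    """flex_services: {flexInst: {"plmn": plmnId, "accesses": {accessInst: plmnId}}}"""
    plmn_ids = [fs["plmn"] for fs in flex_services.values()]
    # rule 5: a different Plmn Id per csFlexService instance
    assert len(plmn_ids) == len(set(plmn_ids)), "rule 5 violated"
    seen = set()
    for fs in flex_services.values():
        for inst, plmn in fs["accesses"].items():
            # rule 7: same PLMN Id as the csFlexService it is assigned to
            assert plmn == fs["plmn"], "rule 7 violated"
            # rules 6/8: a csCoreNetwAccess instance is used only once
            assert inst not in seen, "rule 6/8 violated"
            seen.add(inst)
    return True
```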
Example: Four PLMN sharing the RNC, one CS coreNetwork node and one PS coreNetwork node per PLMN, RNC
configuration (no IuFlex):



PLMN    CsFlexServ          CS CN Domain
PLMN A  csCoreNetwAccess/0  IucsIf/0  MSC PC
PLMN B  csCoreNetwAccess/1  IucsIf/1  MSC PC
PLMN C  csCoreNetwAccess/2  IucsIf/2  MSC PC
PLMN D  csCoreNetwAccess/3  IucsIf/3  MSC PC

PLMN    psFlexServ          PS CN Domain
PLMN A  psCoreNetwAccess/0  IupsIf/0  SGSN PC
PLMN B  psCoreNetwAccess/1  IupsIf/1  SGSN PC
PLMN C  psCoreNetwAccess/2  IupsIf/2  SGSN PC
PLMN D  psCoreNetwAccess/3  IupsIf/3  SGSN PC

Example: Four PLMN sharing the RNC, several CS coreNetwork nodes and several PS coreNetwork nodes per
PLMN, RNC configuration (with IuFlex):
CS Domain:

Operator A (csFlexService/0, PlmnId A, NRIlength):
NRImapping tab:
  #  NRI  CS CN address
  1  X    csCoreNetwAccess/0  iucsIf/0  Msc1 PC
  2  Y    csCoreNetwAccess/1  iucsIf/1  Msc2 PC
  3  Z    csCoreNetwAccess/2  iucsIf/2  Msc3 PC
Vmapping tab:
  #  v        CS CN address
  1  000-299  csCoreNetwAccess/0  iucsIf/0  Msc1 PC
  2  300-699  csCoreNetwAccess/1  iucsIf/1  Msc2 PC
  3  700-999  csCoreNetwAccess/2  iucsIf/2  Msc3 PC

Operator D (csFlexService/3, PlmnId D, NRIlength):
NRImapping tab:
  #  NRI  CS CN address
  1  X    csCoreNetwAccess/13  iucsIf/21  Msc21 PC
  2  Y    csCoreNetwAccess/14  iucsIf/22  Msc22 PC
  3  Z    csCoreNetwAccess/15  iucsIf/23  Msc23 PC
Vmapping tab:
  #  v        CS CN address
  1  000-299  csCoreNetwAccess/13  iucsIf/21  Msc21 PC
  2  300-699  csCoreNetwAccess/14  iucsIf/22  Msc22 PC
  3  700-999  csCoreNetwAccess/15  iucsIf/23  Msc23 PC

PS Domain:


Operator A (psFlexService/0, PlmnId A, NRIlength):
NRImapping tab:
  #  NRI  PS CN address
  1  X    psCoreNetwAccess/0  iupsIf/0  Sgsn1 PC
  2  Y    psCoreNetwAccess/1  iupsIf/1  Sgsn2 PC
  3  Z    psCoreNetwAccess/2  iupsIf/2  Sgsn3 PC
Vmapping tab:
  #  v        PS CN address
  1  000-299  psCoreNetwAccess/0  iupsIf/0  Sgsn1 PC
  2  300-699  psCoreNetwAccess/1  iupsIf/1  Sgsn2 PC
  3  700-999  psCoreNetwAccess/2  iupsIf/2  Sgsn3 PC

Operator D (psFlexService/3, PlmnId D, NRIlength):
NRImapping tab:
  #  NRI  PS CN address
  1  X    psCoreNetwAccess/13  iupsIf/21  Sgsn21 PC
  2  Y    psCoreNetwAccess/14  iupsIf/22  Sgsn22 PC
  3  Z    psCoreNetwAccess/15  iupsIf/23  Sgsn23 PC
Vmapping tab:
  #  v        PS CN address
  1  000-299  psCoreNetwAccess/13  iupsIf/21  Sgsn21 PC
  2  300-699  psCoreNetwAccess/14  iupsIf/22  Sgsn22 PC
  3  700-999  psCoreNetwAccess/15  iupsIf/23  Sgsn23 PC

Example of utran network with utranSharing activated:

(Diagram: a shared RNC, composed of the logical RNC of Plmn1 and Plmn2, serves UE through shared NodeB over atm Iub; on the Iub, the Transport resources are common to both operators unless bwPools are configured per PLMN. Operator A, with its own MNC, reaches its core nodes MGC1, MGw1..MGw10 (Mc, Nc), MSC (NRI = 1) and SGSN 1 (NRI = 1); operator B, with its own MNC, reaches MGC2, MGw (NRI = 2), MSC (NRI = 2) and SGSN 2 (NRI = 2). The Iu CP & UP vcc are dedicated resources per Plmn, whereas the Iur atm resources towards neighbor RNC are shared.)




Figure 3-82 Utran network with utranSharing activated

3.11 RNC HYBRID

The hybrid RNC is populated with both the 16pOC3/Stm1 FP and the 4pGE FP and provides:
- Both atm and ip on the Iub interface,
- Only atm on the IuCS and Iur interfaces,
- Both atm and ip on the IuPS interface,
- The IuCBS and IuLS are transmitted over atm.
The features related to the atm interfaces are described in section 3.10 RNC ATM.
This section covers the hybrid RNC features related to the IP interface.
In the current release, the IuPS interface may optionally use the services of IP.
The RNC supports simultaneously an IuPS over IP interface to some SGSN nodes and an IuPS over atm interface to some other SGSN nodes:

Rule: IuTEG_RNC IP_1


Per peer SGSN, both the userPlane and the controlPlane traffic must use the same Transport, either IP or Atm.
The IuCS interface still uses the services from atm.

3.11.1 FP
3.11.1.1

4PGE FP

See [R1].

3.11.2 VIRTUAL ROUTER RNC COMPOSITION


Reference: [R161]
Two RNC virtualRouter compositions are considered: the segmentedVR and consolidated VR.
The RNC segmented VR composition is considered as the default one whereas the consolidated VR composition is
considered as an alternative.
- The segmented VR composition consists of handling the Iub userPlane traffic and the IuPS traffic in different VR.
Furthermore the IuPS CP and IuPS UP traffic may optionally be handled by different VR.
The RNC is configured with 6 VR:
- OAM VR: 1 VR dedicated to the OAM traffic,
- Iub UP: 1 VR dedicated to the Iub userPlane traffic,
- IuPS: either
- 1 VR dedicated to both the IuPS userPlane and controlPlane traffic, or
- 2 VR: 1 VR dedicated to the IuPS userPlane and 1 VR dedicated to the controlPlane traffic,
- LS: 1 VR dedicated to the locationServices traffic,
- CBS: 1 VR dedicated to the cellBroadcast traffic,
- InternalTraffic: 1 VR dedicated to the RNC internal traffic.
Example of VR instance number:
- VR0 dedicated to the OAM traffic,
- VR5 dedicated to the Iub userPlane traffic,
- VR1 dedicated to the IuPS userPlane and controlPlane traffic,
- VR3 dedicated to the locationServices traffic,
- VR4 dedicated to the cellBroadcast traffic,
- VR2 dedicated to the RNC internal traffic.

- The consolidated VR composition consists of assigning one single VR to both the Iub userPlane traffic and the IuPS traffic.
Indeed the RNC is configured with 5 VR:
- 1 VR dedicated to the OAM traffic,
- 1 VR dedicated to the IuPS userPlane, controlPlane traffic and the Iub userPlane traffic,
- 1 VR dedicated to the locationServices traffic,
- 1 VR dedicated to the cellBroadcast traffic,
- 1 VR dedicated to the RNC internal traffic.
Example of VR instance number:
- VR0 dedicated to the OAM traffic,
- VR1 dedicated to the IuPS userPlane, controlPlane traffic and the Iub userPlane traffic,
- VR3 dedicated to the locationServices traffic,
- VR4 dedicated to the cellBroadcast traffic,
- VR2 dedicated to the RNC internal traffic.
Note: this VR instance number example is used within the TEG to identify each virtualRouter.
Remark: For the both VR compositions, the oam, the internal, the locationServices and cellBroadcast traffic are
served by their own separate VR.
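The two compositions can be summarised as a trafficType-to-VR map, using the example VR instance numbering above (the traffic keys below are illustrative labels, not RNC attributes):

```python
# trafficType -> VR instance, per composition (example numbering from the TEG)
SEGMENTED = {"oam": "VR0", "iubUP": "VR5", "iups": "VR1",
             "ls": "VR3", "cbs": "VR4", "internal": "VR2"}
CONSOLIDATED = {"oam": "VR0", "iubUP": "VR1", "iups": "VR1",
                "ls": "VR3", "cbs": "VR4", "internal": "VR2"}

def vr_for(traffic_type, composition):
    """Return the VR instance serving a given traffic type."""
    return composition[traffic_type]
```

In the consolidated composition, the Iub userPlane and IuPS traffic resolve to the same VR1, which is the defining difference between the two modes.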

Rule: IuTEG_RNC IP VR_1


In the segmentedVR configuration, one or two VR is/are dedicated to the IuPS IP traffic.
In the consolidatedVR configuration, one common VR routes both the Iub userPlane and the IuPS traffic.
Besides, one VR is dedicated to the Iub OAM traffic.


(Diagram, three panels: RNC segmented VR (default) with Iub UP VR 5 and IuPS VR 1 on the hybrid Iub; RNC segmented VR (alternative) with Iub UP VR 5 plus separate IuPS UP VR and IuPS CP VR; RNC consolidated VR with one common IuPS & Iub UP VR 1. In every panel, OAM VR 0, LS VR 3, CBS VR 4 and internal VR 2 are also present, attached to the 4pGE and 16pOC3 FPs.)
Figure 3-83, RNC VR composition

3.11.2.1

SEGMENTED VS CONSOLIDATED VR:

Segmented VR advantages:
- Provides more flexibility to operators,
- Makes node maintenance and debugging easier,
- Allows separation of the IuPS CP and IuPS UP traffic into two different VLAN.
Segmented VR drawbacks:
- May consume more IP addresses,
- Due to the limitation that a maximum of 2 VRs can support PDR, it may be preferable to reduce the number of VRs.

3.11.2.2

TWO VR DEDICATED TO THE IUPS:

When RNC PDR is activated for the IuPS traffic and separation of the IuPS CP and IuPS UP traffic into different Vlan is required, then 2 different VR must be configured in the RNC:
- one VR in charge of handling the IuPS CP traffic and
- a second VR in charge of handling the IuPS UP traffic.

Rule: IuTEG_RNC IP VR_2


Two IuPS VR must be configured when PDR is enabled and the IuPS CP and UP traffic are separated into two different Vlan.
Assuming a PDR set with two IP paths, the PDR active path is associated to one GE link, the PDR backup path is
associated to a second link. On each GE link, two Vlan are configured: one Vlan associated to the IuPS CP traffic
and a second Vlan associated to the IuPS UP traffic:


RNC IuPS CP VR routingTable (VR /x, IP, Static):
Route = 0.0.0.0, protected = y
  NextHop = IP@ 3-2 /30, metric = 1
  NextHop = IP@ 4-2 /30, metric = 2

RNC IuPS UP VR routingTable (VR /x, IP, Static):
Route = 0.0.0.0, protected = y
  NextHop = IP@ 1-2 /30, metric = 1
  NextHop = IP@ 2-2 /30, metric = 2

(Diagram: the RNC IuPS CP VR and IuPS UP VR each have two protocolPorts, PP1 and PP2, over the GE ports of FP0 and FP1; on each GE link two Vlan are configured, one carrying the IuPS CP traffic and one carrying the IuPS UP traffic, towards routers R1 and R2 and the SGSN backhaul; Path 1 is the PDR active path, Path 2 the backup path.)
Figure 3-84 IuPS CP and UP separation into 2 Vlan

3.11.3 LOCALMEDIA
Reference: [R161]
The localMedia is a RNC IP media functionality which provides a functional interface to the PMC that are unaware
of the virtualRouter. It provides IP forwarding capability to the PMC on the PSFP/DCPS FP.
The localMedia is centrally located on the CP.
There is one instance of the localMedia within the RNC.
The localMedia supports up to 32 interfaces each identified by an instance in the range [0, 31].
Moreover a trafficType value is assigned to the LocalMedia interface which identifies the nature of the traffic
supported by the interface.
TrafficType values:
- trafficType = iubUPlane: LocalMedia interface dedicated to Iub User Plane traffic,
- trafficType = iubUPinternal: LocalMedia interface dedicated to the PMC-TMU, application: UMTS
Proprietary loopBack on Iub routes,
- trafficType = ss7CPlane: LocalMedia interface dedicated to IuPS Control Plane traffic,
- trafficType = rnc: LocalMedia interface dedicated to the userPlane traffic (pmc-rab),
- trafficType = oam: LocalMedia interface dedicated to the RNC Oam traffic.
- trafficType = ls: locationService,
- trafficType = cbs: cellBroadcastService,

Rule: IuTEG_localMedia_1
- TrafficType = rnc must be configured against the localMedia interface assigned to the UMTS IuPS UP flow.
- TrafficType = ss7CPlane must be configured against the localMedia interface assigned to
UMTS IuPS CP IP based flow.
There is no constraint on the LocalMedia interface instance numbering. The following LocalMedia interface
instance is given as an example:
- The LocalMedia interface 0, trafficType = rnc is assigned to UMTS IuPS UP flow,
- The LocalMedia interface 7, trafficType = ss7CPlane is assigned to UMTS IuPS CP IP based flow.
Furthermore a localMedia interface is linked to a VR protocolPort that has an IpLogicalInterface defined.
The localMedia supports 3 IP address assignment policies:
- Fixed IP@ assignment policy,
- Movable IP@ assignment policy,
- Configurable IP@ assignment policy.


They are described in the addressingTEG [R4].
An IP subnet is configured against the VR PP linked to a localMedia interface supporting the fixed IP@ or movable IP@ assignment policy. The IP@ from the VR PP subnet are assigned to each PMC involved in the traffic handled by the localMedia interface trafficType, according to the assignment policy.
The two figures below represent the IP and ATM paths and the localMedia parameters within a hybrid RNC, for the segmented VR on one side and the consolidated VR on the other side:

(Diagram, segmented VR: the RNC PMC (TMU/NI with their RAB, PDC, OMU, PC) reach the localMedia interfaces If/0 (tT: rnc), If/7 (tT: ss7CP), If/1 and If/2 (internal, oam), If/3 (tT: LS) and If/4 (tT: CBS), which are linked through MPE protocolPorts to VR0 (oam), VR1 (IuPS), VR2 (internal), VR3 (LS) and VR4 (CBS). The IuPS CP /IP and UP /IP flows leave the RNC through the GigE ports towards the SGSN, option 1 being one (V)LAN for both IuPS CP & UP; the UP /atm traffic uses the 16pOC3 FPs. The CBS traffic is handled by the OMU, the locationService (IuPC) traffic by the NI.)
Figure 3-85 Segmented VR


(Diagram, consolidated VR: one common VR1 carries the IuPS CP & UP and the Iub traffic; the localMedia interfaces If/0 (tT: rnc), If/7 (tT: ss7CP), If/5 (tT: iubUP), If/6 (tT: iubUPint), If/1 (internal), If/2 (oam), If/3 (tT: LS) and If/4 (tT: CBS) are linked through MPE protocolPorts to VR0 (oam, towards the OMC), VR1 (IuPS, Iub, IuPC, IuBC over one (V)LAN), VR2 (internal), VR3 (LS) and VR4 (CBS). The IP traffic reaches the SGSN and NodeB through the GigE ports, the atm traffic through the 16pOC3 FPs.)
Figure 3-86 Consolidated VR
The IuPS UP over IP traffic is routed by the VR1 to the PMC-RAB, whereas the IuPS CP over IP traffic is routed by the VR1 up to the PDC-Sctp and by the VR2 up to the PMC-NI.

3.11.4 QOS
Reference: [R161]

QOS mapping table applying to IuPS interface over IP:


UMTS traffic to DSCP mapping:

Plane  TrafficClass    ARP      THP  DSCP
CP     Signaling       na       na   CS6
UP     Conversational  1, 2, 3  na   EF
UP     Streaming       1        na   AF41
UP     Streaming       2        na   AF42
UP     Streaming       3        na   AF43
UP     Interactive     1        1    AF31
UP     Interactive     2        1    AF21
UP     Interactive     3        1    AF11
UP     Interactive     1        2    AF32
UP     Interactive     2        2    AF22
UP     Interactive     3        2    AF12
UP     Interactive     1        3    AF33
UP     Interactive     2        3    AF23
UP     Interactive     3        3    AF13
UP     Background      1, 2, 3  na   DE

DSCP to RNC queue mapping:

DSCP              schedulingClass  trafficClass  EP  MBG%  QueueSize
-                 critical         -             -   na    100 K
CS6               network          6             1   na    300 K
EF                premium          5             2   50    300 K
AF41, AF42, AF43  platinium        -             -   25    300 K
AF31, AF32, AF33  gold             3             4   10    7000 K
AF21, AF22, AF23  silver           2             5   7     4200 K
AF11, AF12, AF13  bronze           1             6   5     3000 K
DE                Standard         0             7   3     2200 K
(MBG% total: 100)

DSCP           dropPrecedence
EF, AFx1, CS6  Low
AFx2           Medium
AFx3, DE       High
with x = (1, 2, 3, 4)
Table 3-4, IuPS qos mapping
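The DSCP-to-dropPrecedence rule at the bottom of Table 3-4 can be sketched as:

```python
def drop_precedence(dscp):
    """DSCP -> dropPrecedence mapping from Table 3-4 (x = 1, 2, 3, 4)."""
    if dscp in ("EF", "CS6") or (dscp.startswith("AF") and dscp.endswith("1")):
        return "Low"
    if dscp.startswith("AF") and dscp.endswith("2"):
        return "Medium"
    return "High"   # AFx3 and DE
```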

3.11.5 PDR
Reference: [R161]
The Protected Default Route (PDR) is a layer3 sparing mechanism which guarantees a traffic disruption of less than one second; this time includes the failure detection and the traffic diversion.
The PDR is a protection against port and line failures.
The PDR feature is supported for IP flows over the 4pGE FP.
Rule: IuTEG_PDR_1
The PDR is available only for IP route with nextHops over the 4pGE FP.
The PDR may be used for protection as long as:
- No routing protocol running on the RNC (Vr Ip Ospf ecmpStatus = disabled),
- The ECMP is deactivated (Vr Ip Static maxEcmpNextHops = 1)

Rule: IuTEG_PDR_2
When PDR is enabled, maxEcmpNextHops must be set to 1, Vr Ip Ospf ecmpStatus must be
disabled.
Only the VR default route (IP@=0.0.0.0, 0.0.0.0) can be used for the Protected Default Route.
Rule: IuTEG_PDR_3
If PDR is used, the protection route must be the static default route.
The VR default route acts as the protection default route since the protected attribute is set to yes (Vr Ip Static
Route protected).
Rule: IuTEG_PDR_4
For enabling the PDR, the RNC attribute Vr Ip Static Route protected must be set to yes
against the default route.

ALU confidential

UMT/IRC/APP/11676

07.02 / EN

Standard

11/06/2009

Page 133/162

Iu Transport Engineering Guide


The RNC may be configured with up to 2 VR with protected default route:
Rule: IuTEG_PDR_5
The RNC supports up to two VR with PDR enable.
The VR routing table may consist of both protected default routes and more specific routes; the more specific route is selected first when forwarding IP packets. On more specific route failure, its traffic is diverted to the protected default route (if available).
A protected default route provides protection against more specific route failure and against protected default route nextHop failure:
- In case of more specific route failure, its traffic is diverted to the protected default route most preferred nextHop,
- In case of protected default route nextHop failure, its traffic is diverted to a less preferred protected default route nextHop (if present).
For that purpose, several nextHops should be assigned to the protected default route.

RNC Iub VR routingTable (VR /x, IP, Static):
Route = 0.0.0.0, protected = y
  NextHop = IP@ 1-2 /30, metric = 1
  NextHop = IP@ 2-2 /30, metric = 2
Route = X.X.X.X
  NextHop = IP@ 3-2 /30

(Diagram: the RNC VR has three protocolPorts, PP1 on FP1 and PP2/PP3 on FP2, over GE links; Path 1 leads to router R1 (IP@ 1-2), Path 2 to router R2 (IP@ 2-2) and Path 3 to router R3 (IP@ 3-2), towards the NodeB and SGSN backhaul.)
Figure 3-87 PDR with 3 routes and 3 routers
Comments:
Assuming metric=1 assigned to the RNC VR PP1 and metric=2 assigned to the RNC VR PP2:
Under normal conditions:
- The RNC forwards the IP packets with DA = x.x.x.x to router 3,
- The RNC forwards the IP packets with DA ≠ x.x.x.x to router 1.
Under Path1 failure:
- The RNC forwards the IP packets with DA ≠ x.x.x.x to router 2.
Under Path3 failure:
- If the RNC is aware of the path3 failure, it forwards the IP packets with DA = x.x.x.x to router 1.
In this case the ICMP Heartbeat is not a trigger for PDR, since the ICMP heartbeat is not supported on the more specific IP route (route ≠ 0.0.0.0).

ALU confidential

UMT/IRC/APP/11676

07.02 / EN

Standard

11/06/2009

Page 134/162

Iu Transport Engineering Guide

A protected static default route should be configured with several nextHops so as to be able to divert the traffic to a less preferred PDR nextHop in case of failure of the most preferred PDR nextHop.
Whatever the number of nextHops assigned to the protected default route, only one nextHop, the most preferred, is active for forwarding the IP traffic. On failure of the most preferred nextHop, the IP traffic is diverted to one of the less preferred nextHop(s).

Rule: IuTEG_PDR_6
Up to 4 nextHops may be assigned to a protected default route.
Each PDR nextHop belongs to different subnets as per the VR PP definition.
To provide maximum traffic protection, the different PDR nextHops should transmit over different paths when
available:
Rule: IuTEG_PDR_7
Each PDR nextHop should be configured over different Ethernet links from different 4pGE FP
and should be connected to different adjacent IP routers.
The more specific route nextHop and the PDR nextHop should be configured over different
Ethernet links from different 4pGE FP and should be connected to different adjacent IP routers.
The duration of the failure detection and of the PDR IP traffic diversion from the failed nextHop to an active nextHop is less than 1 second (with the exception of the ICMP heartbeat trigger).
Preferred nextHop selection:
The metrics assigned to the different protected default route nextHops (Vr Ip Static Route Nexthop) are used to determine the preferred nextHop.
The highest preference is given to the nextHop with the lowest metric value.
Revertive:
- If all the protected default route nextHops have the same metric, the protection is non-revertive.
When the previously failed nextHop comes back into service, the traffic is not switched back to the initial nextHop.
- If the protected default route nextHops have different metric values, the protection is revertive.
When the previously failed nextHop comes back into service, the traffic is switched back to the initial nextHop set with the lowest metric.
Remark:
If several protected default route nextHops have the same metric, the port that comes into service earlier has the higher preference. The nextHop preference remains unchanged as long as the associated port status remains in service (unlocked, enabled).
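The preferred-nextHop selection and the non-revertive tie-break described above can be sketched as follows; the NextHop model below is an assumption for illustration, not the RNC data model.

```python
from dataclasses import dataclass

@dataclass
class NextHop:
    addr: str
    metric: int                # Vr Ip Static Route Nexthop metric
    in_service_since: float    # when the associated port came into service
    up: bool = True

def preferred_next_hop(next_hops):
    """Lowest metric wins; among equal metrics the port that came into
    service earlier wins (non-revertive behaviour)."""
    candidates = [nh for nh in next_hops if nh.up]
    if not candidates:
        return None
    return min(candidates, key=lambda nh: (nh.metric, nh.in_service_since))
```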
Triggers:
- Layer 1 failure (fiber cut, FP failure),
- administrative locking of: GE port, VLAN, VR PP, IpPort (near and far End)
- The ICMP HeartBeat is a trigger for PDR; nevertheless this trigger doesn't guarantee a traffic disruption of less than 1 second.
PDR/ARP:
To reduce the outage duration, the RNC may perform the ARP operation at RNC setup for both the protected and unprotected routes present in the VR routing table, before they are involved in the traffic forwarding.
This behavior occurs when the RNC attribute Vr Ip preConfigFwdPath is set to enable.

Rule: IuTEG_PDR_8
The preConfigFwdPath attribute must be enabled for both protected and unprotected routes to
resolve ARP prior to route installation.


General Remarks on PDR:
- In consolidated VR mode, both the Iub and IuPS traffic are routed through the same VR; the protected default route then provides IP flow protection for both the Iub and IuPS traffic.
- Even if one single PDR path is active for transmission, the RNC handles the traffic received on all the
PDR paths (active and alternate paths). Therefore the adjacent IP router is allowed to distribute the
upLink traffic on the different PDR paths according to its own policies.

3.11.6 ICMP HEARTBEAT


Reference: [R161]
As long as no IP routing protocol is running on the RNC, the way to detect a failure on the interface consists of activating the ICMP Heartbeat.
The ICMP heartbeat checks the continuity of the IP path between two adjacent IP nodes transparently through the
layer2 nodes.
Rule: IuTEG_ICMP_0
The RNC supports the ICMP Heartbeat on the default route (0.0.0.0) only; it is not supported on more specific routes.
Since the ICMP Heartbeat is a trigger for RNC PDR:
Rule: IuTEG_ICMP_1
The ICMP heartbeat should be activated in the RNC.
Besides, it should be activated in the adjacent IP router if supported.
The ICMP HeartBeat is activated per IP route through the RNC attribute: VR Ip Static Route heartbeat.
This attribute is taken into consideration by the RNC once the heartBeat timeout is set (RNC attribute: Vr Ip Static heartbeatDeadInterval).
The minimum heartBeat deadInterval should be set so as to reduce the failure detection time:

Rule: IuTEG_ICMP_2
The ICMP heartbeat deadInterval must be set to 3 seconds.
The ICMP Heartbeat enabled on an IP route applies to all the nextHops involved in the IP route traffic forwarding.
The ICMP Heartbeat packet Dscp value is configured (RNC attribute: phbGeneralSource).
Failure time detection:
The ICMP echo request is sent every 1 second.
Once the ICMP echo reply is received, the heartbeatState of the nextHop is declared up.
If no ICMP response has been received for heartbeatDeadInterval seconds (range: [3, 60] seconds), the heartbeatState of the nextHop is declared down. As a result, the heartbeat failure detection time is at least 3 seconds after the ICMP echo request is sent.
- When no ICMP echo reply is received for a nextHop used by an unprotected staticRoute, the nextHop is still considered reachable, but is less preferred than an alternate nextHop with a valid heartbeat.
- When no ICMP echo reply is received for a nextHop used by a protected staticRoute (PDR route), the nextHop is considered unreachable. The traffic destined to the failed nextHop is diverted to another nextHop used by the protected static route.
Once the heartbeat status is declared down, a single successful ICMP poll/response must be completed before the heartbeat status of the nextHop router is considered up again:


RNC NH

IP router
ICMP EchoRequest
ICMP EchoReply

Periodicity = 1 s

ICMP EchoRequest
ICMP EchoReply

Periodicity = 1 s

ICMP EchoRequest

ICMP EchoRequest
ICMP EchoRequest
ICMP EchoRequest

deadInterval = 3 s

NH down

ICMP EchoRequest
ICMP EchoReply

NH up

TEG
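The heartbeat up/down logic above can be sketched as a small state machine, with timing simulated as one call per 1 s polling interval:

```python
class Heartbeat:
    """ICMP heartbeat state of one nextHop, polled once per second."""
    def __init__(self, dead_interval=3):
        self.dead_interval = dead_interval   # heartbeatDeadInterval, seconds
        self.missed = 0
        self.state = "up"

    def on_poll(self, reply_received):
        if reply_received:
            # a single successful poll/response brings the nextHop back up
            self.missed = 0
            self.state = "up"
        else:
            self.missed += 1
            if self.missed >= self.dead_interval:
                self.state = "down"
        return self.state
```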
The ICMP Heartbeat allows the RNC to be aware of a failure occurring on a layer 2 interface not directly connected to the RNC:

(Diagram, ICMP Heartbeat fault detection: the RNC exchanges ICMP Echo request/reply over GE with the adjacent router through an intermediate Ethernet bridge; a failure on the bridge-to-router link, not directly connected to the RNC, is thus detected.)

Remark: The preferred PDR nextHop is the one with an up heartbeat state and the lowest assigned metric.
Miscellaneous:
- TTL: The RNC inserts the ICMP heartbeat within an IP packet set with TTL=64,
- DSCP: The RNC marks the IP packet containing the ICMP heartbeat according to the attribute value: Vr Dsd phbGeneralSource (phbG).

3.11.7 SIGTRAN
The RNC supports simultaneously:
- SIGTRAN interfaces to some SGSN and
- SS7 over ATM interfaces to some other SGSN.
The RNC doesn't support a SGSN providing both SIGTRAN and SS7 over ATM interfaces.

Rule: IuTEG_hybridRNC_Sigtran_1
Each RNC PDC (SCTP endPoint) must be configured with the same DSCP value.
Default DSCP=AF41.
As long as the different SCTP associations are distributed over different PDC SCTP endPoints and since the DSCP is configured per PDC SCTP endPoint, it is suggested to configure each PDC SCTP endPoint with the same DSCP value. Indeed, each SGSN then receives IP packets marked with the same DSCP value for the different SCTP associations initiated on different PDC.
Remark: All the IuPS signaling packets transmitted by the RNC are marked with the same DSCP value. Even if several SGSN (IuFlex/utranSharing) are connected to the RNC, the IuPS signaling packets sent to the different SGSN are marked with the same DSCP value.

3.11.7.1 M3UA

M3UA endPoint:
The RNC PMC-NI is a M3UA endPoint node within the SS7 over IP domain.
The M3UA endPoint is identified by one NI/PC value.
The RNC is configured with one single own PC, involved in the IuPS over IP interface and in the IuCS and Iur over atm interfaces.
ASP/IPSP:
The RNC acts either as an:
- ASP (ApplicationServerProcess) when the SGSN is connected to the classical SS7 domain; in this case the IuPS traffic transits through a SGw (signallingGatewayProcess), or
- IPSP (IPServerProcess) when the SGSN is connected to the SS7 over IP domain.
The RNC may act as an IPSP, an ASP, or both at the same time, according to the setting of the MML peerM3UAEntity/peerType.

[Figure: protocol stacks for the two RNC roles. As an ASP, the RNC (SCCP/M3UA/SCTP/IP) connects through a SGW to a SEP (SCCP/MTP3/MTP2/L1) in the classical SS7 domain. As an IPSP, the RNC (SCCP/M3UA/SCTP/IP) connects directly to the peer IPSP in the SS7 over IP domain.]
Figure 3-88, RNC M3UA endPoint roles
SGw PC:
When the IuPS uses IP services on the RNC side whereas it uses atm services on the SGSN side, a SGw is inserted between the RNC and the SGSN and the RNC acts as an ASP.
The RNC and the SGw may be identified either by the same PC or by different PCs within the same NI:
- If both the RNC and the SGw are identified by the same PC, then the RNC can be connected to one single SGw, whatever the amount of peer SGSNs. The network works in associatedMode.



[Figure: associatedMode. The RNC (PC1) is connected over IP to a single SGw identified by the same PC1; the SGw connects over ATM to the SGSNs (PC10, PC11, PC12).]
Figure 3-89, associatedMode
- If the RNC and the SGw are identified by two different PCs, then the SGw node provides both functions: SGw and STP. The RNC is therefore inserted in a quasiAssociatedMode SS7 network. The RNC may be connected to up to two SGw/STP:
[Figure: quasiAssociatedMode with two SGw. The RNC (PC1) is connected over IP to two SGw/STP nodes (PC2 and PC3), which reach the SGSNs (PC10, PC11, PC12).]
Figure 3-90, quasi associatedMode
The SGw PC is not configured in the RNC.
The RNC supports up to two routes to reach the SGw:
Rule: IuTEG_hybridRNC_Sigtran_2
The RNC acting as an ASP supports up to 2 SGw on the SIGTRAN interface.

The amount of SCTP associations on the ASP/SGw interface is independent of the amount of MTP3 SL on the
SGw/SEP interface.
M3UA HeartBeat:
The M3UA heartBeat is supported by the RNC. It is activated as an option.

Rule: IuTEG_hybridRNC_Sigtran_3
As long as the SCTP heartBeat is enabled on the RNC, it is not useful to enable the RNC M3UA heartBeat.
Whether or not the M3UA heartBeat is activated in the RNC, the RNC responds to the M3UA heartBeat requests from the peer M3UA endPoint.
M3UA LoadSharing:
The M3UA loadSharing is based on multiple SCTP associations running on different RNC processors (PDC SCTP), for card/board failure protection.
The M3UA traffic is loadshared over the different SCTP associations based on the SCTP association availability and the SLS field.
The M3UA traffic from a failed SCTP association is diverted to an available SCTP association.


The M3UA loadSharing distributes the traffic over the different SCTP associations on a per-call basis.
Remark: The M3UA loadSharing is independent from the multiHoming.
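The availability- and SLS-based distribution described above can be illustrated with a minimal sketch (hypothetical function name; the RNC's actual selection algorithm is not specified here beyond availability and the SLS field):

```python
def select_association(sls, associations):
    """Pick one SCTP association for an M3UA message.

    `associations` is a list of (name, available) pairs. Traffic is spread
    over the available associations using the SLS field, so the share of a
    failed association is diverted to the remaining available ones.
    Illustrative sketch only, not the RNC's real algorithm.
    """
    available = [name for name, up in associations if up]
    if not available:
        raise RuntimeError("no SCTP association available")
    return available[sls % len(available)]
```

For example, with four associations of which one has failed, the traffic is spread over the three remaining ones only.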
[Figure: M3UA loadSharing. Two singleHomed SCTP associations run between the RNC PMC-NI M3UA endPoint (SCTP endPoints IP@1 and IP@2) and the SGSN (SCTP endPoints IP@10 and IP@20), each association crossing an IP backbone.]
Figure 3-91, M3UA loadSharing

3.11.7.2 SCTP

SCTP endPoint:
Within the RNC, each PDC is a SCTP endPoint. The RNC is then composed of up to 8 SCTP endPoints.
A SCTP endPoint is identified by the transport address, combination of one IP address and one SCTP port
number.

Rule: IuTEG_hybridRNC_Sigtran_4
Each RNC PDC acting as a SCTP endPoint must be configured with its own SCTP endPoint IP@ and
SCTP port.
The peer SCTP endPoint is either the SGSN, if the RNC acts as an IPSP, or the SGP (one entity of the SGw), if the RNC acts as an ASP.

Rule: IuTEG_hybridRNC_Sigtran_4
The RNC supports:
- Up to 19 peer SCTP endPoints.
- Up to 8 peer SCTP endPoints per peer M3UA endPoint (eg: SGSN),

SCTP Association:
Only one SCTP Association is setup between two SCTP endpoints at any time.
A SCTP Association is identified by the transport addresses, combination of source and destination IP
addresses and source and destination SCTP port numbers.
Several SCTP associations may be established between two M3UA endPoints.

Rule: IuTEG_hybridRNC_Sigtran_5
At least four and up to 8 SCTP associations per peer M3UA endPoint must be configured on the RNC.
The RNC supports:
- Up to 8 SCTP Associations per peer SCTP endPoint,
- Up to 19 SCTP Associations configured on a local SCTP endPoint (PDC),

Remark:
- The MML indicates up to 64 SCTP Associations per PDC; nevertheless, since up to 19 peer SCTP EPs and a single association between two SCTP EPs are supported, each PDC can be configured with up to 19 SCTP Associations.
- Up to 19*8=152 SCTP associations are configured on the RNC, since up to 19 peer SCTP endPoints and up to 8 SCTP associations per peer SCTP endPoint are supported.
- It is suggested to configure at least four, and not two, SCTP Associations per peer M3UA endPoint to reduce the traffic load per PDC.
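The limits quoted in this remark can be summarised as a small dimensioning check (a sketch; the constants are the UA6 figures from this section, the function names are hypothetical):

```python
def max_rnc_sctp_associations(peer_eps=19, assoc_per_peer_ep=8):
    """Upper bound on SCTP associations configured on the RNC:
    up to 19 peer SCTP endPoints, and up to 8 SCTP associations
    per peer SCTP endPoint (figures from this section)."""
    return peer_eps * assoc_per_peer_ep

def max_associations_per_pdc(peer_eps=19):
    """A single association exists between two SCTP endPoints at any
    time, so one local PDC (SCTP endPoint) carries at most one
    association per peer SCTP endPoint."""
    return peer_eps
```

This reproduces the 19*8=152 total and the 19-association per-PDC limit stated above.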
On the RNC side, the different SCTP associations serving one peer M3UA endPoint node, must be
assigned to different PDC to provide resiliency in case of PDC failure:

Rule: IuTEG_hybridRNC_Sigtran_6
The different SCTP associations serving the same peer M3UA endPoint node must be distributed
over different PDC.
Furthermore the RNC PSFP/DCPS cards are distributed in two sets of PSFP/DCPS. One set is composed of the even-numbered PSFP/DCPS whereas the second set is composed of the odd-numbered PSFP/DCPS.
The even-numbered and odd-numbered PSFP/DCPS are paired: 2 & 3, 4 & 5, 6 & 7, 10 & 11, 12 & 13.

Rule: IuTEG_hybridRNC_Sigtran_7
The different SCTP associations serving the same peer M3UA endPoint node must be distributed over contiguous PDC: 0, 1, 4, ...
e.g.: two SCTP associations serving one peer M3UA endPoint are assigned to PDC0 and PDC1.
Remark: the PMC-NI resides in PSFP/DCPS 2 and 3 (slots 4 and 5), so no PDC-SCTP resides in these PSFPs.
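Rules Sigtran_6/7 can be sketched as follows; the eligible-PDC list is illustrative (it only reflects the statement that no PDC-SCTP resides in PSFP/DCPS 2 and 3), not an exhaustive hardware map:

```python
# PDCs eligible for SCTP endPoints: the PMC-NI occupies PSFP/DCPS 2 and 3,
# so (per the text) no PDC-SCTP resides there. This ordering is an
# illustrative assumption, not a verified RNC hardware map.
ELIGIBLE_PDCS = [0, 1, 4, 5, 6, 7]

def assign_pdcs(n_associations):
    """Assign the SCTP associations serving one peer M3UA endPoint to
    distinct, contiguous PDCs (rules IuTEG_hybridRNC_Sigtran_6 and _7)."""
    if n_associations > len(ELIGIBLE_PDCS):
        raise ValueError("not enough eligible PDCs")
    return ELIGIBLE_PDCS[:n_associations]
```

With two associations this yields PDC0 and PDC1, matching the example in the rule.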

[Figure: one SCTP association per transport path. Several RNC PDC SCTP endPoints (each with its own CP IP@), behind the PMC-NI M3UA endPoints, are connected across the IP backbone to the SGSN M3UA endPoint; both sides are singleHomed.]
Figure 3-92, SCTP associations
The SCTP Associations are initiated by the RNC (the M3UA client), which is why the RNC is configured with the peer M3UA node IP@ and SCTP port.

Rule: IuTEG_hybridRNC_Sigtran_8


The RNC acting as an IPSP must be configured with the SGSN IP@ and SCTP ports.
The RNC acting as an ASP must be configured with the SGP IP@ and SCTP ports.
SCTP Stream:
Several unidirectional SCTP streams may be established within an SCTP Association.
The RNC currently supports up to two SCTP streams per SCTP Association for handling the M3UA-User traffic: 1 downLink SCTP stream and 1 upLink SCTP stream.
Besides, the RNC supports two more SCTP streams, identified by streamID=0, for handling the M3UA management messages in upLink and in downLink.

Rule: IuTEG_hybridRNC_Sigtran_9
The RNC supports two SCTP streams for handling the M3UA-User traffic, one upLink SCTP stream and one downLink SCTP stream.
SCTP heartbeat:
The RNC SCTP heartbeat is configured with the heartBeatInterval and the Path.Max.Retrans parameters.
On heartBeatInterval expiry the RNC sends the heartBeat signal to the peer SCTP endPoint.
Without an answer from the peer SCTP endPoint, the heartBeat is repeated Path.Max.Retrans times.
HeartBeatInterval minimum value = 3 seconds.
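The resulting worst-case failure-detection time can be approximated as heartBeatInterval + Path.Max.Retrans * RTO; the sketch below assumes a constant RTO for simplicity, whereas a real SCTP stack backs the RTO off between retransmissions (hypothetical function name):

```python
def sctp_detection_time(heartbeat_interval=3.0, path_max_retrans=3, rto=1.0):
    """Approximate worst-case time (seconds) to declare the peer SCTP
    destination inactive: one heartBeatInterval of silence, then
    Path.Max.Retrans HEARTBEAT attempts each waiting one RTO.
    A constant RTO is assumed for simplicity; real stacks back it off."""
    return heartbeat_interval + path_max_retrans * rto
```

With the minimum heartBeatInterval of 3 s, Path.Max.Retrans = 3 and a 1 s RTO, this gives roughly 6 s before the destination is marked inactive.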
[Figure: SCTP heartBeat between an RNC PDC and the peer SCTP endPoint. After heartBeatInterval = 3 seconds without any chunk or HEARTBEAT, the RNC sends a HEARTBEAT and repeats it every RTO, up to Path.Max.Retrans = 3 attempts, before declaring the destination inactive.]
Figure 3-93, SCTP HeartBeat


multiHoming:
A multiHomed SCTP endPoint is represented to its peers as one SCTP port and several IP addresses.
The RNC PDC doesn't support multiHoming; nevertheless the RNC may interwork with a peer SCTP endPoint supporting multiHoming.
Rule: IuTEG_hybridRNC_Sigtran_10
Even if not offering multiHoming, the RNC must be configured with all the IP@ of the peer multiHomed SCTP endPoint.



[Figure: singleHomed RNC SCTP endPoints (one CP IP@ per PDC SCTP, behind the PMC-NI) facing a multiHomed SGSN SCTP endPoint (IP@10/IP@20 and IP@30/IP@40), with a primary path and a secondary path crossing two IP backbones.]
Figure 3-94, xHomed peer SCTP endPoint.

3.11.7.3 RESILIENCY

The resiliency is achieved on the RNC side thanks to the M3UA loadSharing, the PDR and, optionally, the multiHoming feature supported by the SGSN.
Context:
- Two SCTP endPoints in both the RNC and the SGSN (multiHomed SGSN),
- Static routing in the routers and in the RNC,
- The RNC IP routingTable is set in such a way that:
  - Association1 primary path and Association2 primary path go through different FPs and Routers,
  - xHomed primary and backup paths go through different FPs and Routers,
  - PDR primary and backup paths go through different FPs and Routers.

[Figure: SIGTRAN robustness. Two SCTP associations run between the singleHomed RNC side (SCTP 1 behind GigE FP 1, SCTP 2 behind GigE FP 2, one IuPS VR and one M3UA endPoint) and the xHomed SGSN; association 1 takes its primary path through IP Router 1 and association 2 through IP Router 2, each with a backup path through the other router.]
Figure 3-95 SIGTRAN Robustness

Failure cases:
- IP Router 1 failure & RNC FP failure: OK, either:
  - The RNC diverts the traffic to the xHomed backup path, or
  - The RNC PDR diverts the traffic to the PDR second nextHop (if default routing).
- SGSN or RNC SCTP endPoint failure: OK, thanks to the M3UA loadSharing between the two SCTP associations.

3.11.8 IUFLEX
The iuFlex is supported on the IuPS ip interface.
See 3.10.15.

3.11.9 UTRANSHARING
The utranSharing is supported on the IuPS ip interface.
See 3.10.16.

4 VARIATIONS BETWEEN RELEASES


4.1 RNC

4.1.1 16POC3/STM1 MS3 FP:

 Release5-0:
The RNC supports the 16pOC3/stm1 MS3 FP.
FP IP:
- PCR5-2: IP is not supported,
- PCR6-1 or 7-1: IP over Atm.
FP MPLS:
- PCR5-2: MPLS is not supported,
- PCR6-1 or 7-1: MPLS is supported:
  - Rfc2547: BGP MPLS VPN,
  - Rfc2764: IP VPN.
FP POS:
- PCR5-2: POS is not supported.

4.1.2 HARDWARE:

Amount of PS-FP:
 Release3 & 4:
A RNC-IN is populated with up to 12 PS-FPs.
Amount of TMU:
 Release 4&3:
Case of the RNC1000, the RNC-CN is populated with up to 14 TMU cards.
RNC 1500:
 Release4-1:
Available




 Release4-0 and previous:
Not Available

4.1.3 PNNI

 UMTS Release4-1:
The PNNI is available on the IU interface, on the RNC, UMGw and USGSN sides.
 UMTS Release 4-0:
The PNNI is available on the AggregationNode IU interface, and not available on the RNC-IN IU interface. Therefore PNNI is not allowed on the IU interface in case of an Alcatel RNC.
Nevertheless, if the RNC is from another vendor and provides PNNI, PNNI may be implemented on the IU interface.
Since a few PNNI features are proprietary, IOT are essential to check whether the Alcatel and the other vendor's PNNI protocols are compliant.

4.1.4 SS7
SS7, PC values:
 UA 4:
OPC=0 and DPC=0 are not allowed
 UA 3:
OPC=0 and DPC=0 are not allowed
SS7, PC amount:
 UA 5-0:
The RNC supports up to 64 PC on the UTRAN.
 UA 4-2:
The RNC supports up to 43 PC on the UTRAN.
SS7, Alcap PC:
 UA 4-1 & 4-2:
RNC supports up to 10 IuCS Alcap DPC per CS coreNetwork node.
RNC supports up to 10 Alcap DPC on Iur per driftRNC,
 UA 4-0:
RNC supports up to 2 IuCS Alcap DPC.
RNC supports up to 2 Alcap DPC on Iur per driftRNC,
SS7, amount of LS & SL:
 UA 5-1:
The RNC supports up to 1024 signalingLinks.
 UA 5-0:
The RNC supports up to 256 signalingLinks.
 UA 4 -2:
The RNC supports up to 96 signalingLinks.
SS7, protocolStack migration:
 UA 4-0:
The SS7 protocol stack is handled by the RNC-IN.
The MTP3B, the saalNni and the Alcap protocols are implemented in the RNC-IN.
 UA 3-1 and previous releases:
The SS7 protocol stack is handled by the RNC-CN.



SS7, quasiAssociatedMode:
 UA 4-1 & 4-2:
Both the quasiAssociatedMode and AssociatedMode are supported.
 UA 4-0 and previous:
Only the Associated mode is supported.

4.1.5 AAL2
AAL2 Link CAC:
 UMTS Release4&3:
AAL2 CAC is always activated on RNC IuCS and Iur interfaces.
Aal2 link CAC, AvailableCellRate (ACR):
 UMTS Release 5-0:
As an option the ACR definition is either:
- per IuxIf ACR = ECRgcac for all the aal2 paths under one IuxIf whatever the Path aal2Qos,
- per aal2Qos ACR = ECRgcac for all the aal2 paths with the same aal2Qos,
- per aal2Path ACR = Path ECRgcac.


 UMTS Release 4-1:
As an option the ACR definition is either:
- per IuxIf: ACR = ECRgcac for all the aal2 paths under one IuxIf whatever the Path aal2Qos,
- per aal2Qos: ACR = ECRgcac for all the aal2 paths with the same aal2Qos.
 UMTS Release 4-0 and previous:
ACR = Sum ECRgcac for all aal2 paths under one IuxIf whatever the Path aal2Qos.

Aal2 link CAC, EquivalentBitRate (EBR):
 UA 5-0:
- One EBR value is set per RAB and per Utran interface (FRS27083).
- The Alcap ERQ/LinkCharacteristics/max CPS-SDU BitRate Forward/Backward fields are no longer filled with RNC static MIB hardCoded values but with the value of the RNC configured parameter: Iuqaal2MaxBitRate.
 UA 4-1:
- One EBR value is set per RAB; it applies to all the Utran interfaces.
- The CPS-SDU BitRate forward and backward is hardcoded in the RNC static MIB.

PC-CAC:
 UMTS Release4&3:
PC-CAC is active on the RNC-IN.
ALCAP
 UMTS Release4-2:
- Alcap is implemented in the PMC-M.
- The Alcap is partly compliant with [R46]; the RNC Alcap inserts the aal2Qos information in the Alcap ERQ.
 UMTS Release4-1 & previous releases:
- Alcap is implemented in the TMU.
- Alcap is based on [R45].

Aal2 Components




 UMTS Release4-2:
The IuxIf component is replaced by the IucsIf and Aal2If components.
 UMTS Release4-1 & previous releases:
Within the Alcatel RNC, each adjacent aal2 node (aal2Switch or MGW) is identified by one IuxIf instance.

QAAL2 AlternateRouting:
 UMTS Release4-2:
Supported
 UMTS Release4-1 and previous :
Not supported
Aal2 Path assignment to PMC-PC
 UMTS Release4-2:
The loadBalancing = [weight, spread] parameter is removed.
A new algorithm is in charge of distributing the Utran aal2 vcc over the PMC-PC.
 UMTS Release4-1 and previous releases:
- The path assignment algorithm is based on the parameter loadBalancing set to spread.

UA4-1 RNC aal2 Path assignment to PMC-PC section:


The TBM provides two algorithms for aal2 Vcc assignment to PMC-PC. The algorithm is selected per
UTRAN interface through the parameter IuxIf/loadbalancingMethod set either with Weight or Spread:
- LoadbalancingMethod = weight: means the paths from an aal2If are assigned to different
PMC-PC in such a way the PMC-PC components are equally loaded.
The Vcc ECRgcac is considered to be the cost of the aal2 Path: ECRgcac =
2*SCR*PCR/(SCR+PCR).
The load of a PMC-PC is estimated by summing ECRgcac of each Path assigned to it.
- LoadbalancingMethod = spread: means the paths within an aal2If are assigned to different
PMC-PC in a round robin fashion.
Only Spread value is available on Iu/Iur.
Rule: IuTEG_RNC-PathAssignment_1
- On IuCS and Iur, set LoadbalancingMethod = Spread.
- On Iub, set LoadbalancingMethod = Weight.
Moreover the TBM assigns Paths to PMC-PC with respect to the following rules:
Rule: IuTEG_RNC-PathAssignment_2
- PMC-PC are loaded equally,
- PMC-PC are loaded equally for a given QOS,
- Within an Aal2If, for a given QOS, the paths have to be evenly distributed among the PMC-PCs,
- Within an Aal2If, the paths with different QOS should be assigned together to the same PMC-PC.

4.1.6 IUFLEX


UA6:
The IuFlex is available in the RNC; the RNC supports up to 24 coreNetwork nodes per umts domain.

UA5-0:
The IuFlex is available in the RNC; the RNC supports up to 16 coreNetwork nodes per umts domain.

4.1.7 UTRANSHARING

 UA6:
The RNC may be shared between up to four plmn.
 UA5-0:
The utranSharing is supported by the RNC; the RNC may be shared between up to two plmn.

4.1.8 SOC
MTP3 Compliancy after SS7 Stack migration to RNC-IN:
- MTP3 and MTP3-B are supported (not concurrently);
- ITU-T and China variants are supported;
- When changeOver occurs, non-transmitted and non-acknowledged MSUs are discarded.
This MTP3 implementation does not support the following:
- Route management,
- Signaling Transfer Points,
- Forced rerouting,
- Controlled rerouting,
- Multiple congestion thresholds.
SCCP Compliancy after SS7 Stack migration to RNC-IN:
- ITU-T Q.711-715 (1996) is supported;
- Class 0 and Class 2 are supported;
- Class 1 and Class 3 are not supported;
- Global Title Translation is not supported;
- Congestion Control is not supported (SCC is not supported).

4.2 PLANE DESCRIPTION

4.2.1 CS & PS CONTROL PLANE

4.2.1.1 TMU, SL MAPPING:

 Release 3:
CS and PS ControlPlane traffic is no longer processed by the RNC-CN, but by the RNC-IN.
Therefore no CS and PS CP VCCs are configured on Icn anymore.
 Previous Release:
The RNC-CN is populated with up to 14 TMU cards.
When the RNC-CN is populated with 14 TMU cards, two TMU cards are reserved for sparing; nevertheless the CP VCCs are distributed over the 14 TMU cards.

4.2.2 CS USER PLANE
Amount of CS UP VCCs:
 UMTS Release4&3:
The CS UP VCC naming rules allow provisioning up to 384 VCCs.
The RNC is populated with up to 12 PS-FPs. A VSP can manage 480 calls.
Default versus Alternative configuration:





 UMTS Release4&3:
The Default configuration applies. The Alternative configuration is no longer supported by R&D.
 UMTS Release2:
Either the RNC Default or the Alternative configuration may apply.

4.2.3 PS USER PLANE

 UMTS Release3:
The Streaming UP VCC is configured.


5 TRANSPORT IDENTIFIERS

5.1 VPI
This section provides a suggestion for vpi numbering plan on the RNC Iu and Iur interface.
The vpi numbering plan takes into consideration:
- case of IuCS, IuPS and Iur atmConnections over one common RNC stm1,
- BICN versus R99 coreNetwork nodes,
- IuFlex,
- Neighbor RNC.
The initial objective is to assign, on the RNC side, one vpi per peer umts node.
Considering the large amount of peer umts nodes that may be connected to the RNC (max #MGw per BICN, max #BICN supported per IuFlex, max #neighbor RNC), a lot of vpi would have to be reserved, whereas some are not used by the customer.
Remark: the utranSharing has no impact. The maximum amount of coreNetwork nodes that may be connected to the RNC thanks to the IuFlex feature is shared between the PLMNs sharing the Utran.
Moreover, assigning a rigid vpi value per peer umts node doesn't allow optimizing the classical 16pOC3 FP resources. Indeed, the classical 16pOC3 FP reserves the whole vpi range between the minimum and maximum vpi used, whether or not the intermediate vpi values are used.
The suggested vpi numbering plan reserves per RNC port:
- a vpi range for the max allowed amount of CS coreNetwork,
- a vpi range for the max allowed amount of PS coreNetwork,
- a vpi range for up to 26 neighbor RNC.
If more than 26 neighbor RNC are configured, either the Iur atm connections are going to be spread over at least 2
RNC stm1, or the iur atmConnections for more than 26 neighbor RNC are going to be identified by a vpi value
already reserved for Iu but not used by Iu.

Atm interface type:
The 3Gpp recommends NNI on the Iu & Iur interfaces.
Besides, the public atm backbones provide UNI interfaces to their customers. Therefore crossing a public Atm backbone has an impact on the vpi provisioning rules in the RNC and coreNetwork nodes.
Transmission links:
Since an atmConnection is identified by vpi, vci and the transmission link, if several transmission links are configured on a UMTS interface, the same vpi.vci range may be used on the different transmission links.
On the RNC side, the Iub traffic is carried on dedicated transmission link(s), whereas the IuCS, IuPS and Iur traffic is carried on common or dedicated links.
One set of vpi covers the case of R99 CS coreNetwork node whereas the second set of iu/iur vpi covers the case of
BICN coreNetwork node:



[Table: per RNC port, contiguous vpi blocks from 0 up to 338 are reserved: one IuCS vpi range and one IuPS vpi range per interface instance (IuCSIf 1 to 24, IuPSIf 1 to 24), with Reserved vpi values in between, plus an Iur range and an "Iur UA5 ext" range for the IurIf.]

Table 5-1, RNC Iu/Iur vpi case of R99 CS coreNetwork nodes



[Table: same vpi blocks as Table 5-1, but each IuCSIf block is split between the IuCS CP vpi (Ranap) and a range of IuCS MGw vpi (Alcap and userPlane) addressing up to ten MGw per CS coreNetwork node, alongside the IuPS and Iur ranges.]

Table 5-2, RNC Iu/Iur vpi case of BICN CS coreNetwork nodes


Remarks:
- Such vpi numbering plans require that the RNC port is configured with the NNI interface type.
- One Vpi is assigned to the IuCS UMTS CP (Ranap CS).
Currently the IuCS UMTS CP flow crosses the MGW, which acts as a SGw; therefore the UMTS CP and UMTS UP flows may be carried over atmConnections identified by the same vpi.
To cover the case where the UMTS CP no longer crosses the MGW, a dedicated Vpi is assigned to the UMTS CP atmConnections.
- A second Vpi is assigned to the IuCS UMTS CP (Ranap CS), so as to be able to distribute the UMTS CP flow over two MGw acting as SGw.
- Up to ten Vpi values are assigned to the IuCS Transport CP+UP, so as to address up to ten MGw per CS coreNetwork.
- Up to 26 vpi values are assigned to the Iur interface per port.

5.2 IUCS INTERFACE

5.2.1 VPI.VCI
The same vci range applies to each interface with a MGw.
As specified in 5.1, one vpi is assigned per MGw.
Up to 12 iuCS UP vcc are reserved per MGw, so that the MGw traffic may be spread over the 12 RNC PMC-PCs.
According to the ITU, up to 16 SL are reserved per routeSet. Nevertheless, since MTP3 runs over atm, fewer SLs are going to be used.
Two sets of Iu vpi.vci are suggested, one for the case of a Release99 CS coreNetwork and one for the case of a Release4 CS coreNetwork (a.k.a. BICN):
IuCS & IuPS

RNC / ... / CS CN & PS CN

The same set of vcc repeats for each IucsIf & IuPSIf pair (1 to 24), one stm1 per pair, on the vpi values of Table 5-1:

Vcc Name              AAL    SC       Vci        #
CS CP (Ranap/Alcap)   aal5   rtVbr    32 to 47   16
CS UP Qos0            aal2   cbr      48 to 59   12
PS CP                 aal5   rtVbr    32 to 47   16
PS UP (conversation)  aal5   cbr      48         1
PS UP (streaming)     aal5   rtVbr    49         1
PS UP (interactive)   aal5   nrtVbr   50         1
PS UP (background)    aal5   ubr      51         1
UMTS OAM              aal5   ubr      52         1

Table 5-3, IuCS vpi.vci, case of R99 CS coreNetwork



CS:

RNC / ... / MGw

[Table: per IucsIf (1 to 24), on the vpi values of Table 5-2: the Ranap CP vcc (aal5, rtVbr, vci 32 to 39) on the dedicated IuCS CP vpi; the Alcap CP vcc (aal5, rtVbr) on each per-MGw vpi (e.g. 23 to 32 for IucsIf 1, ten vpi per coreNetwork node); the UP Qos0 vcc (aal2, cbr, vci 32 to 47 and 48 to 59) on the same per-MGw vpi.]

Table 5-4, IuCS vpi.vci, case of BICN CS coreNetwork

5.2.2 AAL2IF / IUCSIF
Within the RNC all the aal2 Paths serving the peer aal2 node are grouped under the components: aal2If and IucsIf.
Each CS coreNetwork node connected to the RNC is identified by one IucsIf instance in the RNC. The iucsIf range
is extended to satisfy the IuFlex feature:

iucsIf
Interface    Values     #
Iucs         1 to 24    24

Table 5-5, IuCSIf reserved range


Remark:
The IuCSIf range starts from 0 in the RNC-CN and from 1 in the RNC-IN. The TEG takes into consideration the RNC-IN IuCSIf component range.
One CS coreNetwork node is either a R99 MSC or a R4 BICN composed of up to 10 MGw.
Within a CS coreNetwork node, case of BICN, each MGW connected to the RNC is identified by one aal2If instance in the RNC. Within one aal2If instance are grouped all the paths ending on one MGw.
Assuming a RNC connected to up to 24 CS coreNetwork nodes, each composed of 10 MGw, the RNC must be configured with 24 iucsIf instances and 240 aal2If instances:



aal2If, no aal2 switch
Interface   Values         #
Iub         1 to 299       299
Iub ext     484 to 634     151
Iub ext     727 to 2676    1950
Iu          300 to 349     50
Iu ext      374 to 483     110
Iu ext      635 to 714     80
Iur         350 to 373     24
Iur ext     715 to 726     12

Table 5-6, aal2If reserved range
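The worst-case aal2If dimensioning stated above (24 CS coreNetwork nodes of 10 MGw each) is a simple product; a minimal sketch, with hypothetical names:

```python
def aal2if_instances(cs_cn_nodes=24, mgw_per_cn=10):
    """Worst-case aal2If count on Iu: one aal2If per MGw, with up to
    10 MGw per CS coreNetwork node (BICN) and up to 24 CS coreNetwork
    nodes connected via IuFlex (figures from this section)."""
    return cs_cn_nodes * mgw_per_cn
```

This reproduces the 240 aal2If instances quoted for a fully loaded configuration.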


In the case aal2Switches are inserted in the network, for switching either the IuCS traffic or both the IuCS and Iur traffic, each aal2Switch is going to be identified in the RNC by means of an aal2If instance.
The RNC is able to address up to 10 aal2Switches, so up to 10 aal2If instance values are going to be assigned on the IuCS interface:

aal2If, aal2 switches Iucs/Iur
Interface   Values        #
Iu/Iur      300 to 309    10

Table 5-7, aal2If reserved range for aal2Switches

5.2.2.1 PATHID:
On the RNC side, all the paths terminating on one adjacent aal2 node are grouped under one aal2If instance.

IU
Path      QOS   PATHID      #
AMR+CSD   0     48 to 59    12

Table 5-8, PathId reserved range per MGw or per aal2Switch


Remark:
- The same pathId value may be configured under different aal2If instances.

5.2.2.2 AAL2 QOS:


On the IuCS UP, aal2Qos=0 applies to both trafficClasses: Conversational and Streaming.

5.2.3

SPECIFIC NETWORK TOPOLOGIES


Different transport network topologies may occur on the IuCS interface:
- Atm backbone,
- Aal2 backbone,
- Ss7 backbone,
- Combination of the different kinds of backbone.
Moreover, the IuCS and Iur traffic may be handled on different dedicated or shared resources.

5.2.3.1 AAL2 BACKBONE:


One or several aal2Switch(es) are inserted on the IuCS interface.
This section covers the interface between the RNC and the AAL2 switch. As an option, an ATM backbone may be
inserted between the RNC and the aal2Switch(es).


The aal2Switches are points of termination for the Transport Control and User Planes.
Remark: if the UMTS ControlPlane goes through the aal2Switch, it is managed at the atm level only.

5.2.3.1.1

TRANSPORT CONTROL PLANE:

Between the RNC and the aal2Switch, one routeSet is configured, identified by the RNC PC and the aal2Switch PC.
This routeSet is populated with up to 16 SLs, hence up to 16 Alcap vcc.
If, on the RNC side, the IuCS and Iur traffic are transmitted on the same transmission link terminating at the
aal2Switch, the routeSet configured between the RNC and the aal2Switch carries both the IuCS and Iur Transport
ControlPlane traffic.
Several aal2Switches may be connected to the RNC; as many routeSets are configured as there are
adjacent aal2Switches.
One vpi value is reserved on the RNC per aal2Switch, and up to 16 alcap vci are reserved per aal2Switch.
Since the aal2Switch nodes are introduced on the Iu/Iur interface, some vpi.vci assigned to MGw in 5.2.1 are
re-assigned to the aal2Switch(es) when not used:
E.g.:
- Alcap Vpi range = 23 to 32 (one Vpi per aal2Switch, up to 10 adjacent aal2Switches),
- Alcap Vci range = 32 to 47 (16 Alcap Vcc).
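The reservation above can be illustrated with a short sketch; the helper names are hypothetical, and the vpi/vci ranges are the example values given here.

```python
# Illustrative sketch of the Alcap vcc reservation per aal2Switch.
# Helper names are hypothetical; the ranges follow the example above:
# one vpi per adjacent aal2Switch (23..32), 16 Alcap vci per switch (32..47).
ALCAP_VPIS = list(range(23, 33))  # up to 10 adjacent aal2Switches
ALCAP_VCIS = list(range(32, 48))  # 16 Alcap vcc per aal2Switch

def alcap_vccs(switch_index):
    """Return the (vpi, vci) pairs reserved for one aal2Switch."""
    vpi = ALCAP_VPIS[switch_index]
    return [(vpi, vci) for vci in ALCAP_VCIS]

print(alcap_vccs(0)[0], len(alcap_vccs(0)))  # (23, 32) 16
```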

5.2.3.1.2

USER PLANE:

AAL2:
Each adjacent AAL2 switch is identified in the RNC by one aal2If instance.
Under each aal2If instance is configured the set of Paths serving the AAL2 switch.
If the Iur userPlane traffic transits together with the IuCS userPlane traffic through the aal2Switch(es), the
CIDs of a given path are seized by both the IuCS calls and the Iur calls.
The amount of paths per aal2Switch depends on the expected maximum amount of simultaneous calls. The
same pathId range may be configured for each aal2Switch.
Example:
Assuming 12 aal2 Paths are required per aal2Switch, the paths may be identified by pathId=[48-59].
RNC aal2 translationTable:
Within the RNC, the aal2 translation table must be filled for each MGw and neighbor RNC:

RNC aal2Translation Table:

Input:     A2EA
Output:    aal2Switch PC, aal2If, Set of Paths

In the case of qaal2 alternateRouting, up to 10 (PC, aal2If) pairs are mapped to one A2EA.

Example:
The RNC is connected to one neighbor RNC identified by A2EA_1 and to one MGw identified by
A2EA_2, through one aal2Switch identified by PC_1 and aal2If_1


The RNC translation table consists of:

Remote AAL2 endPoint Node    A2EA      AAL2 Switch PC    Aal2If      Set of Pathid
MGW                          A2EA_2    PC_1              Aal2If_1    Pathid set1
Neighbor RNC                 A2EA_1    PC_1              Aal2If_1    Pathid set1

ATM:
On the RNC side, one UP vcc is assigned to each path terminating on the aal2Switch.
One vpi is specified per aal2Switch, e.g. vpi = [23-32].
Assuming up to 12 UP vcc per aal2Switch, the UP vcc may be identified by vci=[48-59].
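The translation-table lookup of the example above can be modelled as follows. This is a sketch only: the identifiers follow the example, but the data structure is illustrative, not the actual RNC implementation.

```python
# Sketch of the RNC aal2 translation table from the example above.
# One A2EA maps to a list of (aal2Switch PC, aal2If, pathId set) entries;
# with qaal2 alternateRouting, up to 10 entries per A2EA.
AAL2_TRANSLATION = {
    "A2EA_2": [("PC_1", "Aal2If_1", set(range(48, 60)))],  # MGw
    "A2EA_1": [("PC_1", "Aal2If_1", set(range(48, 60)))],  # neighbor RNC
}

def resolve(a2ea):
    """Return the candidate routes toward a destination A2EA."""
    routes = AAL2_TRANSLATION.get(a2ea, [])
    assert len(routes) <= 10  # qaal2 alternateRouting limit
    return routes

pc, aal2_if, path_ids = resolve("A2EA_2")[0]
print(pc, aal2_if, sorted(path_ids)[:3])  # PC_1 Aal2If_1 [48, 49, 50]
```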

5.3

IUPS INTERFACE
Since IuFlex is available, the RNC may be connected to several SGSN nodes. Within the RNC, each SGSN is
identified by an iupsIf instance:

iupsIf

Interface    Values     #
Iups         1 to 24    24

Table 5-9, IuPSIf reserved range


Remark:
The IuPSIf range starts from 0 in the RNC-CN, and from 1 in the RNC-IN. The TEG takes into
consideration the RNC-IN range.

5.3.1

VPI.VCI:
As many PS UP vcc are configured as there are expected qos behaviors. Moreover, these vcc may be duplicated
when ECMP is activated (classical 16pOC3 FP).
By default, when qos is required, four PS UP vcc are configured on the IuPS interface; each vcc handles the traffic
of one specific qos behavior.
At the atm layer, the qos differentiation is managed by means of the serviceCategory and the proprietary
emissionPriority parameter.
The IP control traffic is carried in the PS UP vcc with the highest IpCos.
In the case the IuPS interface is spread over 2 stm1 links, four identical PS UP vcc are configured on the two stm1
links. ECMP achieves the loadSharing of the PS UP traffic over the two links.
Furthermore, the IuPC interface is specified between the RNC and the SAS equipment; it transits through the
SGSN. One or two ip over atm IuPC vcc are configured between the RNC and the SGSN.
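How ECMP spreads the PS UP traffic over the two stm1 links can be pictured with a flow hash. The actual FP hashing algorithm is implementation-specific, so the CRC32-based selection below is only a model of per-flow, order-preserving load sharing.

```python
# Model of ECMP loadSharing of the PS UP traffic over two stm1 links.
# The real FP hash is implementation-specific; CRC32 over the flow key
# is used here only to illustrate per-flow, order-preserving selection.
import zlib

STM1_LINKS = ["stm1_0", "stm1_1"]  # identical PS UP vcc on each link

def select_link(src_ip, dst_ip):
    # The same flow always maps to the same link (no packet re-ordering).
    key = ("%s->%s" % (src_ip, dst_ip)).encode()
    return STM1_LINKS[zlib.crc32(key) % len(STM1_LINKS)]

print(select_link("10.0.0.1", "10.0.1.1") in STM1_LINKS)  # True
```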

The following table provides the suggested vpi.vci range for the IuPS vcc:



PS:
The same vcc set is configured under each iupsIf instance (IupsIf 1 to IupsIf 24), each instance being carried on
one Stm1 link, with one RNC Vpi per instance:

Vcc Name: RNC / ... / SGSN    AAL     SC        Vci         #
Ranap CP                      aal5    rtVbr     32 to 47    16
UP IpCos0                     aal5    cbr       48          1
UP IpCos1                     aal5    rtVbr     49          1
UP IpCos2                     aal5    nrtVbr    50          1
UP IpCos3                     aal5    ubr       51          1
UMTS OAM                      aal5    ubr       52          1
IuPC                          aal5    ubr       53 to 54    2

Table 5-10, IuPS vpi.vci reserved range

5.4

IU/IUR PNNI SPVC HAIRPINS


Connection Identifiers:
Since Pnni is involved in the Iu and Iur atmConnection establishment, two Pnni sPVC Hairpins dedicated to the Iu
and Iur interfaces must be configured on the 16pOC3.
In order to save the 16pOC3/Stm1 FP resources, it is preferred to identify all atmConnections configured on the
Iu/Iur pnni sPvc Hairpin with Vpi=0:



IU / IUR Pnni sPVC Hairpin (Vpi = 0)

Interface    VCC:                    AAL     Vci           #
IU CS        CS Ranap CP             AAL5    32 to 35      4
IU CS        Transport CP (Alcap)    AAL5    36 to 55      20
IU CS        CS UP                   AAL2    56 to 103     48
IU PS        PS CP                   AAL5    104 to 107    4
IU PS        PS UP IpCos0            AAL5    108           1
IU PS        PS UP IpCos1            AAL5    109           1
IU PS        PS UP IpCos2            AAL5    110           1
IU PS        PS UP IpCos3            AAL5    111           1
IU PS        UMTS OAM                AAL5    112           1
IU PS        IuPC                    AAL5    113           1
IUR          VCC RNSAP               AAL5    114 to 153    40
IUR          VCC Alcap               AAL5    154 to 193    40
IUR          VCC UP DS               AAL2    194 to 213    20
IUR          VCC UP NDS              AAL2    214 to 233    20

Figure 5-1 Vpi/Vci on Iu/Iur pnni sPvc Hairpin

5.5

FP ATTRIBUTES

5.5.1

CLASSICAL 16POC3/STM1 FP ATTRIBUTES


The 16pOC3/Stm1 FP attribute settings determine the distribution of the FP and APC resources over each FP
port.
The FP and APC resources are consumed by the amount of atmConnections and by the atmConnection identifier
ranges.
All the utran interfaces are connected to the RNC 16pOC3/Stm1 FP.
Refer to [R1 5] for the classical 16pOC3/Stm1 FP attribute values.

5.5.2

16POC3/STM1 MS3 FP
Refer to [R1 5] for the 16pOC3/Stm1 MS3 FP attributes values.

5.6

TRAFFIC CONTRACT
The trafficDescriptor values are specific to a network. They depend on the customer traffic expectation, refined by
a dimensioning exercise, and on the node configuration (e.g. the amount of atmConnections). No default
trafficDescriptor values are provided in the TEG.

ABBREVIATIONS
A2EA:      Aal2 Service Endpoint Address (Q2630.1)
AAL:       ATM Adaptation Layer
AESA:      ATM End System Address
ALCAP:     Access Link Control Application Part
AP-NI:     Adjunct Processor NetworkInterface
APS:       Automatic Protection Switching
AS:        Access Stratum signaling
ASP:       ATM Service Provider
ATM:       Asynchronous Transfer Mode



AU-4:      Administrative Unit 4 (149.74 Mbps), from SDH STM1
BER:       Bit Error Rate
BICN:      Bearer Independent CoreNetwork
CCP:       Communication Control Port
CCS7:      Common Channel Signaling number 7
CDV:       Cell Delay Variation
CDVT:      Cell Delay Variation Tolerance
CES:       Circuit Emulation Service
CID:       ConnectionIDentifier (aal2)
CLR:       Cell Loss Ratio
CO:        Central Office, the place where the MGC is located
COO:       ChangeOverOrder
CP:        Control Port
CP:        Control Plane
CPCH:      Common Packet Channel
CRC:       Cyclic Redundancy Code
CS:        Circuit Switched
DchFP:     Dedicated Channel Frame Protocol
DPC:       Destination PointCode
DS:        Delay Sensitive
DSCH:      Downlink Shared Channel
Dst:       Destination of an ATM Connection
ECMP:      Equal Cost MultiPath
ECR:       Equivalent Cell Rate
EP:        Emission Priority
ESEA:      Destination E.164 Service Endpoint Address Parameter
FACH:      Forward Access Channel
FP:        Functional Processor (Passport definition)
GMM:       GPRS Mobility Management
GTP:       GPRS Tunneling Protocol
Icn:       Internal interface between RNC-IN and RNC-CN
IPBCP:     IP Bearer Control Protocol
LCS:       Location Service
LCD:       Loss of Cell Delineation
LLC:       LogicalLinkControl
LS:        LinkSet
MTP:       MessageTransferPart
MSTE:      MultiplexSection Terminating Equipment
MUX:       Multiplexer
Nap:       Nailed-up Adaptation Point
NAS:       Non Access Stratum
NBAP-c:    NBAP common
NBAP-d:    NBAP dedicated


NDS:       Non Delay Sensitive
Nep:       Nailed-up End Point
NNI:       Network to Network Interface
Nrp:       Nailed-up Relay Point
NSAP:      Network Service Access Point
OEM:       Other Equipment Manufacturer
OPC:       Originating PointCode
PCAP:      Position Calculation Application Part
PCH:       Paging Channel
PDC:       Processor Daughter Card
PDH:       Plesiochronous Digital Hierarchy
PMC:       PCI Mezzanine Card
PNNI:      Private NNI
POC:       Point Of Concentration
POR:       Plan Of Record
PS:        Packet Switched
PTE:       Path Terminating Equipment
QOS:       Quality Of Service
RACH:      Random Access Channel
RANAP:     Radio Access Network Application Part
RNC-AN:    RNC AccessNode
RNC-CN:    RNC ControlNode
RNC-IN:    RNC InterfaceNode
RNS:       Radio Network System
RRM:       RadioResourceManagement
RSTE:      RegeneratorSection Terminating Equipment
SAAL-NNI:  Signaling ATM Adaptation Layer, Network to Network Interface
SAS:       Stand Alone SMLC (Serving Mobile Location Center)
SC:        Service Category
SCCP:      Signaling Connection Control Part
SDH:       Synchronous Digital Hierarchy
SL:        SignallingLink
SONET:     Synchronous Optical Network
SP:        SignalingPoint
SPVC:      Soft PermanentVirtualCircuit (see PNNI)
Src:       Source of an ATM Connection
SSCF:      Service Specific Coordination Function
SSCOP:     Service Specific Connection Oriented Protocol
SSN:       SubSystemNumber (ITU SCCP)
STP:       SignallingTransferPoint
TAT:       Theoretical Arrival Time
TBM:       RNC Transport Bearer Manager
TDM:       Time Division Multiplexing
TMU-R:     Traffic Management Unit (RNC-CNODE card)


UBR+:      UBR enhanced service; provides a guaranteed minimum cell rate (MCR) or, more
           officially, minimum desired cell rate (MDCR) allocation per connection. The ATM Forum
           has standardized UBR+ as a TM 4.1 addendum.
UDP:       User Datagram Protocol
UNI:       User to Network Interface
UP:        UserPlane
USCH:      Uplink Shared Channel
VC:        Virtual Channel
VCC:       Virtual Channel Connection. VCC = VPI / VCI
VCI:       Virtual Channel Identifier
VP:        Virtual Path
VPI:       Virtual Path Identifier
VPNNI:     Virtual PNNI
VPT:       Virtual Path Terminator, called VP endPoint in the atmForum
VR:        Virtual Router

END OF DOCUMENT
