
UTStarcom TN780

System Description Guide

Release 1.2
Revision A
Product Order No. TN780-SDG-1.2-A

UTStarcom Inc.
www.utstarcom.com

Copyright
© 2004 UTStarcom Inc. All rights reserved.
This Manual is the property of UTStarcom Inc. and is confidential. No part of this Manual may be reproduced for any purposes or
transmitted in any form to any third party without the express written consent of UTStarcom.
UTStarcom makes no warranties or representations, expressed or implied, of any kind relative to the information or any portion
thereof contained in this Manual or its adaptation or use, and assumes no responsibility or liability of any kind, including, but not
limited to, indirect, special, consequential or incidental damages, (1) for any errors or inaccuracies contained in the information or (2)
arising from the adaptation or use of the information or any portion thereof including any application of software referenced or utilized
in the Manual. The information in this Manual is subject to change without notice.

Trademarks
UTStarcom is a trademark of UTStarcom Inc.
GoAhead is a trademark of GoAhead Software, Inc.
All other trademarks in this Manual are the property of their respective owners.

UTStarcom TN780 and UTStarcom Optical Line Amplifier Regulatory Compliance


FCC Class A
This device complies with Part 15 of the FCC rules. Operation is subject to the following two conditions: (1) this device may not cause
harmful interference, and (2) this device must accept any interference received, including interference that may cause undesired
operation. Modifying the equipment without UTStarcom's written authorization may result in the equipment no longer complying with
FCC requirements for Class A digital devices. In that event, your right to use the equipment may be limited by FCC regulations, and
you may be required to correct any interference to radio or television communications at your own expense.

DOC Class A
This digital apparatus does not exceed the Class A limits for radio noise emissions from digital apparatus as set out in the
interference-causing equipment standard titled "Digital Apparatus," ICES-003 of the Department of Communications.
Cet appareil numérique respecte les limites de bruits radioélectriques applicables aux appareils numériques de Classe A prescrites
dans la norme sur le matériel brouilleur: "Appareils Numériques," NMB-003 édictée par le Ministère des Communications.

Warning
This is a Class A product. In a domestic environment this product may cause radio interference, in which case the user may be
required to take adequate measures.

FDA
This product complies with the DHHS Rules 21 CFR Subchapter J, Section 1040.10, applicable at the date of manufacture.

Contents
About this Document
Objective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Audience . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Document Organization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii
Related Documents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiv
Technical Assistance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiv

Chapter 1 - Introduction
Digital Optical Network Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-2
UTStarcom TN780 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-3
UTStarcom Optical Line Amplifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-4
IQ Network Operating System Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-5
MPower Network Management Overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-7
UTStarcom MPower Graphical Node Manager. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-8
UTStarcom MPower Element Management System. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-9
Release 1.2 Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-10

Chapter 2 - Network Applications


TN780 Configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-2
Reconfigurable Digital Add/Drop Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-2
Digital Repeater Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-3


Digital Terminal Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-3


Junction Node Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-3
Network Topologies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-4
Point-to-point Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-4
Linear Add/Drop Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-5
Hub and Spoke Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-5
Ring Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-6

Chapter 3 - Digital Optical Networking Systems


TN780 Hardware Overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-2
DTC Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-2
DTC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-4
Management Control Module (MCM-A, MCM-B). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-7
Tributary Optical Module-10G-SR1 (TOM-10G-SR1) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-7
Tributary Optical Module-2.5G-SR1 (TOM-2.5G-SR1) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-8
Tributary Optical Module-2.5G-IR1 (TOM-2.5G-IR1). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-8
Tributary Optical Module-1G-LX (TOM-1G-LX1) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-8
Tributary Adapter Module-10G (TAM-10G) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-8
Tributary Adapter Module-2.5G (TAM-2.5G) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-8
Tributary Adapter Module-1G (TAM-1G) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-9
Digital Line Module (DLM) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-9
DMC Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-11
Optical Line Amplifier Hardware Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-13
OTC Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-13
OTC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-14
Optical Management Module (OMM) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-16
Optical Amplifier Module (OAM-CX-A, OAM-CX-B) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-16

System Interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-17
Operations Interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-17
Management Interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-17
Transport Interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-18
Client/Trib Interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-18
Line Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-18
Input/Output Alarm Contacts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-19
Office Alarms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-19
Alarm Cutoff (ACO) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-19
Parallel Telemetry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-20
Datawire . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-20

System Data Plane Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-21
Digital Transport . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-21
Tributary Adaptation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-22
Digital Transport Frame . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-23
Digital Transport Network Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-23


DTF Section Layer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-24
DTF Line Layer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-24
DTF Path Layer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-25
Digital Transport Frame Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-25
Digital Transport Frame Alignment Overhead (DTFA-OH) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-25
DTF Section Overhead (DTS-OH) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-25
DTF Line Overhead (DTL-OH) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-26
DTF Path k Overhead (DTPk-OH) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-26
Digital Transport Payload Envelope k (DTEk) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-26
FEC Overhead (FEC-OH) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-26
Bandwidth Grooming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-26
Reconfigurable Add/Drop . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-27
Digital Regeneration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-28
Digital Conditioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-28
Digital Transport Performance Monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-28
Digital Transport Maintenance Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-29
Optical Transport . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-30
Optical Transport Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-30
Optical Amplification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-32
Optical Conditioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-32
Optical Performance Monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-33
Optical Transport Maintenance Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-33
Data Plane Redundancy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-34

System Control Plane Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-35
Intra-chassis Control Plane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-35
Inter-chassis Control Plane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-37
Inter-node Control Plane (over OSC) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-38

System Management Plane Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-40


Digital Terminal Site Operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-41
Digital Add/Drop Site Operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-44
Digital Repeater Site Operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-49
Optical Line Amplifier Site Operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-51

Chapter 4 - IQ Network Operating System


Fault Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-2
Alarm Surveillance. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-2
Defect Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-2
Failure Declaration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-3
Alarm Reporting. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-3
Alarm Masking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-6
Local Alarm Summary Indicators. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-6
Alarm Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-7
Network Fault Isolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-10


Event Log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-10
Maintenance and Troubleshooting Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-11
Loopbacks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-12
PRBS Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-12
Hairpin Circuits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-13
Trace Messaging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-13

Equipment Management and Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-15
Managed Object Entities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-15
System Discovery and Inventory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-16
Circuit Pack Discovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-17
Optical Data Plane Autodiscovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-17
Circuit Pack Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-19
Circuit Pack Auto-configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-19
Circuit Pack Pre-configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-19
State Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-19
Administrative State . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-20
Operational State . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-21
Service State . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-21

Service Provisioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-23
Manual Cross-connect Provisioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-23
Dynamically Signaled SNC Provisioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-26
Service Pre-provisioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-27

Protection Group Provisioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-28


Performance Monitoring and Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-31
PM Data Collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-32
Real-time PM Data Collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-32
Historical PM Data Collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-32
PM Thresholding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-33
Suspect Interval Marking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-33
PM Data Transfer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-33
PM Data Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-34

Security and Access Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-35
User Identification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-35
Authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-36
Access Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-36
Authorization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-37
Security Audit Log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-39
Security Administration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-40

Software Configuration Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-41
Software Download . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-41
Software Upgrade . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-41
Remote Hardware FPGA Upgrade . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-42


Database: Download/Backup/Restoration/Rebranding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-43
Database Download . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-43
Database Backup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-43
Database Restoration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-44
Database Rebranding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-46

IQ GMPLS Control Plane Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-47
OSPF-TE Routing Protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-47
Network Topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-48
Traffic Engineering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-49
Constrained Shortest Path Route Computation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-51
GMPLS Signaling (RSVP-TE) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-51
Handling Fault Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-51
Topology Configuration Guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-52
Control Link Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-52
GMPLS Link Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-52

IQ Management Plane Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-53
DCN Communication Path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-53
DCN Link Failure Recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-54
MCM-B/OMM Failure Recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-55
Management Application Proxy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-56
Configuration Settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-58
Static Routing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-58
Time-of-Day Synchronization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-59

Chapter 5 - MPower Management Software


MPower Graphical Node Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-3
Graphical User Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-4
MPower GNM Features in Release 1.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-5
Inventory Manager. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-10
Network Topology Display. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-11
Software Configuration Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-11
Fault Management. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-12
Equipment Configuration and Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-13
Service Provisioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-13
Performance Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-14
Security Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-14
MPower EMS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-15
Administrative Domains . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-15
Release Compatibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-16
Network Topology Discovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-17
Network Element Information File Editor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-18
Dynamic Seed File Editor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-21
Discovery Key Ring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-21
Topology Shallow Discovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-22


Topology Deep Discovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-24
Topology Discovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-24
Network Topology Views . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-25
Network-level OAM&P Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-26
Network Level Fault Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-26
Network Level Inventory Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-27
End-to-end Circuit Provisioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-27
Circuit Layout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-28
Performance Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-30
MPower EMS Security and Access Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-31
User Identification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-31
Authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-32
Access Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-32
Authorization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-33
Security Audit Log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-33
Security Administration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-34
MPower EMS Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-34
MPower EMS Client . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-35
MPower EMS Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-35
MPower EMS Platform Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-36
MPower Server Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-36
MPower Client Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-37
MPower Simple Network Management Protocol (SNMP) Agent . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-38
MPower SNMP Trap Agent Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-38
Alarm Trap Generation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-38
MPower SNMP Licensing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-39
MPower SNMP Configurable Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-39
Configurable Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-39
Configuring MPower SNMP Trap Managers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-40
SNMP MIBs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-40
Standard MIB Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-40
UTStarcom Enterprise MIBs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-40

Appendix A - TN780 PM Data


Optical PM Parameters and Thresholds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-2
DTF PM Parameters and Thresholds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-10
FEC PM Parameters and Thresholds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-15
Client Signal PM Parameters and Thresholds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-16
OSC PM Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-20

Appendix B - Optical Channel Plan


TN780 Optical Channel Plan. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B-2

Appendix C - Acronyms

Figures
Figure 1-1 Digital Optical Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-3
Figure 1-2 UTStarcom MPower Network Management Suite . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-7
Figure 1-3 Digital Optical Network and UTStarcom MPower Management Solution . . . . . . . . . . . . . . 1-8
Figure 2-1 TN780 Configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-2
Figure 2-2 Point-to-point Network - Single Span . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-4
Figure 2-3 Point-to-point Network - Multiple Spans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-4
Figure 2-4 Linear Add/Drop Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-5
Figure 2-5 Hub and Spoke Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-6
Figure 2-6 Ring Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-7
Figure 3-1 DTC Front View . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-4
Figure 3-2 DMC with Two Half-width DCMs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-12
Figure 3-3 DMC with One Full-width DCM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-12
Figure 3-4 OTC Front View . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-14
Figure 3-5 DTC Digital and Optical Transport Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-22
Figure 3-6 Digital Transport Network Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-24
Figure 3-7 Digital Transport Frame Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-25
Figure 3-8 DTC Grooming Capacity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-27
Figure 3-9 Optical Transport Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-31
Figure 3-10 Optical Signal Multiplexing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-32
Figure 3-11 Logical Illustration of Intra-chassis Control Plane in a DTC . . . . . . . . . . . . . . . . . . . . . . . 3-36
Figure 3-12 Logical Illustration of Intra-chassis Control Plane in an OTC . . . . . . . . . . . . . . . . . . . . . . 3-36
Figure 3-13 Logical Illustration of Inter-chassis Control Plane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-38
Figure 3-14 DTC with Minimum Hardware for a Digital Terminal . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-41
Figure 3-15 Hardware Chassis Configuration of a 400Gbps Digital Terminal . . . . . . . . . . . . . . . . . . . 3-42
Figure 3-16 Hardware Logical Configuration of a 400Gbps Digital Terminal . . . . . . . . . . . . . . . . . . . 3-43
Figure 3-17 DTC with Minimum Hardware of a Digital Add/Drop Node . . . . . . . . . . . . . . . . . . . . . . . 3-44


Figure 3-18 Hardware Physical Configuration of a 400Gbps Digital Add/Drop Node . . . . . . . . . . . . . 3-46
Figure 3-19 Hardware Logical Configuration of a 400Gbps Digital Add/Drop Node . . . . . . . . . . . . . . 3-47
Figure 3-20 Hardware Physical Configuration of a 200Gbps Digital Add/Drop Node . . . . . . . . . . . . . 3-48
Figure 3-21 Hardware Physical Configuration of a 200Gbps Digital Repeater Node . . . . . . . . . . . . . 3-49
Figure 3-22 Hardware Logical Configuration of a 200Gbps Digital Repeater Node . . . . . . . . . . . . . . 3-50
Figure 3-23 Hardware Physical Configuration of an Optical Line Amplifier Node . . . . . . . . . . . . . . . . 3-51
Figure 4-1 Alarm Reporting Behavior During ARC Period . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-9
Figure 4-2 Loopbacks Supported by the TN780 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-12
Figure 4-3 PRBS Tests Supported by the TN780 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-13
Figure 4-4 Trace Messaging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-14
Figure 4-5 Managed Object Entities and Hierarchy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-16
Figure 4-6 Express Cross-connect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-24
Figure 4-7 Add/Drop Cross-connect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-25
Figure 4-8 Hairpin Cross-connects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-26
Figure 4-9 Trib Y-cable Protection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-30
Figure 4-10 Physical Network Topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-48
Figure 4-11 Single Network with Topology Partition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-49
Figure 4-12 Service Provisioning Topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-49
Figure 4-13 Illustration of Using Node Inclusion Constraint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-50
Figure 4-14 Redundant DCN Connectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-54
Figure 4-15 DCN Link Failure Recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-55
Figure 4-16 MCM/OMM Failure Recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-56
Figure 4-17 Management Application Proxy Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-57
Figure 4-18 Using Static Routing to Reach External Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-59
Figure 4-19 NTP Server Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-60
Figure 5-1 Digital Optical Network and UTStarcom MPower Management Solution . . . . . . . . . . . . . . 5-1
Figure 5-2 MPower GNM Main View . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-5
Figure 5-3 Multi-window Display . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-6
Figure 5-4 MCM Redundancy Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-7
Figure 5-5 10G Clear Channel Service Type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-8
Figure 5-6 Protection Group Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-9
Figure 5-7 NCT Ports on MPower GNM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-10
Figure 5-8 MPower EMS Administrative Domains . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-16
Figure 5-9 Network Information File Editor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-19
Figure 5-10 Add Administrative Domain Menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-20
Figure 5-11 Network Topology Map View . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-23
Figure 5-12 Junction Site Topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-24
Figure 5-13 Circuit Layout Record . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-29
Figure 5-14 Cross-Connect Circuit Trace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-30
Figure 5-15 MPower EMS Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-35


Tables
Table 1-1 Release 1.2 Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-10
Table 3-1 DTC Hardware Equipment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-2
Table 3-2 OTC Hardware Equipment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-13
Table 4-1 Access Privilege Permissions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-37
Table 5-2 MPower Server Platform Recommendations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-37
Table B-1 TN780 Optical Channel Plan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B-2
Table C-1 List of Acronyms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . C-1


About this Document


This chapter provides an overview of this document. It includes:
Objective on page xi
Audience on page xi
Document Organization on page xii
Related Documents on page xiii
Conventions on page xiv
Technical Assistance on page xiv

Objective
This guide provides an introduction and reference to the Digital Optical Networking Systems, which include the
UTStarcom TN780 (referred to as the TN780) and the UTStarcom Optical Line Amplifier (referred to as the Optical
Line Amplifier), used to build a Digital Optical Network. This guide also covers the UTStarcom IQ Network
Operating System (referred to as the IQ), which operates the TN780 and Optical Line Amplifier network elements,
and the UTStarcom MPower Management Suite (referred to as the MPower), which is provided to manage
UTStarcom products.

Audience
The primary audience for this guide includes network planners, network operations personnel and system
administrators who are responsible for deploying and administering the Digital Optical Network. This guide
assumes that the reader is familiar with the following topics and products:
Basic internetworking terminology and concepts


CHAPTER 1

Introduction
This chapter provides an introduction to Digital Optical Network, UTStarcom Digital Optical Networking
Systems, MPower Network Management, and Release 1.2 features in the following sections:
Digital Optical Network Overview on page 1-2
IQ Network Operating System Overview on page 1-5
MPower Network Management Overview on page 1-7
Release 1.2 Features on page 1-10


Digital Optical Network Overview


UTStarcom delivers the Digital Optical Network solution, referred to as Digital Optical Network. Digital
Optical Network provides the ability to multiplex, transport, add, drop, groom, switch and protect SONET,
SDH, Ethernet, and other services inexpensively, transparently, reliably, flexibly and quickly. Digital Optical
Network allows the construction of a single unified optical transport network that scales from metro to long
haul applications.
UTStarcom offers Digital Optical Networking Systems which help carriers build Digital Optical Networks.
The UTStarcom TN780 is the first Digital Optical Networking System which provides digital add/drop and
bandwidth management capabilities. In addition, UTStarcom Optical Line Amplifiers are provided to extend
the optical reach between the TN780s.
Digital Optical Network, as shown in Figure 1-1 on page 1-3, is comprised of TN780s deployed anywhere
client access is desired and Optical Line Amplifiers where client access is not anticipated. The links
between the TN780s, referred to as the Digital Links, isolate analog engineering and impairments within
that Digital Link. Customers can progressively deploy the transport network with TN780s at more points of
presence, interconnected by Digital Links, when and where capacity is required, without re-engineering the
network.


Figure 1-1 Digital Optical Network

[Figure content: client interfaces attach to UTStarcom TN780 nodes, which are interconnected by Digital Links through UTStarcom Optical Line Amplifiers.]

UTStarcom TN780
The UTStarcom TN780, referred to as the TN780, provides digital bandwidth management within a Digital
Optical Network. The TN780 provides a means for direct access to client data at 10Gbps and 2.5Gbps
wavelength granularity at a site, allowing flexible selection of whether to multiplex, add/drop, amplify,
groom, or wavelength interchange individual channels. The TN780 can be equipped in a variety of network
configurations using a common set of circuit packs. Refer to TN780 Configurations on page 2-2 for a
detailed description of the various configurations supported by the TN780. The detailed description of the
TN780 hardware is provided in CHAPTER 3.

UTStarcom Optical Line Amplifier


The UTStarcom Optical Line Amplifier, referred to as the Optical Line Amplifier, is used to extend the optical
reach between TN780s. The Optical Line Amplifier is deployed at locations where customer access is not
anticipated. The detailed description of the Optical Line Amplifier hardware is provided in CHAPTER 3.


IQ Network Operating System Overview


The Digital Optical Network architecture includes intelligent embedded control software called the IQ Network
Operating System, referred to as the IQ. The IQ, operating on the TN780 and Optical Line Amplifier network
elements, provides reliable and intelligent interfaces for the Operation, Administration, Maintenance and
Provisioning (OAM&P) tasks performed by operations personnel and management systems. The IQ also includes an
intelligent Generalized Multiprotocol Label Switching (GMPLS) control plane architecture, which provides
automated end-to-end service provisioning, and a management plane architecture, which provides reliable and
redundant communication paths for the management traffic between the management systems and the network elements.
IQ supports the following features:
Operates on TN780 and Optical Line Amplifier network elements.
Standards-based operations and information model (ITU-T, TMF 814, Telcordia).
Extensive fault management capabilities, including current alarm reporting, alarm reporting inhibition, hierarchical alarm correlation, configurable alarm severity assignment profiles, and event logging.
Network diagnostics capabilities, including digital path and digital section level loopbacks, circuit-level
Pseudo Random Bit Sequence (PRBS)-31 generation and detection, and SONET/SDH J0 monitoring at the
tributaries.
Automatic equipment configuration and equipment pre-configuration.
Fully automated network topology discovery, including physical topology and service topology views.
Robust end-to-end automated circuit routing and provisioning utilizing GMPLS routing and signaling
protocols, including the ability to pre-configure circuits, optional selection of the SNC path utilizing constraint-based routing, the option to specify the channel number within an OCG for an SNC, and the
option to specify a 10G or 2.5G sub-channel in an explicit route.
Flexible software and configuration database management, including remote software upgrade/rollback, configuration database backup and restore, and bulk File Transfer Protocol (FTP) transfers.
Analog performance monitoring at every node, digital performance monitoring at TN780s, and native client signal performance monitoring at the tributaries.
Support for the Network Time Protocol (NTP) to synchronize the timestamps on all alarms, events and
Performance Monitoring (PM) data across the network.
GR-815-CORE based security administration.
Hitless software and FPGA upgrades.
Multi-chassis configurations utilizing the Nodal Control and Timing (NCT) ports located on the I/O
panel of the TN780 and Optical Line Amplifier network elements.
Redundant control plane communication paths utilizing two Management Control Modules (MCM-B)/Optical
Management Modules (OMM), which provide shelf management and node management functions.


Redundant management plane communication paths utilizing Gateway Network Element and Management Proxy services.
Telcordia compliant TL1 for OSS integration.
Open integration interfaces including TL1, XML, and flat files.
Refer to CHAPTER 4 for a detailed description of the features.
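For illustration, the following minimal sketch generates the PRBS-31 diagnostic pattern named above,
using the standard ITU-T O.150 polynomial x^31 + x^28 + 1. This is a familiarization model only; the
TN780 generates and detects the pattern in hardware.

    # Minimal PRBS-31 generator sketch (polynomial x^31 + x^28 + 1, ITU-T O.150).
    # Illustrative only; the TN780 implements PRBS generation/detection in hardware.
    def prbs31(seed: int = 0x7FFFFFFF):
        """Yield an endless PRBS-31 bit stream from a 31-bit LFSR."""
        state = seed & 0x7FFFFFFF
        while True:
            bit = ((state >> 30) ^ (state >> 27)) & 1   # taps at stages 31 and 28
            state = ((state << 1) | bit) & 0x7FFFFFFF
            yield bit

    gen = prbs31()
    print([next(gen) for _ in range(16)])   # first 16 pattern bits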


MPower Network Management Overview


MPower Network Management Suite, referred to as the MPower, is a scalable, robust, carrier-class
management software suite which simplifies Digital Optical Network operation and OSS integration. The
MPower comprises both management applications and OSS integration servers for flow-through
operations integration with other management systems. The MPower architecture, as shown in
Figure 1-2 on page 1-7, addresses various elements of the NEL, EML, NML, and SML layers of the ITU-T
TMN architecture by empowering and embedding much of the networking intelligence into the network
elements.
Figure 1-2 UTStarcom MPower Network Management Suite
[Figure: OSSs, MoM/NMS, planning and design tools, and third-party systems connect through the OSS
Integration Servers (TL1, HTTP/XML) to the MPower GNM and MPower EMS, which reach the network
elements over the Data Communications Network and the OSC/GMPLS control plane (XML/TCP).]

MPower management architecture employs a network-is-master model, allowing the network itself to
asynchronously inform and update all registered management clients, mitigating synchronization and
accuracy issues. The network state and status are automatically discovered and reported to the
management client. This network-is-master model enables each network element to be managed by
multiple management applications, allowing for full management redundancy while each management
application maintains synchrony with what is occurring within its purview.
In the current release, MPower includes the following applications:
UTStarcom MPower Graphical Node Manager
UTStarcom MPower Element Management System


Figure 1-3 Digital Optical Network and UTStarcom MPower Management Solution

UTStarcom MPower Graphical Node Manager


UTStarcom MPower Graphical Node Manager, referred to as the MPower GNM, is a web-based
application which provides on-site craft access to local or remote nodes. The MPower GNM provides users
access to key management functional areas at the network element level, including:
Extensive fault management and performance monitoring
Automated end-to-end circuit provisioning
System and equipment configuration
Topology and inventory management
Maintenance and diagnostics
Security administration
Refer to CHAPTER 5 for a detailed description of the MPower GNM.


UTStarcom MPower Element Management System


UTStarcom MPower Element Management System, referred to as the MPower EMS, is a real-time
management software application for administering and managing the Digital Optical Network. The
MPower EMS provides end-users in the Network Operations Center (NOC) with an integrated network-level and network-element-level view. The MPower EMS is comprised of distinct server applications and
client applications. The MPower EMS employs distributed client-server technology to allow deployment of
server components across multiple computing platforms. This distributed model enables the MPower EMS
to scale to support thousands of network elements as well as hundreds of end-users of the web-based
Graphical User Interface (GUI). The MPower EMS supports the following features:
Network-wide real-time fault management and monitoring, including current alarm summary, historical event logs, and threshold crossing alerts.
Multiple topological views, topology updates, auto-discovery and network synchronization.
Network equipment inventory reporting with comprehensive manufacturing information.
Point and click circuit provisioning application and circuit inventory views, with correlated alarm status.
Scheduled network element configuration backup and restoration.
Historical performance monitoring collection and archiving.
Network element software distribution and upgrade.
Network element and MPower EMS security administration.
Automated MPower EMS client installation.
MPower EMS Server redundancy.
Refer to CHAPTER 5 for a detailed description of the MPower EMS.

Release 1.2 Features


Release 1.2 features are summarized in Table 1-1 on page 1-10.

Table 1-1 Release 1.2 Features


Network Topologies: Point-to-point, linear ADM, Hub and Spoke, and Ring topologies.

Multi-junction System Application: Allows engineers to deploy interconnected rings that simplify network
designs and provide flexible networking implementations.

UTStarcom TN780 Network Element: Digital Optical Networking System which provides digital add/drop
and bandwidth management capabilities.

Multi-Chassis configuration: Enables users to scale the system capacity of deployed network equipment in
new or existing systems, allowing for multi-chassis/multi-BMM and expanded DLM configurations.

UTStarcom Optical Line Amplifier Network Element: Optical line amplifier provided to extend the optical
reach between TN780s.

6x24dB Optical Reach: Within a digital link between adjacent TN780s, up to six 24dB optical spans and up
to five Optical Line Amplifiers are supported.

DTC (Digital Transport Chassis): Supports Digital Optical Node functions; 400Gbps per fiber pair.

MCM-A (Management Control Module): Performs management and control functions for the TN780
network element.

MCM-B: Performs management and control functions for the TN780 network element. Provides enhanced
CPU frequency, FLASH memory for persistent storage, and physical memory (SDRAM).

MCM redundancy: Allows for one MCM-B to be active and the other MCM-B to be standby. The active
MCM-B terminates the management interfaces to the system and provides all of the control and
monitoring functions for the system. The standby MCM-B maintains synchronization with its active partner
so that it is capable of becoming active at any time, but is not actively involved in system control or
monitoring.

BMM-C-4-A (Band Mux Module): Performs optical multiplexing and demultiplexing of four Optical Carrier
Groups (OCG). Each OCG contains ten 10Gbps DWDM channels. Three types of BMM-C-4-A are
provided with various combinations of fixed gain, variable gain and mid-stage access for dispersion
compensation fiber.

BMM-C-4-B: Performs optical multiplexing and demultiplexing of four Optical Carrier Groups (OCG). Each
OCG contains ten 10Gbps DWDM channels. Contains a new EDFA. Three types of BMM-C-4-B are
provided with various combinations of fixed gain, variable gain and mid-stage access for dispersion
compensation fiber.

BMM-C-8-A: Performs optical multiplexing and demultiplexing of eight Optical Carrier Groups (OCG).
Each OCG contains ten 10Gbps DWDM channels. Three types of BMM-C-8-A are provided with various
combinations of fixed gain, variable gain and mid-stage access for dispersion compensation fiber.


DLM (Digital Line Module): Performs add/drop or switching of ten 10Gbps optical channels. Performs
Forward Error Correction (FEC) encoding/decoding on each channel. There are 8 types of DLMs, one for
each OCG. Each DLM can house up to five TAM-2-10G, TAM-4-2.5G and TAM-4-1G modules.

TAM-2-10G (Tributary Adapter Module): Houses two 10G Tributary Optical Modules (TOM) and adapts
client signals for transport over the Digital Optical Network. Up to two TOM-10G-SR1 and/or
TOM-10G-IR2 modules are supported within each TAM-2-10G.

TAM-4-2.5G (Tributary Adapter Module): Houses four 2.5G Tributary Optical Modules and adapts client
signals for transport over the Digital Optical Network. Up to four TOM-2.5G-SR1 and/or TOM-2.5G-IR1
modules are supported within each TAM-4-2.5G.

TAM-4-1G (Tributary Adapter Module): Houses four 1GbE Tributary Optical Modules and adapts client
signals for transport over the Digital Optical Network. Up to four TOM-1G-LX modules are supported
within each TAM-4-1G.

TOM-10G-SR1 (Tributary Optical Module): Pluggable XFP optical module supporting a client interface
operating at 1550nm; 10km reach; LC connector; SONET OC-192, SDH STM-64, 10GbE LAN Phy, 10G
Clear Channel and 10GbE WAN Phy client signals.

TOM-10G-IR2 (Tributary Optical Module): Pluggable XFP optical module supporting a client interface
operating at 1550nm; 40km reach; LC connector; SONET OC-192, SDH STM-64, 10GbE LAN Phy, 10G
Clear Channel and 10GbE WAN Phy client signals.

TOM-2.5G-SR1 (Tributary Optical Module): Pluggable SFP optical module supporting a client interface
operating at 1310nm; 2km reach; SONET OC-48 and SDH STM-16 client signals.

TOM-2.5G-IR1 (Tributary Optical Module): Pluggable SFP optical module supporting a client interface
operating at 1310nm; 15km reach; SONET OC-48 and SDH STM-16 client signals.

TOM-1G-LX (Tributary Optical Module): Pluggable SFP optical module supporting a client interface
operating at 1310nm; 5km reach; 1G Ethernet client signals.

OTC (Optical Transport Chassis): Supports the Optical Line Amplification function.

OAM (Optical Amplifier Module): Performs uni-directional optical amplification. Up to two OAMs can be
housed in one OTC. Three types of OAMs are provided with various combinations of fixed gain, variable
gain and mid-stage access for dispersion compensation fiber.

OMM (Optical Management Module): Performs management and control functions for the Optical Line
Amplifier network element.

Office alarms: Supports 20 external alarm inputs and 20 control outputs.

Datawire: Two 10Mbps Ethernet AUX ports to carry customer management data.

Management interfaces: Craft serial DCE (DB-9 female/RS-232 interface) and craft Ethernet (10Mbps
RJ45 interface) on the MCM/OMM, and two 10/100Mbps DCN ports on the I/O panel of the DTC and OTC.

OSC (Optical Supervisory Channel): 100Mbps Optical Supervisory Channel for inter-node communication.


10G Clear Channel Service: Provides services and technologies transported at the 10G SONET/SDH line
rate in unframed payloads.

Laser safety (ALS): Automatic Laser Shutdown (ALS) during a fiber cut.

Automatic channel turn-up: Automatically adjusts the power of the amplifiers across the entire link while
turning up new channels or deleting existing channels.

In-service upgrade to Add/Drop: Digital Repeater sites can be upgraded to an Add/Drop configuration
in-service by populating the tributary modules.

Eighty channel scalability: The limited availability of eighty channel BMMs allows deployment of equipment
that will support eighty channels in the future.

Automatic end-to-end circuit provisioning: The OSPF routing and GMPLS signaling protocols are
implemented to support network topology discovery and end-to-end service provisioning and
management.

Y-cable Protection: Enables 1+1 protection of diverse Sub Network Connection (SNC) paths through the
Digital Optical Network for sub-50ms switching. Y-cable protection increases the overall reliability and
service up-time of the optical path.

Enhanced digital transport path grooming: Enhanced inter-DLM cross-connecting allows more flexible and
efficient use of bandwidth at add/drop and multi-junction sites.

Export all alarms and events: A feature provided in MPower EMS and MPower GNM that gives the user
the ability to export all alarms and events.

Circuit Tracing: An EMS feature that gives the user the ability to trace a circuit by displaying intermediate
points in the circuit.

Equipment auto-configuration and pre-configuration: In auto-configuration the software can automatically
detect and configure the hardware. In pre-configuration users can pre-configure the hardware before it is
installed.

Software upgrade protection: Allows the system to gracefully fall back or downgrade to a prior release in
the rare event that a failure is experienced during the upgrade process.

Remote Hardware FPGA Upgrade: The TN780 hardware modules that support remote upgrade include all
types of TAMs, DLMs and BMMs. The ability to remotely upgrade hardware using a controlled process is
integrated in Release 1.2.

Network Information File Editor: An EMS feature that allows the addition of administrative domains and
node information updates while the EMS core server is running.

Optical PM, Digital PM, SONET/SDH PM: Optical PM data collection is supported on the Optical Line
Amplifier and TN780 network elements. Digital PM data collection is supported on the TN780 at Terminal,
Add/Drop and Digital Repeater sites. SONET/SDH PM data collection is supported in the TN780 network
element for the tributary interfaces at Terminal and Add/Drop sites. Both current and historical PM
counters are supported, and the counters can be reset.

PM data upload: Automatic and periodic transfer of PM data in Comma Separated Value (CSV) format,
enabling customers to integrate with their management applications (a parsing sketch follows the table).


Gateway Network Element (GNE) and MAP (Management Application Proxy) functions: Minimizes the
number of external DCN IP addresses and provides proxy services for management traffic to manage
network elements that do not have direct DCN connectivity. Also supports redundant management access
to all network elements and automatic recovery from a single failure in the communications path.

Non-Modal Multi-Window display: Facilitates the ability to launch numerous windows within the GUI,
easing provisioning, alarm correlation, and troubleshooting.

MPower Graphical Node Manager (GNM) GUI: Supports a web-based Graphical User Interface (GUI) to
manage a network element. The MPower GNM GUI resides on the network element and has the same
look and feel as the MPower EMS. MPower GNM supports log-in to remote network elements utilizing the
OSC.
- Event/Alarm management
- Topology navigation
- Inventory management
- Export inventory information in TSV and CSV format
- Automatic end-to-end circuit provisioning
- Manual cross-connect provisioning
- Historical and real-time performance monitoring
- Network element security management
- Software download
- Configuration database backup/restore

MPower Element Management System: Provides full fault management, configuration management,
service provisioning, performance management, and security management (FCPS) support of TN780 and
Optical Line Amplifier network elements and network-level end-to-end control and monitoring.
- Network/network element level event/alarm management
- Network/network element level topology management
- Network/network element level inventory management
- Network element PM archiving and scheduling
- Network element PM report generation
- Network element and MPower EMS security management
- Network element software download
- Network element configuration database backup/restore

MPower SNMP Trap agent:
- SNMPv2C agent with dynamic trap registration
- Automated generation of current standing alarms upon registration
- Architected for future robust trap implementation

TL1 Interface: The Telcordia standards compliant TL1 interface provides full FCPS support of TN780 and
Optical Line Amplifier network elements.
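Because the PM upload noted above is plain CSV, it can be consumed by simple scripts. The sketch
below is a hedged example: the file name and the column names (NE, Facility, BER) are hypothetical
placeholders, since the actual upload schema is defined by the UTStarcom PM upload format.

    # Hedged sketch: rank uploaded PM rows by a numeric metric.
    # Column names below are hypothetical, not the documented schema.
    import csv

    def worst_entries(path: str, metric: str = "BER", top_n: int = 5):
        """Return the top_n PM rows with the highest value of a numeric metric."""
        with open(path, newline="") as fh:
            rows = list(csv.DictReader(fh))
        rows.sort(key=lambda r: float(r[metric]), reverse=True)
        return rows[:top_n]

    for row in worst_entries("pm_upload.csv"):
        print(row["NE"], row["Facility"], row["BER"])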


CHAPTER 2

Network Applications
This chapter describes the configurations and network topologies supported by the TN780 in the following
sections:
TN780 Configurations on page 2-2
Network Topologies on page 2-4

TN780 Configurations
Unlike traditional Wavelength Division Multiplexing (WDM) networks, which contain distinct node types
each performing a specialized function (terminal, add/drop, or amplification), the flexible TN780 eliminates
the need for distinct node types. The TN780 provides all of these functions using a common set of circuit
packs by allowing the terminal, add/drop or amplification functions to be selected on a per channel
(10Gbps and 2.5Gbps) basis. The TN780 eliminates the node-type concept and introduces dynamically
re-configurable 0-100% digital add/drop, terminal and amplification functions in a single network element.
In addition, the TN780 provides digital performance monitoring on a per channel basis at each digital site
for fault isolation and troubleshooting.
Figure 2-1 TN780 Configurations
[Figure: a digital link between two Digital Terminal sites (DT), traversing an Optical Line Amplifier site, a
Digital Add/Drop site (AD), a Digital Repeater site (DR), and a Digital Junction site (JN); client signals enter
and exit at the digital sites, and optical spans connect adjacent sites.]

Reconfigurable Digital Add/Drop Configuration


When two fibers are terminated at the TN780, it can be configured to add/drop or pass through the
line-side traffic on a per channel (10Gbps or 2.5Gbps) basis. The TN780 can be configured to add/drop
0% to 100% of the system capacity. Each TN780 can add/drop up to 200Gbps per chassis in each
direction. In Release 1.2, the multi-chassis configuration may be utilized to increase the system capacity.
Digital performance monitoring and fault monitoring are performed for each add/drop or pass-through
channel.
Note: A terminal or a digital repeater site can be upgraded in-service to a re-configurable add/drop site by
populating additional circuit packs. No network engineering is required to enable add/drop capacity at any
digital site. The re-configurable add/drop capability does require a software license.


Digital Repeater Configuration


A TN780 in a digital repeater configuration can be initially equipped with 0% add/drop client-side capacity,
providing a 3R regenerator function plus FEC re-coding, but offering the possibility of future upgrade into a
digital add/drop configuration. The TN780 can perform digital amplification of up to 200Gbps per chassis in
each direction. As with other configurations, the TN780 also provides intermediate digital performance
monitoring data for each digitally amplified channel.

Digital Terminal Configuration


A TN780 in digital terminal mode terminates the incoming line side traffic and hands off the traffic to the
customer equipment. The TN780 can terminate up to 400Gbps per chassis in increments of 10Gbps,
2.5Gbps, or 1Gbps by populating the client side interfaces as and when needed. The optical transport
capacity on the line side is deployed in 100Gbps increments. As with other configurations, the TN780 also
provides intermediate digital performance monitoring data for each digitally amplified channel.

Junction Node Configuration


The TN780 can be used at a junction site where multiple fibers from different directions meet. The
common locations are at the switching sites in a core backbone network or at transition points such as
metro/core network boundaries. In such locations, the TN780 can terminate incoming traffic on the line
side fibers or pass through the traffic after performing


Network Topologies

Figure 2-5 Hub and Spoke Network
[Figure: a Digital Junction site acts as the hub, interconnecting multiple spoke routes; traffic to and from
customers is added and dropped at the nodes along each spoke.]

A linear add/drop network can be upgraded in-service to a hub and spoke network configuration by adding
a spoke-route at a Digital Junction site. Additionally, more spoke-routes can be added in-service to an
existing Digital Junction site. Also, a spoke route can be extended in an in-service manner with the addition
of Digital Add/Drop nodes.

Ring Network
A ring network is a special case of a linear add/drop network in which the two Digital Terminal nodes are
replaced by a single Digital Add/Drop node. A digital optical ring network therefore consists of TN780s
configured to perform the add/drop function and interconnected in a ring topology. (See Figure 2-6 on
page 2-7.) As with all other network configurations, a linear add/drop network is in-service upgradeable to
a ring network. The UTStarcom digital optical ring network eliminates the distance limitations on ring
circumference, which allows the digital optical ring to be deployed in both metro and core network
applications.


Figure 2-6 Ring Network
[Figure: TN780 add/drop nodes interconnected in a ring topology; traffic to and from customers is added
and dropped at each node.]


CHAPTER 3

Digital Optical Networking Systems


As described in CHAPTER 1, UTStarcom offers Digital Optical Networking Systems which help carriers build Digital
Optical Networks. The TN780 is the first Digital Optical Networking System offered by UTStarcom. The following
section provides a brief overview of the hardware modules that make up the TN780.
TN780 Hardware Overview on page 3-2
UTStarcom also offers Optical Line Amplifiers optimized to extend the optical reach between two TN780s.
The following section provides a brief overview of the hardware modules that make up the Optical Line Amplifier.
Optical Line Amplifier Hardware Overview on page 3-13
The TN780 and Optical Line Amplifier network elements provide similar system interfaces, data plane and
control plane functions as described in the following sections. The difference in the functionality of the
TN780 and Optical Line Amplifier network elements is called out as needed.
System Interfaces on page 3-17
System Data Plane Functions on page 3-21
System Control Plane Functions on page 3-35
System Management Plane Functions on page 3-40
As described in CHAPTER 2, the TN780 supports multiple configurations. The following sections describe the signal
flow within the TN780 for each supported configuration.

Digital Terminal Site Operation on page 3-41


Digital Add/Drop Site Operation on page 3-44
Digital Repeater Site Operation on page 3-49
The following section describes the signal flow within an Optical Line Amplifier.
Optical Line Amplifier Site Operation on page 3-51


TN780 Hardware Overview


This section provides an overview of the hardware modules that are equipped in the TN780 network
element. For the detailed description and technical specifications of the TN780 hardware, refer to the
UTStarcom TN780 Hardware Description manual.
The TN780 is comprised of one or more DTCs and optionally one or more passive Dispersion
Management Chassis (DMCs) for dispersion compensation depending on configuration, as described in
the following sections.
DTC Overview on page 3-2
DMC Overview on page 3-11

DTC Overview
The DTC is comprised of a chassis and field-replaceable circuit packs, and consists of several common
equipment components. Table 3-1 on page 3-2 lists the DTC components and field-replaceable circuit
packs. A front view of the DTC with the DTC components and circuit packs is shown in Figure 3-1 on
page 3-4.

Table 3-1 DTC Hardware Equipment

DTC components: Rack mounting ears, Power Entry Modules (PEM), I/O Panel, Timing and Alarm Panel
(TAP), Fan Trays, Air Filter.

Circuit Packs: Management Control Module-A (MCM-A), Management Control Module-B (MCM-B), Band
Mux Module-4-CX-A (BMM-4-CX-A), Band Mux Module-4-CX-B (BMM-4-CX-B), Band Mux
Module-8-CX-A (BMM-8-CX-A), Digital Line Module (DLM), Tributary Adaptor Module-10G (TAM-2-10G),
Tributary Optical Module-10G-SR1 (TOM-10G-SR1), Tributary Optical Module-10G-IR2 (TOM-10G-IR2),
Tributary Adaptor Module-2.5G (TAM-4-2.5G), Tributary Optical Module-2.5G-SR1 (TOM-2.5G-SR1),
Tributary Optical Module-2.5G-IR1 (TOM-2.5G-IR1), Tributary Adaptor Module-1G (TAM-4-1G), Tributary
Optical Module-1G-LX (TOM-1G-LX).


Figure 3-1 DTC Front View
[Figure: front view of the DTC chassis.]


Tributary Adapter Module-1G (TAM-1G)


The TAM-1G maps the client optical signals into digital signals for subsequent transmission through the
DLM. The 1G TAM can be arbitrarily equipped in any of the five sub-slots located on the DLM. The
TAM-4-1G provides four sub-slots to enable insertion of up to four TOMs.
The TAM-4-1G supports 1GbE interfaces.

Digital Line Module (DLM)


The DLM performs transport and switching of ten 10Gbps DWDM signals, referred to as the Optical
Carrier Group (OCG). The DLM performs the following functions:
Performs 4R function (retiming, reshaping, regeneration and recoding) on each optical channel. Forward
Error Correction (FEC) is applied to each channel that is transmitted to or received from a BMM, providing
a coding gain of 8.7 dB at 10Gbps at a BER of 1e-15 (a worked example follows this section).
Converts the digital signals received from a TAM into ITU-compliant optical signals and then multiplexes the ten 10Gbps optical channels into an OCG. The DLM performs the opposite function in the
reverse direction.
Each DLM houses up to five TAMs terminating up to 100Gbps of client traffic.
Supports grooming and switching of optical channels at 10Gbps, 2.5Gbps or 1Gbps granularity utilizing a cross-point switch and backplane connectivity to other DLMs. The DLM provides flexible
selection of add-drop, as well as wavelength interchange for pass-through traffic. Refer to Bandwidth Grooming on page 3-26 for the description of backplane bandwidth and switching rules.
Optically connects to the BMM for the second stage multiplexing of multiple OCGs onto the line side
fiber.
There are four DLM versions, one for each OCG. Refer to TN780 Optical Channel Plan on page B-2 for
more details on OCGs. The DLM supports the installation of any combination of TAM-2-10G, TAM-4-2.5G
and TAM-4-1G.
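To put the quoted FEC figures in context, the short sketch below applies the standard Gaussian-noise
approximation BER = 0.5 * erfc(Q / sqrt(2)). It is a generic worked example, not a statement of the TN780
link budget; the 8.7 dB coding gain itself is the specification quoted above.

    # Worked example: Q factor needed for a target BER, and the effect of an
    # 8.7 dB net coding gain (generic approximation, not a TN780 link budget).
    from math import sqrt, log10
    from scipy.special import erfcinv

    def q_factor_db(ber: float) -> float:
        """Q (in dB, 20*log10) required for a target BER under the Gaussian model."""
        q = sqrt(2.0) * erfcinv(2.0 * ber)
        return 20.0 * log10(q)

    post_fec = q_factor_db(1e-15)            # about 18.0 dB
    print(f"Q for 1e-15: {post_fec:.1f} dB")
    print(f"Approx. pre-FEC Q with 8.7 dB gain: {post_fec - 8.7:.1f} dB")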

Band Mux Module-4-CX-A (BMM-4-CX-A)


The BMM-4-CX-A performs the following functions:
Optically multiplexes four OCGs from the DLMs onto the line side facility
Optically de-multiplexes the line side signal into four OCGs and passes them to local DLMs
Provides optical insertion and extraction of the 1510nm Optical Supervisory Channel (OSC) using a
1510nm optical filter
Optically amplifies the multiplexed transmitted and received OCG signals using either an optical
booster or a pre-amplifier
Provides a C/L-band splitter to support an in-service expansion of the system to enable optical
transmission in the L-band


Provides optical access points for power monitors or optical spectrum analyzers. This includes two
(2) receive access points and one (1) transmit access point
Provides sub-slot access for the OWM supported in a future release
Accommodates mid-stage access to Dispersion Compensation Fiber (DCF)
There are three different BMM-4-CX-A types providing different EDFA gain and with/without mid-stage
DCF access.

Band Mux Module-4-CX-B (BMM-4-CX-B)


The BMM-4-CX-B performs the following functions:
Enhanced EDFA for increased reliability
Optically multiplexes four OCGs from the DLMs onto the line side facility
Optically de-multiplexes the line side signal into four OCGs and passes them to local DLMs
Provides optical insertion and extraction of the 1510nm Optical Supervisory Channel (OSC) using a
1510nm optical filter
Optically amplifies the multiplexed transmitted and received OCG signals using either an optical
booster or a pre-amplifier
Provides a C/L-band splitter to support an in-service expansion of the system to enable optical transmission in the L-band
Provides optical access points for power monitors or optical spectrum analyzers. This includes two
(2) receive access points and one (1) transmit access point
Provides sub-slot access for the OWM supported in a future release
Accommodates mid-stage access to Dispersion Compensation Fiber (DCF)
There are three different BMM-4-CX-B types providing different EDFA gain and with/without mid-stage
DCF access.

Band Mux Module-8-CX-A (BMM-8-CX-A)


The BMM-8-CX-A performs the following functions:
Enhanced EDFA for increased reliability
Optically multiplexes eight OCGs from the DLMs onto the line side facility
Optically de-multiplexes the line side signal into eight OCGs and passes them to local DLMs
Provides optical insertion and extraction of the 1510nm Optical Supervisory Channel (OSC) using a
1510nm optical filter
Optically amplifies the multiplexed transmitted and received OCG signals using either an optical
booster or a pre-amplifier


Provides a C/L-band splitter to support an in-service expansion of the system to enable optical
transmission in the L-band
Provides optical access points for power monitors or optical spectrum analyzers. This includes two
(2) receive access points and one (1) transmit access point
Provides sub-slot access for the OWM supported in a future release
Accommodates mid-stage access to Dispersion Compensation Fiber (DCF)
There are three different BMM-8-CX-A types providing different EDFA gain and with/without mid-stage
DCF access.
Note: In R1.2 the support for the BMM-8 is on a limited availability basis. Please contact your
UTStarcom sales account team for more information.

DMC Overview
This section provides an overview of the DMC. For the detailed description and technical specifications
refer to the UTStarcom TN780 Hardware Description manual.
The DMC is a passive chassis and does not require management. Depending on the span characteristics,
the DMC is optionally included in TN780 and Optical Line Amplifier network elements to provide dispersion
compensation.
The DMC is comprised of a chassis and Dispersion Compensation Modules (DCMs).
The DMC is a 1RU chassis. As with the DTC, the DMC can be mounted in a 23-inch rack (flush-mount and
1-inch, 2-inch, 5-inch and 6-inch forward-mount positions) and a 600mm x 600mm ETSI rack (flush-mount).
Each DMC can accommodate two half-width DCMs (see Figure 3-2 on page 3-12) or one full-width DCM
(see Figure 3-3 on page 3-12).
Multiple DCMs are available providing 100ps/nm to 1800ps/nm in 100ps/nm increments.


Figure 3-2 DMC with Two Half-width DCMs

Figure 3-3 DMC with One Full-width DCM


Optical Line Amplifier Hardware Overview


This section provides an overview of the hardware modules that are equipped in the Optical Line Amplifier
network element. For a detailed description and technical specifications refer to the UTStarcom TN780
Hardware Description manual.
The Optical Line Amplifier is comprised of an OTC and optionally a DMC for dispersion compensation
depending on configuration, as described in the following sections.
OTC Overview on page 3-13
DMC Overview on page 3-11

OTC Overview
The OTC is comprised of a chassis and field-replaceable circuit packs. Table 3-2 on page 3-13 lists the
OTC components and field-replaceable circuit packs. A front view of the OTC with the OTC components
and circuit packs is shown in Figure 3-4 on page 3-14.
Table 3-2 OTC Hardware Equipment

OTC components: Rack mounting ears, Power Entry Module, IO/Alarm Panel, Fan Tray, Air Filter.

Circuit Packs: Optical Management Module (OMM), Optical Amplifier Module (OAM).


Figure 3-4 OTC Front View
[Figure: front view of the OTC showing the IO/Alarm Panel (IAP) and its management ports, PEM A and
PEM B, fiber and cable guides, the OMM slots and OWM sub-slot, the OAM slots, the air filter and air
inlet, and fan trays A and B.]

OTC
The OTC houses the common equipment required for operations and the circuit packs that amplify optical
signals. Each OTC supports a bidirectional optical amplification function. The OTC includes the following
common equipment that provides power, performs system supervision, and enables system-level
communication:
Rack Mounting Ears (see Rack Mounting Ears on page 3-14)
Two Power Entry Modules (see Power Entry Module on page 3-15)
One IO/Alarm Panel (see IO/Alarm Panel on page 3-15)
Two Fan Trays (see Fan Tray on page 3-15)
One Air Filter (see Air Filter on page 3-15)
One Card cage (see Card Cage on page 3-15)

Rack Mounting Ears


Each OTC includes integrated rack mounting ears used to flush-mount the chassis in a 600mm x 600mm
ETSI rack. Separate rack mounting ears are provided to mount the chassis in a 23-inch rack in flush-mount
and in 5-inch and 6-inch forward-mount positions.


Power Entry Module


The Power Entry Module, referred to as the PEM, is used in redundant pairs to manage the two
independent input power supplies (redundant A and B power feeds) to the OTC. The PEM outputs are
paralleled together on the OTC to form a fully redundant and independent power supply.
Each PEM is designed to connect to -60VDC or -48VDC external Power Distribution Units (PDU). The
PEM supports the OTC operating voltage range of -72VDC to -40VDC and a worst case load current of
40A at -40VDC. The PEM does not provide chassis-level over-current protection; over-current protection
must be provided by the external PDU to which it is connected. The PEM provides input over-voltage and
under-voltage protection.

IO/Alarm Panel
The IO/Alarm Panel houses the management and operations interfaces as described below:
Two 10/100Mb auto-negotiating Data Communication Network (DCN) RJ-45 interfaces
Two 10Mb Administrative Inter-LAN RJ-45 interfaces to support the Datawire application
One Craft RS232 Modem port
Chassis level alarm LEDs (Critical, Major, Minor, Power)
Four inter-chassis interconnect RJ-45 interfaces, referred to as Nodal Control and Timing (NCT), for multi-chassis configuration
One Lamp Test button
One ACO button
One ACO LED
The IO/Alarm Panel also houses telemetry alarm contacts. It provides 19 user-customizable alarm input
contact sets and 10 user-customizable alarm output contact sets.

Fan Tray
Each OTC accommodates two fan trays, one on the left side of the chassis and the other on the right side
of the chassis. Each fan tray contains one cooling fan. The two fan trays work concurrently to push/pull air
through the system with air flow entering from the front right and exiting on the left side.

Air Filter
Each OTC accommodates one replaceable air filter located on the right side of the chassis to filter out
particles at the air intake of the OTC.

Card Cage
Each OTC has a card cage into which field replaceable circuit packs are installed. Each OTC card cage
can accommodate:


Up to two Optical Management Modules (OMM) in slots 1A and 1B


Up to two Optical Amplifier Modules (OAM) in slots 2 and 3
Future support for one OWM to be plugged into an OAM

Optical Management Module (OMM)


The OMM provides the same functions as the MCM (see Management Control Module (MCM-A, MCM-B)
on page 3-7), but for the OTC. As with MCM-B, redundant OMMs are supported for high availability.

Optical Amplifier Module (OAM-CX-A, OAM-CX-B)


The OAM performs uni-directional in-line optical amplification of the incoming signal, and terminates the
OSC for processing control and in-band management traffic. Two OAMs are required in an OTC to perform
bi-directional optical amplification.
As with the BMM, the OAM terminates the OSC. The OAM contains the OSC optical transmitter and
receiver. However, since each OAM receives optical signals from one direction and transmits towards the
opposite direction, a special consideration is given to ensure that the OSC transmitter and receiver for a
given link are located on the same OAM. This is done by having the West OSC transmitter located on the
W-E OAM and the East OSC transmitter located on the E-W OAM. The OSC transmit signal is crossed
over using a front panel, duplex, optical patch cord. This is done so that the failure of a single OAM does
not result in isolation of the network element for management traffic. Refer to Optical Line Amplifier Site
Operation on page 3-51 for more details.
The OAM also includes a mid-stage DCF access port to enable optical dispersion compensation. The
DMC, as described in DMC Overview on page 3-11, is used to provide dispersion compensation. The OAM
automatically detects the presence of the DCF during turn-up and adjusts its pump powers to achieve the
correct gain with the DCF in place. As a precaution during initial system turn-up, an alarm is generated if
the measured mid-stage loss is out-of-tolerance relative to the provisioned expected mid-stage loss.
There are six different OAM types providing different EDFA gain and with/without mid-stage access. Since
the reach requirements may be different in the two directions, two different OAM types (with respect to
gain, mid-stage access, or band) may be combined within an OTC.
The OAM-CX-B has an enhanced EDFA for greater reliability.


System Interfaces
The TN780 and Optical Line Amplifier network elements provide several external interfaces as described
in the following sections:
Operations Interfaces on page 3-17
Transport Interfaces on page 3-18
Input/Output Alarm Contacts on page 3-19
Datawire on page 3-20

Operations Interfaces
The operations interfaces provide the management and administration of the network element. The TN780
and Optical Line Amplifier network elements provide two kinds of interfaces, craft interfaces for local
access and DCN interfaces for remote access, as described below.

Management Interfaces
The network elements provide multiple craft interfaces for local user access to network management and
Operations, Administration, Maintenance and Provisioning (OAM&P) functions and also DCN interfaces
for remote access. Following is a list of external interfaces that can be used to facilitate the connection of
management devices to the TN780 and Optical Line Amplifier network elements.
Craft Serial DCE - This is a DB-9 female/RS-232 DCE interface used to connect a dumb terminal.
This serial port supports TL1 only (not EMS or Craft GUI). Maintenance personnel can use this interface for managing the local network element or any subtending network elements utilizing this network element as a Gateway. The craft serial interface is located on the MCM/OMM.
Craft Ethernet - This is a 10Mbps Ethernet RJ45 interface. This interface can be used to access the
network element through the TL1 Interface or MPower GNM. Maintenance personnel can use this
interface for managing the local network element or any subtending network elements utilizing this
network element as a Gateway. The craft Ethernet interface is located on the MCM/OMM.
DCN - This is an auto-negotiating 10/100Mbps Ethernet RJ45 interface. There are two DCN interfaces per network element supporting redundant inter-connectivity to the DCN. OSS personnel can
use this interface to manage the network element remotely, using any of the UTStarcom Network
Management Software applications, such as MPower EMS, MPower GNM or the system's TL1 interface,
to manage the local network element or any subtending network elements utilizing this network element
as a Gateway. DCN interfaces are located on the IO Panel of the TN780 and the IO/Alarm Panel of the
Optical Line Amplifier.
Craft Serial DTE - This is a DB-9 Male/RS-232 DTE interface used to connect an external modem or
a dumb terminal. This interface is located on the IO Panel of the TN780 and IO/Alarm Panel of the
Optical Line Amplifier.


Refer to UTStarcom TL1 User Guide, UTStarcom MPower GNM User Guide, and UTStarcom MPower
EMS User Guide for more details on how to use these interfaces to access the corresponding network
management applications.
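As a rough illustration of scripted access over one of these interfaces, the sketch below opens a raw TCP
session and issues two standard Telcordia TL1 commands. The host address, TCP port and credentials
are placeholders (assumptions); the supported command set and session parameters are specified in the
UTStarcom TL1 User Guide.

    # Hedged sketch: scripted TL1 session over TCP. Host, port and credentials
    # are placeholders; consult the UTStarcom TL1 User Guide for the real values.
    import socket

    HOST, PORT = "192.0.2.1", 3083   # assumed management IP and TL1 port

    def send_tl1(sock: socket.socket, cmd: str) -> str:
        """Send one TL1 command and return the raw response text."""
        sock.sendall((cmd + "\r\n").encode("ascii"))
        return sock.recv(65536).decode("ascii", errors="replace")

    with socket.create_connection((HOST, PORT), timeout=10) as s:
        print(send_tl1(s, "ACT-USER::ADMIN:100::SECRET;"))   # log in
        print(send_tl1(s, "RTRV-ALM-ALL:::101;"))            # retrieve all alarms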

Transport Interfaces
The transport interfaces carry the user data. Two types of transport interfaces are provided as described
below.

Client/Trib Interfaces
The client/trib interfaces are the ingress/egress points of the customer signals into/out of the TN780. These
signals can be added/removed at a terminal site, or an Add/Drop site. The following client/trib signals are
supported:
SONET OC-192 with full SONET overhead transparency
SONET OC-48 with full SONET overhead transparency
SDH STM-64 with full SDH overhead transparency
SDH STM-16 with full SDH overhead transparency
10G clear channel
10GbE LAN Phy
10GbE WAN Phy
1GbE

Line Interface
The line side optical interface carries the aggregate signal coming into/out of the TN780 and Optical Line
Amplifier network elements. The line side signal has the following characteristics:
40x10G channels with integrated OC-3c OSC
Enhanced FEC for 1E-15 end-to-end BER
Digital section layer & digital path level OAM (PM, tracing, alarms)
Traffic-agnostic transport for any 10Gbps/2.5Gbps/1Gbps signals
The line side interface supports multiple fiber types, such as SMF, TW-RS, and E-LEAF.
For more details on the optical characteristics of the line interfaces, refer to UTStarcom TN780 Hardware
Description manual.


Input/Output Alarm Contacts


The network element provides several input and output alarm contacts for integration with existing
telemetry systems. The network element also provides visual and audible indicators for office alarms. The
alarm contacts are available on a front-accessible, wire-wrap connector located on the Timing/Alarm
Panel.
Each network element provides 20 alarm input contacts and 20 alarm output contacts. Some of the
contacts are reserved for pre-defined functions and the remaining are user-customizable, as described in
the following sections.

Office Alarms
The TN780 and Optical Line Amplifier network elements provide seven office dry alarm contact sets to
connect to the Central Office alarm grid. Following are the office alarms provided:
Critical Audible
Critical Visual
Major Audible
Major Visual
Minor Audible
Minor Visual
Power failure
Each set consists of normally-closed (NC), normally-open (NO) and common contacts. When two or more
chassis are installed in a single bay, the alarm outputs may be ORed by wiring the associated outputs in
parallel (normally-open) or in series (normally-closed), as preferred by the customer (the ORing logic is
sketched below).
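The wiring ORs the alarms because a parallel path of normally-open contacts closes as soon as any one
contact closes, while a series path of normally-closed contacts opens as soon as any one contact opens.
The following lines state that logic directly (illustration only):

    # Why parallel NO / series NC wiring ORs alarms across chassis.
    no_contacts = [False, True, False]     # True = NO contact closed (alarm active)
    parallel_no_closed = any(no_contacts)  # parallel NO path closes if any alarm
    nc_contacts = [True, False, True]      # True = NC contact closed (no alarm)
    series_nc_open = not all(nc_contacts)  # series NC path opens if any alarm
    print(parallel_no_closed, series_nc_open)  # both report the ORed alarm state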

Alarm Cutoff (ACO)


The DTC and OTC provide an ACO function so that customers can mute the audible alarms while other
alarm indications persist. The ACO is implemented with a front panel push button and a front panel LED.
When the front panel ACO push button is pressed, all currently outstanding audible alarms (of all
severities) are silenced and the ACO LED is illuminated. The illuminated ACO LED indicates that one or
more audible alarms are present, but the audible indicators have been suppressed. Subsequent alarms
will re-trigger the audible alarms. However, the ACO LED stays illuminated until all the silenced audible
alarms are cleared. Note that alarm acknowledgment does not change the ACO LED state.
The ACO can also be operated remotely through management applications.
Note: The ACO function is local to the chassis. It does not affect the audible alarm state in other
chassis.
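The ACO behavior described above can be summarized as a small state machine. The following sketch is
an illustrative model only, not UTStarcom firmware: pressing ACO silences the outstanding audible alarms
and lights the LED, new alarms sound again, and the LED clears once every silenced alarm has cleared.

    # Illustrative behavioral model of the ACO logic described above.
    class AcoPanel:
        def __init__(self):
            self.audible = set()    # alarms currently driving audible indicators
            self.silenced = set()   # alarms muted by a previous ACO press

        def raise_alarm(self, alarm_id):
            self.audible.add(alarm_id)          # a new alarm always sounds

        def clear_alarm(self, alarm_id):
            self.audible.discard(alarm_id)
            self.silenced.discard(alarm_id)

        def press_aco(self):
            self.silenced |= self.audible       # mute everything outstanding
            self.audible.clear()

        @property
        def aco_led(self):
            return bool(self.silenced)          # lit until silenced alarms clear

    panel = AcoPanel()
    panel.raise_alarm("LOS-1"); panel.press_aco()
    assert panel.aco_led and not panel.audible
    panel.raise_alarm("LOS-2")                  # subsequent alarm sounds again
    panel.clear_alarm("LOS-1")                  # last silenced alarm clears
    assert not panel.aco_led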


Parallel Telemetry
The DTC provides sixteen user-customizable environmental alarm input contact sets and the OTC
provides nineteen user-customizable alarm input contact sets through opto-isolators. Each alarm input
contact set consists of a signal and a return contact. Users can customize these alarm inputs; when an
input is activated, it results in the generation of a customized alarm. The status of all alarms is accessible
through the management applications.
The DTC and OTC provide ten user-customizable parallel telemetry output contact sets using latching,
form-c relays. The control relays are latching, meaning they maintain their relay position (open or closed)
even during a power failure. Each output contact set consists of normally-closed, normally-open and
common contacts. The alarm outputs are controlled by the MCM/OMM.

Datawire
The TN780 and Optical Line Amplifier network elements provide two physical 10Mbps Ethernet RJ45
interfaces to support redundant access to the 10Mbps Datawire channel over the OSC. The Datawire
channel is used for interconnecting customers' LAN segments at various sites along a route. For example,
the Datawire channel can be used for applications such as backhauling customers' network management
traffic from remote sites to a gateway network element site, or for serving as a network management
access port for field personnel to gain management access to a remote network element.
The configured IP addresses and subnets of the Datawire LAN ports are advertised by the GMPLS routing
protocol (see IQ GMPLS Control Plane Overview on page 4-47); therefore, the subnets become reachable
from other Datawire ports.


System Data Plane Functions


The DTC data plane consists of optical, optoelectronics, and electrical components located in multiple
circuit packs performing adaptation, conversion, multiplexing/de-multiplexing, and switching of signals to
provide digital transport and optical transport functions as described in the following sections. The
following data plane functions are described below:
Digital Transport on page 3-21
Optical Transport on page 3-30

Digital Transport
The DTC and corresponding circuit packs provide the digital transport capability. Figure 3-5 on page 3-22
illustrates the interconnection between the circuit packs and major components along the data path. The
sections that follow describe the data plane functions.
Note: Figure 3-5 on page 3-22 illustrates the functions; the interconnectivity between the circuit packs can
vary based on the network element configuration and customer application.


Figure 3-5 DTC Digital and Optical Transport Architecture

Tributary Adaptation
As shown in Figure 3-5 on page 3-22, the DTC data plane performs a tributary adaptation function where
any variety of 10Gbps, 2.5Gbps and 1Gbps client signals is adapted to an ITU-compliant optical signal for
transmission on the line fiber. The tributary adaptation includes conversion of the client's optical signals
into digital signals (performed in the TOM), encapsulation of the 10Gbps, 2.5Gbps or 1Gbps payload into
a Digital Transport Frame, referred to as the DTF (performed in the TAM and DLM), and conversion of the
digital signals into ITU-compliant optical signals (performed in the DLM). The following client signals are
supported:
SONET OC-192 with full SONET overhead transparency
SONET OC-48 with full SONET overhead transparency


SDH STM-64 with full SDH overhead transparency
SDH STM-16 with full SDH overhead transparency
10G Clear Channel with full transparency
10GbE LAN Phy
10GbE WAN Phy
1GbE
The DTF is architected to accommodate a mix of 10Gbps, 2.5Gbps and 1Gbps payload formats, including:
SONET OC-192
SDH STM-64
SONET OC-48
SDH STM-16
GbE
Other emerging or future service types

Digital Transport Frame


The UTStarcom Digital Transport Frame, referred to as the DTF, is used within the Digital Optical Network to
transport client signals end-to-end. The UTStarcom Digital Transport Architecture is modeled after the ITU G.709
digital wrapper architecture, but has been simplified and extended to support the following features:
Transparent to the client signal format
Accommodates different types of 2.5Gbps signals asynchronously multiplexed into a common
10Gbps wavelength for wavelength efficiency
Provides performance and maintenance functions on a per-channel basis
Provides consistent transport management and monitoring capabilities irrespective of the specific
client signal format
The DTF accommodates three network layers as described in the sections that follow. The DTF format
provides asynchronous mapping of the client signals within the frame; that is, it provides stuffing
opportunities that may or may not contain real data. This allows the client signal frequency to be
independent of the facility and system clock frequencies within an allowed range. The DTF framing is
performed in the TAM and DLM.
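The stuffing mechanism can be pictured with a short sketch. The following Python fragment is purely conceptual: the frame size, slot structure, and STUFF marker are invented for illustration and do not reflect the actual DTF layout; it only shows how stuff opportunities decouple the client rate from the frame rate.

    def map_client_into_frame(client_bytes, payload_slots=16):
        """Fill a frame's payload slots with client data; any slot with no
        client byte ready becomes a stuff (justification) opportunity."""
        frame = []
        it = iter(client_bytes)
        for _ in range(payload_slots):
            byte = next(it, None)
            if byte is None:
                frame.append(("STUFF", 0x00))   # no data ready: stuff
            else:
                frame.append(("DATA", byte))
        return frame

    # A slower client fills fewer slots; the receiver discards STUFF entries,
    # so the client clock need not match the facility/system clock exactly.
    frame = map_client_into_frame(b"\x01\x02\x03" * 4)     # 12 data bytes
    print(sum(1 for kind, _ in frame if kind == "STUFF"))  # -> 4 stuff slots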

Digital Transport Network Layers


The UTStarcom DTF format accommodates three Digital Transport Network Layers, analogous to the
OTU/ODU/OPU layered architecture specified by G.709. The three Digital Transport Network layers are
described below.


Figure 3-6 Digital Transport Network Layers

[Figure: DTF Section (DTS), DTF Line (DTL), and DTF Path (DTP) layers carrying native client services: 10Gbps client signals (e.g., OC-192, 10GbE, SDH STM-64) and a 2.5Gbps client signal (e.g., SONET OC-48)]

DTF Section Layer


The DTF Section layer, referred to as the DTS, is terminated by each TN780 within the Digital Optical
Network, and provides a dedicated layer of PM and maintenance functions for localized section monitoring
and tracing. The DTS layer corresponds to the digital optical segments shown in Figure 3-6 on page 3-24.

DTF Line Layer


The DTF Line layer, referred to as the DTL, is analogous to the SONET Line layer and SDH Multiplex
Section layer. It provides line-level PM and maintenance functions between TN780 nodes that are
configured in digital add/drop mode. The DTL also transports overhead bytes for APS protection functions
at the optical channel level, transparent to the payload. The DTL provides multiplexing support for the DTF
Path layer, and can transport one 10Gbps or four 2.5Gbps DTF Path signals.


Figure 3-8 DTC Grooming Capacity


Note: Figure 3-8 on page 3-27 illustrates an example in which the DLMs in the odd-numbered slots
are connected to one BMM (toward the west direction) and the DLMs in the even-numbered slots are
connected to the other BMM (toward the east direction). In this example configuration, each
DTC can support up to 400Gbps of grooming capacity.

Reconfigurable Add/Drop
The TN780 system data plane implements fully flexible 0% to 100% add/drop capabilities on a per-channel basis (10Gbps and 2.5Gbps). The channels can be configured as pass-through or add/drop. A
pass-through channel can be re-configured to an add/drop channel by:
Populating the client side circuit packs (TAM and TOM)
Provisioning an end-to-end circuit through the management applications
There are no restrictions as to how many channels or which channels are added/dropped at any given site.
Whenever an add/drop channel is added or deleted, no network engineering is required. Furthermore, the
add/drop channels are transparent to the client signal format and can carry many client signals.


Digital Regeneration
The TN780 system data plane implements fully flexible 0% to 100% digital amplification capabilities on a
per-channel basis (10Gbps and 2.5Gbps). It has the capability to digitally amplify the channels at 10Gbps
and 2.5Gbps. There are no restrictions as to how many channels or which channels are digitally amplified
at any given site. Whenever a digital amp channel is added or deleted, no network engineering is required.
Furthermore, the digital amp channels are transparent to the client signal format and can carry many client
signals.

Digital Conditioning
The TN780 system data plane includes Forward Error Correction (FEC) encoder/decoder for every
channel on the line side at every digital add/drop, digital terminal and digital repeater node to improve the
overall BER.
UTStarcom implements an enhanced FEC algorithm which has a higher coding gain than the standard
G.709 RS(255,239) algorithm. The Enhanced FEC algorithm provides a coding gain of 8.7 dB at 10Gbps
at a BER of 1e-15 with the same 7% overhead ratio as the standard G.709 FEC algorithm.
The Enhanced FEC function is implemented on the DLM.
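As a rough illustration of what these figures mean, the sketch below (plain Python, standard library only) computes the RS(255,239) overhead and shows how an 8.7dB coding gain translates into the line quality required for a 1e-15 post-FEC BER; the bisection-based inverse is an approximation for illustration, not the vendor's FEC math.

    import math

    # RS(255,239): 16 parity bytes per 239 payload bytes, i.e. the ~7% overhead.
    overhead = (255 - 239) / 239
    print(f"RS(255,239) overhead: {overhead:.1%}")            # -> 6.7%

    def q_for_ber(ber):
        """Q-factor needed for a target BER, using BER = 0.5*erfc(Q/sqrt(2));
        solved by bisection since the standard library has no inverse erfc."""
        lo, hi = 0.0, 20.0
        for _ in range(80):
            mid = (lo + hi) / 2
            if 0.5 * math.erfc(mid / math.sqrt(2)) > ber:
                lo = mid          # BER still too high: need a larger Q
            else:
                hi = mid
        return hi

    # An 8.7dB coding gain means the decoder delivers the 1e-15 post-FEC BER
    # from a line that is 8.7dB worse (in Q terms) than an uncoded line.
    q_db = 20 * math.log10(q_for_ber(1e-15))
    print(f"Q for 1e-15 without FEC: {q_db:.1f} dB")          # ~18.0 dB
    print(f"Q required with 8.7 dB gain: {q_db - 8.7:.1f} dB")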

Digital Transport Performance Monitoring


As described in Digital Transport Frame on page 3-23, client signals are encapsulated within a DTF to
transport across the Digital Optical Network. The DTF architecture supports digital performance monitoring
that is agnostic to the client signal payload. The DTF overhead bytes are designed to provide users
performance monitoring capabilities at transport layers analogous to SONET/SDH layering. As illustrated
in Figure 3-6 on page 3-24, the DTF architecture includes the DTF Section, DTF Line and DTF Path (DTP)
layers. The digital performance monitoring is supported at each of these layers, as described in the
following sections, for every channel (10Gbps, 2.5Gbps, and 1Gbps) at every digital site, enhancing
troubleshooting and fault isolation in the transport domain.
Also, the DTF includes FEC overhead bytes providing FEC performance data for BER computation on
each digital link and on each end-to-end digital channel.

DTF Section PM
The DTF Section layer includes a BIP-8 counter on each 10Gbps digital channel of a digital link, and it can
be monitored at each digital site.
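For readers unfamiliar with BIP-8, the sketch below shows the classic computation (bit-interleaved even parity over eight bit columns, which reduces to an XOR fold over the monitored block); the block boundaries here are illustrative, not the DTF's actual parity coverage.

    from functools import reduce

    def bip8(block: bytes) -> int:
        """Bit-interleaved parity-8: bit i of the result is even parity over
        bit i of every byte in the block, i.e. an XOR fold of the bytes."""
        return reduce(lambda acc, b: acc ^ b, block, 0)

    def bip8_mismatches(block: bytes, received_parity: int) -> int:
        """Number of parity-bit violations (0..8) for the monitored block."""
        return bin(bip8(block) ^ received_parity).count("1")

    frame = bytes(range(64))
    parity = bip8(frame)                                  # computed at the source
    corrupted = bytes([frame[0] ^ 0b101]) + frame[1:]     # flip two bits in transit
    print(bip8_mismatches(corrupted, parity))             # -> 2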

DTF Line PM
The DTF Line layer defines BIP-8 statistics across multiple consecutive digital links along a route, as
defined by the customer.
This counter is not supported in Release 1.2.

DTF Path PM
The DTF Path layer includes a BIP-8 counter for both 2.5Gbps and 10Gbps client signals, and is
associated with the end-to-end path of the signal. The path performance monitoring data is available at the
DTP end points and also available at the intermediate digital sites where the DTF is regenerated,
analogous to SONET/SDH intermediate path performance monitoring.

Native Client Signal Performance Monitoring


The performance monitoring data of the native client signal is collected at the end point prior to
encapsulating the client signal in a DTF. The native client signal performance data is transported
transparently across the Digital Optical Network. At the egress endpoint the parity errors on the
encapsulated client signal are detected and appropriately included in the client signal overhead bytes prior
to handing off the client signal to the customer equipment. The client signal performance monitoring data is
collected for all the supported client signal types, including:
SONET OC-192
SONET OC-48
SDH STM-64
SDH STM-16
10GbE LAN Phy
10GbE WAN Phy

FEC PM
As described in Digital Conditioning on page 3-28, FEC encoding and decoding is performed on every
digital channel. The FEC statistics are collected at every digital site on every channel, including:
Uncorrected bit error rate
Corrected bit error rate
Corrected number of zeros
Corrected number of ones
Uncorrected number of codewords
Total number of codewords
Raw total bit errors before applying FEC
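A sketch of how such counters combine into rates is shown below; the function and counter names are hypothetical stand-ins for the statistics listed above, and the codeword size assumes an RS(255,239)-style 255-byte codeword.

    def fec_rates(corrected_zeros, corrected_ones, uncorrected_codewords,
                  total_codewords, bits_per_codeword=255 * 8):
        """Turn interval FEC counters into a pre-FEC BER estimate and an
        uncorrectable-codeword ratio."""
        total_bits = total_codewords * bits_per_codeword
        corrected_bits = corrected_zeros + corrected_ones
        pre_fec_ber = corrected_bits / total_bits if total_bits else 0.0
        bad_ratio = (uncorrected_codewords / total_codewords
                     if total_codewords else 0.0)
        return pre_fec_ber, bad_ratio

    ber, bad = fec_rates(corrected_zeros=1200, corrected_ones=1150,
                         uncorrected_codewords=0, total_codewords=10**9)
    print(f"pre-FEC BER ~ {ber:.2e}, uncorrectable ratio {bad:.1e}")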

Digital Transport Maintenance Functions


The DTF architecture supports the maintenance functions that are agnostic to the client signal payload
format. The DTF overhead bytes are designed to provide users the maintenance and the troubleshooting
capabilities at transport layers analogous to SONET/SDH layering.
The DTF defines several maintenance signals which are transmitted in-band to the upstream and
downstream network elements using the overhead bytes. These include:
DTF BDI-L and DTF BDI-P are Backward Defect Indication signals sent upstream as an indication that a downstream defect has been detected
DTF AIS-L and AIS-P are Alarm Indication Signals sent downstream as an indication that an
upstream defect has been detected
DTF OCI-L and DTF OCI-P are Open Connection Indication (OCI) signals sent downstream as
an indication that the signal is not connected to a source in the upstream
DTF LCK-L and DTF LCK-P are Locked signals sent downstream as an indication that the connection is locked in the upstream node
Signal Degrade (SD) signal is sent downstream indicating the BER of the received signal has
exceeded the configured signal degrade threshold
Signal Fail (SF) signal is sent downstream indicating the BER of the received signal has
exceeded the configured signal fail threshold
Trace message (TTI) at DTF Section layer providing continuity check along a digital link between
consecutive Digital Optical Nodes
Trace message (TTI) at DTF Path layer providing end-to-end continuity check between the two endpoints within the Digital Transport Network
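The direction conventions in this list can be summarized in a small table; the sketch below only restates the text (BDI flows upstream; AIS, OCI, LCK, SD, and SF flow downstream) and is not an actual IQ data structure.

    # Direction of each DTF maintenance signal, restating the list above.
    MAINTENANCE_SIGNALS = {
        "BDI-L": "upstream",   "BDI-P": "upstream",
        "AIS-L": "downstream", "AIS-P": "downstream",
        "OCI-L": "downstream", "OCI-P": "downstream",
        "LCK-L": "downstream", "LCK-P": "downstream",
        "SD":    "downstream", "SF":    "downstream",
    }

    def signals_toward(direction: str):
        """Return the maintenance signals sent in the given direction."""
        return sorted(s for s, d in MAINTENANCE_SIGNALS.items() if d == direction)

    print(signals_toward("upstream"))   # -> ['BDI-L', 'BDI-P']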

Optical Transport
The TN780 and Optical Line Amplifier network elements include the optical transport functions which are
described below.

Optical Transport Layers


As with digital transport layers, the TN780 and Optical Line Amplifier network elements define optical
transport layers within the optical domain (see Figure 3-9 on page 3-31).


Figure 3-9 Optical Transport Layers

O
p
t
i
c
a
lT
r
a
n
s
p
o
r
tS
e
c
t
i
o
nO
T
SO
T
S O
T
S O
T
SO
T
S O
T
S

O
T
SO
T
S O
T
S O
T
S

O
p
t
i
c
a
lM
u
x
S
e
c
t
i
o
n
(
b
a
n
d
)O
M
S
bO
M
S
bO
b
M
S
b O
M
S
bO
M
S
bO
M
S
b O
M
S
bO
M
S
bO
M
S
bO
M
S
b
O
p
t
i
c
a
lM
u
x
S
e
c
t
i
o
n
(
O
C
G
)
O
p
t
i
c
a
lC
h
a
n
n
e
l

O
M
S
o

O
M
S
o

O
M
S
o

O
M
S
o

O
C
h

O
C
h

O
C
h

O
C
h

At the lowest layer, the Optical Channel (OCh) is a 10Gbps channel within the C-band channel plan. The next
layer is the Optical Multiplex Section (OMS) layer. UTStarcom defines two-stage multiplexing resulting in
two OMS layers (OMSo and OMSb). The OMSo is a 100Gbps signal, an aggregate of ten Optical
Channels (OChs). The OMSo is referred to as the Optical Carrier Group (OCG). The OMSb is a
400Gbps signal, an aggregate of four OCGs, with support for 800Gbps (8 OCGs) in the future. The OMSb
is commonly referred to as the C-band or L-band. Release 1.2 supports only the C-band channel plan, with future
support for the L-band channel plan. The optical transport section (OTS) is an aggregate of the OMSb (C-band),
the OMSb (L-band, in a future release), and the OSC channel, providing 1.2Tbps capacity per fiber in the future.
Thus, an OTS signal may contain 0 to 80 C-band channels (1530.334nm to 1563.455nm), 0 to 80 L-band
channels in the future, plus the OSC channel at 1510nm, outside both bands. Each OCh may be arbitrarily
added and dropped multiple times across a route. However, the individual channels are not managed;
instead, the OCGs are managed. The OCGs are the basic unit of optical granularity, not the channel; all the
OChs in an active OCG are optically present on the fiber (barring single-channel failures).
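The capacity arithmetic implied by this layering is straightforward, as the short sketch below shows; the constants simply restate the channel plan described above.

    OCH_GBPS = 10          # one Optical Channel
    OCH_PER_OCG = 10       # OChs aggregated into one OCG (OMSo)

    ocg_gbps = OCH_GBPS * OCH_PER_OCG       # OMSo / OCG: 100 Gbps
    omsb_now = ocg_gbps * 4                 # OMSb, Release 1.2: 400 Gbps
    omsb_future = ocg_gbps * 8              # OMSb with 8 OCGs:  800 Gbps
    print(ocg_gbps, omsb_now, omsb_future)  # -> 100 400 800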


Optical Performance Monitoring


The TN780 and Optical Line Amplifier network elements support optical performance monitoring at OTS,
Band, OCG and OCh layers. Refer to Optical PM Parameters and Thresholds on page A-2 for a detailed
description of the supported optical PM parameters.
In addition, several OSA (optical spectrum analyzer) ports are provided in order to measure the optical
spectrum without affecting the traffic. The BMM provides the following OSA ports:
OSA port for the aggregate line input
OSA port for the receive EDFA output
OSA port for the aggregate line output
The OAM provides the following OSA ports:
OSA port for the aggregate line output
OSA port for the aggregate line input

Optical Transport Maintenance Functions


The optical transport architecture supports maintenance signals at the OTS, Band, OCG, and OCh layers. The
maintenance signals are transmitted out-of-band over the OSC channel to the upstream and downstream
network elements, aiding in the isolation of faults. The following maintenance signals are supported:
BDI-OTS - The BDI-OTS (backward defect indication-optical transport section) signal is transmitted
by the BMM and OAM to an upstream network element on detecting LOL-OTS on its receive link
FDI-OTS - The FDI-OTS (forward defect indication-optical transport section) signal is transmitted by
the BMM and OAM to a downstream network element indicating that a failure has been detected in
the upstream network
BDI-Band - The BDI-Band signal is transmitted by the BMM and OAM to an upstream network element on detecting LOL-Band on its receive link
FDI-Band - The FDI-Band signal is transmitted by the BMM and OAM to a downstream network element indicating that a failure has been detected in the upstream network at the Band layer
BDI-OCG - The BDI-OCG signal is transmitted by the BMM and OAM to an upstream network element on detecting LOL-OCG on its receive link
FDI-OCG - The FDI-OCG signal is transmitted by the BMM and OAMs to a downstream network
element indicating that a failure has been detected in the upstream network at the OCG layer
BDI-OCh - The BDI-Channel signal is transmitted by the BMM and OAM to an upstream network
element on detecting LOL-OCh on its receive link
FDI-OCh - The FDI-Channel signal is transmitted by the BMM and OAM to a downstream network
element indicating that a failure has been detected in the upstream network at the OCh layer


Data Plane Redundancy


The data plane architecture in the TN780 and Optical Line Amplifier network elements is highly reliable, but not
highly available. In Release 1.2, no data plane protection (protection switching feature) is provided. The
end-to-end data plane must be protected by the external equipment. The Digital Optical Network system is
designed to support the external equipment that provides SONET/SDH protection.


System Control Plane Functions


The TN780 and Optical Line Amplifier network elements include a fault tolerant and redundant control
plane in order to support a reliable Digital Optical Network. The control plane provides:
Communication between circuits packs within the same chassis (refer to Intra-chassis Control
Plane on page 3-35)
Communication between chassis within a network element with multiple chassis (refer to Inter-chassis Control Plane on page 3-37)
Communication between network elements within the Digital Optical Network (refer to Inter-node
Control Plane (over OSC) on page 3-38)
The Intra-chassis and Inter-chassis control planes provide redundant control path to enhance the overall
reliability of the network element. The following sections describe the redundancy provided at the
hardware level. The IQ Network Operating System software utilizes the hardware features and enables the
system level redundancy.

Intra-chassis Control Plane


The intra-chassis control plane in the TN780 and Optical Line Amplifier network elements provides a fault
tolerant, high performance control path.
The intra-chassis control plane consists of a redundant point-to-point switched 100Mbps Ethernet control
path. The backplane contains two 100Mbps Ethernet control buses for connecting the control circuit packs
(MCMs in the TN780, OMMs in the Optical Line Amplifier, referred to as MCM/OMM) to the remaining circuit
packs, referred to as line circuit packs.
To enable redundancy, each chassis must be populated with two control MCM-Bs/OMMs. Each MCM/
OMM houses a 100Mbps Ethernet switch. The line circuit packs connect to two MCM/OMM, as logically
depicted in Figure 3-11 on page 3-36 for the TN780 and Figure 3-12 on page 3-36 for the Optical Line Amplifier.
At any given time, one MCM/OMM is active and the other one is in stand-by mode. The line circuit pack
communicates with the active MCM/OMM over control path A (the primary control path); in the event of a
communication path failure, the line circuit packs communicate through control path B (the secondary
control path). Layer 2 switching is used to transport the control traffic between the circuit packs and
chassis within a network element.
Note: For the Multi-Chassis configuration, the MCM-B must be used due to the enhanced CPU
frequency, persistent storage, and physical memory (SDRAM).
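Conceptually, the active/standby selection reduces to a simple preference rule, sketched below; the function and state names are invented for illustration and do not represent the IQ software.

    def select_control_path(path_a_ok: bool, path_b_ok: bool) -> str:
        """Prefer control path A (primary); fall back to B (secondary) only
        when A has failed, mirroring the behavior described above."""
        if path_a_ok:
            return "A (primary)"
        if path_b_ok:
            return "B (secondary)"
        return "isolated"   # both paths down: the line pack is unreachable

    print(select_control_path(True, True))    # -> A (primary)
    print(select_control_path(False, True))   # -> B (secondary)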


Figure 3-11 Logical Illustration of Intra-chassis Control Plane in a DTC


[Figure: BMM-1/BMM-2 (CPU, OSC), DLM-1 through DLM-4 (CPU), and MCM-A/MCM-B (CPU, switch/router, 100Mb FE switch) connected across the backplane by Control Path A (primary) and Control Path B (secondary); each MCM also serves Craft, DCN, and inter-chassis NCT ports on the I/O panel]

Figure 3-12 Logical Illustration of Intra-chassis Control Plane in an OTC

[Figure: OMM-A/OMM-B (CPU, switch/router) and OAM-1/OAM-2 (CPU, OSC, 100Mb FE switch) connected across the backplane by Control Path A (primary) and Control Path B (secondary); each OMM also serves Craft, DCN, and inter-chassis NCT ports on the I/O panel]


Inter-chassis Control Plane


As with the intra-chassis control plane, the inter-chassis control plane is also based on a redundant 100Mbps
Ethernet control path. Each MCM/OMM has two 100Mbps Ethernet ports, referred to as NCT-A and NCT-B, to connect to two chassis, one uplink chassis and one downlink chassis.
In a multi-chassis configuration, one chassis performs the node control function and is referred to as the
Main chassis; the remaining chassis are referred to as Expansion chassis. As shown in Figure 3-13 on
page 3-38, the chassis within a network element can be connected in a ring fashion. By deploying two
MCMs in each chassis, and therefore two control paths, protection against MCM failure, link failure, etc. is
supported. Additionally, the ring configuration, when combined with redundant paths, allows new chassis
to be added to the network element without impacting the control or data planes.
The chassis can also be connected in a linear topology.
Note that the NCT-A and NCT-B interfaces are designed to distribute timing information in subsequent
releases.
Note: For the Multi-Chassis configuration, the MCM-B must be used due to the enhanced CPU
frequency, persistent storage, and physical memory (SDRAM).
Note: The system is designed to allow up to six chassis in a multi-chassis configuration. In
Release 1.2, only two chassis are supported for the multi-chassis configuration.
Note: In a Multi-Chassis configuration, the DCN ports on the Main chassis are active. The DCN
ports on the Expansion shelf are disabled.


Figure 3-13 Logical Illustration of Inter-chassis Control Plane

[Figure: a Master Control Chassis and Expansion Chassis 1 and 2; the MCM-A and MCM-B of each DTC chassis expose NCT1-A/NCT1-B and NCT2-A/NCT2-B ports on the I/O panel, cabled chassis-to-chassis in a ring]

Inter-node Control Plane (over OSC)


The TN780 and Optical Line Amplifier network elements support Optical Supervisory Channel (OSC) for
out-of-band communication between adjacent network elements. The OSC is a SONET OC-3c (155.52
Mb/s) channel operated at 1510nm outside the EDFA band on each span. The OSC is terminated at every
TN780 and Optical Line Amplifier node. The OSC provides 100Mbps throughput.
The OSC control path carries the following traffic between network elements:
Management Plane Traffic - includes traffic from the remote management systems to access network elements for the purpose of managing them
Control Plane Traffic - GMPLS routing and signaling control protocol traffic
Datawire Traffic - customer management traffic, interconnecting the customer's 10Mbps Ethernet
LAN segments at various sites through the Aux port interface


Orderwire Traffic - voice communication traffic between customer sites through the orderwire interfaces which will be supported in a future release
The physical OSC interfaces are located on the BMM and OAM. The packets received on the OSC are
switched to the MCM/OMM for processing. Thus, although the OSC is terminated on the BMM/OAM, the
packets are processed in the MCM/OMM.


System Management Plane Functions


The management plane includes the communication between the network element and the external
management stations. As described in Management Interfaces on page 3-17, Optical Line Amplifier and
TN780 network elements provide several management interfaces for management stations to
communicate with the network element. The supported interfaces include the Craft Ethernet and Craft Serial
DCE ports for local personnel access, the Serial DTE port for remote access through a modem, and redundant
DCN ports for remote access. As shown in Figure 3-11 on page 3-36 and Figure 3-12 on page 3-36, each
MCM/OMM includes a 10/100Mbps Ethernet DCN port to connect to the customer's DCN network.
Redundancy is provided by populating two MCM-Bs/OMMs per chassis and by the IQ Network Operating
System software, as described in IQ Management Plane Overview on page 4-53.
The management traffic is carried over the OSC control channel between adjacent network elements.
Note: In a Multi-Chassis configuration the DCN ports on the Main chassis are active. The DCN
ports on the Expansion shelf are disabled.


Digital Terminal Site Operation


Customers can deploy the TN780 network element in Digital Terminal mode at a Terminal site. The TN780
network element may have one or more (up to six) DTCs and optionally one or more passive DMCs for
dispersion compensation, depending on the configuration.
Each DTC must have the following minimum hardware to provide the Digital Terminal function (see Figure 3-14 on page 3-41):
One DTC
One MCM
One BMM
One DLM
One TAM
One TOM
Figure 3-14 DTC with Minimum Hardware for a Digital Terminal

[Figure: DTC populated with the minimum hardware (one BMM-4-C1-A, one DLM, one TAM, one TOM, one MCM); the unused slots carry BMM Blank, DLM Blank, MCM Blank, TAM Blank, and TOM Blank circuit packs, with the optical fiber connection between circuit packs shown]

Note: Figure 3-14 shows a DTC deployed with a BMM-4-CX-A. The DTC can also be deployed
with a BMM-4-CX-B or a BMM-8-CX-A.


A fully loaded DTC can terminate up to 400Gbps of traffic. A fully loaded DTC includes the following
hardware (see Figure 3-15 on page 3-42):
One DTC
One MCM
One BMM
Four DLMs
Twenty TAMs
Up to forty 10G TOMs, up to eighty 2.5G TOMs, or up to eighty 1G TOMs (or any combination of
10G, 2.5G, and 1G TOMs)
Figure 3-15 on page 3-42 illustrates an example of optical fiber connections between the modules. The line
side port on the DLM is connected to the corresponding OCG port on the BMM. For example, the line port
on DLM-1-C1 is connected to the OCG 1 port on the BMM. Note that actual connections depend on the
installed configuration.
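The 400Gbps figure can be checked from the module counts; the sketch below assumes 100Gbps per DLM-terminated OCG and 20Gbps per TAM-2-10G (two 10Gbps tributaries), consistent with the counts above.

    DLMS = 4
    OCG_GBPS = 100            # each DLM terminates one 100Gbps OCG
    TAMS = 20
    TAM_GBPS = 20             # TAM-2-10G: two 10Gbps tributaries

    print(DLMS * OCG_GBPS)    # -> 400 (line-side capacity)
    print(TAMS * TAM_GBPS)    # -> 400 (client-side capacity matches)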
Figure 3-15 Hardware Chassis Configuration of a 400Gbps Digital Terminal


Note: Figure 3-15 shows a DTC deployed with a BMM-4-CX-A. The DTC can also be deployed
with a BMM-4-CX-B or a BMM-8-CX-A.
Figure 3-16 on page 3-43 illustrates the logical configuration of a fully loaded DTC at a Digital Terminal
site.


Digital Add/Drop Site Operation


Each network element may have one or more (up to six) DTCs and optionally one or more passive DMCs for
dispersion compensation, depending on the configuration.
Each DTC must have the following minimum hardware to provide the Digital Add/Drop function (see Figure 3-17 on page 3-44):
One DTC
One MCM
Two BMMs
Two DLMs
Two TAMs
Two TOMs
Figure 3-17 DTC with Minimum Hardware of a Digital Add/Drop Node


Note: Figure 3-17 shows a DTC deployed with a BMM-4-CX-A. The DTC can also be deployed
with a BMM-4-CX-B or a BMM-8-CX-A.
Two DTCs are required to add/drop 400Gbps in each direction as shown in Figure 3-18 on page 3-46.
The following hardware is required to add/drop 400Gbps in each direction:

Two DTCs
Two MCM-Bs (One MCM for each DTC)
Two BMMs
Eight DLMs
Forty TAMs
Eighty 10G TOMs
Figure 3-18 on page 3-46 also illustrates the optical fiber interconnection between the modules. As shown,
two BMMs and four DLMs are located in the Main chassis. The remaining DLMs are located in the
Expansion chassis. The DLMs in the Expansion chassis are connected to the BMM in the Main chassis.


Figure 3-18 Hardware Physical Configuration of a 400Gbps Digital Add/Drop Node


Note: Figure 3-18 shows DTCs deployed with a BMM-4-CX-A. The DTCs can also be deployed
with a BMM-4-CX-B or a BMM-8-CX-A.
Figure 3-19 on page 3-47 illustrates the logical configuration of a network element providing 400Gbps add/
drop capacity.
Figure 3-19 Hardware Logical Configuration of a 400Gbps Digital Add/Drop Node
[Figure: two DTCs, each with DLMs in slots 3 through 6 fanning out through TAMs to TOM client ports; the DLMs connect to the West and East BMMs over OCG 1, OCG 3, OCG 5, and OCG 7 with 100Gbps backplane connections, and each BMM carries the OSC]

Each DTC can support 200Gbps add/drop traffic in each direction. Figure 3-20 on page 3-48 illustrates the
physical configuration of a network element providing 200Gbps add/drop capacity.


Figure 3-20 Hardware Physical Configuration of a 200Gbps Digital Add/Drop Node


Note: Figure 3-20 shows a DTC deployed with a BMM-4-CX-A. The DTC can also be deployed
with a BMM-4-CX-B or a BMM-8-CX-A.
Figure 3-21 on page 3-49 illustrates the logical configuration of a network element providing 200Gbps add/
drop capacity.


Digital Repeater Site Operation


Digital Repeater configuration is a special case of Digital Add/Drop configuration where the client side
equipment (TAM and TOM) modules are not populated; those slots are populated with blank circuit packs.
As with Digital Add/Drop configuration, each DTC can support up to 200Gbps capacity in each direction.
As described in Bandwidth Grooming on page 3-26, in Release 1.2, 100Gbps grooming capacity is
supported between all adjacent DLM slots (between slots 3 & 4, slots 5 & 6, slots 3 & 5 and slots 4 & 6).
Figure 3-21 on page 3-49 illustrates an example configuration of a DTC providing 200Gbps per direction
digital repeater function.
Figure 3-21 Hardware Physical Configuration of a 200Gbps Digital Repeater Node


Note: Figure 3-21 shows a DTC deployed with a BMM-4-CX-A. The DTC can also be deployed
with a BMM-4-CX-B or a BMM-8-CX-A.
Figure 3-22 on page 3-50 illustrates an example configuration of a Digital Repeater. As shown, the digitally
repeated traffic is switched between the adjacent DLMs.


Figure 3-22 Hardware Logical Configuration of a 200Gbps Digital Repeater Node

[Figure: West and East BMMs (each with OSC and an optional DCM) interconnected through DLMs in slots 3 through 6 over OCG 1 and OCG 3, with 100Gbps backplane connections switching the repeated traffic between adjacent DLMs]

Optical Line Amplifier Site Operation


The OTC (refer to Optical Line Amplifier Hardware Overview on page 3-13) provides the line amplifier
function. The following hardware equipment is required to provide optical amplification in both directions
(see Figure 3-23 on page 3-51).
One OTC
One OMM
Two OAMs
Figure 3-23 Hardware Physical Configuration of an Optical Line Amplifier Node


Figure 3-23 on page 3-51 also indicates the required optical fiber connections. As shown, to provide line
amplification for signals going from West to East, the Line IN port on a given OAM is connected to the
incoming fiber from one direction (e.g. West) while the Line OUT port on the same OAM is connected to
the outgoing fiber in the opposite direction (e.g. East). As a result, the receiver on the OAM receives from
one direction and the transmitter on the same OAM transmits towards the opposite direction. However, an
OAM provides the option to ensure that the OSC Transmitter and OSC Receiver for a given direction are
located in the same OAM so that when an OAM fails, it impacts the OSC in one direction only and the
node will still be accessible. This is done by passing the OSC transmit signals between the OAMs using a
front-panel duplex optical patch cord. The OSC OUT port on one OAM is connected to the OSC IN port on
the other OAM as shown in Figure 3-23 on page 3-51.


CHAPTER 4

IQ Network Operating System


UTStarcom IQ Network Operating System, referred to as IQ, is intelligent software that runs on all
UTStarcom network elements, providing significant usability and operational benefits for Digital Optical
Network solutions. This chapter describes the major functions provided by IQ.
IQ provides robust and reliable Operations, Administration, Maintenance, and Provisioning (OAM&P)
functions based on a number of industry standards. The OAM&P functions provided by IQ are described in
the following sections:
Fault Management on page 4-2
Equipment Management and Configuration on page 4-15
Service Provisioning on page 4-23
Performance Monitoring and Management on page 4-31
Security and Access Management on page 4-35
Software Configuration Management on page 4-41
The OAM&P functions are accessible to both human and machine clients through a variety of
management interfaces and applications, referred to as management applications in the rest of this
chapter.
In addition to OAM&P functions, IQ provides intelligent control plane and management plane functions as
described in the following sections:
IQ GMPLS Control Plane Overview on page 4-47
IQ Management Plane Overview on page 4-53


Fault Management
IQ provides extensive fault monitoring and management capabilities that are modeled after Telcordia and
ITU standards. All these capabilities are agnostic to the client signal type and provide the ability to
identify, correlate, and correct faults based on actual digital performance indicators, leading to quicker
problem resolution. Additionally, IQ communicates all state and status information of the network element
automatically and asynchronously to the other network elements within the Digital Optical Network and to
all the registered management applications, thus maintaining synchrony within the network.
IQ provides the following fault management capabilities to help users in managing and maintaining the
network element.
Alarm surveillance functions to detect and report degraded conditions in the network element,
including:
Detection of defects in the TN780 and Optical Line Amplifier network elements and the incoming
signals (See Defect Detection on page 4-2).
Declaration of defects as failures (See Failure Declaration on page 4-3).
Reporting failures as alarms to the management applications (See Alarm Reporting on page 4-3).
Masking low priority alarms in the presence of high priority alarms (See Alarm Masking on
page 4-6).
Reporting alarms through local alarm indicators (See Local Alarm Summary Indicators on
page 4-6).
Configuring alarm reporting (See Alarm Configuration on page 4-7).
Isolating network faults utilizing the Automatic Laser Shutdown feature (See Network Fault Isolation on page 4-10).
The wrap-around historical event logging that tracks all changes that occur within the network element (See Event Log on page 4-10).
In-service and out-of-service maintenance and troubleshooting tools (See Maintenance and Troubleshooting Tools on page 4-11).

Alarm Surveillance
Defect Detection
IQ detects and terminates all hardware and software defects within the system. A defect is defined to be a
limited interruption in the ability of an item to perform a required function. The detected defects are
analyzed and localized to the specific network site, network element, facility (or incoming signal) and circuit
pack. On detecting certain defects, for example defects in the incoming signal, IQ transmits maintenance
signals to the upstream and downstream network elements indicating successful localization of the defect.

On termination of defects, IQ stops transmitting maintenance signals. See Network Fault Isolation on
page 4-10 for more details.
The detection of facility defects, such as LOL, AIS, FDI, etc., and transmission of maintenance signals to
the upstream and downstream network elements is in compliance with Telcordia and ITU specifications.

Failure Declaration
As specified in the GR-253 specification, the defects associated with facilities/incoming signals are soaked for a
pre-defined period before they are declared as failures. This prevents spurious failures from being reported. When
a defect is detected on a facility, it is soaked for a time interval of 2.5 seconds before the corresponding
failure is declared. Similarly, when a facility defect terminates, it is soaked for 10 seconds before the
corresponding failure is terminated. This eliminates premature termination of the failure.
The defects associated with hardware equipment are not soaked. The failure condition is declared as soon as
the defect is detected and, similarly, the failure condition is cleared as soon as the defect is terminated.
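A minimal sketch of this soak behavior is shown below; the class and method names are illustrative only, assuming a caller that polls the raw defect state with a monotonic timestamp in seconds.

    DECLARE_SOAK_S = 2.5   # defect must persist this long to declare a failure
    CLEAR_SOAK_S = 10.0    # defect must stay clear this long to clear it

    class FacilityFailure:
        """Soak a facility defect before declaring or clearing the failure."""

        def __init__(self):
            self.failed = False        # declared failure state
            self._defect = False       # last observed raw defect state
            self._changed_at = 0.0     # when the raw state last flipped

        def update(self, defect_present: bool, now: float) -> bool:
            if defect_present != self._defect:     # raw state flipped:
                self._defect = defect_present      # restart the soak timer
                self._changed_at = now
            soak = DECLARE_SOAK_S if self._defect else CLEAR_SOAK_S
            if self._defect != self.failed and now - self._changed_at >= soak:
                self.failed = self._defect         # soak expired: commit
            return self.failed

    f = FacilityFailure()
    print(f.update(True, 0.0), f.update(True, 2.4), f.update(True, 2.5))
    # -> False False True  (declared only after the 2.5 second soak)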

Alarm Reporting
IQ reports the hardware and software failures as alarms. Detection of a failure condition results in an alarm
being raised which is asynchronously reported to all the registered management applications. The
termination of a failure results in clearing the corresponding alarm, which is again reported asynchronously
to all the registered management applications. IQ stores the alarm conditions locally and they are
retrievable by the management applications. Thus, at any given time users see only the current standing
alarm conditions.
Alarm generation is also dependent on the administrative state (see Administrative State on page 4-20)
of the managed object instance and presence of other failure conditions and the user configuration, as
described below:
Administrative State: Alarms are generated when the administrative state of a managed object
instance and its ancestor objects is unlocked. When the administrative state of an object or any of
its ancestor objects is locked or in maintenance, the alarms are not generated (except for the Loopback-related alarms).
Alarm Hierarchy: An alarm is generated only if no higher priority alarms exist for the managed object
instance. Thus, only the alarms corresponding to the root cause of the fault condition are reported.
This capability prevents too many alarms being reported for a single fault condition. (See Alarm
Masking on page 4-6).
User Configuration: IQ provides users the ability to selectively inhibit alarm reporting utilizing the
alarm reporting control feature. (See Alarm Reporting Control on page 4-7).
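Taken together, these three gates reduce to a small predicate, sketched below with invented field names (and ignoring the loopback-alarm exception) purely to restate the rules above; it is not the IQ data model.

    def should_report(alarm, obj, standing_alarms, reporting_inhibited):
        """Gate an alarm on admin state, alarm hierarchy, and user config."""
        chain = [obj] + obj.get("ancestors", [])
        if any(o.get("admin_state") != "unlocked" for o in chain):
            return False                    # administrative-state gating
        if any(a["priority"] > alarm["priority"] for a in standing_alarms):
            return False                    # alarm hierarchy (masking)
        return not reporting_inhibited      # alarm reporting control

    obj = {"admin_state": "unlocked",
           "ancestors": [{"admin_state": "unlocked"}]}
    print(should_report({"priority": 2}, obj, [], False))                  # -> True
    print(should_report({"priority": 2}, obj, [{"priority": 5}], False))   # -> False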
IQ reports each alarm with sufficient information, as described below, so that the user can take appropriate
corrective actions to clear the alarm. For a detailed description of all the parameters of an alarm reported to
the management applications, refer to the corresponding user guides.
Alarm Category: this information isolates the alarm to a functional area (See Alarm Category on
page 4-5 for the list of supported alarm types).


Alarm Severity: this information indicates the level of degradation that the alarm causes to the service (see Alarm Severity on page 4-5 for the list of supported severities). This information is reported as the NTFCNCDE parameter in the TL1 notifications.
Probable Cause: this information describes the probable cause of the alarm in a short form. A more detailed description is provided as the Probable Cause Description.
TL1 Condition Type: this field is analogous to the probable cause except that the condition type string is in accordance with GR-833-CORE. It is reported as the CONDTYPE parameter in the TL1 notifications.
Probable Cause Description: this information provides the detailed description of the alarm and isolates the alarm to a specific area. It is an elaboration of the Probable Cause, a string which provides more information on the cause of the alarm condition. This information is reported as the CONDDESCR parameter in the TL1 notifications.
Service Affecting: this information indicates whether the given alarm condition interrupts the data plane services through the system or network. The two possibilities are SA for service-affecting and NSA for non-service-affecting. An alarm is reported as service-affecting if the alarm condition affects a hardware or software entity in the data plane, and the affected hardware or software entity is administratively enabled. This information is reported as the SRVEFF parameter in the TL1 notifications.
Source Object: this information identifies the managed object instance on which the failure is detected. This information is reported as the AID in the TL1 notifications.
Location: this information identifies the location of the managed object as near end or far end, when applicable. This information is reported as the LOCN parameter in the TL1 notifications.
Direction: this information indicates whether the alarm has occurred in the receive direction or in the transmit direction, when applicable. This information is reported as the DIRN parameter in the TL1 notifications.
Time & Date of occurrence: this information provides the time at which the alarm was detected. It is derived from the system time. IQ provides users the ability to manually configure the system time or enable Network Time Protocol (see Time-of-Day Synchronization on page 4-59) so that accurate and synchronized time is reported for all alarms. This allows root cause analysis of failures across network elements and networks. This information is reported as the OCRDAT parameter in the TL1 notifications.
Type: as described in PM Thresholding on page 4-33, IQ supports performance monitoring and thresholds, enabling early detection of degradation in system and network performance. Threshold crossing conditions are handled utilizing the same mechanism as alarms. The type field indicates whether the reported condition is an alarm or a threshold crossing condition.
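The way these fields surface in TL1 can be illustrated with a small parser. The sketch below assumes a simplified, hypothetical REPT ALM payload layout (the field order and the sample message are invented for illustration); the authoritative syntax is defined by GR-833-CORE and the TL1 user guide:

import re

# Hypothetical, simplified REPT ALM payload line of the form:
#   "AID:NTFCNCDE,CONDTYPE,SRVEFF,OCRDAT,OCRTM,LOCN,DIRN:\"CONDDESCR\""
ALM_LINE = re.compile(
    r'"(?P<aid>[^:]+):'            # Source Object (AID)
    r'(?P<ntfcncde>CR|MJ|MN|WR),'  # Alarm Severity (notification code)
    r'(?P<condtype>[^,]+),'        # TL1 Condition Type
    r'(?P<srveff>SA|NSA),'         # Service Affecting flag
    r'(?P<ocrdat>[\d-]+),(?P<ocrtm>[\d-]+),'       # date/time of occurrence
    r'(?P<locn>NEND|FEND),(?P<dirn>RCV|TRMT|NA):'  # Location, Direction
    r'\\?"(?P<conddescr>[^"\\]*)\\?"'              # Probable Cause Description
)

def parse_alarm(line):
    """Extract the alarm parameters described above from one payload line."""
    m = ALM_LINE.search(line)
    if m is None:
        raise ValueError("unrecognized alarm line: " + line)
    return m.groupdict()

sample = r'"DLM-1-3:MJ,EQPT,SA,06-21,14-32-10,NEND,NA:\"Equipment failure\""'
print(parse_alarm(sample))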
IQ records all the current alarms with alarm details, as described above, in an alarm table. The alarms are
persisted in the MCM/OMM across reboots. After a system reboot or MCM/OMM reboot, the alarms in the

persistent storage are validated to remove any cleared alarms and raise only the current outstanding
alarms.
Refer to the UTStarcom TN780 Maintenance and Troubleshooting Guide for the detailed description of all
the alarms generated by IQ and the corresponding clearing procedures.

Alarm Category
IQ categorizes the alarms into the following types:
Facility Alarm: alarms of this type are associated with the line and tributary facilities and the incoming signal. For example: LOL, LOS, AIS, and FDI.
Equipment Alarm: alarms of this type are associated with hardware errors. For example: Equipment Failure and Equipment Unreachable.
Communications Alarm: alarms of this type are associated with faults which impact the communication between the modules within the network element and between network elements. For example: No Communication with OSC Neighbor and LOL on OSC.
Software Processing Alarm: alarms of this type are associated with software processing errors. For example: Software Upgrade has Failed and Persistence space less than 2%-critical.
Environmental Alarm: alarms of this type are caused by a change in the state of the environmental alarm input contacts.

Alarm Severity
Each alarm generated by IQ has one of four severity levels set by default. These levels are:
Critical: the Critical severity level indicates that a service-affecting condition has occurred and immediate corrective action is required. This severity is reported, for example, when a managed object instance becomes totally out-of-service and its capability must be restored.
Major: the Major severity level indicates that a service-affecting condition has developed and urgent corrective action is required. This severity is reported, for example, when there is a severe degradation in the capability of the managed object instance and its full capability must be restored.
Minor: the Minor severity level indicates the existence of a non-service-affecting fault condition and that corrective action should be taken in order to prevent a more serious (for example, service-affecting) fault. Such a severity is reported, for example, when the detected alarm condition is not currently degrading the capacity of the managed object instance.
Warning: the Warning severity level indicates the detection of a potential or impending service-affecting fault, before any significant effects have been felt. Action should be taken to further diagnose (if necessary) and correct the problem in order to prevent it from becoming a more serious service-affecting fault.
The alarm severity is referred to as the notification code in GR-833-CORE and it is reported as such in the
TL1 notifications.
The user can customize the severity associated with an alarm through the management applications. (See
Alarm Severity Assignment Profile on page 4-9.)

Alarm Masking
IQ masks (i.e., does not autonomously report) a failure that is the result of the same root-cause problem or maintenance signal as another higher-priority failure reported simultaneously by that network element, per a containment hierarchy similar to those defined for the SONET/SDH protocols. This prevents logs and management applications from being flooded with redundant information. For example, a circuit pack failure may cause a LOL alarm; since the underlying fault is the circuit pack failure, suppressing the LOL alarm prevents redundant information from being reported.
The masked condition is neither reported to the management applications nor recorded in the alarm table. Masking does not, however, affect changes to the operational state of the managed object instance on which the condition exists.
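The masking rule can be sketched as a walk up a containment hierarchy. The parent/child map, severity ranks, and object names below are hypothetical; the actual hierarchy is defined internally by IQ:

# Hypothetical containment hierarchy: child object -> parent object.
CONTAINMENT = {
    "trib-port-1": "tam-1",
    "tam-1": "dlm-3",
    "dlm-3": "chassis-1",
}

# Higher rank = higher priority (closer to the root cause).
SEVERITY_RANK = {"warning": 0, "minor": 1, "major": 2, "critical": 3}

def ancestors(obj):
    """Yield the ancestors of obj by walking the containment map upward."""
    while obj in CONTAINMENT:
        obj = CONTAINMENT[obj]
        yield obj

def visible_alarms(alarms):
    """Keep an alarm only if no ancestor object carries an alarm of equal
    or higher severity; masked alarms are not reported or recorded."""
    visible = {}
    for obj, sev in alarms.items():
        masked = any(
            anc in alarms and
            SEVERITY_RANK[alarms[anc]] >= SEVERITY_RANK[sev]
            for anc in ancestors(obj)
        )
        if not masked:
            visible[obj] = sev
    return visible

# A circuit pack failure on the DLM masks the LOL raised on its trib port.
print(visible_alarms({"dlm-3": "critical", "trib-port-1": "major"}))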

Local Alarm Summary Indicators


The TN780 and Optical Line Amplifier network elements provide local visual and audio indicators to report
the summary of current alarm conditions of a network element and chassis to the local personnel. For the
detailed description of the indicators and their function refer to the UTStarcom TN780 Hardware
Description document and the UTStarcom TN780 Maintenance and Troubleshooting Guide.
Following is a brief summary of the local indicators provided by the TN780 and Optical Line Amplifier
network elements:
Bay Level Visual Alarm Indicators: these indicators provide the summary of the outstanding alarm conditions of all chassis within a bay. A bay level visual alarm indicator (LED) is lit if there is at least one corresponding outstanding alarm condition in any of the chassis within the bay. The following bay level LED indicators are provided:
Critical LED to indicate the presence of a critical alarm within the bay.
Major LED to indicate the presence of a major alarm within the bay.
Minor LED to indicate the presence of a minor alarm within the bay.
For the bay-level indicators to operate correctly, the pre-defined alarm contacts must be wire-wrapped and the software must be configured appropriately. Refer to the UTStarcom TN780 Site Preparation and Hardware Installation Guide for a detailed description of the cabling and configuration required to provide bay-level alarm indication.
Note: The TN780 supports the bay-level alarm indicators; the Optical Line Amplifier does not. Use of the bay-level indicators provided by the PDU is recommended whenever a PDU is present in a bay.
Chassis Level Visual Alarm Indicators: these indicators provide the summary of the outstanding alarm conditions of the chassis. A chassis level visual alarm indicator is lit if there is at least one corresponding outstanding alarm condition within the chassis. The following chassis level LED indicators are provided:
Critical LED to indicate the presence of a critical alarm within the chassis.

Major LED to indicate the presence of a major alarm within the chassis.
Minor LED to indicate the presence of a minor alarm within the chassis.
Power LED to indicate the status of the power input to the chassis.
Chassis Level Office Alarm Indicators: as described in Office Alarms on page 3-19, the TN780 and Optical Line Amplifier network elements provide alarm output contacts to support chassis level visual and audible indication of critical, major and minor alarms. As described in Alarm Cutoff (ACO) on page 3-19, ACO buttons and ACO LEDs are also supported.
Card Level Visual Indicators: all circuit packs include LEDs to indicate the card status. In general, all circuit packs provide the following LEDs:
Power (PWR) LED to indicate the status of the power input to the circuit pack.
Active (ACT) LED to indicate the administrative state and service state of the circuit pack.
Fault (FLT) LED to indicate the presence of a critical, major or minor alarm.
Port Level Indicators: these indicators are provided for each tributary port and line port. In general, the port level LEDs include:
Active (ACT) LED to indicate the administrative state and service state of the port.
LOS LED to indicate the incoming signal status.
Note: By default, all critical, major, and minor alarms affect the corresponding chassis LED status. However, through the management applications, users can configure facility alarms so that they do not affect the chassis LEDs. Equipment alarms always affect the chassis LEDs.

Alarm Configuration


Network Fault Isolation


The TN780 and Optical Line Amplifier network elements implement Automatic Laser Shutdown feature to
isolate the effects of a fault. The Automatic Laser Shutdown features include:
Automatically turn off the EDFAs transmitting at high powers toward the upstream and downstream
network elements, in order to comply with stringent laser eye-safety requirements.
Transmit maintenance signals to alert downstream and upstream network elements that a fault has
been isolated. The maintenance signals help distinguish between the defect that is local to this network elements hardware, or on an adjacent facility vs. the defect in a remote network element or on
a remote facility.
As described in Digital Transport Maintenance Functions on page 3-29, the DTF architecture supports
maintenance signals which are modeled after the SONET/SDH layers. The maintenance signals are
transmitted in-band to the upstream and downstream TN780s.
Similarly, the OTN architecture defines maintenance signals (see Optical Transport Maintenance
Functions on page 3-33) which are transmitted out-of-band over the OSC.
The maintenance signals are transmitted immediately after the detection of a defect in the incoming signal
or equipment failure and are removed after the termination of a defect or equipment failure.

Event Log
IQ provides a wrap-around historical event log that tracks all changes that occur within the system. The
events are recorded locally in the network element and are retrievable through the management
applications. The event log enables users and management applications to retrieve all events (including
alarms) that occurred during a communication failure between the management applications and the
network element, and will maintain data synchrony between the network element and the management
application.
IQ records the following types of events in the event log:
Alarm related events which include alarm raise and clear events.
PM data thresholding related events which include threshold crossing raise and clear events.
Threshold crossing alerts as described in PM Thresholding on page 4-33.
Managed object creation and deletion events triggered by the user actions.
Security administration related events triggered by the user actions.
Network administration events triggered by user actions such as software upgrade, software downgrade, and database restore.
Audit events triggered by user actions to change attribute value(s) of a managed object.
State change events indicating the state changes of a managed object triggered by user action and/or changes in the operational capability of the managed object.
The event logs are stored in the persistent storage on the network element, and therefore, the event logs
will be available after restarts and reboots. Note that the attribute value change events are not stored in the

persistent storage. IQ stores up to 1000 attribute value change events, which are not persisted, and up to 3000 other events, which are persisted across reboots. Users can export the event log information in TSV format using the management applications.
Following are some of the important information stored for each event log record:
Managed object instance that generated the event.
The time at which IQ generated the event.
Event type indicating the event category, including:
Update Event which includes managed object create and delete events.
Report Event which includes security administration related event, network administration
related event, audit events, and threshold crossing events (TCE).
Condition which includes alarm raise and clear event, non-alarmed conditions, and Threshold
crossing condition events.
Refer to UTStarcom TN780 Maintenance and Troubleshooting Guide for a list of events logged in an event
log on TN780 and Optical Line Amplifier network elements.
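The wrap-around behavior and the persistence split can be sketched as two bounded buffers. The 1000/3000 limits come from the text above; the record fields and class names are simplifications:

from collections import deque
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class EventRecord:
    source_object: str  # managed object instance that generated the event
    event_type: str     # "update", "report", or "condition"
    detail: str
    time: datetime

class EventLog:
    """Wrap-around event log. Attribute value change (AVC) events are kept
    in memory only; all other events would also be written to persistent
    storage (the persistence step itself is omitted from this sketch)."""

    def __init__(self):
        self.avc_events = deque(maxlen=1000)    # not persisted
        self.other_events = deque(maxlen=3000)  # persisted across reboots

    def record(self, rec, is_avc=False):
        target = self.avc_events if is_avc else self.other_events
        target.append(rec)  # oldest record is dropped when the buffer is full

log = EventLog()
log.record(EventRecord("dlm-3", "condition", "alarm raised: LOL",
                       datetime.now(timezone.utc)))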

Maintenance and Troubleshooting Tools


IQ provides extensive maintenance and troubleshooting tools used for pre-service operations and for isolating the source of problems. The troubleshooting tools help sectionalize problems and accurately identify the trouble spot by running tests progressively at the network element, span, digital link and path level.
IQ provides both out-of-service troubleshooting tools, which require the corresponding facilities (managed object entities) to be in the administrative maintenance state, and in-service troubleshooting tools, which can be run while the corresponding facilities are in the administrative unlocked state.
Out-of-service Troubleshooting Tools:
Loopbacks to test circuit paths through the network or logically isolate faults. (See Loopbacks
on page 4-12)
PRBS generation and detection (PRBS Test on page 4-12)
Hairpin circuits (Hairpin Circuits on page 4-13)
In-service Troubleshooting Tools:
Trace messaging (See Trace Messaging on page 4-13)
The troubleshooting tools are accessible through the management applications by the users with TT
(Turn-up and Test) access privilege.

Loopbacks
Loopbacks are used to test newly created circuits before running live traffic or to logically locate the source
of a network failure. Loopbacks provide a mechanism where the signal under test (either the user signal or
the test pattern signal such as PRBS) is looped back at some location on the network element in order to
test the integrity and validity of the signal being looped back. Since loopbacks affect the normal data traffic
flow, they must be invoked only when the associated facility is in administrative maintenance state.
IQ provides access to the loopback capabilities in a TN780 network element. These loopbacks are
agnostic to the client payload type. Following is a list of loopbacks supported to test each section of the
network as shown in Figure 4-2 on page 4-12 and also various hardware components along the data path
(see DTC Digital and Optical Transport Architecture on page 3-22). The loopbacks can be enabled or
disabled remotely through the management applications.
Client Trib Facility Loopback: performed on the TAM. The tributary port Rx is looped back to the Tx on the TAM. This loopback test verifies the operation of the tributary side optics in the TOM and TAM.
DTF Path Terminal Loopback: performed on the DLM. In this case the cross-point switch on the DLM loops back the received client signal towards the TAM. This loopback verifies the operation of the tributary side optics, the adaptation of client signals into digital signals performed in the TOM and TAM, and the cross-point switch on the DLM.
DTF Path Facility Loopback: performed on the DLM. In this case the cross-point switch on the DLM loops back the received line side signal towards the line. This loopback verifies the line side connectivity and the DTF encapsulation performed in the DLM.
Client Trib Terminal Loopback: performed on the TAM. In this case the digital signal received from the line is looped back to the line transmit side in the TAM. This loopback verifies the line side optics on the DLM, the DTF and FEC mapper/demapper in the DLM, and the cross-point switch.
Figure 4-2 Loopbacks supported by the TN780
[Figure: shows the four loopback points along the data path: Client Trib Facility Loopback, DTF Path Terminal Loopback, DTF Path Facility Loopback, and Client Trib Terminal Loopback.]

PRBS Test
The Pseudo Random Bit Sequence (PRBS) is a test pattern that is used to diagnose and isolate trouble spots in the network without requiring a valid data signal or customer traffic. This type of test signal is used during system turn-up or in the absence of a valid data signal from the customer equipment. The test is primarily aimed at detecting and sectionalizing bit errors in the data path. Since the PRBS test affects the normal data traffic flow, it must be invoked only when the associated facility is in the administrative maintenance state.

IQ provides access to the PRBS generation and monitoring capabilities supported by the TN780 network
element. The TN780 supports PRBS generation and monitoring for testing circuit quality at both the DTF
Section and DTF Path layers as described below. The PRBS test can be enabled or disabled remotely
through the management applications.
DTF Section-level PRBS Test: here the PRBS signal is generated by the near end DLM and monitored by the adjacent TN780 network elements. This test verifies the quality of the digital link between two adjacent TN780 network elements.
DTF Path-level PRBS Test: here the PRBS signal is generated by the near end TAM and monitored at the far end TAM where the digital path is terminated. This test verifies the quality of the end-to-end digital path.
Figure 4-3 PRBS Tests Supported by the TN780
[Figure: shows the DTF Section PRBS and DTF Path PRBS test spans between client endpoints; G marks a PRBS generator and M marks a PRBS monitor.]

Note: The PRBS tests can be coupled with loopback tests so that the pre-testing of the quality of
the digital link or end-to-end digital path can be performed without the need for an external
PRBS test set. While this is not meant as a replacement for customer-premise to customer-premise circuit quality testing, it does provide an early indicator of whether or not the
transport portion of the full circuit is providing a clean signal.
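The document does not state which PRBS polynomial the hardware uses; purely as an illustration, the sketch below generates and checks the common PRBS7 pattern (x^7 + x^6 + 1) with a software linear feedback shift register:

def prbs7(seed=0x7F):
    """Generate the PRBS7 bit sequence (x**7 + x**6 + 1) from a 7-bit LFSR."""
    state = seed & 0x7F
    while True:
        bit = ((state >> 6) ^ (state >> 5)) & 1  # taps at stages 7 and 6
        state = ((state << 1) | bit) & 0x7F
        yield bit

def count_bit_errors(received, seed=0x7F):
    """Compare a received bit stream against the locally generated pattern."""
    expected = prbs7(seed)
    return sum(rx != next(expected) for rx in received)

# A PRBS7 sequence repeats every 2**7 - 1 = 127 bits.
gen = prbs7()
pattern = [next(gen) for _ in range(127)]
corrupted = list(pattern)
corrupted[10] ^= 1                  # inject a single bit error
print(count_bit_errors(corrupted))  # -> 1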

Hairpin Circuits
A hairpin circuit refers to a special circuit where the source and destination tributary ports are located on
the same network element and the same DLM. In other words, the client signal received by the DLM on
one tributary port is looped back to another tributary port on the same DLM, without going through the line.
The source and destination tributary ports could be on the same TAM or a different TAM, but they must be
on the same DLM.

Trace Messaging
IQ provides access to the trace messaging feature supported by the TN780 network element. The TN780
supports the following trace messaging functions:
Trace messaging at the DTF Section and DTF Path (see Figure 4-4 on page 4-14). The DTF Section trace messaging is utilized to detect any mis-connections between the TN780 network elements

Figure 4-5 Managed Object Entities and Hierarchy
[Figure: shows the containment relationships from the network element down through its chassis, hardware/circuit packs (MCM, BMM, DLM, TAM, OWM, TOM), physical ports (DCF, span, C-band, L-band, OCG, trib port), and logical termination points (OCG, optical channel, OSC, GMPLS link, orderwire channel, line and trib DTF paths at 2.5G/10G, client trib (SONET/SDH/10GbE), and 10G cross-connects line-to-line and trib-to-line), along with the supported/supporting relationships among them.]

System Discovery and Inventory


IQ automatically discovers the system resources and maintains an inventory which is retrievable by the
management applications. IQ discovers the following automatically:
Multi-Chassis configuration:
Main-Chassis
Expansion Chassis

All circuit packs in the TN780 and Optical Line Amplifier network elements (see Circuit Pack Discovery on page 4-17)
The termination points, including physical ports and logical termination points in a TN780 and Optical Line Amplifier network element
The Digital Optical Network topology including Physical Topology and Service Provisioning topology
(see Network Topology on page 4-48)
The optical data plane connectivity which includes the connectivity between the DLM and BMM in a
TN780 network element (see Optical Data Plane Autodiscovery on page 4-17)
IQ maintains the inventory of all the automatically discovered resources, as described above, and also the user-provisioned services, which include:
Cross-connects provisioned using Manual Cross-connect Provisioning mode
Circuits provisioned using Dynamically Signaled SNC Provisioning mode
Cross-connects that are automatically created while creating circuits utilizing Dynamically Signaled
SNC Provisioning mode
Protection groups that have been provisioned
Refer to Service Provisioning on page 4-23 for more details.

Circuit Pack Discovery


IQ provides the ability to automatically detect circuit packs in the TN780 and Optical Line Amplifier. IQ also
discovers the detailed manufacturing information including:
Hardware revision
Circuit pack type
Serial ID
CLEI code
Manufacturing date
Software version
Last reboot time
The manufacturing information is maintained in the inventory and it is retrievable by the management
applications.
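A sketch of one inventory record built from the discovered manufacturing information follows; the field names and sample values are illustrative only:

from dataclasses import dataclass

@dataclass(frozen=True)
class CircuitPackInventory:
    """One discovered circuit pack as held in the inventory (sample values
    below are invented for illustration)."""
    pack_type: str          # e.g. "DLM", "BMM", "TAM"
    hardware_revision: str
    serial_id: str
    clei_code: str
    manufacturing_date: str
    software_version: str
    last_reboot_time: str

pack = CircuitPackInventory("DLM", "A02", "SN00123", "XXXXXXXXXX",
                            "2004-03-15", "1.2.0", "2004-06-21T14:32:10Z")
print(pack.pack_type, pack.serial_id)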

Optical Data Plane Autodiscovery


The UTStarcom TN780 includes optical connections between DLMs and BMMs. The connectivity is
facilitated through a front-accessible optical patch cord that is used to transport the 100Gbps OCG signal
between the BMM and DLM.

Table 4-1 Access Privilege Permissions

Managed Object Entity                      Operation                  SA   NA   NE   PR   TT   MA

PEM                                        Create, delete and update  No   Yes  Yes  No   No   No

Termination Point (physical ports or logical ports) Management:
OTS                                        Update                     No   Yes  No   Yes  Yes  No
Band                                       Update                     No   Yes  No   Yes  Yes  No
OCG - BMM                                  Update                     No   Yes  No   Yes  Yes  No
OCG - DLM                                  Update                     No   Yes  No   Yes  Yes  No
Channel                                    Update                     No   Yes  No   Yes  Yes  No
DTF Path                                   Update                     No   Yes  No   Yes  Yes  No
Trib                                       Update                     No   Yes  No   Yes  Yes  No
Client                                     Update                     No   Yes  No   Yes  Yes  No
OSC                                        Update                     No   Yes  No   Yes  Yes  No
DCF                                        Update                     No   Yes  No   Yes  Yes  No

Services:
Cross-connect                              Create, update and delete  No   Yes  No   Yes  No   No
SNC circuit                                Create, update and delete  No   Yes  No   Yes  No   No
Protection Group                           Create                     No   Yes  No   Yes  Yes  No

System Administration and Software Maintenance Functions:
Periodic PM data transfer                  Update                     No   Yes  No   Yes  Yes  No
System date and time                       Update                     No   Yes  No   No   No   No
Software download                          Update                     No   Yes  No   No   No   No
Database download                          Update                     No   Yes  No   No   No   No
Database upload                            Update                     No   Yes  No   No   No   No
ASAP (Alarm Severity Assignment Profile)   Update                     No   Yes  No   No   No   No
Alarm acknowledgment                       Update                     No   Yes  Yes  Yes  Yes  No

Network Element Security Administration:
Users                                      Create, update and delete  Yes  No   No   No   No   No
Security parameters                        Update                     Yes  No   No   No   No   No

Security Audit Log


IQ maintains an independent and persistent circular audit log that records all system configuration
activities and security related events, such as unauthorized attempts and excessive authentication
attempts. The audit log provides traceability of all system-impacting changes. The supported features
include:
The audit logs include system configuration activities and security related activities performed by the
user. These activities include:
Creating and deleting managed object entities
Updating an attribute of the managed object entity
Invalid login attempts
Unauthorized attempts to access resources due to restrictions imposed by the user access privilege
Updates to the user's security parameters, such as the password, user access privilege, password aging time, etc.
Updates to the network element security parameters such as maximum number of invalid login
attempts, and inactivity time-out interval
The audit logs are maintained in a circular buffer and hence the oldest records are overwritten.
The audit logs are preserved when system reboots
Each audit log entry includes the following minimum set of information:
User login ID of the user who performed the action, along with terminal, port and network
address information
Date and Time of the operation
Action performed
Instance of the managed object entity on which the action was performed
Result of the operation performed
Users cannot modify the audit logs

Users (with any access privilege) can view the audit logs through the management applications

Security Administration
IQ defines a set of security administration functions and parameters that are used to implement site-specific policies. Security administration can be performed only by users with the security administrator privilege. The supported features include:
View all users currently logged on
Disable and enable a user account (this operation is allowed only when the user is not logged on)
Modify user account parameters, including access privilege and password expiry time
Delete a user account and its attributes, including password
Reset any user password to system default password
Monitor security audit logs to detect unauthorized access
Monitor the security alarms and events raised by the network element and take appropriate actions
Configure system-wide security administration parameters:
Default password
Inactivity time-out period
Maximum number of invalid login attempts allowed
Number of history passwords
Advisory warning message displayed to the user after successful login to the network element

Out-of-service (OOS): indicates that the managed object entity is not providing normal end-user services, either because its operational state is disabled, the administrative state of its ancestor object is locked, or the operational state of its ancestor object is disabled.
Out-of-service Maintenance (OOS-MT): indicates that the managed object entity is not providing normal end-user services, but it can be used for maintenance test purposes. Its operational state is enabled and its administrative state is maintenance.
Out-of-service Maintenance, Locked (OOS-MT, Locked): indicates that the managed object entity is not providing normal end-user services, but it can be used for maintenance test purposes. Its operational state is enabled and its administrative state is locked.

Service Provisioning
IQ provides service provisioning capabilities which include establishing data path connectivity between endpoints for delivery of end-to-end capacity. The services are originated and terminated in a TN780 network element. The services are provisioned at 2.5G and 10G granularity and are full-duplex, bidirectional services. IQ defines the following types of endpoints:
DTF Path Endpoints: the line-side endpoints, which are DTF-encapsulated 10G or 2.5G channels (refer to Digital Transport on page 3-21 for the description of DTF). The line-side endpoints are sourced and terminated in a DLM. As described in Digital Line Module (DLM) on page 3-9, each DLM supports one OCG which includes ten 10G optical channels.
Trib-side Endpoints: client payload specific endpoints, which can be any of the payload types described in Client/Trib Interfaces on page 3-18.
IQ automatically creates the endpoints on configuring the circuit packs as described in Circuit Pack Configuration on page 4-19.
IQ supports two service provisioning modes to meet diverse customers' needs, described in:
Manual Cross-connect Provisioning on page 4-23
Dynamically Signaled SNC Provisioning on page 4-26
In addition, protection groups can be provisioned as described in Protection Group Provisioning on page 4-28.
As with equipment configuration, services can also be pre-provisioned as described in Service Pre-provisioning on page 4-27.

Manual Cross-connect Provisioning


IQ supports a manual cross-connect provisioning mode where the cross-connects are manually configured in each TN780 network element along the circuit's route. This mode gives users full control over which network elements are traversed for a given circuit. The cross-connects created using this mode of provisioning are referred to as static cross-connects. Static cross-connects can be assigned a circuit ID to correlate multiple cross-connects in multiple TN780 network elements forming an end-to-end circuit. Three types of cross-connects are supported by the TN780 network element:
Express Cross-connect: associates one line-side DTF endpoint with another line-side DTF endpoint by establishing connectivity between the optical channels of two different OCGs (DLMs) within a TN780 network element. As described in Bandwidth Grooming on page 3-26, in Release 1.2 the cross-connects can be established between adjacent DLMs within a DTC:
Between slots 3 and 4, and slots 5 and 6, one hundred Gbps of bandwidth can be cross-connected.
Between slots 3 and 5, and slots 4 and 6, sixty Gbps of bandwidth can be cross-connected.
This cross-connect is transparent to the payload type encapsulated in the DTF. A typical application for this cross-connect is to establish a data path through a Digital Repeater site (see Digital Repeater Configuration on page 2-3).
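The Release 1.2 express-capacity constraints above translate into a simple admission check. The sketch below assumes 10G channel granularity; the function and table names are hypothetical:

# Maximum express cross-connect bandwidth between DLM slot pairs (Gbps),
# per the Release 1.2 constraints described above.
EXPRESS_CAPACITY = {
    frozenset({3, 4}): 100,
    frozenset({5, 6}): 100,
    frozenset({3, 5}): 60,
    frozenset({4, 6}): 60,
}

def can_add_express_xcon(slot_a, slot_b, used_gbps, rate_gbps=10):
    """Return True if a new express cross-connect of rate_gbps fits within
    the remaining capacity between the two DLM slots."""
    pair = frozenset({slot_a, slot_b})
    if pair not in EXPRESS_CAPACITY:
        return False  # adjacency not supported in Release 1.2
    return used_gbps.get(pair, 0) + rate_gbps <= EXPRESS_CAPACITY[pair]

used = {frozenset({3, 5}): 60}           # slot pair 3-5 already full
print(can_add_express_xcon(3, 5, used))  # -> False
print(can_add_express_xcon(3, 4, used))  # -> True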

Figure 4-6 Express Cross-connect
[Figure: shows the DTC with BMMs (west and east line fibers, each with an OSC) and DLMs in slots 3 through 6, each with a MAP/FEC block; an express cross-connect joins optical channels of two DLMs.]

Add/Drop Cross-connect: associates the trib-side endpoint with the line-side endpoint by establishing connectivity between the TOM tributary port and a line-side optical channel within a DLM. Any tributary port can be connected to any of the line-side optical channels. However, a given tributary port must be associated with a line-side optical channel of the same DLM; it cannot be associated with a line-side optical channel of an adjacent DLM (see Figure 4-7 on page 4-25).
This type of cross-connect is used to add/drop traffic at a Digital Add/Drop site or to source/terminate traffic at a Digital Terminal site.

Figure 4-7 Add/Drop Cross-connect
[Figure: shows TOMs and TAMs feeding the MAP/FEC block of each DLM (slots 3 through 6) and the BMMs toward the west and east line fibers; an add/drop cross-connect joins a tributary port to a line-side optical channel of the same DLM.]

Hairpin Cross-connect: used to cross-connect two tributary ports within a TN780 network element. In Release 1.2, hairpinning is supported within a DLM between two tributary ports on the same or different TAMs (see Figure 4-8 on page 4-26). Such hairpin cross-connects do not use any line-side optical channel resource.
Hairpin cross-connects are used in Metro applications for connecting two buildings within a short reach without laying new fiber.

Figure 4-8 Hairpin Cross-connects
[Figure: shows two tributary ports on the same DLM connected to each other through the cross-point switch, without using a line-side optical channel toward either line fiber.]

Dynamically Signaled SNC Provisioning


IQ supports dynamically signaled Sub-Network Connection (SNC) provisioning where an end-to-end transport service is automatically provisioned utilizing the IQ GMPLS control protocol as described in IQ GMPLS Control Plane Overview on page 4-47. In this mode, the user identifies the source and destination endpoints, and the IQ GMPLS control protocol computes the circuit route through the Digital Optical Network and also establishes the circuit, referred to as an SNC, by automatically configuring the cross-connects in each TN780 network element along the path. The cross-connects automatically configured by the GMPLS protocol are called Signaled Cross-connects, and an inventory of signaled cross-connects is retrievable through the management applications.
The IQ GMPLS control protocol enables:
Error-free, automatic end-to-end SNC provisioning resulting in automatic service turn-up.
An automatic retry mechanism allowing SNC setup to be retried periodically without manual intervention.
SNC monitoring and alarm reporting if a circuit experiences problems in the Digital Optical Network.

Automatic re-establishment of an SNC after network problems are corrected (note that SNCs are not automatically released on detecting network problems; the SNC must be released by the user at the source node where the SNC was originated).
User-configured circuit identifiers for easy correlation of alarms and performance monitoring information to the end-to-end circuit, aiding service level monitoring.
Circuit tracking by storing, and making available to the management applications, the hop-by-hop circuit route along with the source endpoint of the SNC.
Refer to IQ GMPLS Control Plane Overview on page 4-47 for a detailed description of the GMPLS
functions.

Service Pre-provisioning
IQ supports pre-provisioning of circuits, enabling users to set up both manual cross-connects and SNCs in
the absence of DLMs and TAMs. Pre-provisioning of data plane connections keeps the resources in a
pending state until the DLM and/or TAM is inserted. IQ internally tracks resource utilization to ensure that
resources are not overbooked. The pre-provisioning of circuits requires that the supporting circuit packs
first be pre-configured.
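The overbooking check can be sketched as follows; the resource model and method names are hypothetical and simplified to one OCG of ten channels:

class SlotResources:
    """Track channel resources of a (possibly not yet inserted) DLM slot so
    that pre-provisioned circuits can never overbook the hardware."""

    def __init__(self, total_channels=10):
        self.total = total_channels
        self.pending = set()  # channels reserved by pre-provisioning
        self.active = set()   # channels in service

    def reserve(self, channel):
        if channel in self.pending or channel in self.active:
            return False      # already booked
        if len(self.pending) + len(self.active) >= self.total:
            return False      # would overbook the slot
        self.pending.add(channel)
        return True

    def activate_all(self):
        """Called when the supporting DLM/TAM is inserted and configured."""
        self.active |= self.pending
        self.pending.clear()

slot = SlotResources()
print(slot.reserve(1))  # True: channel held in the pending state
print(slot.reserve(1))  # False: cannot double-book the channel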

Protection Group Provisioning


IQ supports the provisioning of protection groups, enabling users to establish Trib Y-cable protection on the TN780 client tributary ports. Utilizing protection groups increases the overall reliability and service up-time of the circuits while maintaining collection of digital PMs and path protection.
Utilizing the Trib Y-cable protection feature offered in R1.2 eliminates the need for expensive Add/Drop Multiplexers in the network, reducing overall network operation cost.
Trib Y-cable protection enables 1+1 equipment protection over diverse paths through the Digital Optical Network with sub-50ms switching, increasing the overall reliability and service up-time of the optical path.

An additional license is required to utilize Trib Y-cable protection:
IQ Fast Protection Software RTU, a usage-based software runtime license for enabling up to 5 Y-cable protection pairs on the TN780.
For information on how to purchase the IQ Fast Protection Software RTU license, contact a UTStarcom Customer Service and Technical Support resource (see Technical Assistance on page xiv).
In order to provision Trib Y-cable protection, the following rules apply:
Two tributary ports are required to form a protection group
Trib ports must be on separate DLMs
Trib ports must be in the same chassis
Trib ports must be the same service type (for example, OC-192)
Trib ports cannot be associated with an existing SNC or cross-connect
Protection groups support the following operations:
Provisioning of the preferred working protection unit (PU) upon creation of the protection group
Lockout of working
A user-initiated switch that, when invoked, causes the traffic that was on the working line to be switched to the protect line.
Traffic cannot be moved back to the working line until the lockout of working has been cleared.
Note: If a failure occurs on the protect line while there is a lockout of working, traffic cannot switch to the working line until the lockout is cleared. This can result in a loss of traffic.
Lockout of protect
A user-initiated switch that, when invoked, causes the traffic that was on the protect line to be switched to the working line.
Traffic cannot be moved back to the protect line until the lockout of protect has been cleared.

Note: If a failure occurs on the working line while there is a lockout of protect, traffic cannot switch to the protect line until the lockout is cleared. This can result in a loss of traffic.
Clear lockout of working
Clears the lockout of working, enabling the working line to carry traffic.
Traffic will remain on the protect line unless a failure occurs or a user-initiated switch is invoked.
Clear lockout of protect
Clears the lockout of protect, enabling the protect line to carry traffic.
Traffic will remain on the working line unless a failure occurs or a user-initiated switch is invoked.
Manual Switch
A user-initiated switch that, when invoked on the working line, moves the customer traffic to the protect line.
A user-initiated switch that, when invoked on the protect line, moves the customer traffic to the working line.
Note: If a higher-priority switch is in effect, the manual switch command is denied. Invoking a lockout of working, a lockout of protect, or an automatic switch overrides a manual switch.
Automatic switching (<50ms)
Automatic switching is invoked by the TN780 and is caused by the following triggers:
Loss of Frame (LOF)
Bit Error Rate based Signal Fail (BER-based SF)
Alarm Indication Signal (AIS)
Loss of Signal (LOS)
Loss of Light (LOL)
Equipment failure
Note: All switches (lockout of working, lockout of protect, manual, and automatic) result in a <50ms interruption in customer traffic.

Note: All switches are non-revertive. Once traffic is switched from working to protect, it stays on the protect line until there is a failure (resulting in an automatic switch) or a user-initiated switch is invoked (manual switch, lockout of working, lockout of protect).
Creation of new protection groups
Deletion of protection groups
The provisioning of Trib Y-cable protection eliminates the need for traditional ADMs used for protection, which lowers overall network costs to UTStarcom customers. (The switching rules above are illustrated in the sketch below.)
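The switch-priority rules above (lockouts and automatic switches override manual switches; all switching is non-revertive) can be summarized in a small state sketch; the class and method names are hypothetical:

class YCableProtectionGroup:
    """Non-revertive 1+1 selector for a working/protect tributary pair."""

    def __init__(self):
        self.selected = "working"
        self.lockout = None  # None, "working", or "protect"

    def lockout_line(self, line):
        self.lockout = line
        # A lockout forces traffic off the locked-out line immediately.
        self.selected = "protect" if line == "working" else "working"

    def clear_lockout(self):
        self.lockout = None  # non-revertive: traffic stays where it is

    def manual_switch(self):
        if self.lockout is not None:
            return False     # a higher-priority switch is in effect
        self.selected = "protect" if self.selected == "working" else "working"
        return True

    def automatic_switch(self, failed_line):
        """Triggered by LOF, BER-based SF, AIS, LOS, LOL, equipment failure."""
        other = "protect" if failed_line == "working" else "working"
        if self.lockout == other:
            return  # cannot switch onto a locked-out line: traffic is lost
        if self.selected == failed_line:
            self.selected = other

pg = YCableProtectionGroup()
pg.automatic_switch("working")
print(pg.selected)  # -> "protect" (and stays there: non-revertive)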
Figure 4-9 Trib Y-cable Protection
[Figure: shows the client signal split by a Y-cable onto a working line and a protect line.]

Performance Monitoring and Management


IQ provides extensive performance monitoring to provide early detection of service degradation before a
service outage occurs. The performance monitoring capabilities allow customers to pro-actively detect
problems and correct them before end-user complaints are registered. Performance monitoring is also
needed to ensure contractual Service Level Agreements between the customer and the end-user.
IQ provides performance monitoring functions in compliance with GR-820. The following features are
supported:
Extensive performance data collection at every node, including:
Optical performance monitoring (PM) data within the optical domain (see Optical PM Parameters and Thresholds on page A-2)
Client signal agnostic DTF PM data at every TN780 network element (see DTF PM Parameters
and Thresholds on page A-10)
FEC PM data enabling BER calculation (see FEC PM Parameters and Thresholds on page A15)
Native client signal PM data at the tributary ports (see Client Signal PM Parameters and
Thresholds on page A-16)
Optical supervisory channel performance monitoring data (see OSC PM Parameters on
page A-20)
Comprehensive PM data collection functions, including:
Real-time PM data collection for real-time troubleshooting (see Real-time PM Data Collection
on page 4-32)
Historical PM data collection for service quality trend analysis (see Historical PM Data Collection on page 4-32)
Threshold crossing notifications for early detection of degradation in service quality (see PM
Thresholding on page 4-33)
Invalid data flag indicator per managed object instance per period (see Suspect Interval Marking on page 4-33)
Flexible PM data reporting and customization options to meet diverse customers' needs, including:
Automatic and periodic transfer of PM data in CSV format enabling customers to integrate with
their management applications (PM Data Transfer on page 4-33)
Customization of PM data collection (see PM Data Configuration on page 4-34)

PM Data Collection
IQ collects digital PM data and optical PM data.
IQ utilizes gauges to collect optical PM data. The gauge attribute type, as defined in the ITU-T X.721 specification, indicates the current value of the PM parameter and is of type float. The gauge value may increase or decrease by an arbitrary amount, and it does not wrap around. It is a read-only attribute.
Counters are utilized to collect the digital PM data. The counter value is a non-negative integer. The value of the counter is reset to zero at the beginning of the PM period and is incremented upward by 1. The counter size is selected in such a way that the counter does not roll over within the collection period.

Real-time PM Data Collection


IQ supports real-time PM data retrieval which is useful for real-time troubleshooting. The real-time PM data
represents the state at the time of its retrieval. The real-time data can be retrieved by the management
applications at any time.
IQ provides the real-time PM data for some of the optical and digital PM parameters. The real-time optical
PM data provides the state of the hardware (value of the PM parameter) at the time of its retrieval. The
real-time digital PM data is essentially the value of the digital PM counter at the time of its retrieval.

Historical PM Data Collection


In addition to the real-time PM data, IQ provides the historical PM data archived locally in the network
element enabling service quality trend analysis. IQ collects the historical PM data at the following intervals:
15-minute
24-hour
IQ maintains the following historical counters/gauges:
Current 15-minute and ninety-six previous 15-minute counters/gauges
Current 24-hour and seven previous 24-hour counters/gauges
The historical PM data is not asynchronously reported to the management applications. It must be
retrieved by the users through management applications.
Note that the historical counters/gauges are supported only for some PM parameters, but not for all.
The historical (current and previous) optical PM data is derived by taking several snapshots of the
hardware status. In other words, the optical PM parameter value is read from the hardware every five
seconds within a PM period, and minimum, maximum and average values are derived from all the
readings. Thus the historical optical PM data is the minimum, maximum and average of the PM parameter
values within a given period.
The historical digital PM data is essentially the value of the counter at the end of the given PM period.
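A sketch of reducing the five-second gauge snapshots of one period to the stored min/max/average values (the helper name and sample readings are illustrative):

# Illustrative only: the historical optical PM record for a period keeps
# the min/max/average of the five-second gauge snapshots taken within it.
SNAPSHOTS_PER_15MIN = (15 * 60) // 5  # 180 snapshots per 15-minute period

def summarize_period(readings):
    """Reduce one period's gauge snapshots to the stored historical values."""
    return {
        "min": min(readings),
        "max": max(readings),
        "avg": sum(readings) / len(readings),
    }

readings = [-2.1, -2.0, -2.3, -1.9]  # e.g. optical power snapshots in dBm
print(summarize_period(readings))    # -> {'min': -2.3, 'max': -1.9, 'avg': -2.075}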

PM Thresholding
The PM thresholding provides an early detection of faults before significant effects are felt by the end
users. Degradation of service can be detected by monitoring error rates. Threshold mechanisms on
counters and gauges allow the detection of such trends and provide a warning to users when the error rate
becomes high.
IQ supports thresholding for both optical PM gauges and digital PM counters. During the PM period, if the current value of a performance monitoring parameter reaches or exceeds the corresponding configured threshold value, threshold crossing notifications are sent to the management applications.
Optical PM Thresholding: IQ performs thresholding on some optical PM parameters by utilizing high and low threshold values. Note that the thresholds are configurable for some PM parameters; for others, the system utilizes pre-defined threshold values. An alarm is reported when the measured value of an optical PM parameter is outside of its threshold values. The alarms are automatically cleared by IQ when the recorded value of the optical PM parameter is back within the acceptable range.
Digital PM Thresholding: IQ performs thresholding on some digital PM data utilizing high threshold values which are user configurable. A Threshold Crossing Alert (TCA) is reported when a PM counter, within a collection period, exceeds the corresponding threshold value. When a threshold is crossed, IQ continues to count the errors during that accumulation period. As with PM counters, TCAs are transient in nature and are reported as events which are logged in the event log buffers as described in Event Log on page 4-10. The TCAs do not have corresponding clearing events since the PM counter is reset at the beginning of each period.
Note that PM thresholding is supported for some of the PM parameters, but not for all.
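Both thresholding styles can be sketched as small predicates; the parameter names and limits below are made up for illustration:

def optical_alarm_active(value, low, high):
    """Optical PM: a standing alarm is raised while the gauge is outside
    [low, high] and clears automatically once the value returns in range."""
    return value < low or value > high

def digital_tca(count, threshold, tca_sent):
    """Digital PM: emit a single transient TCA per accumulation period the
    first time the counter reaches the threshold; TCAs never clear."""
    return count >= threshold and not tca_sent

print(optical_alarm_active(-28.5, low=-26.0, high=-8.0))    # True -> alarm
print(digital_tca(count=17, threshold=15, tca_sent=False))  # True -> one TCA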

Suspect Interval Marking


IQ marks the PM data for a given managed object instance collected in 15-minute and 24-hour periods as
suspect or invalid by maintaining an invalid data flag (IDF). The IDF is maintained per managed object
instance per period basis. The IDF is retrievable by management applications and is used to communicate
to the user the validity of the collected PM data. The PM data is marked invalid under the following
conditions:
The user resets the PM counter through the management applications.
The period of PM data accumulation changes by +/-10 seconds (e.g., the network element's time-of-day setting was changed during the period).
PM data is lost due to system restart or hardware failure.

PM Data Transfer
IQ stores the entire PM data in flat files in CSV format. Users (customers) can use these flat files to
integrate PM data analysis into their management applications or simply view the PM data through the
spreadsheet applications.

Users can schedule the TOD (time of day) at which the network element automatically transfers the PM
data to the user specified FTP server. Users can configure primary and secondary FTP server addresses.
If the data transfer to the primary FTP server fails, the PM data is transferred to the secondary FTP server.
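The primary/secondary failover can be sketched with Python's standard ftplib module; the server addresses, credentials, and file names below are placeholders:

import ftplib

def upload_pm_file(local_path, servers):
    """Try each FTP server in order (primary first, then secondary);
    return True on the first successful upload."""
    for srv in servers:
        try:
            with ftplib.FTP(srv["host"], srv["user"], srv["password"]) as ftp:
                with open(local_path, "rb") as fh:
                    ftp.storbinary("STOR " + srv["remote_name"], fh)
            return True
        except ftplib.all_errors:
            continue  # fall through to the next (secondary) server
    return False

# Placeholder addresses, credentials, and file name for this sketch.
servers = [
    {"host": "192.0.2.10", "user": "pm", "password": "secret",
     "remote_name": "pmdata_15min.csv"},  # primary FTP server
    {"host": "192.0.2.11", "user": "pm", "password": "secret",
     "remote_name": "pmdata_15min.csv"},  # secondary FTP server
]
upload_pm_file("/tmp/pmdata_15min.csv", servers)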

PM Data Configuration
IQ allows users to customize the PM data collection. Users can configure the PM data collection through
management applications. IQ supports the following configuration options:
Reset the current 15-minute and 24-hour counters at any time, per managed object instance.
Change the default threshold values according to the customer's error-monitoring needs.
Enable or disable the PM threshold crossing alarm and TCA reporting, per attribute per managed object instance.
Configure the frequency of PM flat file uploading to the configured FTP servers.
Configure periodic uploading of PM data to the client machine.

Security and Access Management


IQ's security and access management features comply with the Telcordia GR-815-CORE standard. The supported features include:
User identification to indicate the logged-in user or process (see User Identification on page 4-35).
User authentication to verify and validate the authenticity of the logged-in user (see Authentication on page 4-36).
User access control to prevent intrusion (see Access Control on page 4-36).
Resource access control by defining multiple access privileges (see Authorization on page 4-37).
Security audit logs to monitor unauthorized activities (see Security Audit Log on page 4-39).
Security functions and parameters to implement site-specific security policies (see Security Administration on page 4-40).

User Identification
Each network element user is assigned a unique user ID. The user ID is case-sensitive and contains 4 to 10 alphanumeric characters. The user specifies this ID (referred to as the user login ID) to log into the network element.
By default, IQ creates three user accounts with the following user login IDs:
• secadmin, with the security administrator privilege enabled. The default password is Infinera1, and the user is required to change the password at first login. This user login ID is used for the initial login to the network element.
• netadmin, with the network administrator privilege enabled. The default password is Infinera1, and the user is required to change the password at first login. This account is disabled by default; it must be enabled by a user with the security administrator privilege through the TL1 interface or MPower GNM. This account is used to turn up the network element.
• emsadmin, with all privileges enabled. The default password is Infinera1. This account is disabled by default; it must be enabled by a user with the security administrator privilege through the TL1 interface or MPower GNM. MPower EMS Server communicates with the network element using this account (referred to as the MPower EMS account) when it is started, without requiring additional configuration. Users can create additional MPower EMS accounts which MPower EMS Server can use to connect to the network element; these accounts must have the EMS access capability enabled during creation.
A single user can open multiple sessions. IQ maintains a list of all currently active sessions.
Note: IQ supports a maximum of 30 active user sessions at any given time. All login attempts beyond 30 sessions are denied and a warning message is displayed.


Authentication
IQ supports standards-based authentication features. These features ensure that only authorized users log into the network element through the management interfaces.
Each time the user logs in, the user must enter a user ID and password. For the initial login, the user specifies the default password set by the security administrator. The user must then create a new password based on the following requirements.
The password must contain:
• Six to ten alphanumeric characters
• At least one alphabetic character, and at least one numeric or one special character
The password may contain these special characters: @ # $ % ^ ( ) _ + | ~ { } [ ] ?
The password must not contain:
• The associated user ID
• Blank spaces
Passwords are case-sensitive and must be entered exactly as specified.
The password is stored in the network element database in one-way encrypted form.
Password rotation is implemented to prevent users from reusing the same password: users are forced to choose passwords different from their previously used passwords. The number of history passwords stored is configurable. A validation sketch for the password rules follows.
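The documented password rules can be expressed directly in code. The following Python sketch mirrors the policy stated above; it is an illustration, not UTStarcom code:

```python
SPECIALS = "@#$%^()_+|~{}[]?"

def password_ok(password: str, user_id: str) -> bool:
    if not 6 <= len(password) <= 10:
        return False                     # six to ten characters
    if " " in password:
        return False                     # no blank spaces
    if user_id.lower() in password.lower():
        return False                     # must not contain the associated user ID
    has_alpha = any(c.isalpha() for c in password)
    has_num_or_special = any(c.isdigit() or c in SPECIALS for c in password)
    return has_alpha and has_num_or_special

assert password_ok("abc123", "netadmin")
assert not password_ok("netadmin1", "netadmin")   # contains the user ID
```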

Access Control
In addition to user login ID validation and password authentication, IQ supports access control features to ensure that the session requester is trusted (see the sketch after this list):
• Detection of unsuccessful user logins. If the number of unsuccessful login attempts exceeds the configured limit, the session is terminated and a security event is logged in the security audit log.
• A user session is automatically terminated when the cable connecting the user's computer and the network element is physically removed. The user must follow the regular login procedure after the cable is reconnected.
• The activity of each user session is monitored. If, for a configurable period of time, no data is exchanged between the user and the network element, the user session times out and is automatically terminated.
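A minimal sketch of the first and third checks; the threshold values are examples, since the actual limits are configurable on the network element:

```python
import time

MAX_INVALID_LOGINS = 3          # example value; configurable on the NE
INACTIVITY_TIMEOUT = 15 * 60    # example value, in seconds; configurable

class Session:
    def __init__(self):
        self.invalid_logins = 0
        self.last_activity = time.time()
        self.active = True

    def record_invalid_login(self, audit_log):
        self.invalid_logins += 1
        if self.invalid_logins >= MAX_INVALID_LOGINS:
            audit_log.append("excessive authentication attempts")
            self.active = False          # session terminated, event logged

    def timed_out(self):
        # No data exchanged for the configured period ends the session.
        return time.time() - self.last_activity > INACTIVITY_TIMEOUT
```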


Authorization
Multiple access privileges are defined to restrict user access to resources. Each access privilege allows a specific set of actions to be performed, and one or more access privileges can be assigned to each user account. For a description of the actions allowed for each access privilege, see Table 4-1 on page 4-37. For a description of the managed entities, see Managed Object Entities on page 4-15.
There are six levels of access privileges:
• Monitoring Access (MA): allows the user to monitor the network element; cannot modify anything on the network element (read-only privilege). Monitoring Access is provided to all users by default.
• Security Administrator (SA): allows the user to perform network element security management and administration tasks.
• Network Administrator (NA): allows the user to monitor the network element, manage equipment, turn up the network element, provision services, and administer various network-related functions, such as auto-discovery and topology.
• Network Engineer (NE): allows the user to monitor the network element and manage equipment.
• Provisioning (PR): allows the user to monitor the network element, configure facility endpoints, and provision services.
• Turn-up and Test (TT): allows the user to monitor, turn up, and troubleshoot the network element and fix network problems.
Table 4-1 Access Privilege Permissions

Managed Object Entity                     | Operation                 | SA  | NA  | NE  | PR  | TT  | MA
------------------------------------------|---------------------------|-----|-----|-----|-----|-----|-----
Equipment Management
Chassis                                   | Create, delete and update | No  | Yes | Yes | No  | No  | No
DLM                                       | Create, delete and update | No  | Yes | Yes | No  | No  | No
TAM                                       | Create, delete and update | No  | Yes | Yes | No  | No  | No
BMM                                       | Create, delete and update | No  | Yes | Yes | No  | No  | No
Alarm input and output contacts           | Update                    | No  | Yes | Yes | No  | No  | No
TOM                                       | Create, delete and update | No  | Yes | Yes | No  | No  | No
OAM                                       | Create, delete and update | No  | Yes | Yes | No  | No  | No
OMM                                       | Create, delete and update | No  | Yes | Yes | No  | No  | No
PEM                                       | Create, delete and update | No  | Yes | Yes | No  | No  | No
Termination Point (physical ports or logical ports) Management
OTS                                       | Update                    | No  | Yes | No  | Yes | Yes | No
Band                                      | Update                    | No  | Yes | No  | Yes | Yes | No
OCG - BMM                                 | Update                    | No  | Yes | No  | Yes | Yes | No
OCG - DLM                                 | Update                    | No  | Yes | No  | Yes | Yes | No
Channel                                   | Update                    | No  | Yes | No  | Yes | Yes | No
DTF Path                                  | Update                    | No  | Yes | No  | Yes | Yes | No
Trib                                      | Update                    | No  | Yes | No  | Yes | Yes | No
Client                                    | Update                    | No  | Yes | No  | Yes | Yes | No
OSC                                       | Update                    | No  | Yes | No  | Yes | Yes | No
DCF                                       | Update                    | No  | Yes | No  | Yes | Yes | No
Services
Cross-connect                             | Create, update and delete | No  | Yes | No  | Yes | No  | No
SNC circuit                               | Create, update and delete | No  | Yes | No  | Yes | No  | No
Protection Group                          | Create                    | No  | Yes | No  | Yes | Yes | No
System Administration and Software Maintenance Functions
Periodic PM data transfer                 | Update                    | No  | Yes | No  | Yes | Yes | No
System date and time                      | Update                    | No  | Yes | No  | No  | No  | No
Software download                         | Update                    | No  | Yes | No  | No  | No  | No
Database download                         | Update                    | No  | Yes | No  | No  | No  | No
Database upload                           | Update                    | No  | Yes | No  | No  | No  | No
ASAP (Alarm Severity Assignment Profile)  | Update                    | No  | Yes | No  | No  | No  | No
Alarm acknowledgment                      | Update                    | No  | Yes | Yes | Yes | Yes | No
Network Element Security Administration
Users                                     | Create, update and delete | Yes | No  | No  | No  | No  | No
Security parameters                       | Update                    | Yes | No  | No  | No  | No  | No

Security Audit Log


IQ maintains an independent, persistent, circular audit log that records all system configuration activities and security-related events, such as unauthorized access attempts and excessive authentication attempts. The audit log provides traceability of all system-impacting changes. The supported features include (see the sketch after this list):
• The audit logs include system configuration activities and security-related activities performed by the user, including:
  • Creating and deleting managed object entities
  • Updating an attribute of a managed object entity
  • Invalid login attempts
  • Unauthorized attempts to access resources restricted by the user's access privilege
  • Updates to a user's security parameters, such as the password, user access privilege, password aging time, etc.
  • Updates to the network element security parameters, such as the maximum number of invalid login attempts and the inactivity time-out interval
• The audit logs are maintained in a circular buffer; the oldest records are overwritten first.
• The audit logs are preserved across system reboots.
• Each audit log entry includes the following minimum set of information:
  • User login ID of the user who performed the action, along with terminal, port and network address information
  • Date and time of the operation
  • Action performed
  • Instance of the managed object entity on which the action was performed
  • Result of the operation performed
• Users cannot modify the audit logs.
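A circular, overwrite-oldest log carrying the minimum fields listed above can be sketched as follows; the capacity shown is illustrative:

```python
from collections import deque
from datetime import datetime, timezone

# Fixed-capacity circular buffer: once full, the oldest entries are overwritten.
audit_log = deque(maxlen=10000)   # capacity is illustrative

def audit(user_id, terminal, action, entity, result):
    audit_log.append({
        "user": user_id,               # who performed the action
        "terminal": terminal,          # terminal/port/network address
        "time": datetime.now(timezone.utc).isoformat(),
        "action": action,              # e.g. "create", "update", "login-failed"
        "entity": entity,              # managed object instance acted upon
        "result": result,              # outcome of the operation
    })

audit("secadmin", "10.0.0.5:23", "update", "user:netadmin", "success")
```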


• Users (with any access privilege) can view the audit logs through the management applications.

Security Administration
IQ defines a set of security administration functions and parameters that are used to implement site-specific policies. Security administration can be performed only by users with the security administrator privilege. The supported features include:
• View all users currently logged on
• Disable and enable a user account (allowed only when the user is not logged on)
• Modify user account parameters, including access privilege and password expiry time
• Delete a user account and its attributes, including the password
• Reset any user password to the system default password
• Monitor security audit logs to detect unauthorized access
• Monitor the security alarms and events raised by the network element and take appropriate actions
• Configure system-wide security administration parameters:
  • Default password
  • Inactivity time-out period
  • Maximum number of invalid login attempts allowed
  • Number of history passwords
  • Advisory warning message displayed to the user after successful login to the network element


Software Configuration Management


IQ provides the following capabilities to manage software and database images on the TN780 and Optical Line Amplifier network elements:
• Software Download on page 4-41
• Software Upgrade on page 4-41
• Database: Download/Backup/Restoration/Rebranding on page 4-43

Software Download
The IQ software, which operates both the TN780 and the Optical Line Amplifier network elements, is packaged into a single software image. The software image includes the software components required for all the circuit packs in the TN780 and Optical Line Amplifier network elements.
Users can remotely download the software image from a customer-specified FTP server to the MCM of the TN780 or the OMM of the Optical Line Amplifier network element. Once users download the software image to the MCM/OMM and initiate the software upgrade procedure, the software is automatically distributed to the remaining circuit packs within the chassis.
The network element can store up to three versions of the software image at the same time.

Software Upgrade
The network elements support in-service software upgrade. The software upgrade procedure lets users activate a different software version from the one currently active. The following software upgrade operations are supported:
• Install Software: activates the new software image version with an empty database. The software image may be older or newer than the active version.
• Upgrade Software: activates the new software image version with the previously active database. The previously active database version must be compatible with the new software image version.
• Activate Software and Database: activates a new software image version and a new database version. The database version must be compatible with the software image version. Before upgrading the software, the new database image must be downloaded to the network element.
• Restart Software: activates the current software image with an empty database.
• In-Service Rollback: allows the system to gracefully fall back, or downgrade, to a prior release in the rare event that a failure is experienced during the upgrade process.


In general, upgrading the software does not affect existing service. However, if the new software image version includes a different Firmware/FPGA version than the one currently active, the upgrade could impact existing services. If this occurs, a warning message is displayed.
Users must upgrade the software on a node-by-node basis. Therefore, at any given time, the network elements within a network may be running two or more software image versions. These different images must be compatible. In the presence of multiple software versions, the network provides only those functions that are common to all the network elements.
The software upgrade procedure:
1. Verifies that the software and database versions are compatible. If they are not compatible, the upgrade procedure is not allowed.
2. Validates the uncompressed software image. If the software image is invalid, the upgrade procedure is not allowed.
3. Decompresses the software image. If there is not enough memory on the network element to store the decompressed image, the upgrade procedure is aborted and the software reverts to the previously active software image version.
4. Reboots the network element so that the new software image becomes active. If the reboot fails, the upgrade procedure is aborted and the software reverts to the previously active software image version.
5. When the new software image is activated, updates the format of the Event Log and Alarm table alarms, if necessary.
Note: When the software is upgraded, the historical PM data is not converted to the new format (if the format has changed) and is not persisted. Therefore, before you upgrade the software, you must upload and save the PM data to your local servers.

In general, if the upgrade procedure is aborted, the software reverts to the previously active version. The procedure reports events and alarms indicating the cause of the failure.
The software upgrade is also supported when there is only one MCM or OMM in the Node Controller chassis. During the upgrade process, communication with the clients, and with other network elements within the network, is interrupted.
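To make the abort-and-revert semantics concrete, the following Python pseudocode sketches the five-step flow described above; every function name is an illustrative placeholder, not an actual IQ API:

```python
class UpgradeRejected(Exception): pass
class UpgradeError(Exception): pass

# Placeholder checks standing in for the network element's internal logic.
def versions_compatible(image, db): return db["version"] <= image["version"]
def image_valid(image): return image.get("checksum_ok", False)
def decompress(image): pass          # may raise UpgradeError if memory is short
def reboot_into(image): pass         # may raise UpgradeError if the reboot fails
def migrate_formats(): pass          # update Event Log / Alarm table formats
def revert_to_previous_image(): pass

def upgrade(image, database):
    if not versions_compatible(image, database):
        raise UpgradeRejected("incompatible software/database versions")  # step 1
    if not image_valid(image):
        raise UpgradeRejected("software image failed validation")         # step 2
    try:
        decompress(image)        # step 3
        reboot_into(image)       # step 4
        migrate_formats()        # step 5
    except UpgradeError:
        revert_to_previous_image()   # aborted upgrades revert to the prior image
        raise
```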

Remote Hardware FPGA Upgrade


The TN780 hardware modules support the ability to be remotely upgraded, including all types of:
• TAMs
• DLMs
• BMMs
The ability to remotely upgrade hardware using a controlled process is integrated in Software Release 1.2.


UTStarcom has implemented FPGAs within many of the TN780 hardware modules to take advantage of the field-updatable features of the FPGA. These FPGAs support many different features and functions within the hardware and can be remotely upgraded in the field to add features or correct design inefficiencies without requiring replacement, repair, and return of the hardware modules.
An FPGA is upgraded by updating the FPGA image, which is a list of programmable instructions that tell the FPGA how it should operate and what features it should provide. New FPGA images may (or may not) be provided within a new software release, and any enhancements to FPGA images will be identified within the Software Release Notes, which describe the functional change to the hardware that the FPGA image provides.
Note: Although the hardware upgrade can be performed from a remote location, the hardware module will require a cold reboot.

Note: The FPGA image download may be service-impacting to the targeted module.

Database: Download/Backup/Restoration/Rebranding
To ensure that the correct database is activated on a network element, the database image includes this information:
• The database version, which is used to check compatibility with the software image version. The database image version must be older than or equal to the software image version.
• The backplane ID of the network element on which the database was created.
The following database operations are supported:
• Database Download on page 4-43
• Database Backup on page 4-43
• Database Restoration on page 4-44

Database Download
Users can download a previously backed-up database file to the network element from a specified FTP server. Up to three database versions can be stored on the network element at a time. The downloaded database file does not change the current active database; it is simply stored in the persistent memory of the network element.

Database Backup
There are two database backup modes:


• Manual Database Backup: users can manually back up the current database image to the specified FTP server at any time.
• Scheduled Database Backup: users can schedule the database to be backed up automatically, at either daily or weekly intervals. Users can also specify a primary and a secondary FTP server to store the backup. By default, the database is backed up to the primary server; if that server is not available, the database is backed up to the secondary server.
The backed-up database file contains:
• The database file, which includes configuration information stored in the persistent memory of the network element
• The Alarm table stored in the persistent memory of the network element
• The Event Log stored in the persistent memory of the network element

Database Restoration
Users can perform the restore operation to activate a new database image file with the currently active software image version. The new database image file must be compatible with both the software image version and the network element. The restore operation restarts the network element and activates the new database image. Users can restore the database at system reboot time or at any time during normal operation; the database can also be restored manually.
If the restore operation fails, the software rolls back to the previously active database image and an alarm is raised indicating the failure of the restore operation. When the database is successfully restored, the alarm is cleared.
Depending on the differences between the two databases, the database restore operation could affect service. The database restoration procedure:
• Restores the configuration data as per the restored database. The configuration data in the restored database may differ from the current hardware configuration; in such scenarios, in general, the configuration data takes precedence over the hardware.
• Restores the alarms in the Alarm table by verifying the current alarm condition status. For example, if there is an alarm entry in the restored Alarm table but the condition is cleared, that alarm is cleared from the current Alarm table. On the other hand, if the alarm condition still exists, the corresponding alarm entry is stored in the current Alarm table with the original timestamp.
Note: The data in the Event Log is not restored.
The following scenarios describe cases where the configuration data in the restored database differs from the current hardware configuration, and how they are handled (a reconciliation sketch follows the scenarios):
Scenario 1: The restored database contains a managed equipment entity but there is no corresponding hardware present in the chassis. In this scenario, the corresponding equipment is considered pre-configured (refer to Circuit Pack Pre-configuration on page 4-19).


For example, consider the following sequence of operations:
• Back up the database
• Remove a circuit pack from the chassis
• Restore the previously backed-up database
After the database restoration, the removed circuit pack is pre-configured.
Scenario 2: If the restored database does not contain a managed equipment entity but the hardware is present in the network element, the managed equipment entity is created in the database as in equipment auto-configuration (refer to Circuit Pack Auto-configuration on page 4-19).
For example, consider the following sequence of operations:
• Back up the database
• Install a new circuit pack
• Restore the previously backed-up database
In this case, after database restoration, the newly inserted circuit pack is auto-configured.
Scenario 3: If the managed equipment entity exists in the database and the corresponding hardware equipment is present in the network element, but there is a configuration mismatch, an equipment mismatch alarm is reported and the operational state of the equipment is changed to out-of-service (see Operational State on page 4-21).
Scenario 4: If the restored database contains configuration information for a manual cross-connect but no such cross-connect is configured in the hardware, IQ provisions the corresponding manual cross-connect (provided the required data path resources exist) according to the configuration information in the restored database.
For example, consider the following sequence of operations:
• Back up the database
• Delete a manual cross-connect
• Restore the database
In this case, the manual cross-connect that was deleted after the database backup is recreated.
Scenario 5: If the restored database does not contain a manual cross-connect configuration, but a manual cross-connect is provisioned in the hardware, the manual cross-connect is torn down (deleted) as per the configuration information in the restored database.
For example, consider the following sequence of operations:
• Back up the database
• Create a manual cross-connect
• Restore the database
In this scenario, the manual cross-connect that was created after the database backup is deleted.
Scenario 6: If the restored database does not contain SNC configuration information, but an SNC is provisioned in the hardware, the SNC is torn down (released) by releasing the signaled cross-connects (see Dynamically Signaled SNC Provisioning on page 4-26) along the SNC path. However, it takes approximately 45 minutes to release the signaled cross-connects. Note that the SNC configuration information is stored on the source node only; the intermediate nodes contain only the signaled cross-connects.
For example, consider an SNC that spans three nodes, Node A, Node B and Node C, where Node A is the source node. Consider the following sequence of operations:
• Back up the database on Node A
• Create an SNC from Node A to Node C passing through Node B, which results in corresponding signaled cross-connects being created on Node B and Node C
• Restore the database on Node A
In this case, the restored database on Node A does not contain the SNC configuration information. However, Node B and Node C have signaled cross-connects, which are released after 45 minutes to match the restored database on Node A.
Now consider the following sequence of operations for the same network configuration as in the previous example:
• Back up the database on Node B
• Create an SNC from Node A to Node C passing through Node B, which results in corresponding signaled cross-connects being created on Node B and Node C
• Restore the database on Node B, which results in the signaled cross-connect corresponding to the SNC created after the database backup being deleted
In this scenario, since Node A contains the SNC configuration, the deleted signaled cross-connect on Node B is recreated. However, it may take up to 15 minutes for the SNC to come back up.
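The equipment-related scenarios (1 through 3) amount to a reconciliation pass between the restored database and the installed hardware. A minimal Python sketch follows, with hypothetical slot/type dictionaries standing in for the real data model:

```python
# Hypothetical sketch of the Scenario 1-3 reconciliation behavior; the
# function names and data shapes are illustrative, not an IQ interface.
def mark_preconfigured(slot): print(f"{slot}: pre-configured")          # Scenario 1
def auto_configure(slot): print(f"{slot}: auto-configured")             # Scenario 2
def mismatch_out_of_service(slot): print(f"{slot}: mismatch alarm, OOS")  # Scenario 3

def reconcile(restored, installed):
    # restored/installed: dicts mapping slot -> circuit pack type.
    for slot, cfg_type in restored.items():
        hw_type = installed.get(slot)
        if hw_type is None:
            mark_preconfigured(slot)        # entity in database, no hardware
        elif hw_type != cfg_type:
            mismatch_out_of_service(slot)   # configuration mismatch
    for slot in installed:
        if slot not in restored:
            auto_configure(slot)            # hardware present, no entity

reconcile({"3A": "DLM", "5A": "TAM"}, {"3A": "DLM", "4A": "BMM"})
```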

Database Rebranding
The database from one network element can be restored into another network element by re-branding. When an MCM is inserted into a chassis, there are two options: if the MCM was not commissioned previously, the MCM boots normally; if the MCM was commissioned previously but used in another network element, the MCM should be re-branded. For more information on re-branding, refer to the UTStarcom TN780 Turn-up and Test Guide.


IQ GMPLS Control Plane Overview


IQ provides an intelligent GMPLS control plane architecture that enables automated end-to-end management of transport capacity across the Digital Optical Network, resulting in rapid, error-free service turn-up and operational simplicity. With a simple point-and-click approach to provisioning, users need only identify the A and Z service endpoints, and the intelligent control plane automatically configures the intermediate network elements to route the transport capacity, without manual intervention.
The GMPLS control plane provides several benefits, including:
• Rapid, real-time end-to-end service provisioning
• Traffic engineering/bandwidth management at the optical layer
• Multi-service support
• Simplified service provisioning independent of network topology
• Automatic protection and restoration capabilities
The UTStarcom GMPLS control plane implementation is based on two key industry-standard protocols: OSPF-TE, an IP routing protocol, and RSVP-TE, a GMPLS signaling protocol. OSPF-TE performs network topology discovery and route computation. The RSVP-TE signaling protocol establishes a circuit along the route computed by OSPF-TE. An end-to-end circuit set up by the GMPLS control plane within a routing domain is referred to as a subnetwork connection (SNC).
The GMPLS control plane supports the following features:
• Dynamically signaled SNC provisioning.
• Establishment of 10G and 2.5G SNCs. A 10G SNC is set up for SONET OC-192, SDH STM-64, 10G Clear Channel, 10GbE LAN Phy and 10GbE WAN Phy services; a 2.5G SNC is set up for SONET OC-48, SDH STM-16 and 1GbE services.
• Provisioning of an SNC between any two tributary ports of the same type.
• Point-to-point, linear add/drop, junction site, and ring topologies.
• Service pre-provisioning; a pre-provisioned service becomes operational on installing the hardware equipment.
• Traffic engineering control utilizing constraint-based source routing.

OSPF-TE Routing Protocol


IQ utilizes the OSPF-TE routing protocol to discover the Digital Optical Network topology and to perform route computation utilizing the Constrained Shortest Path First (CSPF) algorithm. The OSPF-TE implementation is based on OSPF v2 (IETF RFC 2178 and RFC 3630).


Network Topology
IQ utilizes OSPF-TE to discover the Digital Optical Network topology. It models the topology by defining the following elements:
• A routing node, which corresponds to a network element within the Digital Optical Network.
• A control link, which corresponds to OSC communication between adjacent routing nodes or network elements. There is one control link for each fiber; so, in the case of multi-chassis, multi-fiber sites, there are multiple control links between adjacent network elements.
• A GMPLS link, which corresponds to transport capacity between adjacent TN780s. There is one GMPLS link for each fiber; so, in the case of multi-chassis, multi-fiber sites, there are multiple GMPLS links between adjacent network elements. Each GMPLS link supports up to 400Gbps of transport capacity, which maps to four OCGs or four Traffic Engineering (TE) links.
Within the Digital Optical Network, a routing node corresponds to a network element, which could be a TN780 or an Optical Line Amplifier; a control link corresponds to OSC communication between adjacent network elements (TN780 or Optical Line Amplifier); and a GMPLS link corresponds to the digital link between adjacent TN780 network elements.
IQ defines two topology maps:
• Physical Network Topology: the physical network topology is defined by the topology of the OSC, which provides the communication path for the routing and signaling protocols between network elements. The physical network topology mirrors the physical fiber connectivity between the network elements; thus the topology elements include all network elements, TN780 and Optical Line Amplifier, and the control links corresponding to the fiber connecting the network elements. (See Figure 4-10 on page 4-48.)
Figure 4-10 Physical Network Topology
[Figure: network elements connected by fiber/control links (OSC)]

However, independent of the physical fiber connectivity, customers can create topology partitions, where each partition represents a contiguous routing and signaling domain. The topology partitions are created by disabling the OSPF interface. In Figure 4-11 on page 4-49, Domain 1 and Domain 2 are two topology partitions created by disabling GMPLS between network element C and network element D. Note that in Release 1.2, SNCs spanning two topology partitions are not supported; the partitions are operated as two separate networks.


Figure 4-11 Single Network with Topology Partition

• Service Provisioning Topology: the service provisioning topology is a higher-layer logical topology providing users a view of the topological nodes where services can be terminated, groomed or amplified, and the associated digital links between them. In a Digital Optical Network, the service provisioning topology consists of TN780 network elements and the digital links between them; thus, in a service provisioning topology, all Optical Line Amplifiers are eliminated. Figure 4-12 on page 4-49 illustrates the service provisioning topology of the physical topology shown in Figure 4-10 on page 4-48.
Figure 4-12 Service Provisioning Topology
Users can view the physical network topology, referred to as the physical view, and the service provisioning topology, referred to as the provisioning view, through the management applications.
Thus, the physical topology represents the topology of the control plane traffic (e.g., OSPF-TE messages) and management plane traffic (messages exchanged

MPower Graphical Node Manager

Users can view the current route of a circuit.
Users can assign a circuit ID to each cross-connect or SNC for end-to-end circuit management. The circuit ID is a logical name given to the circuit; by assigning circuit IDs, users can manage a circuit spanning multiple network domains (a Digital Optical Network or any other network).
For the Protection Group Manager, the following functions are supported:
• The Protection Group Manager can be launched from the Equipment Manager and the top-level menu bar.
• The available termination points are displayed, allowing users to select end-points in order to create preferred working and standby end-points.
• Users can assign a name to each protection group; assigning unique names makes protection groups easier to manage.
• Protection group validation: an EMS feature that allows the user to validate that the protection units selected for the local and remote nodes are available.
• Ease of troubleshooting with the ability to:
  • Launch context-sensitive menus, such as alarms, facilities, and cross-connects
  • Filter protection groups
Note: Users can export the cross-connect, circuit and protection group inventory in TSV file format.

Performance Management
The MPower GNM provides a user interface to support the performance management functions supported by IQ, as described in Performance Monitoring and Management on page 4-31. In addition:
• Users can reset PM counters locally and view the delta between the current value and the last reset value.
• The PM data can be refreshed automatically at configured intervals.
• Users can monitor the PM data from the Circuit Manager.
• Both real-time and historical PM data are displayed to the user.

Security Management
The MPower GNM provides a user interface to perform user access and security management procedures
supported by the IQ as described in Security and Access Management on page 4-35.

IQ Network Operating System

• Usable capacity of the link, based on the hardware and software state
• Available capacity of the link for new service requests
Additionally, users can provision the admin weight, or cost, for the control link. The control link cost denotes the desirability of the link for routing control traffic and management traffic: the lower (numerically) the cost, the more desirable the link.
All the traffic engineering parameters described above are exchanged between the network elements as part of the topology database updates.

Constrained Shortest Path Route Computation


OSPF-TE performs SNC route computation utilizing the CSPF (Constrained Shortest Path First) algorithm. CSPF provides the following benefits (see the sketch after this list):
• Routes SNCs around known bottlenecks or points of congestion in the network.
• Provides precise control over how traffic is rerouted when the primary path is faced with single or multiple failures.
• Provides more efficient use of available aggregate bandwidth and long-haul fiber by ensuring that subsets of the network do not become overutilized while other subsets along potential alternate paths remain underutilized.
CSPF considers all the traffic engineering parameters described in Traffic Engineering on page 4-49 while performing SNC route computation. In the presence of multiple routes, the least-cost route is selected (based on the user-configured cost of the GMPLS link).
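Conceptually, CSPF prunes links that cannot satisfy the request's constraints and then runs a least-cost shortest-path computation over what remains. The following Python sketch uses bandwidth as the only constraint; the real implementation considers all the traffic engineering parameters:

```python
import heapq

def cspf(links, src, dst, bandwidth):
    # links: iterable of (node_a, node_b, cost, available_bandwidth) tuples.
    adj = {}
    for a, b, cost, avail in links:
        if avail >= bandwidth:                 # constraint: enough capacity
            adj.setdefault(a, []).append((b, cost))
            adj.setdefault(b, []).append((a, cost))
    heap, seen = [(0, src, [src])], set()      # Dijkstra over the pruned graph
    while heap:
        dist, node, path = heapq.heappop(heap)
        if node == dst:
            return dist, path                  # least-cost feasible route
        if node in seen:
            continue
        seen.add(node)
        for nxt, cost in adj.get(node, []):
            if nxt not in seen:
                heapq.heappush(heap, (dist + cost, nxt, path + [nxt]))
    return None                                # no route satisfies the constraint

# Example: a 2.5G request routed over two 100-cost links.
print(cspf([("A", "B", 100, 10), ("B", "C", 100, 10)], "A", "C", 2.5))
```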

GMPLS Signaling (RSVP-TE)


The RSVP-TE signaling protocol is used to establish an SNC along the route computed by OSPF-TE. The computed route is specified as an explicit route object in the RSVP-TE signaling messages. The SNC is established when the RSVP-TE signaling messages are exchanged successfully between all nodes. If the SNC setup fails due to failures in the network, IQ reports appropriate error messages through the management applications and retries the SNC setup periodically until the setup succeeds or the user chooses to delete the SNC. For every retry, a new route is computed and the SNC is set up along the new route computed by OSPF-TE.
Once the SNC is established, it is not deleted unless the user explicitly requests its deletion.

Handling Fault Conditions


The GMPLS control plane monitors and detects fault conditions that impact service availability and takes the necessary actions. Following are some faults detected by the GMPLS control plane:
• Lower-layer hardware or connectivity failures resulting in reduced bandwidth availability: such fault conditions cause the OSPF-TE protocol to advertise the new available bandwidth. However, SNCs that are already established are neither deleted nor rerouted; when the fault condition is cleared, the SNCs resume their operation.
• Faults, such as fiber cuts, resulting in topology partition: such fault conditions result in topology database updates. However, SNCs that span partitioned topologies will not provide service; the SNC becomes operational after the fault condition is cleared.

Topology Configuration Guidelines


OSPF v2 (RFC 2178) does not specify any guidelines for the number of routers in an area or the best way to architect an OSPF network. Customers must design their OSPF networks based on their specific application and/or constraints. Every network element in the network adds routing control traffic to OSPF and increases the load on the CSPF computation algorithm.
Note that all control and GMPLS links are, by default, associated with area 0.0.0.0. The area ID is not configurable.

Control Link Configuration


The control link between adjacent network elements is enabled by:
• Provisioning the BMMs on each network element.
• Provisioning the OSC IP address on either side of the control link. The OSC IP address must be routable and unique within a routing and signaling domain; however, it can be an internal (unregistered) IP address. Also, the subnetwork mask must be identical on both ends of the control link. This ensures that both ends of the control link are on the same subnet (see the sketch after this list).
• Provisioning the control link (OSPF) cost as per the desired network design. Configuration of the OSPF cost is optional; a default value of 100 is assigned for every control link.
• Enabling the OSPF interface on either side of the control link.
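The same-subnet rule for the two OSC endpoints is easy to check with Python's standard ipaddress module; the addresses and mask below are placeholders:

```python
import ipaddress

# Both OSC endpoints of a control link must share the same mask and subnet.
def same_control_subnet(ip_a, ip_b, mask="255.255.255.252"):
    net_a = ipaddress.ip_network(f"{ip_a}/{mask}", strict=False)
    net_b = ipaddress.ip_network(f"{ip_b}/{mask}", strict=False)
    return net_a == net_b

# Example: adjacent ends of one control link (placeholder addresses).
assert same_control_subnet("10.0.0.1", "10.0.0.2")
assert not same_control_subnet("10.0.0.1", "10.0.0.5")
```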

GMPLS Link Configuration


The GMPLS link includes the various configurable traffic engineering parameters described in Traffic Engineering on page 4-49.


IQ Management Plane Overview


IQ provides a highly available, reliable and redundant management plane communications path, which connects the network operations centers (NOCs) to the physical transport network and meets diverse customer needs. The management plane includes:
• Direct DCN (Data Communications Network) access, where the NOC is connected to the network element through a DCN, typically an IP-based network. The DCN is designed so that there is no single point of failure within the DCN. (See DCN Communication Path on page 4-53.)
• In-band access through a Gateway Network Element (GNE), where a network element is accessed through another network element acting as a gateway to transport the management traffic over the OSC control link between the network elements. (See Management Application Proxy on page 4-56.)
• Static routing to access external networks that are not within the DCN. (See Static Routing on page 4-58.)
• Telemetry access utilizing a dial-up modem, which provides users remote access through the serial port on the network element.
The IQ management plane supports the Network Time Protocol (NTP) to provide accurate time stamping of alarms, events and reports from the network element. (See Time-of-Day Synchronization on page 4-59.)

DCN Communication Path


As described in Management Interfaces on page 3-17, the TN780 and Optical Line Amplifier network elements provide two redundant, auto-negotiating 10/100Mbps Ethernet RJ45 interfaces, referred to as the DCN ports.
In a redundant configuration, the DCN-A port is controlled by the MCM-B in slot 7A of the DTC (referred to as the Primary MCM), or the OMM in slot 1A of the OTC (referred to as the Primary OMM). Similarly, DCN-B is controlled by the MCM-B in slot 7B of the DTC (referred to as the Secondary MCM) or the OMM in slot 1B of the OTC (referred to as the Secondary OMM).
As shown in Figure 4-14 on page 4-54, Ethernet cables from each of the DCN ports must be connected to a single Ethernet switch or hub. No other physical connectivity from the DCN port is supported at this time.
In the presence of both Primary and Secondary MCM-Bs or OMMs, only one MCM-B or OMM is active. The active MCM-B or OMM processes the management traffic received from the DCN. IQ supports only one DCN IP address. The management traffic is received either through the DCN-A port or the DCN-B port. The DCN IP address maps to the MAC address of the Primary or Secondary MCM-B or OMM, based on the active DCN port through which the management traffic is received. The DCN IP address is configurable through the CCLI application during network element turn-up.


Figure 4-14 Redundant DCN Connectivity
[Figure: the MPower EMS and an FTP server reach Nodes A through E over DCN routers and a switch/hub; the DCN-A and DCN-B ports of the DTN main chassis connect to the active and stand-by MCMs]

DCN Link Failure Recovery


In the example shown in Figure 4-14 on page 4-54, with an active MCM-B or OMM and a stand-by MCM-B or OMM present, the active MCM-B or OMM processes the management traffic received through the DCN-A port. The DCN IP address is mapped to the MAC address of the active MCM-B or OMM.
When there is a failure in the link between the DCN-A port and the switch/hub, as shown in Figure 4-15 on page 4-55, the active MCM-B or OMM detects the failure by monitoring the DCN-A port link status. On detecting the link failure, the active MCM-B or OMM disables its Ethernet link to the DCN-A port and enables the Ethernet link between itself and the stand-by MCM-B or OMM. The active MCM-B or OMM then sends a gratuitous ARP request (i.e., an ARP request for the network element's own DCN IP address) through the stand-by MCM-B or OMM in order to refresh the ARP entry in the switch/hub, so that the DCN IP address maps to the MAC address of the stand-by MCM-B or OMM. At this point the active MCM-B or OMM receives the management traffic through the DCN-B port. A sketch of such a gratuitous ARP refresh follows.
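For illustration, a gratuitous ARP of this form can be constructed with the scapy packet library; using scapy here is an assumption of this sketch, since the MCM's actual mechanism is internal to IQ:

```python
from scapy.all import ARP, Ether, sendp  # assumes the scapy package is installed

# Gratuitous ARP: a broadcast ARP request whose sender and target protocol
# addresses are both the DCN IP, so the switch/hub re-learns which MAC
# currently owns that address.
def send_gratuitous_arp(dcn_ip, mcm_mac, iface="eth0"):
    pkt = Ether(dst="ff:ff:ff:ff:ff:ff", src=mcm_mac) / ARP(
        op=1,            # ARP request
        hwsrc=mcm_mac,   # MAC of the MCM/OMM now serving the DCN IP
        psrc=dcn_ip,     # sender IP == target IP: gratuitous
        pdst=dcn_ip,
    )
    sendp(pkt, iface=iface, verbose=False)
```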


Figure 4-15 DCN Link Failure Recovery

Note: Link failures between the switch/hub and the DCN routers are not detected by the network element, nor will the network element provide any redundant path for them. It is assumed that the customer will deploy routers which provide the necessary redundancy to handle such failures.

MCM-B/OMM Failure Recovery


As described in DCN Link Failure Recovery on page 4-54, assume that the active MCM-B or OMM is receiving the management traffic through the DCN-A port. If the active MCM-B or OMM fails, as shown in Figure 4-16 on page 4-56, the stand-by MCM-B or OMM becomes active and sends a gratuitous ARP request (i.e., an ARP request for the network element's DCN IP address) in order to refresh the ARP entry in the switch/hub, so that the DCN IP address maps to its MAC address. At this point the newly active MCM-B or OMM receives the management traffic through the DCN-B port and also processes the packets.


Figure 4-16 MCM/OMM Failure Recovery
[Figure: after the active MCM of the DTN main chassis fails, the stand-by MCM becomes active; the MPower EMS and FTP server reach Nodes A through E over DCN routers and a switch/hub via the DCN-B port]

Management Application Proxy


IQ provides GNE capability, similar to that defined in the GR-253 specification, in order to support in-band access to the network element as opposed to DCN access. In-band access is typically used where either DCN access is not available (e.g., intermediate huts where a Digital Repeater might be installed) or where DCN bandwidth needs to be conserved.
Additionally, IQ has enhanced the GNE capability to support a variety of management protocols. The enhanced GNE capability provided by IQ is called the Management Application Proxy, often referred to as MAP. The MAP provides the ability to manage network elements that are not directly DCN-addressable through network elements that are directly DCN-addressable.
The MAP supports the following functions (also see Figure 4-17 on page 4-57):
• GNE: the GNE is a network element that is directly IP-addressable from the DCN. The GNE provides management proxy services to any network element within the same routing domain as the

MPower Management Software

Dynamic Seed File Editor


In the Tools menu of the EMS client, the dynamic Seed File Editor allows the user to:
• Add a NodeInfo entry
• Delete a NodeInfo entry
• Add a ManagementDomain
• Delete a ManagementDomain
Two or more users can simultaneously view the seed file, but only one can save the changes made.
Note: Only a user with the security administrator privilege can open the Seed File Editor from the menu item.

Discovery Key Ring


The MPower server discovers the network elements and the topology by establishing connectivity to all the network elements within the purview of the administrative domains specified in the configuration database file. For example, for the WestRoute administrative domain shown in Figure 5-8 on page 5-16, the MPower server establishes connectivity to Node 14, Node 15, Node 26 and Node 27.
The MPower server requires that a user ID and password be configured on the network element in order to establish connectivity. This user account, referred to as the MPower server account, is reserved for the MPower server to communicate with the network element. The user must ensure that at least one MPower server account is configured on each network element and that it meets the following requirements:
• All privileges enabled
• Password change enforcement disabled
• Account not locked
• Inactivity timer disabled
• Password expiration disabled
• MPower server access to this account enabled
Note: A default MPower server specific user account (with user-ID emsadmin and password
Infinera1) is created in the network element. However, by default, the account is disabled.
The user may enable this pre-defined account or create a new MPower server specific
account using the management interfaces, such as MPower GNM or TL1.
The MPower server must be provided with a list of MPower server accounts created on the network
elements to which it must establish connectivity. The MPower server provides a user interface so that an
EMS User with Security Administrator privilege can configure this list of user-ID and password, referred to
as the discovery key ring.

IQ Management Plane Overview

launched from the MPower GNM to access all network elements within the purview of a network element through the Craft Ethernet and Craft Serial interfaces.
• FTP Protocol: the MAP service on the GNE and SNE relays FTP protocol messages by listening on a dedicated FTP proxy port, 10021. This capability enables communication between the FTP client on the SNE and the EMS or an external FTP server through the GNE. The FTP client is used to upload performance monitoring data, download software, etc.

Configuration Settings
IQ provides several configuration options so that customers can design their DCN and management communication access to meet their needs. The following configuration options are provided:
• MAP Enabled: users must set this option to enable MAP services on a network element.
• Primary GNE IP Address: the Primary GNE IP Address is configured on SNEs that do not have a DCN IP address assigned. The Primary GNE IP Address is the Router ID (also known as the GMPLS Node ID) of the GNE in the same domain as the SNE. If more than one GNE exists in the same domain, it is recommended that the GNE closest to the SNE, in terms of hops, be selected as the primary GNE. The primary GNE's main function is to upload the historical performance monitoring data.
• Secondary GNE IP Address: as with the Primary GNE IP Address parameter, the Secondary GNE IP Address is configured on SNEs. The Secondary GNE IP Address is the Router ID (also known as the GMPLS Node ID) of a GNE within the same domain as the SNE. The SNE accesses the Secondary GNE if the Primary GNE is not available. It is recommended to choose as the Secondary GNE the GNE which:
  • Is the next closest network element, in terms of number of hops, from the SNE
  • Provides a completely separate path from the SNE to the management station. In other words, the inability to reach the Primary GNE should never mean that the Secondary GNE is also unreachable, and vice-versa.

Static Routing
IQ provides a static routing capability. One application of static routes is to enable the network elements to reach external networks that are not part of the DCN. As shown in Figure 4-18 on page 4-59, the NTP server may be located in an external network, outside of the DCN. In this scenario, users can configure static routes to the external networks.


Figure 4-18 Using Static Routing to Reach External Networks
[Figure: Routers A through D connecting the network elements to an external network]

Time-of-Day Synchronization
IQ provides accurate and synchronized timestamps on events and alarms, ensuring proper ordering of alarms and events at both the network element and network levels. The synchronized timestamps ease network-level debugging and eliminate the inaccuracies caused by manual configuration of the system time on each network element. The timestamp complies with the UTC format found in ISO 8601 and includes granularity down to seconds.
IQ supports Time-of-Day Synchronization by implementing an NTP client, which ensures that IQ's system time is synchronized with the specified NTP server operating in the customer network, and thereby with Coordinated Universal Time (UTC). IQ also implements an NTP server, so that one network element may act as an NTP server to other network elements that do not have access to the external NTP server. As shown in Figure 4-19 on page 4-60, typically a GNE (node GNE-A) is configured to synchronize to an external NTP server in the customer network, and the SNEs (nodes SNE-A, SNE-B, and SNE-C) are configured to synchronize to the GNE.


Figure 4-19 NTP Server Configuration
[Figure: the SNEs use the GNE as their NTP server; the GNE uses the external NTP server reached over the DCN]

The TN780 and Optical Line Amplifier network elements also provide a local clock with an accuracy of 23 ppm, or about a minute of drift per month (verified in the sketch below). If the GNE (with NTP enabled) fails to access the external NTP server, IQ NTP (client and server) uses the local clock as a time reference. When the connectivity to the external NTP server is restored, the IQ NTP client and server on the GNE re-synchronize with the external NTP server, and the new synchronized time is propagated to all the network elements within the routing domain.
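A quick check of the "about a minute per month" figure for a 23 ppm clock:

```python
# 23 parts per million of drift, accumulated over a 30-day month.
PPM = 23e-6
SECONDS_PER_MONTH = 30 * 24 * 3600            # ~2.59 million seconds
drift = PPM * SECONDS_PER_MONTH
print(f"worst-case drift: {drift:.0f} s/month")   # ~60 s, about a minute
```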
Following are some recommendations for configuring the NTP server within a Digital Optical Network:
• Configure one external NTP server with Stratum Level 4 or higher for each routing domain of a Digital Optical Network.
• Configure the GNE and SNE network elements to point to the external NTP server. If required, configure static routes on the GNE and SNE network elements to reach the external NTP server through the DCN port.
• Configure the SNEs to point to the GNE as their NTP server.

MPower Management Software

• Alarms (raise and clear) reported on a control link or GMPLS link (e.g., alarms reported due to a fiber cut)
• Loss of connectivity between the MPower server and the network element
During topology re-discovery, the MPower server discovers all the network elements specified in the configuration database as well as the dynamically discovered network elements stored in the persistent database. If any of the network elements or links are not available, it dynamically updates the network view displayed to the user, with color coding providing a visual indication of the network problems. When the network problem is corrected, it performs network re-discovery to discover the changes in the network and displays the updated network view to the user.

Network Topology Views


The MPower server provides several network views so that users can easily perform various management functions. The network views provide the following features:
• Hierarchical (see Hierarchical Network Topology View on page 5-25) and functional (see Functional Network Topology View on page 5-26) views of the managed network.
• Ability to launch context-sensitive applications and tools, such as the alarm manager, equipment manager, performance management, etc., from various points in the topology view.
• Real-time updates to the network topology based on configuration changes and alarm status in the network, such as addition/deletion of network elements, addition/deletion of control/GMPLS links, and changes in the alarm severity on a network element or control/GMPLS link.
• User-customizable background maps in the topology view.

Hierarchical Network Topology View


The hierarchical view of the network enables users to perform operations and manage the network at different levels. The network hierarchy includes:
• The entire network managed by the MPower server
• The administrative domains within the network
• The network elements within each administrative domain
• The chassis within each network element
• The circuit packs and other hardware equipment within each chassis
The MPower server provides a context-sensitive user interface so that users can launch tools and applications at the various levels. For example, when the alarm manager application is launched at the network level, users can view and manage the alarms for all the network elements managed by the MPower server. When the alarm manager application is launched at the circuit pack level, only the alarms reported by that circuit pack are displayed.


Functional Network Topology View


The functional network view provides an intuitive interface for users to perform OAM&P functions. The following two views are provided:
• Physical View: displays the physical topology of the network, which includes all network elements (both TN780 and Optical Line Amplifier) and the control links between them. The control link display in the physical view is color-coded to indicate the current alarm state of the control link. The alarm state is determined by the alarms active on the OTS and OSC termination points in the network elements associated with the link. In addition to the alarm state, the link color also conveys reachability information, which indicates the ability of the MPower server to communicate with the network elements and the OSC status between the network elements.
• Provisioning View: displays the TN780 network elements and the GMPLS links that participate in service provisioning. Each GMPLS link represents eight unidirectional (four in each direction) 100Gbps traffic engineering links on which services are provisioned. Users can view GMPLS link utilization information, such as maximum bandwidth, available bandwidth and used bandwidth. The GMPLS links are also color-coded to indicate the alarm state, which is determined by the alarms active on the OTS, OSC and DLM OCG termination points in the network elements associated with the link. Release 1.2 also gives the user the added option of saving the provisioning map view within MPower EMS.

Network-level OAM&P Functions


The MPower server provides OAM&P functions which can be performed at the network level, in addition to all the network element-level functions provided by the MPower GNM, as described in MPower Graphical Node Manager on page 5-3. The network-level functions include:
• Network-wide real-time fault management and monitoring, including current alarm summary, historical event logs, and threshold crossing alerts (see Network Level Fault Management on page 5-26)
• Network-wide inventory management, including equipment, facility, circuit layout inventory, and state information (see Network Level Inventory Management on page 5-27)
• Point-and-click end-to-end provisioning and circuit inventory views, with correlated alarm status (see Network Level Fault Management on page 5-26)
• Web-accessible historical network performance reports (see Cross-Connect Circuit Trace on page 5-30)

Network Level Fault Management


The MPower EMS server supports all the network element-level functions. The following additional network-level functions are supported:
• Ability to monitor network-wide alarms and events, with synchronized time stamping using the NTP protocol supported by the network elements.


• A visual display of the current alarm summary at the entire network level (the entire UTStarcom network managed by a given MPower server), the administrative domain level, or the network element level.
• A historical listing (up to 90 days by default) of alarms and events, which can optionally be exported in TSV file format. Users can configure the event log size before starting the MPower server.
• Ability to define custom filters, which help in analyzing the historical data and therefore enable quick problem resolution.
• Integrated search and sorting.
• Ability to export:
  • All alarms to file
  • All events to file
  • The current view of alarms to file
  • The current view of events to file

Network Level Inventory Management


MPower EMS provides users various inventory information:
Network-wide inventory reporting of all managed resources, including:
Equipment inventory
Termination point (facility) inventory
Circuit inventory
Cross-connect inventory
TE link inventory where each link refers to 100Gbps logical link within a digital link
Network element inventory
All inventory information can be exported in TSV flat file format.
Context sensitive launching of applications from the inventory window
Detailed physical-layer circuit tracing (display of all supporting TN780 cross-connects)

End-to-end Circuit Provisioning


MPower EMS provides a simple point-click interface to provision an end-to-end circuit between two
network elements within the purview of the EMS. MPower EMS also displays a list of all network elements,
list of possible destination network elements for a given source network element, list of possible
destination endpoints for a given source endpoint.

UTStarcom Inc.

TN780 System Description

Release 1.2

Page 5-4

MPower Graphical Node Manager

The following sections provide highlights of the features supported by the MPower GNM. For a detailed
description of how to use MPower GNM to manage the network element, refer to UTStarcom MPower
GNM User Guide.
Graphical User Interface on page 5-4
Inventory Manager on page 5-10
Network Topology Display on page 5-11
Software Configuration Management on page 5-11
Service Provisioning on page 5-13
Performance Management on page 5-14
Security Management on page 5-14

Graphical User Interface


The MPower GNM provides an intuitive, easy-to-use Graphical User Interface (GUI). The Java-based
implementation of MPower GNM provides a native look-and-feel on Windows and Solaris platforms.

TN780 System Description

Release 1.2

UTStarcom Inc.

MPower Management Software

Page 5-5

Figure 5-2 MPower GNM Main View (showing the Main Menu, Equipment Tree, Equipment View, Workspace Area, Quick View Browser, Alarm Manager and Status Bar)

MPower GNM Features in Release 1.2


The graphical user interface includes a graphical representation of the TN780 and Optical Line Amplifier in the Equipment View window; a drill-down tree view of the network element, chassis and circuit packs in the Equipment Tree window; a Quick View Browser displaying summary information for the hardware equipment selected in the Equipment Tree or Equipment View; and a list of current outstanding alarms in the Alarm Manager window.
Support for multi-window display for many of the features and functions. There are two types of windows:
Modal window: once opened, the action must be completed before another window can be opened.
Non-modal window: allows the user to open other windows to view multiple objects. Non-modal windows include:
BMM properties
BMM span


BMM Optical Carrier Group properties


DLM properties
DLM optical channels
DLM Optical Carrier Group properties
Circuit properties
Cross-connect properties
Figure 5-3 Multi-window display

A topology view of all the network elements in the same network neighborhood as the target network element into which the MPower GNM is logged.
Support for MCM redundancy. The redundancy state is displayed on the card (act and stby), and new pop-up menu items, Switchover and Make Standby, have been added. The Quick View Browser indicates the redundancy state of the selected MCM.


Figure 5-4 MCM Redundancy Support (the redundancy state of each MCM is shown on the card and, when an MCM is selected, in the Quick View panel; the new Switchover and Make Standby pop-up menu items are also shown)

Support for the TAM-4-1G, allowing both provisioning and pre-provisioning.


Tributary support for 10G clear channel. When the provisioned service type of the tributary port is set to 10G Clear Channel, the client termination point carries transparent traffic and the termination point created on the TN780 is of GIGE type. When a user launches the client termination point properties, the dialog is the same as for 10GBE_LAN; the difference is that the configured service type is 10G Clear Channel.


Figure 5-5 10G Clear Channel Service Type

Protection Group Manager window, allowing the creation and deletion of protection groups. The Protection Group Manager features right-click accessible menu options for individual protection groups.


Figure 5-6 Protection Group Manager

Support for the 80-channel BMM. When the BMM properties are selected for an 80-channel BMM, the BMM OCG Port field is numbered 1 through 8.
Support for Nodal Control and Timing (NCT) ports used in a multi-chassis configuration.


Figure 5-7 NCT Ports on MPower GNM (showing the NCT ports in the Equipment View and the NCT entries in the Equipment Tree)

Support for new TOMs:


TOM-10G-IR2
TOM-2.5G-IR1
TOM-1G-LX
User-friendly, context-sensitive application launching to perform actions with fewer mouse clicks.

Inventory Manager
The MPower GNM includes Inventory Manager applications through which users can monitor and also
manage various resources in the network element. The following inventory applications are provided:
Equipment Manager: to view and manage the equipment inventory, including chassis and circuit packs.


Facility Manager: to view and manage the inventory of termination points, including physical ports and logical ports.
Cross-connect Manager: to view and manage manual static cross-connects and signaled cross-connects, which are described in Service Provisioning on page 4-23.
Circuit Manager: to view and manage dynamically signaled SNCs, described in Dynamically Signaled SNC Provisioning on page 4-26.
Protection Group Manager: to view and manage the protection groups described in Protection Group Provisioning on page 4-28.
The inventory information is displayed as a table from which users can perform context-sensitive launching
of other applications.

Network Topology Display


The MPower GNM displays the physical topology of the Digital Network which includes TN780 and Optical
Line Amplifier network elements that are in the same routing domain as the network element the MPower
GNM is logged into.
Topology is displayed under the Network Neighborhood tree.
Linear, junction site and ring topologies are supported.
Topology is displayed as digital segments where each digital segment consists of two TN780 network elements and all Optical Line Amplifiers between the two TN780 network elements.
When the topology nodes are selected the corresponding network element summary information is
displayed in the quick view window.
Users can right-click on the nodes in the topology view and launch MPower GNM for those network
elements.

Software Configuration Management


The MPower GNM includes a Software Configuration Manager application supporting the following functions:
Upload/download software image. Up to three software versions can be stored on the network element.
Delete software image.
Compress software image.
Download database.
Configure periodic database backup by uploading to the user-configured FTP server.
Compress database.
Fresh install a new software image.


Upgrade to a new software image which will use the currently active database.
Restart the currently running Software Image with a new empty database.
Activate new software image and new database in one click.
Back up the database locally on the network element.
In-service software rollback.
Display of whether a particular software image upgrade/downgrade is service-affecting or non-service-affecting. In general, the software upgrade/downgrade is non-service-affecting.

Fault Management
The MPower GNM supports Alarm Manager application to manage and view the alarms reported
asynchronously by the network element, and Event Manager to monitor the Event Log maintained by the
IQ, as described in Event Log on page 4-10. The MPower GNM also provides user interfaces and user
access to all the fault management functions provided by the IQ as described in Fault Management on
page 4-2. Additionally, the MPower GNM supports the following features:
Alarm manager application to view and manage all outstanding alarms along with alarm details: probable cause, severity, source, time of occurrence, etc.
Provides the ability to export:
All alarms to file
All events to file
Current view of alarms to file
Current view of events to file
Event log application to view the events logged by the network elements.
Real-time updates to the current alarms and events in the alarm manager and event log, respectively.
Context sensitive alarm summary display based on selected managed object entity. For example,
users can right-click on the chassis and circuit packs, and select the 'Show Alarms' and 'Show
Events' menu options. The Alarm Manager and Event Manager tables are updated to show the
alarms or events, respectively, for the selected Equipment.
Color coded alarm display based on the alarm severity.
Several pre-defined display filters so that users can monitor a specific category of alarms.
Ability to acknowledge alarms. Alarms that have been acknowledged will have a check mark in the
Ack field of the alarm.
Ability to navigate from an active alarm display in the alarm manager window to the source of the
alarm.
Users can export the alarms and events in TSV file format.
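Because each of these exports is a flat tab-separated file, downstream processing is simple. The following Python sketch shows one way such a file can be produced; the column names and alarm values are illustrative only, not the actual MPower export schema.

import csv

# Hypothetical alarm records; the real MPower column set may differ.
alarms = [
    {"severity": "CR", "source": "1-A-3 (DLM)", "probableCause": "OLOS",
     "timeOfOccurrence": "2004-11-02 10:15:03", "acknowledged": "yes"},
    {"severity": "MN", "source": "1-A-1 (BMM)", "probableCause": "HIGH-BER",
     "timeOfOccurrence": "2004-11-02 10:17:44", "acknowledged": "no"},
]

def export_alarms_tsv(records, path):
    # Write one alarm per row, tab-separated, with a header row.
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(records[0].keys()),
                                delimiter="\t")
        writer.writeheader()
        writer.writerows(records)

export_alarms_tsv(alarms, "current_alarms.tsv")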


Equipment Configuration and Management


The MPower GNM provides user interfaces to configure and manage network element equipment which
includes chassis, circuit packs and termination points, as described in Equipment Management and
Configuration on page 4-15. In addition, the MPower GNM supports the following functions:
Enables users to configure and view the physical chassis/rack deployment.
Template-based equipment configuration to automate and simplify the equipment configuration procedures.
Graceful handling of scenarios where multiple users access and configure the same managed object instance. When a user tries to modify a managed object instance that is being modified by another user at the same time, the network element warns the user of possible overwriting and performs the action only if the user accepts the overwrite.
Users can export the equipment inventory in TSV file format.

Service Provisioning
The MPower GNM provides user interfaces to provision and manage services supported by the IQ as
described in Service Provisioning on page 4-23. It includes Cross-connect Manager to provision and
manage manual cross-connects, Circuit Manager to provision and manage Dynamically Signaled SNCs,
and the Protection Group Manager to provision and manage protection groups. The following functions are
supported to simplify the service provisioning and management procedures:
For the Cross-connect Manager:
The Cross-connect Manager can be launched from the Equipment Manager and the top level
menu bar.
The available end points are displayed, allowing users to select end-points in order to create
cross-connects.
Users can view PMs for a selected cross-connect.
Users can assign a circuit ID to each cross-connect for end-to-end management. The circuit ID
is a logical name given to the cross-connect.
For the Circuit Manager
The Circuit Manager can be launched from the Equipment Manager and the top level menu bar.
The available termination points are displayed, allowing users to select end-points in order to
create circuits.
Users can select to use pre-provisioned capacity, a feature that allows a circuit to be provisioned with a minimum of pre-provisioned equipment.
Users can select to use the local DLM route only, a feature that, when enabled, allows route computation to utilize equipped and unequipped DWDM capacity.


Users can view the current route of a circuit.


Users can assign a circuit ID to each cross-connect or SNC for end-to-end circuit management.
The circuit ID is a logical name given to the circuit. Users can manage a circuit spanning multiple
network domains (Digital Optical Network or any other network) by assigning circuit IDs.
For the Protection Group Manager, the following functions are supported:
The Protection Group Manager can be launched from the Equipment Manager and the top level
menu bar.
The available termination points are displayed, allowing users to select end-points in order to
create preferred working and standby end points.
Users can assign a name to each protection group; protection groups are easier to manage when each is given a unique name.
Protection group validation. An EMS feature that allows the user to validate that the protection
units selected for local and remote nodes are available.
Ease of troubleshooting with the ability to:
Launch context sensitive menus, such as alarms, facilities, and cross connects
Filter protection groups.
Note: Users can export the cross-connect, circuit and protection groups inventory in TSV file format.

Performance Management
The MPower GNM provides a user interface to support performance management functions supported by
IQ as described in Performance Monitoring and Management on page 4-31. In addition:
Users can reset PM counters locally and view the delta between the current value and last reset
value.
Automatically refresh the PM data at configured intervals.
Users can monitor the PM data from the Circuit Manager.
Both real-time and historical PM data are displayed to the user.

Security Management
The MPower GNM provides a user interface to perform user access and security management procedures
supported by the IQ as described in Security and Access Management on page 4-35.


MPower EMS
The MPower EMS is robust, real-time management software used to administer and manage Digital Optical Networks. MPower EMS provides end-users in the NOC with integrated network-level and network-element-level functions, including fault and performance management, circuit provisioning, configuration, topology and inventory management, testing and maintenance functions, and security management. The MPower EMS provides the following functions:
Ability to manage the network independent of physical network deployment.
Automated network topology discovery and drill-down topology displays with integrated real-time alarm status updates (see Network Topology Discovery on page 5-17).
Enhanced network-level OAM&P functions (see Network-level OAM&P Functions on page 5-26).
MPower server security and access management based on Telcordia GR-815-CORE standard (see
MPower EMS Security and Access Management on page 5-31).
Scalable and reliable software architecture (see MPower EMS Architecture on page 5-34).
The MPower server is certified to be deployed on a Sun Microsystems Solaris server platform, and the MPower client is certified to run on Microsoft Windows and Sun Microsystems Solaris platforms (see MPower EMS Platform Requirements on page 5-36).

Administrative Domains
The administrative domain enables a group of network elements to be managed as a single network entity
independent of the underlying GMPLS routing domain (see Network Topology on page 4-48 for details).
For instance, in Figure 5-8 on page 5-16, at the network-element level, two separate networks are defined
(GMPLS Routing Domain 1 and GMPLS Routing Domain 2). At the management level, three administrative domains are defined: EastRoute Domain, NorthRoute Domain and WholeNet Domain. Each
administrative domain includes a subset of network elements from the GMPLS Routing Domain 1 and
GMPLS Routing Domain 2 networks. Thus, the scope of the administrative domain is separated from the
scope of the GMPLS routing domain. For example, one can define the administrative domains along the
organizational boundaries, functional boundaries or geographic boundaries. In Figure 5-8 on page 5-16,
the administrative domains are defined along the geographic boundaries.
Each user can be assigned to manage one or more administrative domains.
A given network element can be included in one or more administrative domains. For example, in Figure 5-8 on page 5-16, Node 15 is included in the EastRoute Domain, NorthRoute Domain and WholeNet Domain.
The MPower EMS provides a user interface to create, modify and delete administrative domains (see
Network Element Information File Editor on page 5-18).


Figure 5-8 MPower EMS Administrative Domains (the figure shows GMPLS Routing Domain 1 with Node 10 through Node 15 and GMPLS Routing Domain 2 with Node 20 through Node 27, together with the corresponding administrative domain views in MPower EMS: the WholeNet Domain spanning both routing domains, the EastRoute Domain containing Node 14, Node 15, Node 26 and Node 27, and the NorthRoute Domain)

Release Compatibility
MPower EMS manages UTStarcom Digital Optical Networking systems, which include UTStarcom TN780
and UTStarcom Optical Line Amplifier. UTStarcom Digital Optical Networking systems are supported by the IQ Network Operating System (IQ NOS) software. Table 5-1 on page 5-17 specifies the compatibility between the IQ NOS version and the MPower EMS version.
Table 5-1 MPower EMS and IQ NOS Version Compatibility

MPower EMS Version    Compatible IQ NOS Version
1.1.1                 IQ NOS version 1.1.1, 1.1.2, 1.1.3
1.2.1                 IQ NOS version 1.1.1, 1.1.2, 1.1.3, 1.2.1

Java Web Start manages the compatibility between MPower server and MPower client software versions.
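Table 5-1 amounts to a simple lookup. A minimal Python sketch of the same compatibility rule, with the version strings transcribed from the table:

# Compatibility data transcribed from Table 5-1.
COMPATIBLE_IQ_NOS = {
    "1.1.1": {"1.1.1", "1.1.2", "1.1.3"},
    "1.2.1": {"1.1.1", "1.1.2", "1.1.3", "1.2.1"},
}

def is_manageable(ems_version, iq_nos_version):
    # True if this MPower EMS version can manage a network element
    # running the given IQ NOS version.
    return iq_nos_version in COMPATIBLE_IQ_NOS.get(ems_version, set())

assert is_manageable("1.2.1", "1.2.1")
assert not is_manageable("1.1.1", "1.2.1")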

Network Topology Discovery


The MPower server automatically discovers the network elements and the topology in order to provide
users the network view and the network-level management capabilities. The network topology discovery
involves the following functions:
Configure the network element information using a stand-alone GUI application (see Network Element Information File Editor on page 5-18)
Configure the user account and password to be used by the MPower server to log into the network
elements (see Dynamic Seed File Editor on page 5-21)
The MPower server discovers the network topology as a two-step process as described in the following
sections:
Discovers the network elements, GMPLS links and control links (see Topology Shallow Discovery on page 5-22)
Discovers all the managed entities within each network element (see Topology Deep Discovery on page 5-24)
The MPower server monitors changes that occur in the network and automatically rediscovers the new topology, as described in the following section:
Topology Discovery on page 5-24
The MPower server provides multiple network topology views, which are dynamically updated to display the changes caused by configuration updates and alarm state, as described in the following section:
Network Topology Views on page 5-25


Network Element Information File Editor


The Network Element Information File Editor is a stand-alone application provided for administrators to configure the DCN IP address and other information for all the network elements to be managed by the MPower server. This application is automatically installed when the MPower server is installed on the customer-specified server machine (Solaris workstation). It can be run on the server platform by an administrator with SU access to the Solaris workstation, either while the MPower server is online or while it is not running.
Using this application, online users can:
Create and modify administrative domains managed by the MPower server.
Create and modify network elements within the administrative domains. The network elements are specified by providing the DCN IP address.
Optionally enable auto-discovery for a configured network element, which enables the automatic discovery of all the network elements within that network element's routing domain.
Using this application, offline users can:
Create, modify and delete administrative domains managed by the MPower server.
Create, modify and delete network elements within the administrative domains. The network elements are specified by providing the DCN IP address.
Optionally enable auto-discovery for a configured network element, which enables the automatic discovery of all the network elements within that network element's routing domain.
Note: If the file is edited offline, then the EMS server must be cold started.


Figure 5-9 Network Information File Editor


Figure 5-10 Add Administrative Domain Menu

Consider an example network shown in Figure 5-8 on page 5-16. As shown, two networks (GMPLS
Routing Domain 1 and GMPLS Routing Domain 2) are deployed and three administrative domains
(EastRoute Domain, NorthRoute Domain and WholeNet Domain) are defined in the MPower EMS.
The user must configure the EastRoute domain by specifying the DCN IP address of all the nodes in that
domain (Node 14, Node 15, Node 26 and Node 27) since only a partial GMPLS routing domain is included
in the administrative domain.
The NorthRoute Domain can be defined by specifying the DCN IP address of Node 10, Node 20 and Node
27. In addition, the auto-discovery option can be enabled on Node 10 so that the remaining nodes in the
corresponding GMPLS Routing domain are automatically discovered and included in the administrative
domain.
The WholeNet Domain can be defined by specifying the DCN IP address of Node 10 and Node 20 with the
auto-discovery option enabled so that all nodes in the corresponding GMPLS Routing domains are
automatically discovered and included in the administrative domain.
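The three domain definitions above can be pictured as seed data. The Python sketch below simply restates them; the NodeInfo field names and all DCN IP addresses are hypothetical, since the editor's on-disk file format is not documented here.

# Hypothetical restatement of the example configuration; the editor's
# actual file format and the IP addresses shown are illustrative only.
administrative_domains = {
    # Partial routing domain: every node must be listed explicitly.
    "EastRoute": [
        {"dcn_ip": "10.1.0.14", "auto_discovery": False},  # Node 14
        {"dcn_ip": "10.1.0.15", "auto_discovery": False},  # Node 15
        {"dcn_ip": "10.2.0.26", "auto_discovery": False},  # Node 26
        {"dcn_ip": "10.2.0.27", "auto_discovery": False},  # Node 27
    ],
    # Auto-discovery on Node 10 pulls in the rest of its routing domain.
    "NorthRoute": [
        {"dcn_ip": "10.1.0.10", "auto_discovery": True},   # Node 10
        {"dcn_ip": "10.2.0.20", "auto_discovery": False},  # Node 20
        {"dcn_ip": "10.2.0.27", "auto_discovery": False},  # Node 27
    ],
    # One seed node per routing domain, each with auto-discovery enabled.
    "WholeNet": [
        {"dcn_ip": "10.1.0.10", "auto_discovery": True},   # Node 10
        {"dcn_ip": "10.2.0.20", "auto_discovery": True},   # Node 20
    ],
}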


Dynamic Seed File Editor


In the Tools menu of the EMS client, the Dynamic Seed File Editor allows the user to:
Add a NodeInfo entry
Delete a NodeInfo entry
Add a ManagementDomain
Delete a ManagementDomain
Two or more users can simultaneously view the seed file but only one can save the changes made.
Note: Only a user with security administrator privilege can open the seed file editor from the menu
item.

Discovery Key Ring


The MPower server discovers the network elements and the topology by establishing connectivity to all the
network elements within the purview of the administrative domains specified in the configuration database
file. For example, for the EastRoute administrative domain shown in Figure 5-8 on page 5-16, the MPower server establishes connectivity to Node 14, Node 15, Node 26 and Node 27.
The MPower server requires that a user-ID and password be configured on the network element in order
to establish connectivity. This user account, referred to as the MPower server account, is reserved for the
MPower server to communicate with the network element. The user must ensure that at least one MPower
server account is configured on each network element and it must meet the following requirements:
Enable all privileges
Disable the password change enforcement
Do not lock the account
Disable inactivity timer
Disable password expiration
Enable MPower server to access this account
Note: A default MPower server specific user account (with user-ID emsadmin and password
Infinera1) is created in the network element. However, by default, the account is disabled.
The user may enable this pre-defined account or create a new MPower server specific
account using the management interfaces, such as MPower GNM or TL1.
The MPower server must be provided with a list of MPower server accounts created on the network
elements to which it must establish connectivity. The MPower server provides a user interface so that an
EMS User with Security Administrator privilege can configure this list of user-ID and password, referred to
as the discovery key ring.


The MPower server walks through the user-IDs configured in the discovery key ring to establish connectivity with the network elements, as sketched below. If none of the user-ID and password pairs configured in the key ring is accepted by a network element, that network element is marked as unreachable and the MPower server retries continuously until it succeeds. The user must then either fix the key ring configuration or fix the MPower server account on the unreachable network element.
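A minimal sketch of that key-ring walk, assuming a hypothetical try_login() stand-in for the real network element login (the real server retries indefinitely; the demo bounds the retries):

import itertools
import time

def try_login(dcn_ip, user_id, password):
    # Stand-in for the real login; returns True when the network element
    # accepts the MPower server account. Stubbed for illustration.
    return user_id == "emsadmin"

def establish_connectivity(dcn_ip, key_ring, retry_seconds=60, max_rounds=3):
    # Walk the key ring in order until an account is accepted; otherwise
    # mark the element unreachable and retry.
    for round_no in itertools.count(1):
        for user_id, password in key_ring:
            if try_login(dcn_ip, user_id, password):
                return user_id
        print(dcn_ip, "unreachable (round", round_no, "); retrying")
        if round_no >= max_rounds:   # bounded here only for the demo
            return None
        time.sleep(retry_seconds)

key_ring = [("opsuser1", "secret1"), ("emsadmin", "Infinera1")]
print(establish_connectivity("10.1.0.14", key_ring))  # -> emsadmin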

Topology Shallow Discovery


The MPower server initiates the topology discovery when it is first launched. The MPower server first discovers the network elements, control links and GMPLS links within each administrative domain as specified in the configuration database file. If the auto-discovery option is enabled for any network element in the configuration database file, then the remaining network elements within the same routing domain are automatically discovered. The dynamically discovered network elements are maintained in the persistent database, and the MPower server establishes and maintains connectivity to all of them. A map view of the discovered network is displayed to the user, as shown in Figure 5-11 on page 5-23.


Figure 5-11 Network Topology Map View

Junction Site Topology


Release 1.2 adds to MPower EMS the ability to discover and display junction site topology. A junction site can have up to 12 control and GMPLS links. Figure 5-12 on page 5-24 shows an example of a discovered junction site topology.


Figure 5-12 Junction Site Topology

Topology Deep Discovery


The topology deep discovery refers to the discovery of all the managed entities, which includes chassis,
circuit packs, physical ports and logical termination points, within each discovered network element. The
network element information is displayed to the user in the equipment view. The MPower GNM and the
MPower server provide the same GUI interface to manage the network element.

Topology Discovery
The MPower server initiates topology discovery when it detects events and alarms that cause changes to
the network topology. Following are some examples of events and alarms that trigger the topology
discovery:
Addition or deletion of a control link or GMPLS link
Addition or deletion of network elements


Alarms (raise and clear) reported on the control link or GMPLS link (e.g. alarms reported due to fiber
cut)
Loss of connectivity between the MPower server and the network element
During topology re-discovery, the MPower server discovers all the network elements specified in the configuration database as well as the dynamically discovered network elements stored in the persistent database. If any of the network elements or links are not available, it dynamically updates the network view displayed to the user, with color coding providing a visual indication of the network problems. When the network problem is corrected, it performs network re-discovery to discover the changes in the network and displays the updated network view to the user.
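One way to picture the trigger logic, using hypothetical event-type names (the actual IQ NOS event identifiers are not listed here):

# Event types that, per the list above, should trigger re-discovery.
TOPOLOGY_EVENTS = {
    "LINK_ADDED", "LINK_DELETED",            # control or GMPLS links
    "NE_ADDED", "NE_DELETED",                # network elements
    "LINK_ALARM_RAISED", "LINK_ALARM_CLEARED",
    "NE_CONNECTIVITY_LOST",
}

def on_event(event_type, rediscover):
    # Invoke the re-discovery callback only for topology-affecting events.
    if event_type in TOPOLOGY_EVENTS:
        rediscover()

on_event("LINK_ALARM_RAISED", lambda: print("re-discovering topology"))
on_event("PM_DATA_UPDATED", lambda: print("never printed"))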

Network Topology Views


The MPower server provides several network views so that users can perform various management
functions easily. The network views provide the following features:
Hierarchical (see Hierarchical Network Topology View on page 5-25) and functional view (see
Functional Network Topology View on page 5-26) of the managed network.
Ability to launch context sensitive applications and tools such as alarm manager, equipment manager, performance management, etc., from various points in the topology view.
Real-time updates to the network topology based on the configuration changes and alarm status in
the network, such as addition/deletion of network elements, addition/deletion of control/GMPLS
links, change in the alarm severity on a network element and control/GMPLS link, etc.
User customizable background maps in the topology view

Hierarchical Network Topology View


The hierarchical view of the network enables users to perform operations and manage the network at
different levels. The network hierarchy includes:
Entire network managed by the MPower server
Administrative domains within the network
The network elements within each administrative domain
The chassis within each network element
The circuit packs and other hardware equipment within each chassis
The MPower server provides a context sensitive user interface so that users can launch tools and
applications at various levels. For example, when the alarm manager application is launched at the
network level, users can view and manage the alarms for all the network elements managed by the
MPower server. When the alarm manager application is launched at the circuit pack level, only the alarms
reported by that circuit pack are displayed.


Functional Network Topology View


The functional network view provides an intuitive interface for users to perform OAM&P functions. The following two views are provided:
Physical View: displays the physical topology of the network, which includes all network elements (both TN780 and Optical Line Amplifier) and the control links between them. The control link display in the physical view is color coded to indicate the current alarm state of the control link. The alarm state is determined by the alarms active on the OTS and OSC termination points in the network elements associated with the link. In addition to the alarm state, the link color also conveys reachability information, which indicates the ability of the MPower server to communicate with the network elements and the OSC status between the network elements.
Provisioning View: displays the TN780 network elements and the GMPLS links that participate in service provisioning. Each GMPLS link represents eight unidirectional (four in each direction) 100Gbps traffic engineering links on which services are provisioned, as sketched below. Users can view GMPLS link utilization information, such as maximum bandwidth, available bandwidth and used bandwidth. Also, the GMPLS links are color coded to indicate the alarm state. The alarm state is determined by the alarms active on the OTS, OSC and DLM OCG termination points in the network elements associated with the link. Release 1.2 also gives the user the added option of saving the provisioning map view within MPower EMS.
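The utilization figures in the Provisioning View follow directly from that link structure; a small sketch of the arithmetic (the exact field breakdown shown by MPower EMS may differ):

# One GMPLS link bundles eight unidirectional 100Gbps TE links,
# four in each direction.
TE_LINKS_PER_DIRECTION = 4
TE_LINK_CAPACITY_GBPS = 100

def link_utilization(used_gbps_per_direction):
    # Return (maximum, used, available) bandwidth per direction in Gbps.
    maximum = TE_LINKS_PER_DIRECTION * TE_LINK_CAPACITY_GBPS  # 400 Gbps
    available = maximum - used_gbps_per_direction
    return maximum, used_gbps_per_direction, available

# e.g. 25 provisioned 10Gbps services in one direction:
print(link_utilization(25 * 10))  # (400, 250, 150)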

Network-level OAM&P Functions


The MPower server provides OAM&P functions which can be performed at the network level, in addition to all the network element-level functions provided by the MPower GNM as described in MPower Graphical Node Manager on page 5-3. The network-level functions include:
Network-wide real-time fault management and monitoring, including current alarm summary, historical event logs, and threshold crossing alerts (see Network Level Fault Management on page 5-26)
Network-wide inventory management, including equipment, facility, circuit layout inventory, and state information (see Network Level Inventory Management on page 5-27)
Point-and-click end-to-end provisioning and circuit inventory views, with correlated alarm status (see Network Level Fault Management on page 5-26)
Web-accessible historical network performance reports (see Cross-Connect Circuit Trace on page 5-30)

Network Level Fault Management


The MPower EMS server supports all the network element-level functions. The following additional network-level functions are supported:
Ability to monitor network-wide alarms and events, with synchronized time stamps based on the NTP protocol supported by the network elements.


Provides a visual display of the current alarm summary at the entire-network level (the entire UTStarcom network managed by a given MPower server), the administrative domain level or the network element level.
Provides a historical (up to 90 days by default) listing of alarms and events, which can optionally be exported in TSV file format. Users can configure the event log size before starting the MPower server.
Ability to define custom filters, which help in analyzing the historical data and therefore speed problem resolution.
Integrated search and sorting.
Provides the ability to export:
All alarms to file
All events to file
Current view of alarms to file
Current view of events to file

Network Level Inventory Management


MPower EMS provides users with various inventory information:
Network-wide inventory reporting of all managed resources, including:
Equipment inventory
Termination point (facility) inventory
Circuit inventory
Cross-connect inventory
TE link inventory, where each link refers to a 100Gbps logical link within a digital link
Network element inventory
All inventory information can be exported in TSV flat file format.
Context sensitive launching of applications from the inventory window
Detailed physical-layer circuit tracing (display of all supporting TN780 cross-connects)

End-to-end Circuit Provisioning


MPower EMS provides a simple point-and-click interface to provision an end-to-end circuit between two network elements within the purview of the EMS. MPower EMS also displays a list of all network elements, the possible destination network elements for a given source network element, and the possible destination endpoints for a given source endpoint.


Circuit Layout
MPower EMS provides a circuit layout record for every end-to-end circuit. The circuit layout feature allows the user to view every object that comprises a circuit. MPower EMS supports the creation of signaled and manual cross-connect based circuits, allowing the circuit layout record to be launched with a circuit or a cross-connect as the context.
The circuit layout record displays the state and alarm conditions of all the objects comprising the circuit, drastically improving troubleshooting and fault isolation.
For a given end-to-end circuit, the order of object display is from trib-port to trib-port. The following intermediate points are also displayed:
Trib DTF Path
Cross-Connect
Line DTF Path
DLM Channel
DLM OCG Port
BMM OCG Port
BMM OTS Port (egress)
BMM OTS Port (ingress)
BMM OCG Port
DLM OCG Port
DLM Channel
Line DTF Path
Cross-Connect
Trib DTF Path


Figure 5-13 Circuit Layout Record


Figure 5-14 Cross-Connect Circuit Trace

Performance Management
The MPower server supports all the network element-level functions described in Performance Management on page 5-14. The following additional network-level functions are supported:
Provides historical (up to 90 days by default) archiving of all 15-minute and 24-hour PM data for each network element, which can optionally be exported in CSV file format.
Provides an end-to-end circuit PM view for viewing intermediate PMs across a whole circuit.
Includes a network performance reporting tool for parsing all historical PM data in the database and generating web-based reports, including:
List of all SONET/SDH circuits based on the pre- and post-FEC BER, from highest to lowest.
List of all SONET client circuits sorted based on the ES-S (errored seconds section), from highest to lowest. Only the ES-S encountered within the digital optical network is considered.


List of all SONET client circuits based on SEFS-S (severely errored frame seconds section), from highest to lowest. Only the SEFS-S encountered within the digital optical network is considered.
List of all SONET client circuits based on RS-LOSS (regenerator section LOSS), from highest to lowest. Only the LOSS encountered within the digital optical network is considered.
Ability to generate customized PM reports for each termination point.
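Each of these reports is essentially a descending sort over per-circuit PM values. A minimal sketch, with illustrative record fields:

# Hypothetical per-circuit PM records; field names are illustrative.
circuits = [
    {"name": "SF-LA-001", "es_s": 42, "sefs_s": 3},
    {"name": "SF-LA-002", "es_s": 7,  "sefs_s": 0},
    {"name": "SF-SJ-001", "es_s": 90, "sefs_s": 12},
]

def worst_first(records, parameter):
    # Order circuits from highest to lowest value of a PM parameter,
    # mirroring the web report ordering described above.
    return sorted(records, key=lambda c: c[parameter], reverse=True)

for circuit in worst_first(circuits, "es_s"):
    print(circuit["name"], circuit["es_s"])   # SF-SJ-001 first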

MPower EMS Security and Access Management


The MPower server's security and access management features comply with the Telcordia GR-815-CORE standard. The MPower server security model is integrated with the UTStarcom Digital Optical Networking systems security model. The supported features include:
User identification to indicate the logged in user (see User Identification on page 5-31)
User authentication to verify and validate the authenticity of the logged in user (see Authentication
on page 5-32)
User access control to prevent intrusion (see Access Control on page 5-32)
Resource access control by defining multiple access privileges (see Authorization on page 5-33)
Security audit logs to monitor unauthorized activities (see Security Audit Log on page 5-33)
Security functions and parameters to implement site-specific security policies (see Security Administration on page 5-34)

User Identification
Each MPower user is assigned a unique MPower user ID. The MPower user ID is case-sensitive and contains 6 to 10 alphanumeric characters. The user specifies this ID to log into the MPower server.
Note that the MPower user ID is not passed to the target network element (the network element managed by the user using MPower EMS). The MPower server uses the network element user ID (see Dynamic Seed File Editor on page 5-21) to log into the target network element.
MPower is equipped with a user account that allows for an initial login. The user ID is admin, the password is infinera1, and the account has the security administrator privilege enabled.
This default account differs from the typical user account in that:
It cannot be disabled or deleted
The Security Administrator privilege cannot be removed
Password expiration cannot be set (it is set to 0 by default, which means it never expires)
A user may open multiple active sessions. MPower server maintains a list of all current active users, but
not active sessions.

UTStarcom Inc.

TN780 System Description

Release 1.2

Page 5-32

MPower EMS

Authentication
MPower server supports standards-based authentication features. These features ensure that only the
authorized users can log into the MPower server through the MPower client interface.
Each time the MPower user logs in, the user must enter a user ID and password. For the initial login, the
user specifies the default password. The user must then change the password based on the following
requirements.
The password must contain:
Six to ten alphanumeric characters
At least one alphabetic character, and one numeric or one special character
The password may contain these special characters: ! @ # $ % ^ ( ) _ + | ~ { } [ ] ?
The password must not contain:
The associated user ID
Blank spaces
The passwords are case-sensitive and must be entered exactly as specified.
The password is stored in the MPower server database in a one-way encrypted form.
Password aging is enabled by default. When the password expires, the user must create a new one. The
security administrator can configure the password aging interval -- the length of time the password is valid.
Password aging can also be disabled by setting the aging interval to 0.
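The rules above translate directly into a validation routine. A sketch only; the MPower server's own validator is authoritative:

SPECIALS = set("!@#$%^()_+|~{}[]?")

def is_valid_password(password, user_id):
    # Enforce the MPower password rules listed above.
    if not (6 <= len(password) <= 10):
        return False
    if " " in password or user_id in password:
        return False
    if any(not (c.isalnum() or c in SPECIALS) for c in password):
        return False  # character outside the allowed set
    has_alpha = any(c.isalpha() for c in password)
    has_num_or_special = any(c.isdigit() or c in SPECIALS for c in password)
    return has_alpha and has_num_or_special

assert is_valid_password("abc123", "opsuser1")
assert not is_valid_password("abcdef", "opsuser1")     # no numeric/special
assert not is_valid_password("opsuser1a", "opsuser1")  # contains the user ID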

Access Control
In addition to user-ID validation and password authentication, MPower server supports access control
features to ensure that the session requester is trusted.
The activity of each user session is monitored. If, for a configurable period of time, no data is exchanged between the user (MPower client) and the MPower server, the user session is declared inactive. The MPower server defines two system-wide inactivity timeout intervals:
Lockout Interval: when the user session is inactive for this interval, the user is locked out. To reactivate the session, the user must re-enter the password.
Idle Interval: when the user session is inactive for this interval, the session is terminated. The user must launch a new session.
User session activity monitoring is disabled by default. A user with security administrator privileges can
enable monitoring and also configure the lockout period and the idle period based on the needs of the
particular site.
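A sketch of how the two intervals classify a session; the interval values here are arbitrary examples, since both are site-configurable:

import time

LOCKOUT_INTERVAL = 15 * 60   # example values only; both intervals are
IDLE_INTERVAL = 60 * 60      # configurable by the security administrator

def session_state(last_activity, now=None):
    # 'locked' requires the password to be re-entered; 'terminated'
    # requires a new session to be launched.
    now = time.time() if now is None else now
    idle_for = now - last_activity
    if idle_for >= IDLE_INTERVAL:
        return "terminated"
    if idle_for >= LOCKOUT_INTERVAL:
        return "locked"
    return "active"

print(session_state(0.0, now=10 * 60))  # active
print(session_state(0.0, now=20 * 60))  # locked
print(session_state(0.0, now=90 * 60))  # terminated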


Authorization
Multiple access privileges are defined to restrict user access to resources. The access privileges defined
in MPower server are in synchronization with the access privileges defined in Digital Optical Networking
systems. Each MPower EMS access privilege is directly mapped to the access privilege defined at the
network element level. In other words, a MPower User with a given access privilege can perform the
actions allowed for that privilege on the target network element.
There are six levels of access privileges. The following description lists the actions allowed for each access privilege within the MPower server.
Monitoring Access (MA): provides read-only access to various MPower EMS logs and inventory screens.
Security Administrator (SA): allows the user to perform MPower server security management and administration related tasks, to shut down the MPower server, and to configure the Discovery Key Ring (see Dynamic Seed File Editor on page 5-21).
Network Administrator (NA): there are no MPower EMS specific tasks defined for this privilege.
Network Engineer (NE): there are no MPower EMS specific tasks defined for this privilege.
Provisioning (PR): there are no MPower EMS specific tasks defined for this privilege.
Turn-up and Test (TT): there are no MPower EMS specific tasks defined for this privilege.
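The privilege-to-action mapping can be pictured as a lookup table; the action names below are illustrative, not the EMS's internal identifiers:

# EMS-specific actions per privilege, per the list above; network
# element-level actions map through the same privilege unchanged.
EMS_ACTIONS = {
    "MA": {"read_logs", "read_inventory"},
    "SA": {"security_admin", "shutdown_server", "configure_key_ring"},
    "NA": set(),   # no MPower EMS specific tasks
    "NE": set(),
    "PR": set(),
    "TT": set(),
}

def may_perform(privileges, action):
    # True if any of the user's privileges permits the EMS action.
    return any(action in EMS_ACTIONS.get(p, set()) for p in privileges)

print(may_perform({"SA"}, "configure_key_ring"))  # True
print(may_perform({"PR"}, "shutdown_server"))     # False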

Security Audit Log


MPower server maintains an audit log that records all access and security administration related actions performed by the MPower user on the MPower server. The audit log provides traceability of all system-impacting changes. Users can view the audit logs through the user interface. The supported features include:
The audit logs include system configuration activities and security related activities. These activities include:
Creating and deleting MPower user accounts.
Updating an MPower user's security parameters, such as password, user access privileges, password aging time, and the administrative domains assigned to the user.
Updating the system-wide security parameters, such as the inactivity time-out interval.
The audit logs are preserved during an MPower server warm restart. However, the logs are lost on a cold restart.
In addition, all actions performed on the network element are stored in the network element's audit log. You can view the network element generated audit logs.


Security Administration
MPower server defines a set of security administration functions and parameters that are used to implement site-specific policies. Security administration can be performed only by an MPower user with the security administration privilege. The supported features include:
View all users currently logged on
Disable and enable an MPower user account
Note: Disabling an MPower user account automatically terminates all active sessions corresponding to this account.
Modify user account parameters, including access privilege, password expiry time, and administrative domains
Delete an MPower user account and all its attributes, including its password
Reset any user password to the MPower server default password
Monitor security audit logs to detect unauthorized access to the MPower server
Monitor the security alarms and events raised by the MPower server and take appropriate actions
Configure the security administration parameters applicable to all MPower users:
Default password
Inactivity time-out intervals
Advisory warning message displayed to the user after successful login to the network element

MPower EMS Architecture


The MPower EMS architecture is based on robust, distributed computing technologies that allow the MPower EMS to scale as the network size increases. The MPower EMS comprises distinct server applications and client applications (see Figure 5-15 on page 5-35).


Figure 5-15 MPower EMS Architecture (the MPower client communicates with the MPower UI Frontend server over RMI/HTTP/XML; the MPower server comprises the MPower UI Frontend server, MPower EMS Core server, MPower PM server, XML Mediator and Oracle database server, and communicates with the Infinera Digital Optical Network over the customer DCN using XML and FTP)

MPower EMS Client


The MPower EMS client (referred to as the MPower client) provides a web-based graphical user interface to remotely manage the Digital Optical Network. The MPower client communicates with the MPower server to provide the services to the user.
The MPower client application is Java Web Start enabled. When a user invokes the MPower client for the first time (on a given user computer), Java Web Start automatically downloads the MPower client application from the MPower server. Java Web Start then caches the MPower client application on the user's computer for future launches through the browser link. The MPower client is automatically downloaded again if the version present on the user's computer is not compatible with the MPower server to which the connection is established.

MPower EMS Server


The MPower EMS server (referred to as the MPower server) communicates with the IQ NOS (Network Operating System) software operating the UTStarcom Digital Optical Networking systems and serves the MPower clients, giving users the ability to manage the Digital Optical Network.
The MPower server architecture is highly scalable in terms of the number of network elements and MPower clients supported, as well as in its performance. As shown in Figure 5-15 on page 5-35, the MPower server is comprised of multiple applications which are architected to run on separate hardware platforms.


MPower Frontend Server: the MPower Frontend server processes the requests from the MPower clients and interacts with the Oracle database server directly for all read operations. However, if a user request requires a write operation to the database, it passes the request to the MPower Core server. Thus the database read-only operations are processed separately from the database read/write operations.
MPower Core Server: the MPower Core server manages and processes the information from the network elements and performs all the management tasks. It interacts with the Oracle database server in order to manage the information. The MPower Core server is architected to support multiple MPower Frontend servers, each running on a separate hardware platform. This allows multiple MPower Frontend servers to be deployed depending on the number of MPower EMS clients deployed.
MPower PM Server: the MPower PM server collects, processes and manages the performance monitoring data from the network elements. It provides a variety of pre-defined reports to the users so that network problems can be quickly isolated. User-customizable reports are also supported.
Note: In Release 1.2, by default, MPower Frontend server, MPower Core server, and MPower PM
server are automatically installed on the same hardware platform. The Oracle database
server must also be installed on the same hardware platform as the MPower server. Users
must launch the MPower Core server, which also includes MPower Frontend server. Users
can optionally launch MPower PM server.

MPower EMS Platform Requirements


MPower Server Requirements
As shown in Figure 5-15 on page 5-35, MPower server is architected for a distributed environment where
MPower Core server, MPower PM server, MPower Frontend server and Oracle database can be deployed
on multiple hardware platforms to support large networks. However, in Release 1.2, all these components
must be installed on a single server platform. The server platform must have two disk drives, one dedicated
to the Oracle database and one for the MPower server.
In Release 1.2, the MPower server is certified to run on Sun Solaris 9 (SunOS 5.9) with Oracle 9i
database, on a Sun SPARC-based server enumerated below:
Sun Fire V210
Sun Fire V240
Sun Fire V250
Sun Fire V440
Sun Fire V880
The MPower server performance depends on the hardware platform, the size of the network, and the usage patterns. A typical MPower server installation has a Sun Fire V210 server (for small networks) or a Sun Fire V880 server (for large networks), configured as shown in Table 5-2 on page 5-37, which also shows the maximum number of network elements and MPower clients supported in each configuration for optimal performance.

Table 5-2 MPower Server Platform Recommendations

Number of Network  Number of MPower  Sun Server     Processors  RAM (GB)  Hard Disk (GB)
Elements           clients           Platform
<100               <20               Sun Fire V210                        2x36
<100               <20               Sun Fire V240                        2x73
<100               <20               Sun Fire V250                        2x73
<200               <50               Sun Fire V440                        4x36
<500               <50               Sun Fire V440                        4x36
<500               <100              Sun Fire V440              16        4x36
<100               <20               Sun Fire V210                        2x36

MPower Client Requirements


The web-based MPower client is certified to run on Microsoft Windows and Sun Microsystems Solaris
platforms. For optimal performance, the client machine must meet the following requirements:
Windows clients:
Processor speed: 1GHz
Memory: 384MB
Hard disk: 250MB
Operating systems: Windows 2000 with Windows Service Pack 2
Browser requirements: Microsoft Internet Explorer 6.0, Netscape Navigator 4.7
Java Runtime Environment 1.4.2
Solaris clients
Processor speed: 650-700 MHz (SunBlade or UltraSparc platform)
Memory: 1.2GB
Hard disk: 250MB
Operating systems: Solaris 9 (SunOS 5.9)
Browser requirements: Netscape Navigator 4.7
Java Runtime Environment 1.4.2


MPower Simple Network Management Protocol (SNMP) Agent


The UTStarcom MPower SNMP Agent implements the standard SNMP v2c protocol. The MPower SNMP Agent provides monitoring capabilities in a multi-vendor network environment, enabling integration with third-party and in-house tools. The UTStarcom MPower SNMP Manager Plugin enables the MPower EMS server to forward network element/EMS alarms as SNMP traps to SNMP managers registered with the EMS. The UTStarcom MPower SNMP Agent provides a standard mechanism for a network configuration management solution to receive traps for all raise and clear alarms on all monitored network elements.

MPower SNMP Trap Agent Features


UTStarcom MPower SNMP Agent consists of the following broad features and capabilities.

Alarm Trap Generation


UTStarcom MPower SNMP supports forwarding of traps from the SNMP agent to SNMP managers. The UTStarcom MPower SNMP Agent generates traps for the SNMP managers that have been registered with it. Trap generation is a feature supported in the MPower SNMP Trap Agent.
The following information is sent in each trap:
Perceived Severity
The current severity of the alarm.
Asserted Severity
The severity of the alarm when it was asserted earlier. For example, if an alarm is raised as CR and a clear is then raised for it, the perceived severity of the current alarm is Clear and the asserted severity is CR.
Timestamp
There are two Time attributes.
neTime - The Network element time at which the trap (Alarm) is generated.
emsTime - The EMS time at which the trap is generated.
EMS Notification ID
A unique ID assigned to each alarm in the EMS. It is sent as EMS notification ID as part of a trap
attribute.
Event/Trap description
The description of the alarm.
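Taken together, these attributes form a small record per trap. A sketch with illustrative field values; the authoritative attribute definitions are in the INFINERA-TRAP-MIB:

from dataclasses import dataclass

@dataclass
class AlarmTrap:
    # Attributes carried in each MPower alarm trap, per the list above;
    # the names and sample values here are illustrative.
    perceived_severity: str   # current severity, e.g. Clear once cleared
    asserted_severity: str    # severity when originally asserted, e.g. CR
    ne_time: str              # network element time of the alarm
    ems_time: str             # EMS time when the trap was generated
    ems_notification_id: int  # unique per-alarm ID assigned by the EMS
    description: str

raise_trap = AlarmTrap("CR", "CR", "2004-11-02T10:15:03",
                       "2004-11-02T10:15:04", 1041, "OLOS on line input")
clear_trap = AlarmTrap("Clear", "CR", "2004-11-02T10:40:11",
                       "2004-11-02T10:40:12", 1042, "OLOS cleared")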


MPower SNMP Licensing


A separate license is required for MPower SNMP Agent. To obtain the license, contact UTStarcom
technical support (see Technical Assistance on page xiv).
Note: Prior to installing MPower SNMP Manager plugin, MPower EMS must be installed on your
machine.
The EMS provides a single registration point for non-robust SNMP trap notifications and supports northbound SNMP v2c trap notification. Only alarms reported by network elements are notified as SNMP traps.
SNMP Manager
Software which resides on the machine that is managing the devices. It is the console through which an administrator performs management related functions.
SNMP Agent
Software which resides on the device to be managed. In this case, it resides on the EMS server. The device can be a bridge, router, network element (as in this case), hub, etc.
Object
The objects in MIBs are identified by object identifiers.

MPower SNMP Configurable Features


On successful installation of the SNMP Trap Agent plug-in, EMS users with the security administrator privilege should configure the MPower SNMP Trap Manager by choosing the Configure SNMP Trap Manager menu item under the Security main menu. Using this, an EMS user can add a manager by providing:
SNMP Trap Manager host information.
SNMP trap port, on which the manager listens for traps.
A boolean specifying whether outstanding alarms should be generated as traps.
An EMS user can also delete a manager or modify a manager's information.

Configurable Parameters
The parameters below can be configured through the InfineraSnmp.conf file, which is located under the EMS_INSTALL_DIR/conf/Infinerasnmp directory. Changes to these parameters take effect only after restarting (cold or warm) the EMS server.

UTStarcom Inc.

TN780 System Description

Release 1.2

Page 5-40

MPower EMS

EMSName, which defaults to MPower and is propagated in all the traps generated by the system; it can be changed through this configurable parameter.
IsCorrelationIDSupportNeeded
IsEmsTimeWhenReceived
GenerateOutstandingAlarm
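A minimal reader for these parameters, assuming a simple name=value layout; the actual InfineraSnmp.conf syntax may differ:

def read_snmp_conf(path):
    # Parse name=value pairs, skipping blanks and comment lines.
    params = {"EMSName": "MPower"}  # documented default
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                name, value = line.split("=", 1)
                params[name.strip()] = value.strip()
    return params

# e.g. a file containing the two lines
#   EMSName=MPower-West
#   GenerateOutstandingAlarm=true
# yields {'EMSName': 'MPower-West', 'GenerateOutstandingAlarm': 'true'}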

Configuring MPower SNMP Trap Managers


The Configure MPower SNMP Trap Managers feature allows the user to:
Create SNMP Trap Manager: used to create a new SNMP Trap Manager on the UTStarcom EMS server.
Modify SNMP Trap Manager Attributes: used to modify an existing SNMP Trap Manager.
Delete SNMP Trap Manager: used to delete an existing SNMP Trap Manager.

SNMP MIBs
MIB rules define the object IDs and give them valid names. Typically, objects that can be managed by SNMP are defined in MIBs, which are ASCII text files in a structured format.

Standard MIB Files


SNMPv2c-SMI
SNMPv2c-TC

UTStarcom Enterprise MIBs


UTStarcom MPower SNMP Trap Agent supports the following two MIB files.
INFINERA-REG-MIB
INFINERA-TRAP-MIB


Appendix A

TN780 PM Data
UTStarcom TN780 and Optical Line Amplifier network elements collect extensive PM data, including
Optical performance monitoring (PM) data within the optical domain (see Optical PM Parameters
and Thresholds on page A-2)
Client signal agnostic DTF PM data at every TN780 network element (see DTF PM Parameters and
Thresholds on page A-10)
FEC PM data enabling BER calculation (see FEC PM Parameters and Thresholds on page A-15)
Native client signal PM data at the tributary ports (see Client Signal PM Parameters and Thresholds on page A-16)
Optical supervisory channel performance monitoring data (see Client Signal PM Parameters and
Thresholds on page A-16)


Optical PM Parameters and Thresholds
The network element collects extensive optical analog PM data at each optical transport layer, including
OTS layer, OMS Band (OMSb) layer (referred to as C-band), OMS Optical Carrier Group (OMSo) layer
(referred to as OCG) and Optical Channel (OCh) layer. The optical PM data is collected within the optical
domain in the TN780 and Optical Line Amplifier network elements.
Within the TN780, OTS, C-band and OCG layer PM data is collected in the BMM (see Figure A-1 on
page A-2) and OCh layer PM data is collected in the DLM.
Within the Optical Line Amplifier, OTS and C-band PM data is collected in the OAM.
The optical PM parameters are essentially gauges: snapshots of the current condition. Parameters such
as Optical Power Received (OPR) and Optical Power Transmitted (OPT) measure the average optical
power of the received and transmitted optical signals, respectively, in dBm.
Figure A-1 Optical PM Parameters Collected in the BMM
[Figure A-1 shows the BMM block diagram: OCG ports (OCG 1, 3, 5, 7) are combined through MUX stages and a Tx EDFA onto the Line OUT port, and the Line IN signal passes through an Rx EDFA (with a VOA and an optional mid-stage DCM) and DEMUX stages back to the OCG ports; the OSC Tx/Rx paths, the L-band coupler, and the OSA Monitor IN/OUT ports are also shown. Measured optical PM data points include OTS OPT/OPR, C-Band Total OPT/OPR, C-Band Tx EDFA LBC, C-Band Rx EDFA LBC1/LBC2, C-Band Rx EDFA OPT, and Total OCG OPT/OPR; derived optical PM data points include C-Band Normalized OPT/OPR and C-Band Measured DCM Loss.]

Table A-1 on page A-3 captures the optical PM parameters supported at each layer. The historical data is
maintained for some PM parameters. For the rest, only the real-time data is maintained.
Table A-1 Optical PM Parameters Supported by the BMM, OAM and DLM

| PM Parameter as displayed in GNM/EMS | PM Parameter in the file exported to the FTP server | Description | Unit | Real-time data | Current & historical (15-min & 24-hr) data |
|---|---|---|---|---|---|
| **OTS Layer Parameters (collected in BMM/OAM)** | | | | | |
| Optical Power Transmitted | | Average optical output power transmitted onto the Line output. This is the sum of C-Band + L-Band (when L-Band is supported) + OSC output power. | dBm | Yes | No |
| OPT to OSA Ratio | | Expected ratio of OTS Optical Power Transmitted to the power measured at the OSA Monitor Out port. | dB | Yes | No |
| Optical Power Received | | Average optical power received from the Line input. This is the sum of C-Band + L-Band (when L-Band is supported) + OSC received power. | dBm | Yes | No |
| OPR to OSA Ratio | | Expected ratio of OTS Optical Power Received to the power expected at the OSA Monitor In port. | dB | Yes | No |
| **Band Layer Parameters (collected in BMM/OAM)** | | | | | |
| C-Band Total Optical Power Received | | Total C-Band optical power received from the OTS input. | dBm | Yes | No |
| C-Band Total Optical Power Received Min | BandOprMin | | dBm | No | Yes |
| C-Band Total Optical Power Received Avg | BandOprAve | | dBm | No | Yes |
| C-Band Total Optical Power Received Max | BandOprMax | | dBm | No | Yes |
| C-Band Normalized Optical Power Received | | Normalized per-channel optical power received. Derived from the C-Band Optical Power Received and the number of active channels. Each sample within any given 15 minutes is adjusted automatically according to the number of active receive channels. | dBm | Yes | Yes |
| C-Band Rx Number of Active Channels | | Number of active C-Band receive channels. | int | Yes | No |
| C-Band Total Optical Power Transmitted | | Total C-Band optical power transmitted onto the OTS output. | dBm | Yes | No |
| C-Band Tx EDFA LBC | | Measured laser bias current of the EDFA's optical transmitter towards the OTS output. | mA | Yes | No |
| C-Band Normalized Optical Power Transmitted | | Normalized per-channel optical power transmitted. Derived from the C-Band Optical Power Transmitted and the number of active channels. Each sample within any given 15 minutes is adjusted automatically according to the number of active transmit channels. | dBm | Yes | Yes |
| C-Band Tx Number of Active Channels | | Number of active C-Band transmit channels. | int | Yes | No |
| C-Band Total Optical Power Transmitted Min | BandOptMin | | dBm | No | Yes |
| C-Band Total Optical Power Transmitted Avg | BandOptAve | | dBm | No | Yes |
| C-Band Total Optical Power Transmitted Max | BandOptMax | | dBm | No | Yes |
| C-Band Span Loss | | Measured per-channel span loss between the adjacent nodes (approximate difference between C-Band Optical Power Transmitted and C-Band Optical Power Received between the adjacent network elements). | dB | Yes | Yes |
| C-Band Rx EDFA LBC1 (BMM only) | | Measured laser bias current of the EDFA's optical transmitter towards the DLM. | mA | Yes | No |
| C-Band Rx EDFA LBC2 (BMM with DCM mid-stage access only) | | Measured laser bias current of the EDFA's optical transmitter towards the mid-stage DCM. | mA | Yes | No |
| C-Band Rx EDFA LBC1 (OAM only) | | Measured laser bias current of the EDFA's optical transmitter towards the Line output. | mA | Yes | No |
| C-Band Rx EDFA LBC2 (OAM with DCM mid-stage access only) | | Measured laser bias current of the EDFA's optical transmitter towards the mid-stage DCM. | mA | Yes | No |
| C-Band Rx EDFA Optical Power Transmitted (BMM only) | | Average C-Band power transmitted toward the DLMs. | dBm | Yes | No |
| C-Band Rx Expected OSA Ratio (BMM only) | | Expected ratio of C-Band Rx EDFA Optical Power Transmitted to the power measured at the OSA RX AMP OUT monitor port. | dB | Yes | No |
| C-Band Expected Dispersion Compensation (BMM/OAM with DCM mid-stage access only) | | Expected dispersion compensation based on the DCM model number. | ps/nm | Yes | No |
| C-Band Expected DCM Loss (BMM/OAM with DCM mid-stage access only) | | Expected dispersion compensation loss based on the DCM model number. | dB | Yes | No |
| C-Band Measured DCM Loss (BMM/OAM with DCM mid-stage access only) | | Dispersion compensation loss as measured by the EDFA. | dB | Yes | No |
| **OCG Layer Parameters (collected in BMM)** | | | | | |
| OCG Total Optical Power Transmitted | | Total OCG optical power leaving the BMM towards its attached DLM. One attribute for each OCG. | dBm | Yes | Yes |
| OCG Total Optical Power Transmitted Min | BMMOcgOptMin | | dBm | No | Yes |
| OCG Total Optical Power Transmitted Avg | BMMOcgOptAvg | | dBm | No | Yes |
| OCG Total Optical Power Transmitted Max | BMMOcgOptMax | | dBm | No | Yes |
| OCG Total Optical Power Received | | Total OCG optical power arriving at the BMM from the local DLM. One attribute for each OCG. | dBm | Yes | Yes |
| OCG Total Optical Power Received Min | BMMOcgOprMin | | dBm | No | Yes |
| OCG Total Optical Power Received Avg | BMMOcgOprAve | | dBm | No | Yes |
| OCG Total Optical Power Received Max | BMMOcgOprMax | | dBm | No | Yes |
| OCG Rx Number of Active Channels | N/A | Number of active channels within the OCG in the receive direction from DLM to BMM. One attribute for each OCG. | int | Yes | No |
| OCG Rx Number of Active Channels Min | N/A | Min, max and average number of active channels within the OCG in the receive direction from DLM to BMM. One attribute for each OCG. | int | No | Yes |
| OCG Rx Number of Active Channels Max | N/A | (see above) | int | No | Yes |
| OCG Rx Number of Active Channels Avg | N/A | (see above) | int | No | Yes |
| **OCG Layer Parameters (collected in DLM)** | | | | | |
| OCG Total Optical Power Transmitted | | Total OCG optical power transmitted by the DLM to the BMM. | dBm | Yes | Yes |
| OCG Total Optical Power Received | | Total OCG optical power received by the DLM from the BMM (has a reading inaccuracy of +2.5 dB/-1.0 dB). | dBm | Yes | Yes |
| **OCh Layer Parameters (collected in DLM)** | | | | | |
| OCh Optical Power Received | | Average optical channel power received by the DLM. One measurement for each optical channel. | dBm | Yes | Yes |
| OCh Optical Power Received Min | ChanOchOprMin | | dBm | No | Yes |
| OCh Optical Power Received Avg | ChanOchOprAve | | dBm | No | Yes |
| OCh Optical Power Received Max | ChanOchOprMax | | dBm | No | Yes |
| OCh Optical Power Transmitted | | Average optical channel power transmitted by the DLM. One measurement for each of the ten optical channels within an OCG. | dBm | Yes | Yes |
| OCh Optical Power Transmitted Min | ChanOchOptMin | | dBm | No | Yes |
| OCh Optical Power Transmitted Avg | ChanOchOptAve | | dBm | No | Yes |
| OCh Optical Power Transmitted Max | ChanOchOptMax | | dBm | No | Yes |
| OCh Laser Bias Current | | Measured laser bias current of the channel optical transmitter. One measurement for each optical channel. | mA | Yes | Yes |
| OCh Laser Bias Current Min | ChanOchLBCMin | | mA | No | Yes |
| OCh Laser Bias Current Avg | ChanOchLBCAve | | mA | No | Yes |
| OCh Laser Bias Current Max | ChanOchLBCMax | | mA | No | Yes |
| OCh Measured Wavelength | | Measured wavelength of the channel. One measurement for each optical channel. | GHz | Yes | No |
| Q-Value | | The current Q-factor of the channel. One measurement for each optical channel. | NA | Yes | No |
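As a worked illustration of how the derived Normalized parameters in Table A-1 relate to the measured totals: the per-channel power in dBm is the total power less 10·log10(N) for N active channels (standard dBm arithmetic; the figures below are invented).

```python
import math

def normalized_power_dbm(total_dbm, active_channels):
    """Per-channel power: the total power divided equally among the
    active channels, expressed in dBm (total_dbm - 10*log10(N))."""
    return total_dbm - 10 * math.log10(active_channels)

# Example: a C-Band Total Optical Power Received of +17 dBm shared by
# 40 active channels is roughly +1 dBm per channel.
print(round(normalized_power_dbm(17.0, 40), 2))  # -> 0.98
```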

Thresholding is supported for some of the optical PM parameters. Table A-2 on page A-9 lists those PM
parameters, corresponding thresholds and alarms reported when thresholds are exceeded.
Table A-2 Optical PM Thresholds

| PM Parameter | PM Parameter as displayed in the file exported to the FTP server | Ranges | Alarms |
|---|---|---|---|
| **Band PM Thresholds (BMM and OAM)** | | | |
| C-Band Expected Span Loss (ESL) Threshold Low | OchSpanLossMin | Provisionable by the user; values are recommended by UTStarcom based on the customer network's characteristics. | Span Loss Out of Range - Low (SLOORL) |
| C-Band Expected Span Loss (ESL) Threshold High | OchSpanLossMax | Provisionable by the user; values are recommended by UTStarcom based on the customer network's characteristics. | Span Loss Out of Range - High (SLOORH) |
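The alarms in Table A-2 amount to a range check of the measured C-Band Span Loss against the provisioned low/high thresholds; a minimal sketch of that comparison (the threshold figures are invented for illustration):

```python
def check_span_loss(measured_db, esl_low_db, esl_high_db):
    """Return the alarm raised when the measured span loss leaves the
    provisioned Expected Span Loss (ESL) window, else None."""
    if measured_db < esl_low_db:
        return "SLOORL"  # Span Loss Out of Range - Low
    if measured_db > esl_high_db:
        return "SLOORH"  # Span Loss Out of Range - High
    return None

# Example: a 24 dB measurement against a provisioned 18-23 dB window.
print(check_span_loss(24.0, 18.0, 23.0))  # -> SLOORH
```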


DTF PM Parameters and Thresholds
UTStarcom TN780 supports extensive digital PM data collection at the DTF Section and DTF Path layers.
The digital PM data is analogous to SONET/SDH PM data and is collected in the DLM in the TN780.
Thresholding is supported for all system digital PM data. Since digital PM data is transient in nature, TCAs
are reported when PM parameters exceed the provisioned threshold values within a collection period.
Figure A-2 on page A-11 gives a summary of the Digital PM and FEC PM parameters collected by the
TN780.


Figure A-2 DTF PM Data Collected in the DLM and TAM


[Figure A-2 shows a TAM-2-10G (TOMs, SerDes, and client clock generation against system and facility clock references) connected through the DLM midplane connector to the DLM, where the FEC/DTF Mapper, crosspoint, and Tx/Rx PICs (with a VOA) sit in the line path. PM data collected by the Mapper: DTF Section level - DTF CV-S, DTF ES-S, DTF SES-S; DTF Path level - DTF CV-P, DTF ES-P, DTF SES-P, DTF UAS-P; FEC PM data - FEC Uncorrected BER, FEC Corrected BER, FEC Corrected Bits, FEC Uncorrectable Codewords, FEC Total Codewords.]

Table A-3 on page A-11 captures the PM parameters and corresponding thresholds defined for the DTF
Section and DTF Path layers.
Table A-3 DTF PM Parameters and Thresholds Supported by the DLM

| PM Parameter | Description | Real-time data | 15-min and 24-hr data | TCA reporting supported? | Default Threshold Value (15-min) | Default Threshold Value (24-hour) |
|---|---|---|---|---|---|---|
| **DTF Section Layer Parameters** | | | | | | |
| DTF CV-S | Count of BIP errors detected at the DTF Section layer (i.e., using the B1 byte in the incoming signal). Up to 8 BIP errors can be detected per frame, with each error incrementing the DTF CV-S current register. | Yes | Yes | Yes | 1500 | 15000 |
| **DTF Path Layer Parameters** | | | | | | |
| DTF SES-P | Count of the seconds during which K (= 2,400 as specified in the GR-253-CORE Issue 3 specification) or more DTF Path layer BIP errors were detected or an AIS-P, TIM-P, OCI-P, or BDI-P defect was present. | Yes | Yes | Yes | | |
| DTF UAS-P | Count of the seconds during which the DTF Path is considered unavailable. A DTF Path becomes unavailable at the onset of 10 consecutive seconds that qualify as DTF SES-P, and continues to be unavailable until the onset of 10 consecutive seconds that do not qualify as DTF SES-P. | Yes | Yes | No | NA | NA |

a. Note that the DTF Path PM data is available only when a circuit is provisioned. The DTF Path PM data collected in the TAM is nearly identical to that collected in the DLM. The difference is due to errors introduced on the backplane between the FEC chips in the DLM and BMM.
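The DTF UAS-P row above follows the classic SONET/SDH 10-second entry/exit rule. A sketch of that counting logic over a per-second stream of SES flags, independent of any UTStarcom API (the function name and input format are illustrative):

```python
def count_uas(ses_flags):
    """Count unavailable seconds from per-second booleans (True = the
    second qualified as DTF SES-P). Unavailability begins at the onset
    of 10 consecutive SES seconds (all 10 count as unavailable) and ends
    at the onset of 10 consecutive non-SES seconds (none of those count)."""
    uas = 0
    unavailable = False
    run = 0  # current run: SES seconds if available, clean seconds if not
    for ses in ses_flags:
        if not unavailable:
            run = run + 1 if ses else 0
            if run == 10:
                unavailable, uas, run = True, uas + 10, 0
        elif ses:
            # A pending run of clean seconds shorter than 10 stays unavailable.
            uas, run = uas + run + 1, 0
        else:
            run += 1
            if run == 10:
                unavailable, run = False, 0
    return uas

# 12 SES seconds then 30 clean seconds -> 12 unavailable seconds.
print(count_uas([True] * 12 + [False] * 30))  # -> 12
```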


FEC PM Parameters and Thresholds


The TN780 network element performs the FEC (Forward Error Correction) encoding/decoding function for
every optical channel on the Line side. The TN780 network element supports FEC PM data collection to
compute the effective BER on the channel along each digital link. The following table captures the FEC
PM data collected for every channel in the DLM.
Table A-4 FEC PM Parameters and Thresholds Supported by the DLM

| FEC PM Parameter | FEC PM Parameter as displayed in the file exported to the FTP server | Description | Real-time data | 15-min and 24-hr data | Threshold supported |
|---|---|---|---|---|---|
| FEC Uncorrected BER | FecUncorrectedRows | Uncorrected bit error rate prior to FEC correction. | Yes | Yes (integrated over one second) | Yes (default value = 10e-9) |
| FEC Corrected BER | | Corrected bit error rate. | Yes | Yes | No |
| FEC Corrected Bits | FecCorrectedBits | Corrected number of zeros and ones. | Yes | Yes | No |
| FEC Uncorrectable Codewords | | Uncorrected number of codewords. | Yes | Yes | No |
| Total CodeWords | FecTotalCodeWords | Total number of codewords. | Yes (integrated over one second) | No | No |

Thresholding is supported only for the pre-FEC BER. If the BER before error correction is equal to or
greater than the user-configured value over the interval associated with that value, a Pre-FEC
BER-based Signal Degrade alarm is reported. The alarm is cleared when the pre-FEC BER falls below the
threshold.
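For illustration, the pre-FEC BER that this threshold applies to can be estimated from the counters in Table A-4 as corrected bit errors over total bits carried; the codeword size below is a placeholder assumption, not the TN780's actual FEC parameter:

```python
def pre_fec_ber(corrected_bits, total_codewords, bits_per_codeword=1020 * 8):
    """Estimate the bit error rate before correction over an interval.
    bits_per_codeword is illustrative; substitute the deployed FEC code's
    actual codeword length."""
    total_bits = total_codewords * bits_per_codeword
    return corrected_bits / total_bits if total_bits else 0.0

# A Signal Degrade alarm is raised while the estimate is at or above the
# provisioned threshold (default 10e-9 per Table A-4) and cleared below it.
ber = pre_fec_ber(corrected_bits=420, total_codewords=5_000_000)
print(f"{ber:.2e}", ber >= 10e-9)  # -> 1.03e-08 True
```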

Client Signal PM Parameters and Thresholds
The TN780 network element supports SONET OC-192, SDH STM-64, 10GbE LAN Phy, 10GbE WAN Phy,
SONET OC-48 and SDH STM-16 interfaces.
The TN780 network element supports PM data collection for the SONET OC-192/OC-48 and SDH STM-64/STM-16 trib/client interfaces as listed in Table A-5 on page A-17. PM data collection for the 10GbE
LAN Phy and 10GbE WAN Phy interfaces is not supported.
The PM data is collected for the client signals received at the ingress port (referred to as the Rx PM
parameters) and for the client signals transmitted at the egress port (referred to as the Tx PM
parameters). Rx and Tx PM data can be used to determine the number of errors that occurred in the
various segments of the network:
Between the client equipment and the ingress port
Within the Digital Optical Network
Between the egress port and the client equipment
Figure A-3 on page A-17 gives a summary of the SONET and SDH client signal PM data collected by the
TN780 network element.

TN780 System Description

Release 1.2

UTStarcom Inc.

TN780 PM Data

Page A-17

Figure A-3 Client Signal (SONET and SDH) PM Parameters


[Figure A-3 shows client signals entering and leaving through the TOMs and SerDes on the TAM-2-10G and crossing the DLM midplane connector to the DTF Mapper, with client clock generation against the system clock reference. SONET client signal PM data: Rx and Tx CV-S, ES-S, SES-S, SEFS-S. SDH client signal PM data: Rx and Tx RS-BE, RS-ES, RS-SES, RS-OFS, RS-LOSS.]

Table A-5 Client Signal PM Parameters Supported by the TAM

| PM Parameter | PM Parameter as displayed in the file exported to the FTP server | Description | Real-time data | 15-min and 24-hr data | Default Threshold Value (15-min) | Default Threshold Value (24-hour) |
|---|---|---|---|---|---|---|
| **SONET Section Rx Parameters Collected in the TAM for SONET OC-192/OC-48 Trib Interfaces** | | | | | | |
| Rx CV-S | RxCV | Count of BIP errors detected at the Section layer in the incoming client's SONET signal. Up to eight Section BIP errors can be detected per STS-N frame, with each error incrementing the Sonet-Rx-CV-S current second register. | Yes | Yes | 1500 | 15000 |
| Rx ES-S | RxES | Count of the seconds during which (at any point during the second) at least one Section layer BIP error was detected or an LOS or SEF defect was present. | Yes | Yes | 120 | 1200 |
| Rx SES-S | RxSE | Count of the seconds during which K (= 10000) or more Section layer BIP errors were detected or an LOS or SEF defect was present. | Yes | Yes | | |
| Rx SEFS-S | RxSEFS | Count of the seconds during which (at any point during the second) an SEF defect was present. | Yes | Yes | | |
| **SONET Section Tx Parameters Collected in the TAM for SONET OC-192/OC-48 Trib Interfaces** | | | | | | |
| Tx CV-S | TxCV | Count of BIP errors detected at the Section layer in the SONET signal received from the line/system side and to be transmitted to the receiving client. Up to eight Section BIP errors can be detected per STS-N frame, with each error incrementing the Sonet-Tx-CV-S current second register. | Yes | Yes | 1500 | 15000 |
| Tx ES-S | TxES | Count of the seconds during which (at any point during the second) at least one SONET Tx BIP error was detected or an LOS or SEF defect was present. | Yes | Yes | 120 | 1200 |
| Tx SES-S | TxSES | Count of the seconds during which K (= 10000) or more SONET Tx BIP errors were detected or an LOS or SEF defect was present. | Yes | Yes | | |
| Tx SEFS-S | TxSEFS | Count of the seconds during which (at any point during the second) an SEF defect was present. | Yes | Yes | | |
| **SDH Regenerator Section Rx Parameters Collected in the TAM for SDH STM-64/STM-16 Trib Interfaces** | | | | | | |
| Rx RS-BE | RxBE | Count of the errors within a block in the incoming client's SDH signal. | Yes | Yes | 1500 | 15000 |
| Rx RS-ES | RxES | Count of the seconds during which (at any point during the second) at least one RS block error was detected or an LOS or SEF defect was present. | Yes | Yes | 120 | 1200 |
| Rx RS-SES | RxSES | Count of the seconds during which 30% or more RS block errors were detected or an LOS or SEF defect was present. | Yes | Yes | | |
| Rx RS-OFS | RxOFS | | Yes | Yes | | |
| Rx RS-LOSS | RxLOSS | | Yes | Yes | | |
| **SDH Regenerator Section Tx Parameters Collected in the TAM for SDH STM-64/STM-16 Trib Interfaces** | | | | | | |
| Tx RS-BE | TxBE | Count of the errors within a block in the SDH signal received from the network and to be transmitted to the receiving client. | Yes | Yes | 1500 | 15000 |
| Tx RS-ES | TxES | Count of the seconds during which (at any point during the second) at least one Tx RS block error was detected or an LOS or SEF defect was present. | Yes | Yes | 120 | 1200 |
| Tx RS-SES | TxSES | Count of the seconds during which 30% or more Tx RS block errors were detected or an LOS or SEF defect was present. | Yes | Yes | | |
| Tx RS-OFS | TxOFS | | Yes | Yes | | |
| Tx RS-LOSS | | | Yes | Yes | | |


OSC PM Parameters
UTStarcom TN780 and Optical Line Amplifier network elements support the OSC, a dedicated 1510 nm optical
channel that carries control and management traffic between adjacent network elements. The OSC is
terminated on the BMM in the TN780 and on the OAM in the Optical Line Amplifier.
Table A-6 OSC PM Parameters Supported by the BMM and OAM

| PM Parameter | PM Parameter as displayed in the file exported to the FTP server | Description | Unit | Real-time data | Current & historical (15-min & 24-hr) data |
|---|---|---|---|---|---|
| **OSC optical PM parameters** | | | | | |
| Laser Bias Current | | Measured laser bias current of the OSC optical transmitter. | mA | Yes | Yes |
| Laser Bias Current Min | OscLBCMin | | mA | No | Yes |
| Laser Bias Current Avg | OscLBCAve | | mA | No | Yes |
| Laser Bias Current Max | OscLBCMax | | mA | No | Yes |
| Optical Power Transmitted | | Average optical output power transmitted by the OSC optical transmitter. | dBm | Yes | No |
| Optical Power Transmitted Min | OscOPRMin | | dBm | No | Yes |
| Optical Power Transmitted Avg | OscOPRAve | | dBm | No | Yes |
| Optical Power Transmitted Max | OscOPRMax | | dBm | No | Yes |
| Optical Power Received | | Average optical power received by the OSC optical receiver from the Line input. | dBm | Yes | Yes |
| Optical Power Received Min | OscOprMin | | dBm | No | Yes |
| Optical Power Received Avg | OscOprAve | | dBm | No | Yes |
| Optical Power Received Max | OscOprMax | | dBm | No | Yes |
| **OSC Ethernet packet PM data** | | | | | |
| Transmitted Bytes | | The number of bytes transmitted by this network element on the OSC channel. | Bytes | Yes | No |
| Transmitted Packets | | The number of Ethernet packets transmitted by this network element on the OSC channel. | Packets | Yes | No |
| Packets Dropped at Transmitter | | The number of transmit Ethernet packets dropped by this network element. | Packets | Yes | No |
| Received Bytes | | The number of bytes received by this network element on the OSC channel. | Bytes | Yes | No |
| Received Packets | | The number of Ethernet packets received by this network element on the OSC channel. | Packets | Yes | No |
| Packets Dropped at Receiver | | The number of received Ethernet packets dropped by this network element. | Packets | Yes | No |


Appendix B

Optical Channel Plan


This chapter describes the TN780 optical channel plan:
TN780 Optical Channel Plan on page B-2


TN780 Optical Channel Plan
Table B-1 on page B-2 lists the 40-channel C-band channel plan supported by the TN780.
Table B-1 TN780 Optical Channel Plan

| OCG Number | Channel Number | Center Wavelength (nm) | Center Frequency (THz) |
|---|---|---|---|
| 1 | 1 | 1563.455 | 191.75 |
| 1 | 2 | 1561.826 | 191.95 |
| 1 | 3 | 1560.200 | 192.15 |
| 1 | 4 | 1558.578 | 192.35 |
| 1 | 5 | 1556.959 | 192.55 |
| 1 | 6 | 1555.343 | 192.75 |
| 1 | 7 | 1553.731 | 192.95 |
| 1 | 8 | 1552.122 | 193.15 |
| 1 | 9 | 1550.517 | 193.35 |
| 1 | 10 | 1548.915 | 193.55 |
| 2 | 1 | 1563.047 | 191.80 |
| 2 | 2 | 1561.419 | 192.00 |
| 2 | 3 | 1559.794 | 192.20 |
| 2 | 4 | 1558.173 | 192.40 |
| 2 | 5 | 1556.555 | 192.60 |
| 2 | 6 | 1554.940 | 192.80 |
| 2 | 7 | 1553.329 | 193.00 |
| 2 | 8 | 1551.721 | 193.20 |
| 2 | 9 | 1550.116 | 193.40 |
| 2 | 10 | 1548.515 | 193.60 |
| 3 | 1 | 1562.640 | 191.85 |
| 3 | 2 | 1561.013 | 192.05 |
| 3 | 3 | 1559.389 | 192.25 |
| 3 | 4 | 1557.768 | 192.45 |
| 3 | 5 | 1556.151 | 192.65 |
| 3 | 6 | 1554.537 | 192.85 |
| 3 | 7 | 1552.926 | 193.05 |
| 3 | 8 | 1551.319 | 193.25 |
| 3 | 9 | 1549.715 | 193.45 |
| 3 | 10 | 1548.115 | 193.65 |
| 4 | 1 | 1562.233 | 191.90 |
| 4 | 2 | 1560.606 | 192.10 |
| 4 | 3 | 1558.983 | 192.30 |
| 4 | 4 | 1557.363 | 192.50 |
| 4 | 5 | 1555.747 | 192.70 |
| 4 | 6 | 1554.134 | 192.90 |
| 4 | 7 | 1552.524 | 193.10 |
| 4 | 8 | 1550.918 | 193.30 |
| 4 | 9 | 1549.315 | 193.50 |
| 4 | 10 | 1547.715 | 193.70 |
| 5 | 1 | 1545.720 | 193.95 |
| 5 | 2 | 1544.128 | 194.15 |
| 5 | 3 | 1542.539 | 194.35 |
| 5 | 4 | 1540.953 | 194.55 |
| 5 | 5 | 1539.371 | 194.75 |
| 5 | 6 | 1537.792 | 194.95 |
| 5 | 7 | 1536.216 | 195.15 |
| 5 | 8 | 1534.643 | 195.35 |
| 5 | 9 | 1533.073 | 195.55 |
| 5 | 10 | 1531.507 | 195.75 |
| 6 | 1 | 1545.322 | 194.00 |
| 6 | 2 | 1543.730 | 194.20 |
| 6 | 3 | 1542.142 | 194.40 |
| 6 | 4 | 1540.557 | 194.60 |
| 6 | 5 | 1538.976 | 194.80 |
| 6 | 6 | 1537.397 | 195.00 |
| 6 | 7 | 1535.822 | 195.20 |
| 6 | 8 | 1534.250 | 195.40 |
| 6 | 9 | 1532.681 | 195.60 |
| 6 | 10 | 1531.116 | 195.80 |
| 7 | 1 | 1544.924 | 194.05 |
| 7 | 2 | 1543.333 | 194.25 |
| 7 | 3 | 1541.746 | 194.45 |
| 7 | 4 | 1540.162 | 194.65 |
| 7 | 5 | 1538.581 | 194.85 |
| 7 | 6 | 1537.003 | 195.05 |
| 7 | 7 | 1535.429 | 195.25 |
| 7 | 8 | 1533.858 | 195.45 |
| 7 | 9 | 1532.290 | 195.65 |
| 7 | 10 | 1530.725 | 195.85 |
| 8 | 1 | 1544.526 | 194.10 |
| 8 | 2 | 1542.936 | 194.30 |
| 8 | 3 | 1541.349 | 194.50 |
| 8 | 4 | 1539.766 | 194.70 |
| 8 | 5 | 1538.186 | 194.90 |
| 8 | 6 | 1536.609 | 195.10 |
| 8 | 7 | 1535.036 | 195.30 |
| 8 | 8 | 1533.465 | 195.50 |
| 8 | 9 | 1531.898 | 195.70 |
| 8 | 10 | 1530.334 | 195.90 |
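The grid in Table B-1 is regular: channels within an OCG sit 200 GHz apart, successive OCGs are offset by 50 GHz, and OCGs 5-8 sit 2.2 THz above OCGs 1-4, with wavelengths following from λ = c/f. A sketch that regenerates the table under that inferred pattern (the pattern is read off the table, not taken from a stated formula):

```python
C_NM_THZ = 299_792.458  # speed of light, in nm*THz

def channel_plan():
    """Yield (ocg, channel, wavelength_nm, frequency_thz) tuples matching
    Table B-1: 8 OCGs x 10 channels on a 200 GHz grid, adjacent OCGs
    offset by 50 GHz, OCGs 5-8 shifted up by 2.2 THz."""
    for ocg in range(1, 9):
        base = 191.75 + 0.05 * ((ocg - 1) % 4) + (2.2 if ocg > 4 else 0.0)
        for ch in range(1, 11):
            f_thz = base + 0.2 * (ch - 1)
            yield ocg, ch, round(C_NM_THZ / f_thz, 3), round(f_thz, 2)

# Spot check: OCG 1, channel 1 -> 1563.455 nm at 191.75 THz, as in Table B-1.
print(next(channel_plan()))  # -> (1, 1, 1563.455, 191.75)
```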


Appendix C

Acronyms
Table C-1 List of Acronyms

| Abbreviation | Description |
|---|---|
| **A** | |
| ACLI | application command line interface |
| ACO | alarm cutoff |
| ACT | active |
| AD | add/drop |
| ADM | add/drop multiplexer |
| ADPCM | adaptive differential pulse code modulation |
| AGC | automatic gain control |
| AID | access identifier |
| AINS | administrative in-service |
| AIS | alarm indication signal |
| ALS | automatic laser shutdown |
| AMP | amplifier |
| ANSI | American National Standards Institute |
| AO | autonomous output |
| APD | avalanche photo diode |
| API | application programming interface |
| APS | automatic protection switching |
| ARC | alarm reporting control |
| ARP | address resolution protocol |
| ASAP | alarm severity assignment profile |
| ASE | amplified spontaneous emission |
| ASIC | application-specific integrated circuit |
| ATM | asynchronous transfer mode |
| AU | administrative unit |
| AUX | auxiliary port |
| AWG | arrayed waveguide grating |
| AWG | American wire gauge |
| **B** | |
| BDFB | battery distribution fuse bay |
| BDI | backward defect indication |
| BEI | backward error indication |
| BER | bit error rate |
| BERT | bit error rate testing |
| BGA | ball grid array |
| BIP-8 | bit interleaved parity |
| BITS | building-integrated timing supply |
| BLSR | bi-directional line switched ring |
| BMM-C | Band Mux Module - C band |
| BNC | Bayonet Neill-Concelman; British Naval Connector |
| BOL | beginning of life |
| BOM | bill of material |
| BOOTP | bootstrap protocol |
| bps | bits per second |
| BPV | bipolar violations |
| **C** | |
| C | Celsius |
| CCITT | Consultative Committee on International Telegraph and Telephone |
| CCLI | commissioning command line interface |
| CDE | chromatic dispersion equalizer |
| CDR | clock and data recovery |
| CDRH | Center for Devices and Radiological Health |
| CFR | Code of Federal Regulations |
| CH/Ch/ch | channel |
| CID | circuit identifier |
| CIT | craft interface terminal |
| CLEI | common language equipment identifier |
| CLI | command line interface |
| CO | central office |
| CODEC | coder and decoder |
| COM | communication |
| CORBA | common object request broker architecture |
| CPC | common processor complex |
| CPE | customer premises equipment |
| CPLD | complex programmable logic device |
| CPU | central processing unit |
| CRC | cyclic redundancy check |
| CSPF | constraint-based shortest path first algorithm |
| CSV | comma separated value |
| CTAG | correlation tag |
| CTP | client termination point |
| CTS | clear to send |
| CV | coding violation |
| CV-L | coding violation - line |
| CV-P | coding violation - path |
| CV-S | coding violation - section |
| **D** | |
| DA | digital amplifier |
| dB | decibel |
| DB | database |
| DCC | data communications channel |
| DCE | data communications equipment |
| DCF | dispersion compensation fiber |
| DCM | dispersion compensation module |
| DCN | data communication network |
| DEMUX | de-multiplexing |
| DFB | distributed feedback |
| DFE | decision feedback equalizer |
| DGE | dynamic gain equalization |
| DHCP | dynamic host configuration protocol |
| DLM | digital line module |
| DMC | dispersion management chassis |
| DR | digital repeater |
| DSF | dispersion shifted fiber |
| DT | digital terminal |
| DTC | digital transport chassis |
| DTE | data terminal equipment |
| DTF | digital transport frame |
| DTL | digital transport line |
| DTMF | dual-tone multi-frequency |
| DTP | digital transport path |
| DTS | digital transport section |
| DWDM | dense wavelength division multiplexing |
| **E** | |
| EDFA | erbium doped fiber amplifier |
| EEPROM | electrically-erasable programmable read only memory |
| EMC | electromagnetic compatibility |
| EMI | electro-magnetic interference |
| EMS | element management system |
| EOL | end-of-life |
| ESD | electrostatic discharge; electrostatic-sensitive device |
| ES-L | line-errored seconds |
| ES-P | path-errored seconds |
| ES-S | section-errored seconds |
| ETS | IEEE European Test Symposium |
| ETSI | European Telecommunications Standards Institute |
| **F** | |
| F | Fahrenheit |
| FA | frame alignment |
| FAS | frame alignment signal |
| FC | fiber channel; failure count |
| FCAPS | fault management, configuration management, accounting, performance monitoring, and security administration |
| FCC | Federal Communications Commission (USA) |
| FDA | Food and Drug Administration |
| FDI | forward defect indication |
| FEC | forward error correction |
| FIFO | first-in-first-out |
| FIT | failure in time |
| FLT | fault |
| FPGA | field programmable gate array |
| FRU | field replaceable unit |
| FTP | file transfer protocol |
| **G** | |
| GbE | gigabit ethernet |
| Gbps | gigabits per second |
| GCC | general communication channel |
| GFP | generic framing procedure |
| GHz | gigahertz |
| GMPLS | generalized multi-protocol label switching |
| GNE | gateway network element |
| GNM | graphical node manager |
| GUI | graphical user interface |
| **H/I** | |
| HTML | hypertext markup language |
| HTTP | hypertext transfer protocol |
| **U** | |
| UAS-L | unavailable seconds, near-end line |
| UAS-P | unavailable seconds, near-end STS path |
| UDP | user datagram protocol |
| UPSR | unidirectional path switched ring |
| URL | uniform resource locator |
| UTC | Coordinated Universal Time |
| **V** | |
| V | volt |
| VGA | variable gain amplifier |
| VLAN | virtual local area network |
| VOA | variable optical attenuator |
| VPN | virtual private network |
| VSR | very short reach |
| **W/X/Y/Z** | |
| WAN | wide area network |
| WDM | wavelength division multiplexing |
| XC | cross-connect |
| XFP | name of a small form factor 10 Gbps optical transceiver |
| XML | extensible markup language |
| **MISC** | |
| 1R | re-amplification |
| 2R | re-amplification, re-shape |
| 3R | re-amplification, re-shape, re-time |
| 4R | re-amplification, re-shape, re-time, re-code |
